Archive for August, 2008

Culture Wars

August 31, 2008

 

            The 1992 Republican Convention was a virtual orgy for the Religious Right.  George Bush spent so much time talking about morality and values that it was difficult to remember that he was not running for Pope.  (He might, of course, have had an easier time if he were.  Certainly the campaign would have been cheaper, aside from the extra transatlantic airfare and the cost of intensive Italian lessons.)  Patrick Buchanan declared “cultural war” on everybody who disagrees with his religious and political opinions.  He proclaimed that this is a “Christian country” because the majority of its citizens are Christians.  [Presumably by the same reasoning, this is a female country, since the majority of its citizens are women.]  This spectacle raises two questions for those of us outside the religious right: does a political party, or a government, have any business telling the American people what values they should espouse? and, if so, are the values dictated at the Convention the ones most in keeping with the founding ethos of the American polity?

 

            My instinctive response to the first question is a ringing no.  I get my values from my religious tradition, my congregation, and my own spiritual search.  As a member of a minority wing of a minority religion, I share none of these with the majority of the American people.  What we Americans do share, in answer to the second question, is a history of flight from persecution and exclusion by majorities in other countries.  Most of us are still committed to not duplicating that behavior here.  We sometimes trip over the practicalities of this commitment, since we find it so easy to believe that all Americans are members of the Protestant mainstream.  Generally, once the multidenominational facts are brought to our attention, we will gladly clean up our act and stop doing things that treat people who are not mainstream Protestants like outsiders or second-class citizens.  Buchanan and his ilk believe that commitment is itself violative of what they define as the American ethos, which they like to call the “Judeo-Christian” ethos, “family values”, or “biblical morality.”

 

“Biblical morality” is not biblical

 

            Two things need to be emphasized here.  First, despite its recurrent invocations of the “Judeo-Christian” and “biblical” ethos, the agenda the religious right is pursuing is not a biblical agenda.  The Bible has absolutely nothing to say about when an unborn child becomes a human being.  It says nothing whatever about birth control except, possibly, the story of Onan being divinely punished for “spilling his seed” and thereby refusing to carry out his divinely-ordained duty to beget sons by his brother’s widow.  Since not even the most literalist sector of the religious right now considers the levirate marriage a divinely-imposed duty, the sin of Onan has become moot in that quarter.  Similarly, the Jewish scriptures are quite clear that male homosexuality is to be punished by death, and the New Testament states that male homosexuals shall not inherit the divine kingdom.  So far, no respectable voice on the religious right has advocated capital punishment for sodomy.  But the Bible has nothing to say about whether, if these reprobates are allowed to live, we are obliged (or even allowed) to discriminate against them in housing, employment, and public accommodations.  It provides us with even less guidance as to the proper treatment of lesbians.

 

           The most politically articulate speakers for the religious right not only are not drawing on the Bible, they have not even bothered to familiarize themselves with the relevant passages, much less to notice how few such passages there are.  With few exceptions, a speaker for the religious right who says “the Bible tells us”, or uses the phrase “biblical morality”, is referring to the morality of his grandparents’ generation, roughly what 19th-century Unitarians would have considered proper morality.  By the standards of biblical Israel, that morality would be appallingly lax in some areas, and outrageously restrictive in others.  The religious right, by and large, finds divorce more acceptable than fornication or adultery, defines adultery as sexual infidelity committed by or with a married person of either gender, and has a really hard time deciding whether abortion is worse or better than unwed motherhood.   None of this has anything to do with the morality depicted and taught in the Jewish scriptures, nor that taught in the New Testament. 

           Biblically literate Christians and Jews know how difficult it really is to extract a coherent moral code, sexual or otherwise, from the scriptures.  It can be done, but it requires a lot more study, subtlety, and ingenuity than the religious right has so far applied to the task.  Whether such a moral code, once extracted, can meet the needs of a technologically advanced and socially fragmented society is a whole separate question.  The point is, the religious right has not done its homework and is not even trying.

 

The cultural war is not a religious-vs.-secularist war

 

            Second, by their silence, the religious left and center have allowed  the “cultural war” in which Patrick Buchanan says we are now engaged to be characterized as a dispute between religious people and irreligious people.  It isn’t.  There are religious people on both sides of these controversies.  (There are also secularists on the side of the religious right, like many of Ronald Reagan’s friends.)  There are committed, believing, religious Jews and Protestants who believe, individually and collectively,  that abortion is not merely permissible in some circumstances, but may in some cases be morally and religiously required.   They believe, based on the religious and philosophical doctrines of their own faiths, that the fetus is not a human being until it is viable or born, and that it can, and in some cases must, therefore be sacrificed to the welfare of the mother and the whole family.  Similar religious groups believe contraception not only can but must be taught to young people.  And an increasing number of religious bodies believe that lesbians and gay men have the same divinely-given rights to life, liberty, and the pursuit of happiness as heterosexuals, and that those rights ought to be protected by civil law.  (And, in fairness, many individual Catholics share these beliefs, despite the official position of their church.)

 

A Closet is not a Tent

 

            Indeed, Buchanan’s “cultural war” is not even a conflict between people who engage in homosexuality, get abortions, and commit fornication and adultery, and people who don’t.  The response of the Bushes and Dan Quayle to the question “What would you do if your daughter/granddaughter decided to get an abortion?” (roughly, I’d object, but I’d support her), and the surprising number of gay men who turn up on the religious right, tell us that the goal of that coalition is not to stamp out sexual misbehavior, but to make sure that those engaged in it feel guilty and ashamed.  Above all, they want to make sure that “l’homme [femme] moyen sensuel[le]” feels obliged to lie about it and is vulnerable to being blackmailed.  The religious right is willing to accept, support, and even collude with the closeting of upper-class sexual miscreants–like Bush’s hypothetical granddaughter, Quayle’s hypothetical daughter, and the late Roy Cohn.  But it reserves the right to discriminate against or even viciously attack the “lower orders” who stray from the straight and narrow.  It wants no laws barring it from doing so.  All of which suggests strongly that the Republican Party is not a “big tent” but a gigantic closet.

 

The “secular realm” is not secular

 

            On the other hand, the religious center and left, and especially religious people who are not mainstream Protestants, should be clear that secularists do not speak for them.  The courts, and most notably the Supreme Court, have decided that most of the public observances of mainstream American Protestantism are “secular”–most notably not working on Sundays or Christmas, and Christmas decorations–and therefore can be imported into the public realm without violating the First Amendment’s “no establishment” clause.  And so, incidentally, can the more pagan-derived observances such as Halloween, Santa Claus, and the Christmas tree, to which even some Christian fundamentalists object.  On the other hand, the courts have also decided that the observances of religions outside the American Protestant mainstream are religious and therefore cannot be accommodated in the public realm (even in the name of the First Amendment’s “free exercise” clause) without setting up an “establishment of religion.”  So not only can an observant Jew be required to close his business on Sunday, but he may have to use his “personal” or vacation time to avoid reporting for his job on Saturday or Rosh HaShanah, and can legitimately be forbidden to wear a yarmulke to work, school, or court.

 

           The religious right is taking the position that there is no neutral ground.  If the public realm is not explicitly suffused with their values and rituals, they claim, it is dominated by the “religion” of “secular humanism.”

            In fact, the public realm sanctioned by the courts is not “secular humanist”–it is mainstream Protestant with a tincture of Nordic/Celtic paganism.  The courts have gone further than many Americans who are not mainstream Protestants are comfortable with in bringing mainstream Protestant observances into the secular realm.  They just haven’t gone far enough for the religious right.

 

Multiculturalism misses a major culture

 

            Similarly, I would not argue that our schools are teaching the “religion” of “secular humanism.”  But they do do a poor job of teaching about the role of religion in American and world history, cultures, and literature.  Likewise, our entertainment media portray a world in which the role of religion in the lives of ordinary people is ignored even more severely than the prevalence of sound marriages and two-parent families.  In their efforts to avoid imposing a morality or a tradition on a public which may not share it, the entertainment and educational establishments have bent too far in the opposite direction.  We Americans, in varying degrees, espouse multiculturalism because we believe people have a need and a right to see themselves, or people like themselves, in the cultural artifacts around them; but by ignoring the role of religion, we are flagrantly violating that right.

 

Not a spectrum, but two opposing creeds

 

             Americans tend to assume that morality is a continuum, from those who believe in no morality, to those who assume an entire catechism of obligations.  We also assume that the more morally observant people may be offended by the conduct of their looser brethren, but that the latter cannot possibly be offended by the uprightness of the former.  Therefore, we assume, we can safely skew public life and morality toward the stricter end of the spectrum, if we wish to offend the fewest people.  The timidity of the religious center and left has a great deal to do with the popularity of these assumptions.  They have allowed themselves to be perceived as less moral than the religious right because they do not share the latter’s moral beliefs and will not attack those beliefs on their merits.  The things the religious right does and believes which are profoundly offensive to the religious left and center never get into the discussion.  A search for “neutral ground” under these conditions will end by defining neutrality as just short of everything the religious right wants.  The only serious countervailing force will be the inexorable demands of the consumer economy, with which the religious center and left have their own disagreements.

 

Teaching a “Moral Core” in Public Schools

 

            For instance, can we, under these conditions, come together in teaching public school students some core of universal moral basics, like honesty, respect for the rights of others, self-control, and the “golden rule”?  I’d like to think so, but I doubt it.  How would the religious right view “respect for the rights of others” or the “golden rule” in the context of dealing with a gay or lesbian student in a mostly-straight class?  Suicide among gay teenagers is enormously more prevalent than among teens in general, precisely because people at that age have less tolerance of deviance–in themselves or others–than they will ever have in the rest of their lives.  What gay teenagers need from their fellow students is precisely respect for their rights and a good dose of the golden rule.  But the religious right would be appalled at the mere thought of teaching such values to their children (or ours).  And many of the rest of us–religious center and left, and secularists–would be appalled at the thought of not teaching and practicing such values.  Similarly, the religious right is likely to believe that “honesty” (and the golden rule, for that matter) imposes the obligation to inform sinners and unbelievers that they are headed for hell.  Which is hardly what I want my children subjected to.

 

           We all can probably agree that public schools have a right and an obligation to codify and enforce a code of student behavior on school grounds, during school hours, and at school functions. Such a code can legitimately forbid physical abuse of students and school personnel, cheating, plagiarism, lying, drug use, drinking, smoking, reading pornography, libel, slander, and theft.  But, while both sides can probably agree on these rules, they would have trouble agreeing on the rationale and authority behind them, and probably ought not to try.

 

Avoiding religious wars–can we?  Should we?

 

            The genius of the American polity has shown itself in our ability to go two hundred years without a religious war.  The only other nations to have done nearly as well are those whose citizens have pretty much quit taking religion seriously, e.g. in Western Europe, China, and Japan.  But Americans as individuals take religion very seriously, far more so than most Europeans, Chinese, or Japanese.  We have kept out of religious wars (so far) by keeping our religious differences out of the public realm.

 

            That tactic may be losing its usefulness.  It can work only when all sides abide by it voluntarily.  The religious right has now ceased to do so.  This puts its non-secularist opponents at an unfair disadvantage.

 

           So far, the opponents of the religious right have argued only that theology and religiously-sponsored morality do not belong in the public realm.  No one has taken the position that the religious right’s theology and morality are theologically and morally wrong.

 

           The secularists, quite reasonably from their point of view, don’t care about the substance of the religious right’s doctrines.  And the religious center and left have allowed themselves to be intimidated from both sides–by the secularists, who oppose any theologically-based discourse in the public realm, and by the religious right, who believe they have a monopoly on such discourse.  Which puts the religious left and center in the position of being secure in their right to promulgate their beliefs only among those who already share them.  

 

           It is time for the religious left and center, and those religious Americans who are not mainstream Protestants, to declare themselves and publicly distinguish their beliefs from those of both the religious right and the secularists.  The public silence of one side can no longer prevent religious wars–it can only guarantee the defeat of the silenced.

 

Jane Grey

 

American Culture and the Law

August 31, 2008

 

Americans generally pass laws as a statement of public morality.  We would be equally horrified to see them repealed or enforced.  Speed limits, for instance, are rarely enforced. When they are enforced, it is either in really egregious cases where public safety is grossly endangered, or purely for the purpose of raising revenue or keeping some group of social inferiors in their place.  The one thing we clearly don’t intend to do by setting a 55-mph speed limit on a highway is to keep all traffic moving at or below 55 mph.

 

Most Americans don’t believe in speed limits.  They will tolerate (just barely) being ticketed for exceeding them by outrageous proportions, but they respond with rage to being forced to obey the limit by being stuck behind some driver doing the exact legal speed.  “Nobody can tell me how fast to drive,” as one driver said in response to a Chicago Tribune poll some years ago.  It seems not to occur to them that, by tailgating slower drivers, or zooming by them, or even sideswiping them, at speeds considerably above the limit, they are telling those slowpokes how fast they have to drive.  We not only have no right to enforce speed limits, we have no right to obey them. 

 

Our drug laws work pretty much the same way. If we really intended to stop all use of “illegal” drugs, and all underage use of alcohol and tobacco, we would have to devote a lot more resources to the problem than we are willing (or able) to do.  There are two alternatives (other than repeal) to a serious enforcement program.  One is to go after only the big traffickers, and the other is to go for the largest number of users and sellers.  We have, clearly, chosen the latter.  It gives us the illusion that we are benefiting society as a whole, when we are actually creating a lot of misery.

 

Similarly, our laws on abuse of spouses, children, elders, and so on, laudable as they are, were necessary only because we rarely bother enforcing our laws against simple assault and battery, or even against the aggravated species, where non-strangers are involved, even though those laws contain no such exception.  Now that we have decided people ought not to be allowed to beat up on those near and dear to them, we have to pass a whole new body of laws to do the job. 

 

Almost every legislative aide, whether on federal, state, or local level, has had the experience of being asked by his/her legislative boss to research and draft a bill dealing with some outrageous current problem about which the boss’ constituents are especially concerned, only to have the research reveal that a law forbidding the conduct in question has already been on the books for several decades, basking in obscurity.  Most often, the boss’ response is not to start a campaign to enforce the old law–s/he will still get better publicity and mileage with the voters by pushing a new law, however redundant.  And, I suspect strongly, most voters really don’t want to encourage the enforcement of obscure statutes already on the books.  Most of us are well aware that we probably violate dozens of such laws every day, and could not possibly afford to pay the penalties for all of those violations.

 

Americans love to pass laws, but we don’t much like obeying them.  For example, next time you see somebody smoking near you in a no-smoking area, try telling him it’s against the law.  Chances are, the smoker will respond, at best, with a sneer, and at worst with outright physical violence.  Now have a confederate come along and ask him/her to stop because it really kicks up the confederate’s asthma.  Chances are, the smoker will gladly comply, and may even apologize.  We are nice people, but we are anarchists at heart. 

 

Americans also mostly don’t like religious rules.  It is the presence of rules and authority that constitutes the difference between “religion”, which most Americans are uncomfortable with, and “spirituality”, which we are currently in a desperate search for.  The Augustinian “love God and do what you will” appeals very directly to us.  Most of the people who have fled the Catholic church over the last thirty years have done so in reaction to its rules against contraception, divorce, and eating meat on Friday.  The church never did back down on the birth control ban, but has devised numerous ways around the ban on remarriage after civil divorce, and has totally scrapped meatless Fridays.  Now, interestingly enough, it is trying to reintroduce them, but not (heaven forbid!) as a “rule.”  Now it’s a “spiritual practice” (which is probably what it should have been all along), and it may make a successful comeback in that capacity.

 

Interestingly enough, the Europeans always viewed such rules the way Americans view speed limits, and had no problem maintaining some attachment to the church while completely ignoring the rules.  Americans, on the other hand, felt obliged to take them seriously, on pain of “hypocrisy.”  Generally speaking, we reserve that term for the purely moral realm, for anyone who aspires to be good but has not yet attained perfection.  In other areas of our lives, most notably politics, hypocrisy is not only permitted, it is obligatory.  The Republican Party’s current leaders are all men who uphold “family values” and military service to the country, and almost all of them have been divorced and remarried at least once and managed one way or another to avoid service in the Vietnam War (although they never publicly opposed it).  The Democrats, on the other hand, are led mostly by men who are still married to their first wives, with varying degrees of fidelity.  Some of them served in Vietnam, but even some of them believed at the time, and all of them believe now, that it was a bad idea.

 

Indeed, the Republicans, while deploring adultery and the movement for gay and lesbian rights, have a remarkable number of closeted adulterers and homosexuals among their number.  The “big tent” is more like an overgrown closet.  And it is the closeting that matters.  The Republicans believe in the civilian equivalent of “don’t ask, don’t tell.”  What people do in their own bedrooms is their own business, they believe–as long as the rest of us are equally free to discriminate against them for doing it, if we find out. 

 

“You can’t legislate morality,” is a hallowed American maxim.  The usual evidence for this statement is the ostensible failure of the Volstead Act.  One of the better-kept secrets of American history is that the Volstead Act succeeded in some very significant ways, such as reducing the number of industrial accidents and of deaths from cirrhosis of the liver.  Because no statistics were collected at the time about either drunk driving or domestic violence, we will never know how much these were also reduced, but we can certainly guess.  What we mean by saying Prohibition “failed” is that it did not change our fundamental views on the moral acceptability of alcohol.  In that sense, the laws against underage drinking and smoking, and the speed limits, have “failed” too.  On the other hand, the civil rights laws have succeeded in those terms.  Most of us now really believe that racial discrimination is wrong.  Even those of us who practice it are embarrassed to admit it and indignant if accused of it.

 

Unenforced and unenforceable laws, like New Year’s resolutions, are mostly just expressions of the people we wish we were, but not to the point of being willing to work at it.  In this capacity, they’re pretty harmless.  But as long as they are still “on the books”, with legal penalties attached, they are dangerous.  They have a pernicious potential for blackmail and discrimination against unpopular and powerless people. 

 

We are never, of course, going to comb through our statute books and rip out any law that has not been enforced within the statute of limitations on it.  That would involve admitting that our aspirations are doomed to failure, and that we will never be the people we want to think we are.  But there are ways around this conundrum.  Moral aspirations fit very nicely in preambles and introductory paragraphs in which legislators set out the reasons and purposes of legislation.  “Sense of the Congress” statements can serve the same purpose.  The Supreme Court could easily take on a case involving a defendant convicted under a law unenforced for decades, and decide that his due process and equal protection rights had been violated.  There is actually a legal doctrine readily available for their use, and discussed in Griswold v. Connecticut, thirty years ago: “desuetude”–a law unenforced for long enough ceases to be valid grounds for prosecution.  We would, of course, denounce the Supremes as a bunch of permissive libertines, but secretly we would be relieved.  We could aspire without guilt, while unimpeded in our customary sins and crimes.  It would in fact be another way to legislate morality–a stroke of the judicial pen would make us all moral.

 

Red Emma

“What You Mean ‘We,’ Paleface?”

August 31, 2008

or: A Klug Zu Columbus

 

            Five hundred years ago, Columbus landed somewhere in the Western Hemisphere and began the first documented continuous settlement of Europeans on this side of the Atlantic.  As has been repeatedly pointed out this year, that was in many ways a Dubious Achievement, bringing Christianity, smallpox, measles, and genocide to the New World;  potatoes, tomatoes, maize, turkeys, chocolate, syphilis, tobacco, and inflation to Europe; and the transatlantic slave trade to Africa.

 

            Revisionists seem to vacillate between seeing Columbus merely as a harbinger and a symbol of the more generalized disasters brought to indigenous peoples by European settlement, and as a personally active agent of its worst aspects, including murder, rape, enslavement, and environmental devastation.  They also have trouble deciding whom they personally can identify with. It may be legitimate to identify Columbus with the Bad Guys and the Taino Indians who first greeted him as the Good Guys, but if I remember correctly, neither Columbus nor the Tainos have any identifiable direct descendants left today.  And they have an even harder time–as do we all–recommending an appropriate atonement for the sins of Columbus, or appropriate parties to perform it.  Should all European-Americans go back to Europe?  Should the cultural institutions of the western hemisphere be remade in the image of the indigenous cultures?  Who is to make these choices, and more important still, who gets to carry them out? Is it possible to unscramble the ethno-geographico-cultural egg?

 

            The problem, in part, is that–especially in a nation of immigrants like our own–very few people have ethnically–or morally–homogeneous ancestry.  We are all mixed-breeds of one sort or another, and most of us number among our ancestors both victor and victim, occupier and native, oppressors and indentured servants or slaves. The language we speak–even to denounce the Europeans who settled this hemisphere–is a European language, whether English or Spanish. Most of us–even the most anti-imperialist–practice religions brought to this continent from Europe (or, in rarer cases, from Asia, Africa, and Arabia) often by missionaries dedicated to wiping out indigenous religion and culture.   

 

            My own ancestry includes Sephardic Jews, Jacobite Scots, Russians, Sicilians, and Brits; I have cousins who are Christian Scientist, Catholic, Jewish, and Buddhist, cousins who speak only English and cousins who speak only Spanish, cousins blonder than I am (which is quite blonde) and West Indian cousins several shades darker than Thurgood Marshall.  I have chosen, for reasons of religious commitment, to identify mainly with the Jewish component of my ancestry–but not to the point of totally ignoring the rest.

 

            Nor, even if I did so, could I identify completely with the innocent victims of history.  The past of every people that has ever populated this planet is littered with both the horrors they have suffered and the horrors they have perpetrated.  It is the task of our several histories to help us mourn the one and atone for the other.  For instance, my Jacobite Scots ancestors were almost certainly exiled to the Carolinas by the English after an unsuccessful rebellion in 1745–but less than a hundred years later, they were probably instrumental in evicting the Cherokees from those states onto the Trail of Tears where nearly half the tribe died en route to Oklahoma.

 

            The revisionists speak scornfully of “settler states”, usually meaning Israel and South Africa.  Even by exceedingly narrow revisionist standards, all the North and South American countries, Australia, and New Zealand have to be included on the list. 

 

            And in the longer context of world history, practically every state is a settler state.  Almost every “indigenous people,” everywhere on the globe, got to be indigenous by displacing some previous occupant.  In most cases we know nothing about the circumstances of the displacement.  We do not know how the Cro-Magnon replaced Neanderthal, or how, as the biblical prophet Amos says, the Philistines were brought out of Caphtor or the Arameans from Kir.  We do have a pretty good idea how the Mongols and the Huns displaced the various “indigenous” peoples in their path.  It involved massacre, pillage, and deforestation on a large scale. The same goes for the Germanic tribes who displaced the Celts and Lapps in northern Europe (which was not all that different from what the Celts did to the Pictish peoples who preceded them.)  Similar patterns of migration, invasion, and displacement also occurred in the Western Hemisphere before Columbus was ever born.  The main difference between the “settler states” colonized within the last 400 years and their more respectable predecessors is that the more recent settler states are still in a position to repair at least some of the damage they have done.

 

            We can respond to this dilemma by trying to wipe out our own origins, and the people who share them–which is pretty much what the Khmer Rouge did in Cambodia, and arguably what the Nihilists tried to do in Russia, the Baader-Meinhof Gang in Europe, Sendero Luminoso in Peru, and the Weather Underground in this country.  We can ignore the reality of our shared past, and base our politics on dishonest history and anthropology.  Or we can accept the fact that each of us individually, and each people and nation that now exists, has a cultural  and a genetic debt to victim and victor, slavemaster and slave.  We can accept the limits imposed on our best visions by our most subtle blindnesses.  (This may be the real symbolism of the crack in the Liberty Bell.)  We may commit ourselves–personally and collectively–to repairing the damage done by conquest, or at least, where it is too late for that, properly memorializing its victims. 

 

            But most important of all, we must begin to explore what it would really mean to say, with all our heart and mind and strength, “Never again.”  Because there will always be new worlds to conquer and new peoples to displace, whether on the grand scale of international exploration and exploitation, or nearer home, in the gentrifying of shabbier neighborhoods.  And if we are not prepared to recognize and resist the temptation when it presents itself, we will only be scrambling more eggs and passing on the same cruel dilemma to our children.

 

Red Emma

G-d is My Cleaning Lady

August 31, 2008

A Meditation on Theology and Liturgical Language

 

The essence of Jewishness lies between–between the text and the commentary, between the English (or French or German or whatever) and the Hebrew,  between what we do and why we do it, between what we do and what those before us did, between the body (and what we, as embodied people in a physical world, do) and the spirit. Jewishness is to be found in at least two dimensions, as the most significant distance between two points.  Those who fixate on a single point or dimension are not missing out on half of Jewishness, but on all of it. It is no more possible to make a Jewish life out of a single dimension than to hang a hammock from a single nail.

 

            Liturgical language has all sorts of betweennesses of its own.  All language about God is metaphor, so there is by definition a gap between the words used by the original writer(s) and the reality he/they were trying to convey.  Over time, a further gap develops between the reality perceived by the original writers and the reality experienced by those who use that liturgy one or more generations later, and between the everyday language used by the original writers and that used by later readers.  If the original liturgy is translated into another language, this creates a new set of gaps, especially since later generations are usually more willing to revise translations than original texts, so that a single original liturgical text may generate multiple translations.

 

            There is, of course, a difference between interpretation, translation, and revision, and a wide variety of purposes for doing each of them.  Serious scholars may be seeking a better understanding of the reality experienced by the original writers; other interpreters are looking for a way to fit liturgy to the reality they experience today, which may pertain to the divine, the holy community, or both.

 

            The Jewish liturgical tradition has undergone ongoing revision and interpretation, as well as innumerable translations into every language the world’s Jews have ever used.  The liturgy has been revised to reflect changes in theology and in the place of the Jewish people in the world since biblical times.  Until fairly recently, this process worked mainly by accretion.  The Tradition added–and added, and added; it did not subtract.  As a result, until fairly recently, the various accretions, like the rings on a tree, enabled anyone interested enough to do even a minimal amount of study to read the whole history of the Tradition merely by examining the Siddur.

 

            But more recent revisions by the Reform and Reconstructionist movements have subtracted while they added (sometimes more than they added), cutting off the reader’s access to certain elements of the Jewish past.  And, especially in the English-speaking world, translations have often departed radically from the Hebrew text in the same siddur; many sections of the text have either been very loosely paraphrased, or have simply not been translated at all.  Generally, these alterations have had two motivations: to reflect a new theological agenda, or to accommodate “modern” needs for a shorter and less repetitive service (at roughly the same time the college football game gets longer, interestingly enough).

 

            In the context of this perennial process of change, the work of Chavurah and pro-feminist Jews over the last twenty-odd years is less radical than either they or the rest of the Jewish community tend to think.  It has generally pursued two separate but closely linked agendas: making the feminine visible in what has generally been a highly androcentric tradition, and shifting from a theology of hierarchy and transcendence to one of immanence and partnership.

 

            The intricate relationship between these two agendas is sometimes obscured by those who pursue them simultaneously.  Opening the Tradition to the experience and consciousness of women does not necessarily require eliminating hierarchy or transcendence from its theology.  On the other hand, a strong case can be made that the shift to a theology of immanence and partnership does necessarily imply a greater emphasis on the experience and consciousness of women.

 

            The pro-feminist agenda has a historical/sociological and a theological side.  The historical is generally less jarring to most Jews who have previously given little thought to feminist issues.  No historically or biblically literate Jew can deny the crucial place of women in Jewish history and literature.  A case can be made that the book of Genesis is mostly about, and moved by, women.  The place of women in Eastern European Jewish history and literature, and in the history of the Marranos, is also notable.  And the Jewish people who engage in the liturgical enterprise today are 53% female.  An increasing number of today’s Jewish women are uncomfortable with a place in the traditional liturgy ranging from  “the wives of this congregation” to outright invisibility.  Jews have always been male and female, and even the most hard-line traditionalists are increasingly comfortable with public liturgical recognition of that fact.

 

            But it takes the community longer to get comfortable with the fact that–like Moliere’s character who has been speaking prose all his life without knowing it–we have been attributing gender to the Holy One for all of our collective life.  That recognition is the first step to recognizing that–by virtue of operating in an androcentric culture in which the norm is male and the exception female, and an androcentric language in which the root forms of most words have masculine grammatical gender and most feminine-gender words are derivative forms–we have in the process been affirming male human beings as made in the divine image, and consigning female human beings to some unclear but definitely lower status.

 

            Philosophically, the solution ought to be developing a non-gendered  language to talk about the Holy One.  In English, this would be intricate but just barely possible within the limits of current grammar–Arthur Waskow has taken some shots at it, with varying degrees of success.  In Hebrew, it would require creating a whole new grammar.  Most literate Jews have trouble enough with Hebrew grammar in its present state.

 

            Aside from that, if we accept the traditional Jewish belief in a personal God, the philosophical neuter doesn’t quite work anyway.  We are gendered persons.  If we relate to God as a person because of the limits of our personal consciousness, we have to accept the fact that one of those limits is gender.  Personhood, in our experience of God, is a metaphor.  It may be a necessary one.  The same holds for gender.  As Rita Gross says in “Why Jews Need the Goddess,” “I wish those who use traditional god-language were as sure that God is not male as I am that God is not female.”  Quite aside from the equity issues, as Marcia Falk points out, the use of female God-language is no more metaphorical than that of male God-language, and has the advantage of jarring us into realizing that it is metaphorical.

 

            The use of female God-language, then, need not make any theological statement at variance with the most traditional views.  But many of those most actively engaged in its use want to take much more radical positions, undermining the traditional Jewish structure of holiness equated with separateness.  That may scare off some people who might have no problems seeing androcentrism as historically inaccurate and theologically idolatrous.  While moving back and forth between transcendence and immanence, hierarchy and partnership, doesn’t bother me, the people who are bothered by it should know that inclusiveness in our language (whether about the community or the divinity) does not require scrapping the notion of a transcendent or personal God.

 

            Regardless of the theological presuppositions we choose to include in our liturgy, we have the same intellectual/emotional/spiritual problems in confronting that liturgy.  There is a spectrum of ways we can respond to liturgy.  At one extreme are the rare moments when the liturgical communication is so utterly and intensely on-target with the individual’s mind and heart and soul as to literally make the hair stand on end.  I have had perhaps four such experiences in a lifetime, and that is probably more than average.  At the other end of the spectrum are experiences in which the individual is so utterly repelled by a particular liturgical statement as to give up its use, or the religious tradition in which it occurs, or even spirituality itself, altogether.  For obvious reasons, this sort of thing is unlikely to happen more than once in a single person’s lifetime, but it seems to happen to a lot of people.

 

            In between is the whole scale, from hearty assent (“yes, that’s exactly how it is”) to mild affirmation (“that sounds right”) to indifference (saying the words without attending to their meaning) to mild dislike (usually manifested by reading the words without saying them, or reciting them in a foreign language) to strong dislike (usually manifested by omitting the offending passage altogether.)  Jewish tradition allows for some other midway alternatives, such as skipping to the final brachah of a section.  A prayer that grates on the sensibilities and is traditionally considered optional anyway is likely to disappear from liturgical practice (which is what seems to have happened in the Chavurah movement to “Yekum Purkan” with its references to “this congregation and their wives.”)  And the Tradition itself has had a lot to say over the ages about the relationship between liturgical practice and kavannah, ranging from Aryeh Kaplan’s intriguing view of the Amidah as an overgrown mantra for guided meditation, to the Yiddish tehinnot written for (and sometimes by) women who were expected to use them in synchrony with the Hebrew liturgy they were not expected to understand, to Scholem Aleichem’s marvellous paraphrase of the weekday Amidah in the Tevyeh stories.  In short, the Jewish tradition has never required the individual to assent to the theology implicit in any particular liturgical communication in order to take part in that communication.  There have always been alternatives.  This has made life easier for the individual worshipper, of course.  But it has also made fewer demands on the liturgy than some other traditions do.  The distance between the average worshipper’s theology and the theological presuppositions of the liturgy can stretch pretty far before the liturgy is forced to change.

 

            Just as there is a spectrum of possible responses by the individual to the theology of a liturgical statement, there is also a spectrum of responses a particular individual will tolerate before either changing the liturgy, changing congregations, or dropping out altogether.  Some people may be perfectly comfortable paying absolutely no attention to theological meaning, while others may demand absolute consonance between the liturgy and their own current theological position.  Those most nearly indifferent to theological meaning will feel virtually no need to change the liturgy to conform to their own beliefs, and may strongly object to any change because the non-verbal subtext of the liturgy, for them, works.  Others may feel they have no choice but to change or leave.  Most of us in the Chavurah movement, I suspect, are somewhere in between–attracted by change, but not necessarily compelled to it except in a few egregious cases.  Similarly, some of us like being jarred by liturgy into examining its theological presuppositions, and our own (which is perfectly consistent with the traditional view of study as a valid form of worship) while others find it excessively cerebral.

 

            A few fragmentary thoughts: (a) one of the ways I particularly like using to examine the theological subtext of liturgy is what I call “triangulation”–using two or more versions or translations of a section simultaneously, as a way to get at what original reality could possibly have generated both or all of them. (b) the use of a non-vernacular language in liturgy may be a useful safety valve, enabling people to remain comfortable with a set of liturgical statements long after they might find them abhorrent in the vernacular.  A case could be made that the fall-off in numbers of active Roman Catholic parishioners in the two decades after the demise of the Latin Mass was no accident.  In the more decentralized Jewish community, the process is less clear and may be more self-reinforcing.  Individuals may move to a level of observance requiring less Hebrew because it also requires less commitment.  Once there, however, they may find themselves turned off by statements in the English liturgy that they had been more or less comfortably ignoring in the Hebrew.  This may kick them onto an even lower level of observance, with even more English liturgy, and so on.

            Anyway, getting down to the concretenesses implied in the title of this essay, our congregation has recently been experimenting with a version of Nishmat which undertakes to change into the feminine gender such descriptions of the Holy One as “ozer”–the one who helps us.  Feminine: “ozeret.”  Which happens to be the modern Israeli Hebrew word for “cleaning lady.”  It drew a couple of giggles from a congregant who had formerly lived in Israel.  I found it mildly amusing.  But then–as was probably intended by the authors–I was also jarred into thinking about it.  These days, I reflected, it would take an act of divine intervention to get somebody else to help me clean my house–possibly rising to the level of a miracle.  Or, looking at it another way, when we make ourselves available to help each other with the drudgework of everyday life, are we not doing the work of the Holy One?  And isn’t this precisely the kind of reflection a good liturgical communication should evoke, at least occasionally?

 

Jane Grey

AlienNation: Ellen Ripley and the Just-In-Time Workplace

August 31, 2008

Most of us think of the “Alien” tetralogy of films as being “about” the spectacularly ugly and vicious monster of the titles. Its otherness is crucial to the emotional logic of the story. It’s okay for us to hate this beast and want to kill it. We are not quite sure whether she (the beast is indisputably female, though certainly not feminine) is the same identical creature in all four films, or merely one among many matriarchs. But we know she is a killer, and deserves whatever happens to her.

But she is not the only constant element in the tetralogy; there are two others. First is the stalwart, stubborn, ingenious Ellen Ripley, who takes on the monster, time and again, and wins one temporary victory after another. Her relationship with the monster becomes, after a while, familiar and even intimate. She comes to know it better than anyone else can.

The most important recurring character in the tetralogy, however, is the Company. We know little about it. We may not even know its name. For sure we never see its annual report to shareholders, its prospectus, or a list of its board of directors. We are not quite sure what goods and services it deals in, though we are quite clear that space travel and colonization are essential to its business.

The first “Alien” is essentially a shipboard saga. Naming the space freighter “Nostromo” is a pointed link to that genre. And the main point of a shipboard story is that the ship is a world in itself. Usually, the Captain plays God. Here, the Captain lasts only partway through the story, and the role of the divinity is really assigned to the Company, which has created the ship, set it in motion, and planned its mission, including certain components known only to its on-board robotic representative.

In the first installment, the crew’s relationship with the Company is filled with the usual blue-collar griping and premature counting of shares and bonuses. We know that the crew spends most of their interstellar travel time in suspended animation. We know, as the series rolls on, that Ellen sleeps through her own daughter’s death (and presumably most of her life), and, at the beginning of the third installment, we learn that she has also slept through the death of the child she “adopts” in the second installment.

In the fourth installment, we find that Ripley has survived not only the deaths of her real and surrogate daughters, but also her own death. Like the monster, she has become for all practical purposes immortal. But, while the monster keeps dying and being reborn for its own beastly purposes, Ripley is cloned and resurrected by the Company, to make use of her unique familiarity with the monster. Hers is the ultimate case of “owing my soul to the Company store.”

We never see Ripley, or any of the other crew members, awake and off duty. On board or off, they are always either on Company business or in suspended animation. This is the ultimate example of the “just-in-time workforce.” When the company needs them–even if they have died in the meantime–they are available. When it doesn’t, they are dead, or as good as dead, and presumably cost the Company virtually nothing.

This is the scariest aspect of the series, when you think about it. We are unlikely to meet the Monster, or any of her progeny, in the course of our daily lives. But we spend most of our waking hours with the Company, or organizations that aspire to become the Company. And we spend most of our waking hours becoming the “just-in-time workforce,” available when needed and near-dead (isn’t that what unemployment means in this culture?) when not.

On the other hand, this is also the most optimistic aspect of the series. We can’t kill the Monster, but we can start seriously working on dismantling the Company, and it’s about time we did.

Red Emma

Be Kind to Bureaucrats

August 31, 2008

 

These days, the two nastiest things you can call somebody are not “thief” and “murderer”, but “politician” and “bureaucrat.”  “Bureaucracy” has become a synonym for delay, obfuscation, and obsession with procedural detail at the expense of substantial justice.  Virtually anyone who works for a governmental agency–except perhaps a police officer or a firefighter–is vulnerable to charges of being a bureaucrat.  A few astute observers have even noticed that private business and industry have bureaucracies, some of them more intricate than anything ever invented by a government in the U.S.

 

Most of us do not even realize where the idea of bureaucracy came from, what it was meant to accomplish, and, perhaps most important, what the alternatives to it may be.  It all started with the government of the German state of Prussia in the eighteenth century, which originated the idea of government agencies whose purpose was to get the governmental job done, whatever it might be–rather than to make a particular person richer and more powerful.  The Prussians developed the idea of setting standards for the governmental job, so that anyone (inside or outside the agency) could tell when it had been done properly.  They originated the revolutionary idea that such an agency was supposed to do the job the same way for anyone who requested it and was entitled to it, regardless of whether the individual government worker liked the requester, and regardless of whom the requester had voted for or otherwise supported.  The Prussians first invented the notion that a governmental agency must not only do its job, but be able to prove that it had done so–also known as documentation, accountability, or red tape.  And they invented the idea of civil service–hiring people based on objective testing for merit and aptitude, and guaranteeing them continued employment so long as they served honestly and competently, regardless of which politicians might be running the government.

 

Most people who complain about “waste, fraud, and abuse” at the hands of “bureaucrats” are actually calling our attention to governmental employees who have failed at being bureaucrats.  They have given worse service to some patrons than to others, or failed to give any patron the service required by the applicable regulations, or lost track of a case because of poor documentation.

 

Other complaints about “bureaucracy” come from a dislike of the ideas behind it.  Some people, for instance, think no employee can be trusted to give competent and honest service except under the constant threat of arbitrary firing.  Job security of any kind, they think, is bad for the employer, even when the employer is the taxpayer.  Others favor patronage because they believe that government workers should respond immediately to any change in political administration–or resign in favor of the new administration’s flunkies–even if this makes long-range planning of governmental programs impossible. 

 

And some of those who complain about bureaucracy are complaining because they believe they personally should be getting better service from a governmental agency than its ordinary patrons.  Most of us, at heart, really want two systems of governmental service–one for ourselves and our friends, and one for everybody else.  We object to bureaucracy precisely because it treats all of us alike.

 

But the real alternative to bureaucracy is government by corruption, patronage, cronyism, guesswork, or intimidation; government which owes nothing to those who pay for it–not even a receipt for taxes.  Of course bureaucracies can be corrupted, or filled by incompetents.  Generally that happens in societies where corruption or incompetence is endemic everywhere–not just in governmental service–and would show up in any system of governmental service.  And generally, the cure for a flawed bureaucracy is not the abolition of bureaucracy, but the creation of a better bureaucracy.

 

Jane Grey

Tax Simplification and Over-Simplification

August 31, 2008

 

Americans have no patience for complexities.  Americans hate taxes, not so much because they think they’re paying too much (though they do–and they aren’t, compared to citizens of other industrialized nations), but because they hate the 1040 long form and the Schedule C and the other auxiliary forms, and were even sufficiently unhappy with the 1040A (short form) to force the IRS to come out with the postcard-sized 1040 EZ.

 

Lately, several of the people I correspond with on various computer bulletin boards have been asking things like “Why should we have to file returns at all? Why don’t they just take out the withholding all year, calculated so that everybody comes out exactly even?”  In point of fact, that is the whole idea behind the withholding system–to make everything come out exactly even.  The need for filing a return–and its accompanying complexities–arises out of two different problems.  One, which has always been with us, is that we have decided we want to make certain kinds of expenses deductible, either for the sake of fairness (like medical expenses or casualty losses) or to encourage certain kinds of behavior we have decided are desirable (like buying homes, contributing to charity, and saving for retirement).  Since not everybody has these expenses or engages in these activities, or spends equal amounts on doing so, people have to file returns to claim those deductions.  It’s easy to simplify the system if we tax all income at the same rate, with no deductions for medical expenses, casualty losses, home mortgage interest, and the rest.  But the average taxpayer might lose money in the process, at least in a bad year.

 

And the other side of the problem is that an increasing number of taxpayers are getting some or all of their income, not from a single employer who regularly withholds taxes and sends them on to Uncle, but from a multitude of private clients, as independent contractors or consultants.  That trend is likely to get a lot worse before it gets any better.  It may never get better, if corporate employers have their way.  In another ten years, we may all be independent contractors.  Trying to require every one of the contractor’s private clients–anyone who pays the kid next door to mow the lawn or babysit, anybody who gets his house painted or his car repaired–to deduct taxes and do the paperwork necessary to send them to Uncle on the contractor’s behalf would merely shift the complexity from the employee to the employer, while turning virtually everybody into an employer.

 

If the complexophobes have their way, presumably the next federal income tax return form (the 1040 BS, for “Bumper Sticker”, which is what it will be printed on) will be entirely blank. The IRS will fill it in with the figures they consider appropriate. Letting the guy with the biggest gun have things his way is the simplest system there is, but rarely is that simplicity enough to compensate us for what he will then be free to take away.

 

Bureaucratic complexity, in short, is almost always the result of two opposing forces–the demand of some powerful entity for money or power over the individual, and the individual’s wish to have that power exerted fairly and/or at minimal cost to the individual.  The complexities of the tax code mostly have to do with the reasons we think people should not have to pay taxes on income earned or expended in certain ways–charitable contributions, for instance, or child care.  A simple tax code would just take 20% (or whatever) of everything, including the change in the beggar’s cup.  We have decided, as a society, that fairness is more important than simplicity.  Now, somehow, we have lost sight of why.  Let’s just let the guy with the biggest gun take what he wants, so long as we don’t have to do any simple arithmetic.

 

But, says the local chapter of the Timothy McVeigh Fan Club, who says the government has to be the guy with the biggest gun?  Who says government should get a gun–or our taxes–at all?  They don’t have to, of course.  But if we get rid of them, there are plenty of other people out there, with far bigger guns than we have, equally eager to take our money and a lot less interested than the government in providing any kind of goods or services in return.  And they’ll be the first to say, as thugs have said for thousands of years, “Don’t make things complicated. Just give me everything I want.”  No deductions, no exemptions.  Just 100% taxation.

 

(Remember the old story about the man who threw his buddy into the lake, after a disagreement over the rules of poker?  The buddy came up, gasping for air, and his pal hit him on the head with a board.   The buddy sank, came up, got hit again, sank, and so on for about five or six go-arounds before he finally drowned.  His pal told the police later “He shore was a fool to keep coming up.”)

 

(Remember the sexist dictionary that defines the difference between seduction and rape as depending on how much effort the woman is willing to expend to postpone the inevitable?)

 

Everybody’s proposing alternatives to the income tax.  Dick Armey is proposing a flat tax, by which he means a simple flat percentage of income, applied to all income from all taxpayers, with no deductions.  Bill Archer (R-TX) is talking about eliminating income taxes altogether and catching us at the other end, with a consumption tax.

 

Americans hate all taxes.  After all, the only reason we are the U.S. of A. is that we rebelled against taxation without representation.  We have since discovered we don’t much like taxation with representation either.  According to the Lutheran Brotherhood Insurance Company’s recent study, Americans actually hate sales and real estate taxes even more than federal income taxes–but we’re not too fond of any of them.

 

Rep. Archer’s consumption tax is a fancy name for a sales tax. (The taxes in honor of which the American Revolution was fought were consumption taxes and what we now call user fees.) It is also, by the way, the main revenue-raising method used by the government of the USSR–remember them?  The very model of a successful economy every American would love to imitate?

 

Personally, I think all those guys are barking up the wrong tree.  The simplest, fairest kind of tax we could possibly have isn’t a flat percentage tax–after all, most Americans these days can’t do percentages any more.  It’s a flat amount tax: the first $6,000 (or whatever) of every American’s gross income, from any source, goes to Uncle.  No deductions, no percentages.  Most of us still know how to subtract.  The same six grand from the ex-steelworker on the corner selling papers, and from his billionaire ex-boss in the suburbs.  What could be fairer?
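
Just to spell out the arithmetic nobody wants to do, here is another minimal sketch–invented incomes, and the same made-up 20% rate as before–comparing what a flat-amount tax and a flat-percentage tax each cost as a share of what the taxpayer earns.

# A minimal sketch with invented incomes: the effective rates of a flat-amount
# tax versus a flat-percentage tax.
flat_amount = 6_000      # the illustrative "first six grand"
flat_percent = 0.20      # the illustrative "20% (or whatever)" from earlier

for income in (12_000, 60_000, 1_000_000_000):
    amount_tax = min(flat_amount, income)   # nobody can owe more than he made
    print(f"income ${income:,}: flat-amount rate {amount_tax / income:.1%}, "
          f"flat-percentage rate {flat_percent:.1%}")

The ex-steelworker hands over half of everything he makes; his billionaire ex-boss hands over a rounding error.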

 

Oh, yeah.  You want to know what happens to people who don’t make six grand a year, or make just barely that much.  And what about children who aren’t old enough to make any money?  Don’t bother me with that stuff.  Those are complexities.  Ignore them and they’ll go away–that’s the spirit that made this country great. 

 

Or let’s try another simple solution.  The Gingrichites seem to be taking the position that the way to end poverty is to end all government programs that make poverty endurable.  Why not take this position one step further, and pass a law that, after a stated date, anybody with an income too low to pay the flat amount tax will be taken out and shot?  Now we’ve not only simplified the tax system, we’ve turned it into a screening system to get rid of the deadwood.  That should be a great incentive to get poor people on their feet.  Oh, yeah.  What about amputees and other people who don’t have any feet?  Are you suggesting some sort of complex exemptions for people with disabilities?  You mean they have to fill out a form that says “I have [1, 2, 0] feet”?  Yeccccch.  No way.

 

 

Red Emma

International Politics as if People Mattered

August 31, 2008

There’s an old story about a town in Eastern Europe that changed hands between Russia and Poland several times between 1850 and 1940.  A couple of men met on the street there, and one of them told the other, “Ivan, I hear we’re about to become part of Poland again.”  “Thank heaven, Boris,” said the other. “I don’t think I could have stood another one of those Russian winters.”

 

Which is a good starting point for any examination of how the nation-state, and the relations between nation-states, affect ordinary people who happen to live there.  Another good starting point was contributed by my former teacher Marshall Hodgson, who, in discussing the wave of decolonization that swept the Muslim world after WWII, pointed out that freedom for a nation does not necessarily mean freedom for its people, at least not all of them.

 

The model we have been operating under since the end of feudalism is that a nation-state consists of a specific piece of territory and the people on it.  The government of each state promulgates and enforces the laws by which the people live. Ideally (that is, in a democratic state), the government is chosen by the people (or at any rate, by a majority of the people.) But we accept any government as the legitimate sovereign head of a sovereign state, as long as it gets to be the government by a procedure consistent with the law in effect in that state at the time of its accession, whether that procedure involves majority vote by universal suffrage, lottery, single combat, or a hot game of spin the bottle.

 

“‘Sovereignty’,” as the science fiction writer Robert Heinlein points out, “lies between ‘sober’ and ‘sozzled’ in the dictionary.”  Normally, it includes control of relations with other sovereign states, and complete control over “internal affairs” within the state’s territory.

 

Or does it?  The Nuremberg trials and the various international conventions and treaties opposing genocide and supporting human rights have somewhat eroded the legitimate power of even a legitimate government over its subjects.  Can a sovereign state simply place an entire class of its subjects in a state of outlawry, strip them of citizenship, the right of residence, property, liberty, and life, as long as it obeys its own laws in doing so? 

 

It depends.

 

Such behavior is certainly violative of several different treaties and conventions.  Many but not all of the nations whose governments engage in such behavior are signatories of some or all of these treaties.  So, if there were some uniformly dependable enforcement mechanism for such treaties, it could be invoked at least against those signatories.

 

That’s a major ‘if.’  In a discussion of recent events in the Balkans, a friend of mine analogized the position of the United States to that of the biggest guy in a bar, in which a gang of bikers is beating up on some little guy.  It is our job as the biggest guy in the bar, said my friend, to defend the poor helpless victim.  My immediate response was “the biggest guy in a bar has exactly the same obligation as everybody else in that bar–to call the cops.”  Which is what a uniformly dependable enforcement mechanism for human rights, with jurisdiction over all violations of international law, would be.  At the moment, of course, in the forum of nations, there are no cops.

 

Before we start wishing for the establishment of such a police force, let us remind ourselves that the real police (even in societies where the police force is impeccably honest and efficient, and has the full support of the surrounding culture and most of the citizens) still have–and use–the discretion not to act.  No police force is required, or willing, or (probably) able to act against every violation of every law.  The best police forces exercise their discretion based on such criteria as: can we be spending our time and resources enforcing some more important law? Preventing more serious harm?  Are other social mechanisms available to solve this problem as well as the police can, or better?  The less admirable will ask such questions as: can we be protecting more important people (or their property)?  Can we be arresting less important people?  Can we protect the people most likely to vote us a raise?  Any global police force would probably have to retain the same discretion not to act, perhaps subject to some more explicit criteria.  So any proposal to set up such a police force ought to include the criteria by which they may choose to act or not to act, or the mechanism by which their involvement will be triggered.

 

Which brings us back to the present.  The United Nations as presently constituted can’t function as the global police force, because it can act only through the Security Council, which can be immobilized by the veto of any one of its members.  (Imagine your city council unable to mobilize the cops except by the consensus of every local ethnic and religious pressure group, plus the NRA, the local street gangs, and the Chamber of Commerce.)   Aside from the obvious–no nation will cooperate in calling the cops on itself–there are also more complex relationships: no nation will call the cops on its historic allies.  Which means the cops will be called, under current conditions, only on relatively weak and friendless nations. (Come to think of it, this is not too different from the way the real cops function in many localities.)  So if you are a citizen of a Big Nation (or a Little Nation with a Big Ally), and your government is depriving you of liberty, property, or cultural autonomy, or even threatening your health and life, you are strictly out of luck if you expect any help from the UN or its various agencies.

 

That’s Problem Number One: while we now at least have international agreements forbidding genocide and other human rights violations, we have no uniform and reliable mechanisms for enforcing them.

 

What we do have instead (here comes Problem Number Two) are the various ways nation-states interact with each other, for better and for worse.  At their most civilized, nations can make deals.  Contracts.  Leases, even.  And can fulfill those deals peacefully.  One of the more visible such arrangements reached its culmination a few years ago when the Territory of Hong Kong, leased by Queen Victoria from the then-emperor of China for 99 years, reverted to Chinese rule at the end of that period.  The principals in the deal–the governments of China and Great Britain–behaved with scrupulous regard for each other’s rights.  But the people who happened to live on the piece of real estate in question had virtually nothing to say about it.  Many voted with their feet, to become citizens of more congenial countries.  They had to do this using their own resources, after finding their own destinations and getting official acceptance there, an arrangement likely to be unattainable by Hong Kong’s least affluent citizens.  Neither their previous nor their prospective landlord offered them any help in moving out.  They had not been parties to the original lease (a contract between an absolute monarchy on one side and a constitutional monarchy on the other, neither of which felt any obligation to the local residents) and they were not parties to its termination.

 

Yet another way sovereign nations can interact is by war and the threat of war.  Consider the dispute, culminating in war, in the South Atlantic in the early ’80s.  The subject was an inhospitable bunch of islands called the Falklands by the Brits (who had settled them) and the Malvinas by the Argentineans (who were geographically adjacent and claimed to own them).  The islands in question are distinguished by some of the world’s worst weather, and a population of human residents greatly outnumbered by sheep and penguins.  That human population, to a man/woman, wished to remain British.  Their wishes were as irrelevant to the Argentineans and their allies as the wishes of the penguins and the sheep.  The matter was decided, not by majority vote of the people most directly affected, but by superior British military force–to which, again, the wishes of the local residents were irrelevant, although in this instance they won out.

 

Of course, disagreements between nation-states can also be fought out by such relatively non-violent methods as trade sanctions and embargoes.  The apartheid government of South Africa was undermined and ultimately overthrown with considerable help from 20 years of such sanctions.  The Sandinista government of Nicaragua (a much smaller country with far fewer resources of its own) suffered the same fate after a much shorter period.  The longest-lasting embargo–by the US against Castro’s Cuba–has crippled the latter country, but seems to have had little effect on the power of its government.  The same can be said, so far anyway, of the government of Saddam Hussein in Iraq.  What is clear in all these cases is that the people felt the impact of such sanctions before the government did, and felt it more severely.  You have to do a lot of damage to the people of a nation before its government will even notice.  The less democratic a nation is, the more damage you have to do to the people to get rid of the government or change its behavior or even attract its attention.  That, of course, is the whole point of being a member of the ruling class–relative immunity to the problems of the lower orders.  If being Head Honcho doesn’t get you that, whatever else it does get you isn’t worth the trouble of showing up at the Oval Office every day.

 

Which is even truer of outright, ongoing war.  The only way to batter a country into submission is to break stuff and kill people.  Most of the stuff, and most of the people, simply by the law of averages, will not belong to the ruling class.  The odd American executive order specifically forbidding attempts to kill foreign heads of state as such is not even necessary to achieve this result (although it seems especially strange that dropping a bomb on Muammar Khaddafy’s limousine would be illegal, while wiping out the entire town in which he resides would be legitimate warfare, precisely because more people would be killed.  Got that?  It’s okay to aim at tens of thousands of people, but not at any particular one of them, especially not the one you actually want to get rid of.)

 

There have been instances in which the ordinary people most at risk from an attack on their government nonetheless welcomed it.  The ANC supported the trade sanctions against the apartheid government of South Africa from the beginning.  Certainly Jews and other victims of the Nazis welcomed the victories of the Allies.  The latest information from Kosovo seems to indicate that the Albanians still residing there welcome the NATO bombing.  This is a heroic posture, analogous to calling in a bombing strike on one’s own position to wipe out the surrounding enemy.  One cannot expect it, still less demand it, from ordinary people in every situation that might require it (any more than the police can expect the enthusiastic cooperation of local civilians in the “War on Drugs” in American cities).

 

Problem Number Three: Aside from economic sanctions and military action, what else can be done to protect people from their own governments?  Accepting and aiding refugees is the most directly useful response.  People who are willing and able to leave their homes and their countries may be able to find sanctuary elsewhere.  This is scarcely a universal solution, however.  It not only does not prevent “ethnic cleansing” and similar human rights violations, it encourages them.  A ruler who wants to get rid of a particular group of people can steal their property and throw them out without even having to worry much about his reputation in world opinion.  Once the “untermenschen” are gone, their plight ceases to be an issue.  People who insist on reviving old grudges from safe haven elsewhere–the Armenian nationalist groups, for instance–are viewed as irredentists, revanchists, and crazies.  Even the Jewish demand for economic reparations from the beneficiaries of the Holocaust is viewed as greedy and irrelevant.  If I take your wallet, I’m a thief and may go to jail.  If I take all your property and throw you out of your house, I’m a home invader and will almost certainly go to jail.  But if I take all the property of hundreds of thousands of people and throw them out of their country, I will probably never see the inside of a prison, and I may even die a respected head of state.

 

That, of course, assumes that safe haven exists for each group of refugees.  There is at least some evidence that Hitler’s original intention was simply to evict the Jews from the German homeland, and that the more murderous side of the Holocaust developed only when it became apparent that most European Jews had nowhere else to go.  Since then, the UN has made itself useful by establishing refugee camps and facilities wherever people fleeing the depredations of their own government (or evicted by it) can get to.  Many of these camps have become permanent fixtures in other countries, with highly visible effects on local politics and economies.  The residents of such long-term camps cannot reasonably be expected to forgive, forget, and get on with their lives.  They don’t have to be crazy to go on carrying old grudges and demanding return to homes that no longer exist.  And the countries that host such encampments cannot be blamed for being unhappy about it.  The UN may subsidize the camps and their operation, but it can do little about the social and economic impact on the vicinity.

 

Permanent resettlement of refugees usually works well for the refugees themselves and most of the nations that receive them.  But, as previously noted, it is also an ideal solution for the original oppressor, and can only encourage those who seek to emulate him.

 

Short-term resettlement of refugees in camps has historically tended either to return the refugees to the same murderous circumstances they left in the first place (as in several African genocides in the last 20 years) or to shade into long-term arrangements with all the drawbacks previously noted.

 

The real problem (Number Four, if you’ve been keeping track), of course, is: what do the nations of the world really want to do about governmental human rights violations?  Protect the victims in their own homes?  Get them out of harm’s way?  Temporarily or permanently?  Force the perpetrating government to behave decently?  Or get rid of that government?  And what can actually be done by the organizations available and willing to do anything?

 

What the UN does–when the Security Council will allow it to do anything–is (1) care for refugees, and (2) provide peacekeeping forces where a peace agreement of some sort has already been reached.  What various individual nations and alliances (such as NATO and the ad hoc grouping that carried out the Gulf War) do is fight wars–break stuff and kill people.  If all you have is a hammer, everything looks like a nail.  So the Kosovo problem, like innumerable others before it, is “solved” by refugee camps and bombs.  The people on the ground in Kosovo may protest as the bombs fall around them, or they may cheer.  But their attitude toward NATO will not affect their chances of being hurt or killed by those bombs.  A sword makes a lousy shield.  In this situation, even the best offense is useless as a defense.

 

What, if anything, do the techniques of nonviolent action and resistance have to contribute to the situation?  Gandhian activist Ibrahim Rugova, who was first detained by the Yugoslav government and then released to a gilded exile in Italy, is apparently thoroughly discredited among his compatriots back home, who view him as naïve at best and a Milosevic tool at worst.  But Quaker and other peace groups are, as always, widely respected for their work with the refugees.  Some such groups are attempting to aid the people of Kosovo on their home ground with food and medical care.  This is obviously work worth doing, but it does nothing to stop ethnic cleansing.

 

From the other side–from the point of view of nations trying to stop or punish Milosevic–the debate raises yet another set of questions.  Does any nation outside Yugoslavia have any legitimate national interest in Kosovo?  There is no oil or other valuable resource in Kosovo.  The other nations in the area may have a strong interest in preventing or reversing the flow of refugees from Yugoslavia to neighboring countries, where they can upset the fragile ethno-political balance in Macedonia, or the marginally functional economy of Albania.  But nobody further away than Greece and Turkey is likely to be affected by even the farthest ripple of repercussion (unless, of course, somebody drops a bomb on their embassy).

 

The debate in the US Congress is the same one we have heard over and over since the end of the Cold War: if the US has no “legitimate national interest” in Kosovo, or Somalia, or Haiti, or Cambodia, or any other part of the world where there is no oil and in which none of the warring parties is communist, is there any other valid reason for us to take any kind of action–military or otherwise–against local genocide or human rights violations? The Gulf War was a no-brainer.  There is oil in Iraq, and Kuwait, and Saudi Arabia, all either directly involved or directly threatened.  But the other places hit us squarely in our most ambivalent nerve.  Does a government have the right to commit its resources to defend or assist the citizens of some other country merely because they are being attacked or threatened by a tyrant? 

 

The negative answer we often reflexively give to this question comes from two different places.  The first, of course, is the ghost of Vietnam, which in turn is the ghost of “plucky little Belgium” in World War I.  After the end of World War I,  citizens of countries on both sides became aware that many of the “atrocities” alleged to have been committed by Germany and its allies in Belgium and France had been either highly exaggerated or actually manufactured from whole cloth.  Public opinion became understandably skeptical about “atrocity stories” after that.  Which is one of the reasons the nature and extent of Nazi atrocities against the Jews and other “untermenschen” beginning in the 1930s received so little credence in the Allied nations even after the information was widely available.    

 

By the 1960s, we had no trouble believing atrocity stories.  But the transparent lie on which the Gulf of Tonkin Resolution was based, and the badly crafted stories of communist atrocities, were still hard to buy.  The atrocity stories we were most likely to believe were the ones in which American and South Vietnamese troops were the villains.  And after Vietnam we had the same bitter taste in our mouths–from having been fooled into an expensive, stupid war by a deceptive government’s propaganda–as most Europeans had had after World War I.  When we said “never again,” we meant that we would never again allow ourselves to be made fools of.

 

The second source of our discomfort at committing American resources to rescuing foreign victims of tyranny is a more abstract one, which has an analog in our view of the responsibilities of business corporations.  We used to expect corporations to be “good citizens” of wherever they were located–to support local charities and civic activities and the arts.  Increasingly, corporate boards take the position that a corporation’s primary, or even sole, responsibility is to make money for its shareholders.  If corporate “good citizenship” can be subsumed into the public relations budget as one more way to increase sales, the board will accept it.  But only as one more way to make money for the stockholders.  If the stockholders want to make charitable contributions, they can and should do so individually out of the dividends the corporation provides them.

 

Charity, our business philosophers increasingly believe, not only begins in the individual home, but should end there.  Only the individual has the duty, or the right, to give away his own resources without recompense.  Any aggregate of individuals can legitimately act only for its own–their own–selfish interest.  If John or Jane Doe is concerned about the plight of the Albanians in Kosovo, s/he can contribute to the Red Cross or UNICEF or, presumably, the KLA.  A country as large as the US, with citizens from as many different backgrounds, cannot (in this worldview) properly have a foreign policy at all, except for the purpose of making America safer or richer, a goal we can presumably all agree on. As a result, many of the debates in Congress seem almost perversely directed toward disguising altruistic motivation as some kind of more broadly defined self-interest.

 

But all too often, the alternative is to do the opposite–to disguise self-interest as altruism.  We are always more willing to go to war for the protection of people who have large numbers of compatriots and relatives living–and voting–in this country, or for people who look like us, or live like us, than for the starving dark-skinned strangers in Somalia and Sierra Leone.  Does that mean we should decline to fight for or contribute to the Kosovar Albanians or the Bosnian Muslims because our motives are insufficiently pure?  Or does it mean that we should take the claims of the Somalis, the Sierra Leoneans, and the Haitians more seriously?  Should we demand consistency, insist on defending everybody or nobody?  Or can we continue to make ad hoc judgments for the flimsiest of reasons, because defending somebody is still better than defending nobody?

 

That’s a relatively brief (honest!) statement of the problem. Is there a reasonable and feasible solution?  Ultimately, I think the only possible solution is a real, impartial, effective global police force, whose members and commanders would give up their citizenship in any individual nation, presumably in exchange for some really good employee benefits.  We have been edging closer to such an apparatus throughout this century.  It took World War I to create the League of Nations and World War II to create the UN.  Will it take another world war to create a law enforcement system with compulsory jurisdiction over all governments?    How about an invasion from Mars, against which all the nations of the world could unite and really mean it?  Could some home-grown threat do the same job (an epidemic, for instance)?  How about an ecological crisis, like global warming? 

 

And, once we have a global police force, what methods should it use to do its job?  The tactics of local police are being seriously questioned in this country these days, largely because of some glaring incidents involving brutality and apparently unjustified shootings of unarmed civilians.  If we cannot train our local police to do their jobs with a decent respect for the rights of the people who pay their salaries, what can we expect of a larger force with more powerful weaponry?  There have already been incidents of assault and rape committed by “peacekeeping forces” in many parts of the world.  The core of the problem is Acton’s old axiom: power tends to corrupt, and absolute power corrupts absolutely.  If the only counterweight to the abuse of power within a nation by its government is more power applied from a supra-government, who is to keep that power from being abused?

 

Marx, of course, would be amused but unsurprised at this situation, in which every solution seems to generate a new problem.  Gandhi would view the problems as purely short range; nonviolent techniques, properly applied, he would aver, will eventually prevail. Any deaths suffered in the meantime should be regarded as “acceptable casualties”, just as civilian and military casualties in a war would be, except that the casualties of nonviolent action are likely to be considerably fewer.  And building a community among the conflicting parties after the end of the conflict is likely to be considerably easier.

 

I personally find the Gandhian approach attractive.  But whatever techniques the “community of nations” decides to apply to solve these problems, it is absolutely clear that the ad hoc reactions now in use are at best a waste of resources and lives.  A general conversation needs to begin, among nations and within nations and among and within ethnic and religious groups and other communal organizations, about how intercommunal violence and governmental abuses of human rights can best be controlled.  Every candidate for political office or communal responsibility should be expected to take a serious part in this conversation, and to be answerable to those s/he represents for that participation.  It is up to us as the represented parties to hold them responsible, beginning with the 2000 election in this country.

 

Jane Grey 

Mommas, Don’t Let Your Daughters Grow Up To Be Housewives

August 31, 2008

Once again, the social critics are decrying the dire consequences of the movement of women (and especially mothers) into the paid labor force. Lack of maternal attention causes crime, illiteracy, and unemployability.  Children are growing up without adult supervision or role modeling. Or not growing up at all.

 

We don’t, of course, need the social critics to inform us that no parent can be in two places at the same time, that it is not physically possible for the same person to nurture and educate a child and earn the money to provide for  her.  Most of us can figure that out for ourselves. I share the critics’ concern for the welfare of children who are raised with a patchwork of care from relatives, sitters, schools, and day care centers for most of their waking hours.

 

But the fact is that most employed women work for the same reasons most employed men do–first and foremost they need the money to support the lifestyle they consider proper for their households, and secondarily, they get psychological benefits from the status, socialization, and variety of paid employment, and the independence derived from earning their own incomes.

 

How far should a family be willing to reduce its living standards to keep an adult at home during the children’s waking hours?  Obviously, a single parent has very little choice in the matter, except that provided by the welfare system.  But in a two-parent family, how little should the primary earner be earning before a second earner gets recruited?  Poverty level? Modest but adequate? Neither of these provides for home ownership, private schooling, or college education for the children.  Must we conclude that these are unrealistic dreams for the good family? That the presence of an adult at home can do more to improve the children’s prospects in later life than private schooling and college?  Should we write off the American dream and concentrate on hearth and home?

 

All of these questions are purely economic.  But real people make economic decisions based on a mixture of economic and non-economic motives.  Men work even if they can afford not to, because in our culture a job is as much a requisite of  being a male as pants that zip in the front.  They work because that’s the only way they can have regular contact with other adult males.  But above all, they work because being self-supporting is one of our culture’s most basic moral  values.  We firmly believe that people who are not self-supporting are parasites.  A person who is too young, too old, or too sick to work will be spared our scorn, but may have to endure our pity or patronization instead. Women work for the same sorts of reasons: to have regular contact with other adults, and to feel like responsible, respectable adults.

 

We are very specific in our definition of “self-supporting.”  It involves being paid money for providing services or making goods.  Usually, it involves going outside the home, acquiring a boss, and doing what the boss tells us. Any job that doesn’t include all these elements is at least suspect.  And any work, no matter how arduous, that doesn’t involve making money simply isn’t a job at all. 

 

As a divorce lawyer, ten years ago, I used to see a fair number of women clients whose husbands had forbidden them to seek employment outside the home.   When they got to my office, it was generally because life had now switched the rules on them.  Either their husbands had abandoned them, or they had left their husbands (often impelled by several years of physical abuse.)  With luck, their husbands would be affluent enough and the court generous enough to allow them a few years of “rehabilitative maintenance” (as if having been a homemaker were a crippling injury) during which they could acquire some job skills.  That was Transitional Stage One.

 

Now, apparently, we are in Transitional Stage Two.  It began with the fall of the Public Assistance system into disrepute.  When the system began in the 1930s, under the New Deal, it was a way to support mothers whose husbands had abandoned them or (more often) died, so that they could  stay at home and raise their children in dignity.  These deserving widows were not expected to enter the labor force, partly because their job at home was deemed more important and partly because, like Social Security retirees, they were less of a burden on a depressed economy if they could be kept out of the already overcrowded work force. 

 

But over the succeeding twenty years, a larger proportion of Public Assistance recipients was divorced or even never-married, rather than widowed.  A larger proportion was non-white.  And the labor force was no longer trying to keep people out; on the contrary, it was expanding wildly and eager to take new workers in. So the myth of the welfare mother was born.  Her work at home was not worth public subsidy, if only because (legend had it) she never did any.  Her housekeeping was slovenly, her children ran wild, and she spent her money on booze and fancy clothes and her time on conceiving more children so as to collect more benefits. She was being paid to do nothing and to raise another generation of do-nothings to absorb public money in their turn.

 

The myth of the Welfare Mother was supplemented by the Myth of the Alimony Drone (who collects enormous sums of her ex-husband’s hard-earned money for nothing other than a few years of marital bliss, and does nothing with her time but sit around at poolside and seduce the yard-man), and by the Myth of the Lazy Housewife, who sleeps until noon every day and then spends her afternoons eating chocolates and watching soap operas until the kids (if she has any) come home from school, or her husband comes home from work.  Her house, of course, is filthy, and her children totally undisciplined and unsupervised.

 

I don’t know any women who believe the Lazy Housewife myth (except, in a few cases, about their daughters-in-law.) I don’t know anybody on alimony who believes the Alimony Drone myth.  I don’t know any woman who has ever been on Public Aid who believes the Welfare Mother myth (except, sometimes, about welfare recipients of another race.) But I know very few men who don’t (at some level) believe all of them. They will make exceptions for the women they know and are not currently angry at, rather like Archie Bunker and his Black neighbors.  But even the most well-intentioned man will unthinkingly use expressions like “playing housewife” and “Susie Homemaker.” And I would  certainly never trust any man–even that rapidly decreasing number who have encouraged their wives to stay home with the children–not to invoke the ugly spectre of the Lazy Housewife when his marriage breaks up.

 

In fact, in the last five years of my practice of divorce law, I think I have seen a total of one wife of an American-born man who was forbidden to take a job outside the home during the marriage. (Immigrant men still operate by older standards.) I have long since lost count of the number of women I have represented whose husbands demanded that they take jobs, or who punished them really vindictively for quitting (for even the best possible cause) or being fired or laid off or having to quit for reasons of health or pregnancy. Often, the wife’s unemployment was one of the factors precipitating abuse or divorce.

 

Judges, even the most compassionate, now tend to assume that both parties in a divorce will be working full-time.  They award alimony only for short periods, in small amounts, except to women who are really physically incapable of working.  On one hand this means an employed wife is in no danger of losing custody of her children because of her vocational duties. But on the other hand, it means that the stay-at-home mother will get little or no credit from the court for her choice, and no compensation for its economic consequences.

 

Some full-time homemakers apparently blame Betty Friedan and the feminist movement for this state of affairs.  It is certainly true that one of the basic premises of Friedan’s ground-breaking book The Feminine Mystique is that most of the time,  most full-time middle class suburban homemakers do not actually have enough work to do to constitute a full-time job. The same could be said of a good many government and even corporate employees, of course. It is not, in and of itself, sufficient reason to persecute the underemployed. Friedan was  probably not  saying that it was. Certainly she was writing to an audience of middle-class women with commodious housing, lots of electrical housekeeping appliances and very few children. As a result,  she grossly underestimated how much time it takes a working-class woman to care for an under-equipped and under-sized home and a large brood of children. But the women’s movement in general has been sympathetic to the situation of the full-time homemaker.  There is even a small but vocal segment of the movement which advocates the payment of wages for the work women do in managing their own households and rearing their own children.

 

Most of us, of course, including most stay-at-home mothers, find that idea mind-boggling. At heart even the most pro-domestic-woman of us believes that, while the work involved in raising one’s own children and managing one’s own household may be crucially important and meaningful, it is still not the kind of work that should be paid for.  We have bought into the paradoxical fundamental American idea that certain kinds of work can be essential without being important. They should be done either by people who are not needed as primary wage-earners, or (if done by a primary wage-earner) around the edges and during the breaks in her “real” job. We even have trouble with the idea of providing decent wages and working conditions for those whose paid work consists of taking care of other people’s homes and children.

 

Even those feminists who support wages for housework probably feel pretty much the way most other feminists do–the way I do–about full-time homemaking.  We honor the women who have chosen this way of life.  We will defend to our last breath their right to make this choice.  But there is no way on earth we would advise our own daughters to do the same.  The woman who makes this choice is choosing dependency.  Worse still, she is choosing to depend on something increasingly undependable–the willingness and ability of a man to support an entire household single-handedly.  She cannot even be certain that the man who encourages her to make this choice will continue to be willing or able to pay for it.  She can be quite sure that, even if he does, she will find her self-respect severely undermined by the patronization or scorn of most of the other people she encounters.

 

Earlier, this essay referred to Transitional Stages One and Two.  What are we transiting to? I suspect the next stage will be the near-equalization of the average male and female wage (mainly through the elimination of highly-paid male-dominated jobs in heavy industry.) This will at least end the current double bind many women are in, which encourages a woman to be the family member most likely to take time out of the paid workforce if any member must do so, because her wages will be a smaller loss to the family economy–and then punishes her for doing so by further diminishing her earning capacity and increasing her burdens in the home.

 

It may be too utopian to envision, as the step after that, men becoming as likely as women to leave the labor force to care for their young children. It is certainly utopian, at this point, to hope that employers will be willing to provide sufficiently decent wages and benefits to part-time workers that both parents can spend some waking hours with their families and still support them decently.  At that point it will actually be possible for both parents to care for their children, while neither is wholly dependent on the other, nor on the state.  Surely that is the family we should be striving to form.

 

Red Emma

Bread and Circuses II

August 30, 2008

I get most of my information on this sort of subject matter from Mr. Wired, who has just informed me that the Great Bandwidth Robbery will not only wipe out our current broadcast TV frequencies, but may very well seriously impair FM radio reception.

 

A bit of history here:  Does anybody recall when cell phones first emerged into our technosphere?  Shortly afterward, we discovered that, when not properly tweaked and expensively insulated, they could cause interference with medical gadgetry, most notably cardiac pacemakers.  Got that?  A techno-toy, if not properly manufactured, could conceivably cause death by heart attack to untold numbers of our citizens who depend on another gadget for their very lives!

 

What did Our Leaders do about it?  Did they immediately order the recall of all cell phones, to be retrofitted with whatever it might take to keep them from interfering with lifesaving technology?  Did they mandate that all cell phones bear warning stickers on them, saying something like, “Do not use in medical facilities or around anybody over 60 with whom you are not closely enough acquainted to know they do not have a pacemaker”?  Did they mandate a fix to pacemakers that would automatically cause any cell phone in their vicinity to go dead? 

 

As you probably remember, the answers are no, no, and are you kidding?  That would have caused inconvenience and expense to an emerging but very big business.  Instead, hospitals and doctors’ offices have been protecting themselves from liability, and their patients from physical harm, by posting signs requesting people not to use cell phones on the premises. That’s right, requesting.  Well, since then, cell phones have been refined to the point where most of them probably won’t cause serious interference with either medical technology or airplane gadgetry, but that’s only within the last couple of years.  It was not any kind of urgent priority with the Wireless Conspiracy, but they did eventually get it done, more or less.

 

So, returning to our own day, the FM radio band is located right between Channel 6 and Channel 7 on the television band.  The television band that has just been sold off to the Wireless Conspiracy.  The FM band itself has not been sold.  But the likelihood that all those cell phones in the frequencies immediately around it will interfere with it has not been explored at all.  We don’t know what will happen to FM reception after February 2009.  We will find out the hard way.  And maybe ten years later, the Wireless Conspiracy will come up with a fix.  In the meantime, of course, we will all switch to satellite radio, which generally carries better programming anyway, but is outrageously expensive and about to get more so, now that its two providers are merging.  Ya think the satellite radio folks are in bed with the Wireless guys?  And the FCC is in bed with all of them?  Naaaaaaaaaaaaaah.

 

Red Emma