Archive for June, 2009

The Anti-Ugly League

June 26, 2009

When I was in college, I dated a city planning major for a while.  A friend of his belonged to the Anti-Ugly League, an organization that had its roots in Britain but apparently was trying to branch out into the US.  Its mission was to oppose ugly architecture, either by written criticism or by public demonstrations such as picketing with signs like “This is an ugly building”, and occasionally by throwing eggs at really ugly buildings.  I just tried googling the league, with no results, so I can only conclude that they have shut down, probably because there was just too much work for them to handle.  Apparently, in the UK, Prince Charles has taken over some of their job, as we see from That Other Blog.  The Chicago chapter, if we ever get around to starting one, should probably be called Friends of Donald Delgade (he being the hapless ex-mental patient who drove his car through the glass walls of the Thompson Center in Chicago in 1999).

Since my brief relationship with The Planner, I have become something of an architecture buff myself. In college in the early 1960s, I audited a course on the history of American architecture.  A few years later, Mr. Wired and I moved to Chicago, where roughly half of the buildings we had covered in the course had been built. By that time, half of those had already been torn down.  But I actually had the privilege of working in two of those remaining, and living two blocks away from a Frank Lloyd Wright house.  (And my favorite niece is an architect.)

All of which has led me to a top-of-the-head classification scheme for architecture. There are Great Buildings, and there are Good Buildings.  Great Buildings are impressive to look at from the outside, in isolation or in their geographical and architectural setting. What they mainly impress the viewer with is the importance and greatness of whoever or whatever commissioned the building in question.  The ultimate Great Building is the Great Pyramid of Giza.  Note that the Great Pyramid is not only not intended to be lived in, it is in fact a tomb.  Once completed, it was never intended to be seen from the inside at all.

As opposed to Good Buildings, which are judged by how well they suit the people and entities that live and work inside them.  The major architectural thinkers tend to do most of their thinking about Great Buildings, perhaps because it’s easier and pays better.  By definition, after all, anyone who can commission a Great Building can afford to pay for it.  Whereas most of the people who will be living and working inside buildings can’t.  Working out ways to get Good Buildings paid for is a major economic discipline in itself.

Some horrible examples of attempted Great Buildings at their worst: in the 1950s and 1960s, it became fashionable to construct major public buildings, such as schools and colleges, out of poured concrete, and with flat roofs.  I had the misfortune to teach in several such buildings in Chicago.  Over the following 20 years, all of them developed leaks and cracks, and ultimately crumbled.  All of them have since been either completely rehabbed, or abandoned and rebuilt altogether.  The big secrets are that (a) flat roofs don’t work in cold wet climates like Chicago’s: moisture doesn’t pour off them as it does from pitched roofs, so, since it has to go somewhere, it is likely to end up inside the building, or worse still, inside the walls of the building (flat roofs are for deserts!); and (b) poured concrete is susceptible to expansion and shrinkage in extremes of temperature, which sooner or later leads to cracking, leaking, and crumbling.  As an added bonus, buildings that leak sooner or later attract mold spores and become medically dangerous to those who live and work in them.

(In a supreme irony, one of the law students who worked for me a few years ago had been an undergrad architecture major at the Illinois Institute of Technology, which prides itself on most of its buildings—sorry, Great Buildings—having been designed by Mies van der Rohe: flat-roofed glass boxes.  My student’s opinion of van der Rohe was seriously impacted by the fact that one of those buildings, in which he had his studio, leaked all over a major set of his drawings and nearly cost him his degree.)

Even the Great Buildings I really like, such as Frank Lloyd Wright’s stuff, have the same problems.  Fallingwater is, well, falling.  The Robie House (two blocks from where I live) is in a constant state of rehabbing.  One would think that the first requirement for Greatness in a building would be that it stay up without undue effort and keep out the weather.

Architecture, in addition to aspiring either to Greatness or Goodness, has politics.  Totalitarian architecture, like that sponsored by Hitler and Stalin, is readily distinguishable from democratic architecture like the Acropolis.  The easiest way to tell the difference is to look for doors and windows and other “envelope penetrations,” as engineers now call them.  The Acropolis has them all over the place. Admittedly it is situated in a mostly warm and dry climate.  But fascist architecture—regardless of climate—has as few as possible, often no windows at all, or no street-level windows, and only one or at most two doors.  This enables The Authorities to monitor and control who comes in and goes out.

You may have noticed an increasing proliferation of fascist buildings, both public and private, especially since 9/11.  Older buildings have for many years been caught in the crossfire of a war between architects (who try to design buildings with doors in all the most convenient places for ingress and egress) and administrators (who then lock all but the single least convenient one).  Additionally, in the years since the time of Hitler and Stalin, more and more public buildings have been erected with their only entrances in places other than on a pedestrian-accessible street—one floor up from an entrance kiosk, down in a parking garage, or around the corner in a parking lot. And that was before metal detectors.

It’s easy to talk about Great Buildings and their drawbacks.  It’s harder to define Good Buildings, much less analyze them.  Christopher Alexander has taken some useful stabs at it.  Stewart Brand, in How Buildings Learn, talks about the ways a building can change, for better or for worse, throughout its lifetime. In the process, he says some useful things about how Good Buildings change for the better.  Aside from that, not being the Prince of Wales, I haven’t had the time to construct a reading list on the subject, much less read everything on it.  But we should be encouraging underemployed royals, and anybody else with the time and the inclination, to think about this stuff seriously, since there isn’t enough money in it for the rest of us to do it for a living.  Buildings shape the lives of the people who live and work in and around them, just as they are shaped by those people.

Jane Grey

Health Care Revisited Once Again

June 16, 2009

What Am I Missing This Time Dept.

The Prez came back to Chicago yesterday to present his health care program to a major set of stakeholders.  What he accomplished, aside from tying up traffic long enough to make me 20 minutes late for a client appointment, remains to be seen.  But some interesting observations come to mind:

  • NPR, at least, now feels it has to explain to its listening public that the American Medical Association is “a major group of doctors.”  This probably reflects the fact that the AMA, which was once a universally known name brand, now represents no more than 19% of practicing physicians.  Do other MSM outlets feel the same need to explain who the AMA is?
  • The docs are worried about unfair competition from a government medical plan.  I find this baffling.  Aren’t most of them the same guys who believe that government can’t do anything right, and that government medical plans (most notably those in Canada and the UK) are a total disaster?  Why on earth would the savvy consumers in the American public choose a total disaster over the sunny vistas of today’s health care system?
  • Obama is talking about cutting Medicare costs, at the same time that he is touting Medicare as the model for the government option plan.  This also doesn’t exactly make sense.  Most people I know who have Medicare are more or less happy with it, but docs generally think it wildly undercompensates them.  OTOH, that was not a part of yesterday’s discussion at the AMA convention.

CynThesis

Required Reading

June 15, 2009

The New American Militarism

Andrew J. Bacevich

Thirty-plus years ago, I sat in somebody else’s suburban living room and heard Daniel Ellsberg say that we weren’t on the wrong side in Vietnam, we were the wrong side.  At the time I thought it was hyperbole, though I found a lot of the other things he said that night very persuasive.  Like “if every American who was against the war had been willing to lose his job to stop it, it would have been over long since.”

Now I have found myself reviewing a lot of what Ellsberg said then.  I just finished Andrew Bacevich’s book, The New American Militarism, and it puts a lot of things into a different light.  It was written in 2005, two years before the author’s own son was killed in action in Iraq.  Bacevich has been both professional soldier and academic, and is now a Gold Star father.  This impressive life has resulted in several impressive books.

Bacevich is a historian, and he starts the story of American expansionist militarism where it pretty much began, with Woodrow Wilson, who got elected to keep us out of World War I and ended by dragging us into it (sound familiar?), and then into a peace that almost inevitably led to World War II, all to “make the world safe for democracy.”  (Bacevich neglects to mention that the kind of democracy Wilson had in mind had no place for citizens with darker skins than his own; among his other dubious achievements, Wilson re-segregated Washington DC.)

Bacevich goes on to describe the oscillating fortunes of American militarism through the 20th century and into the 21st.  After World War I, the military establishment shrank back almost to its 19th-century size, as the Depression and the mistreatment of World War I veterans soured the public on foreign wars.  With the exception of more-or-less illegal leftist participation in the Spanish Civil War, that sourness lasted until Pearl Harbor, when the military sprang back with a vengeance.

Bacevich, like many revisionist historians on all sides, has taken to re-numbering the World Wars. After World War II came the Cold War, which he prefers to call World War III.  Its early years were both expansionist and beneficent.  It kept communism out of Western Europe with the cornucopia of the Marshall Plan and the shield of several hundred thousand American soldiers on bases all through the “free world.” (This was when, in keeping with this idealistic mindset, the War Department became the Defense Department.) In Asia, Africa, and the Middle East, it didn’t do so well.

Which brings us to Vietnam.  Back in 1962, when most Americans didn’t even know where Vietnam was, the upper reaches of the Kennedy administration were the scene of a great debate pitting deterrence/massive retaliation/nukes against counterinsurgency. (I was a distant witness of that debate, in the Stuart Hughes vs. Ted Kennedy senatorial campaign in Massachusetts.)  In Vietnam, the counterinsurgency buffs won out.  That was where The Ugly American came from—Lederer and Burdick’s novel about the good-hearted American trying to win the hearts and minds of the Vietnamese, and save Vietnam from the evil communists.  And of course, the counterinsurgency buffs failed, either because theirs was the wrong strategy, or because they lacked the courage of their convictions in implementing it.

Present-day analysts of that war like to find ways of blaming it for all our current problems, from all possible sides.  Did we lose because we were wrong to be there in the first place?  Or were we wrong to be there because in the end we lost?  The orthodox military historians consider the American defeat the result of political interference in the military’s business.  So, of course, did Rambo.  Bacevich points out that the original American ideal was civilian (i.e. political) control of the military.  The civilians (politicians) were to set forth the goals and the military would then supply the means.  But that relationship has always been an uneasy one, especially since Americans have a habit of electing military leaders to civilian political office, and furthermore don’t much like civilian politicians. After Vietnam, it broke down completely for a while. The American civilian public repudiated the military leaders who had organized the war and the grunts who had fought it.  (Bacevich doesn’t mention this, and may well not have known it, but for the first ten years after the Vietnam War ended, the only American civilians who gave a flaming damn for the welfare of Vietnam veterans were all in the peace movement.)

[Sidenote: I don’t mean to diminish the value of Bacevich’s work, especially since so far, this is the only book of his that I’ve read.  If I do him an injustice when I point out things he doesn’t mention in it, I apologize deeply, because in general this book knocks my socks off.]

The Cold War/WWIII ended with the disintegration of the Soviet Union.  It was popularly considered a victory for “our side.”  It might more accurately be viewed as the culmination of a potlatch, that fascinating institution of the Pacific Northwest Indians, in which a person or a group gains power, status, and dominance by winning a contest to see who can destroy or give away more of what he values most.

The Vietnam debacle ultimately gave rise to the Powell Doctrine, enunciated first by one of the younger graduates of that school of hard knocks: we don’t enter a war except to protect America’s vital interests; the war must have concrete, achievable objectives; it must have the full support of the American people; it must have an exit strategy set up at the very beginning; and we must approach the task with “overwhelming force”—not merely sufficient, but preponderant.

The First Gulf War was the model for this doctrine (and the opening salvo of what Bacevich calls World War IV).  Indeed, the First Gulf War, in a few short months, completely rehabilitated the reputation of the American military and of American militarism.  It was short, cheap (in both casualties and finances—Bacevich doesn’t mention that one of the reasons everybody liked it was that we fought it mostly on other people’s money), popular (at home and abroad, which is how we managed to get other people to pay for it), and effective.  It was even preceded by a stirring and impressive congressional debate, probably the most serious public discussion of Article One, Section Eight of the Constitution in more than fifty years.

But it was the very opposite of the Wilsonian ideal.  The American army stopped well short of Baghdad and left Saddam Hussein in power.  The United States paid minimal honor to our promises to the Kurds and the Shiites, who had relied on us when they rose up against Saddam; to protect them, we created a batch of no-fly zones, policed regularly from the air.  And the international community imposed economic sanctions on Iraq which reduced it from its previous highly-industrialized status to part of the Third World. We changed Iraq, but not by democratizing it.

And it was followed by what Bacevich portrays as a perfect storm of neoconservative politics, newly-politicized evangelical religion, a newly-professionalized officer corps, and a “crusade theory of warfare.”  It was no longer enough to set limited military goals and accomplish them.  “Containment” was once again a dirty word.  The United States, in this view, had been divinely chosen to rule and impose its values on the world.

We all know what happened next. First came 9/11.  Conspiracy wackos like to think it was the work of either the Elders of Zion or the CIA.  What matters is that, if Osama bin Laden hadn’t set up 9/11, the Bush government would have had to, to accomplish its own ends.  (If the Reichstag fire had been caused by improper use of smoking materials, who would know the difference today?)

At first, Bush responded more or less appropriately, by dropping bombs on the region from which Al Qaeda had plotted the attack.  But then, he turned his glance back on Iraq.  And at first, even that seemed to follow the Powell Doctrine.  The troops went straight to Baghdad, wiped out most of the Iraqi army, and floated the “Mission Accomplished” banner.

A peripheral note here on karma: during the First Gulf War, Saddam decided to pull the rest of the Arab world into the war on his side by dropping some bombs on Israel.  Israel, of course, was in no way a party to the war on either side.  The US had asked them to stay out, and they complied.  But Saddam figured, logically enough for an Arab politician, that bombing Israel for no reason whatever was an activity all the other Arab governments would want to get in on.  It didn’t work, partly because too many Arab governments worried that Saddam did not play nicely with others, and that his Arab “allies” might end up meeting the same fate as Kuwait.  But Saddam himself became the victim of precisely the same kind of maneuver from Bush several years later—Bush decided that, if he couldn’t count on overthrowing Osama bin Laden, he could at least reconstitute the old “coalition of the willing” by overthrowing their old adversary, who had in no way been a party to 9/11.  That didn’t work either, except on the UK and a few representatives of “the new Europe.” But one has to admire the symmetry of what happened to Saddam.

Bacevich ends with a sheaf of recommendations for amending our national life that include restoring the primacy of the legislative branch in warmaking decisions, restoring the ideal of the “citizen soldier” by attaching the promise of a free college education to national service, pulling US military bases out of those parts of the world long since capable of defending themselves, giving the State Department the budget and teeth to make realistic foreign policy, and setting realistic limits on the military budget.  It’s a breathtaking panorama, and at a recent book-signing at the bookstore down the block from my home, Bacevich seemed to acknowledge that the Obama administration was no closer to implementing it than Bush had been, not yet anyway.  Conventional wisdom calls Bacevich a paleoconservative.  He may in fact be preaching that old-time political religion established by the Framers.  One hopes that the new administration is paying serious attention to it.

CynThesis

Who’s Flying Your Plane?

June 10, 2009

That’s today’s headline in the local paper. It’s about, of course, the commuter plane crash near Buffalo, a few months back, in which all passengers and crew died. The crew included two pilots with minimal experience and low pay, lousy test scores, long commutes, and almost no chance to rest between flights. Apparently many small commuter airlines have similar staffing problems. They pay their starting pilots between $16,000 and $30,000 per year. The airlines in question piously hope that publicizing this kind of information won’t make the public reluctant to fly small airlines, whose staffing is just as good as that of the rest of the industry.

Right. Just as good as Sully the Miracle Man, who, during the same period, with all engines stopped by bird strikes, managed to save all passengers and crew aboard his plane by landing it in the middle of the Hudson River at rush hour in New York. He, of course, has been flying for U.S. Airways or its predecessors for nearly 30 years, plus military flying service. We don’t know his income, but we can reasonably assume it’s well into the six figures. He obviously deserves every penny of it. But an airline spokesman says there is no connection between pilot pay and flight safety. Yeah, right.

Which raises a question. Most of us fly at most a couple of times a month, and more typically a couple of times a year. While the professionalism of our pilots on those occasions is an essential concern, it isn’t a constant concern. Unlike, say, the question of who’s caring for your toddlers, or your parents, or your disabled family member.

According to the Service Employees International Union, the average home health care worker earns between 6 and 8 dollars an hour, rarely works a full week of 40 hours, and gets no benefits whatever. And no, these are not teenagers working their way up to better things; most of them are over 45, and many are over 65. For them, this is as good as it gets. Many of them have disabilities of their own, which they cannot afford to attend to.

While a pilot is responsible for a lot more lives, s/he also shares that responsibility with a co-pilot and an engineer. Even the cheapest of the regional airlines examined by the Chicago Trib pays $78 per hour in training and salary per crew member for its flight crews, or roughly $250 per hour total. That’s well over 30 times the hourly wage of a home health care worker, who probably cares for three or four clients over a week. If one of those clients is a member of your family, are you sure this makes sense?
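For readers who want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python. The three-person crew and the rounded $250 figure come from the paragraph above, and the $6-8 range from the SEIU numbers, so treat it as an illustration rather than industry data:

    # Rough check of the pay figures cited above (assumptions: a
    # three-person crew at $78/hour each; SEIU's $6-8/hour range
    # for home health care workers).
    crew_cost = 78 * 3       # $234/hour, "roughly $250" as rounded above
    low_ratio = 250 / 8      # about 31x, at the top of the home-care range
    high_ratio = 250 / 6     # about 42x, at the bottom of that range
    print(f"crew: ~${crew_cost}/hr; {low_ratio:.0f}x to {high_ratio:.0f}x a home care wage")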

Let’s get back to the issue of connection between pay and safety, either in the cockpit or behind a wheelchair. The main reason workers get paid at all is to enable them to maintain, day to day, their own ability to work. If they don’t get paid enough to maintain stable housing (note that an increasing proportion of homeless people have jobs, and that one of the pilots in the crashed Colgan flight had spent the previous night on a couch in the staff lounge), that will be reflected in the quality of their work.

The other reason workers get paid, of course, is to motivate them to show up and do their jobs competently. Most economic historians have concluded that antebellum slavery in the American South, lacking this motivation for its workers, was grossly inefficient and might well have died out on its own in a few decades, had the Civil War not intervened.

Unlike the airline industry, the home health care “industry” lacks any governmental statistical oversight. So we don’t really know much about the risks to client health and safety caused by poorly trained, underpaid, overworked home health care workers. But while you’re on the ground, gentle reader, you should have time to stop worrying about whether your pilot has been properly trained, housed, and rested. Why not use that time to worry about whether the person who takes care of your mother-in-law, or your nephew, or who will someday be taking care of you, is able to do the job safely?

Red Emma

Why Natural Law is a Bait and Switch

June 7, 2009

Mr. Wired and I have had these discussions over many years, but this seems a good time to export them into the blogosphere.  I’m a lawyer, he’s not, but he is a law buff, and also a computer maven with a very logical mind.  He looks at law as a tripod upon which any complex society must rest:

Leg #1:  the basic legal code.  This is the set of rules without which no group can function for more than a few years without falling apart.  You know them as well as anyone:  don’t murder, don’t steal, don’t rape, all of that good stuff.

Note that these words are terms of art with definitions that depend on the social context. “Basic” does not mean “too obvious to need explanation.”

Thus,  “murder” is not coextensive with “kill.”  In different societies, some kinds of homicide are permitted and other kinds are forbidden.  A forbidden homicide is a murder. In most societies today, killing in war, self-defense, and defense of others is not murder.

Similarly, “steal” is not coextensive with “take.”  Different societies have different concepts of what can be owned and therefore what can be stolen.  The difficult relationship between the Native Americans and the European colonists resulted from the fact that, in many Native American cultures, land could not be owned, or stolen.  In the pre-Civil War South, a runaway slave was stealing himself from his master/owner.  Obviously that is no longer part of our basic legal code.

And “rape” is not coextensive with “non-consensual sexual intercourse.”  In most advanced countries, even consensual sexual intercourse, if it involves a minor  (how old is a minor? Depends on the social context.  In the US it varies by state), is rape; on the other hand, until very recently, a wife whose husband had intercourse with her against her will was not considered the victim of a rape.  You get the idea.

The point is, a group has to agree on at least the basic principles in order to survive as a group at all.  This means that they have to agree both on not murdering/stealing/raping, and on the definition of murder/theft/rape in the context of their group.

Leg #2:  the ethical code—these are the refinements to the basic legal code that make social functioning easier and smoother.  For instance, if you damage somebody (even unintentionally), you should compensate them enough to restore them to the position they were in before the damage, or as close to it as possible.  There may be many ways to accomplish this goal, and different societies may choose different means.  In general, there is likely to be one set of ethical mechanisms per social group, even though the group may recognize that there are other ways to accomplish the same goal, in other groups.  For instance, the common law legal system presumes that such compensation is made with money damages.  In other societies, restitution may actually involve physical objects or even physical labor.  But the common purpose is easily discernible.

Leg #3:  (and this is the really tricky one) the moral code.  These are generally formulated by religious or sub-cultural groups within a larger society.  So that society is likely to have multiple moral codes, which can and often do conflict.  Phylogenetically, each one probably arose as a basic legal code in a smaller and less complex society.  As those societies got amalgamated into larger states and regions, they had to figure out how to coexist with their neighbors.  “Who is my neighbor?” became the basic question long before the Gospel of Luke raised it.

Thus, observant Muslims do not drink alcohol.  Can they forbid their non-Muslim neighbors from making, selling, and drinking booze?  Observant Jews do not work on Saturdays.  Can they require their Christian neighbors to shutter their businesses on that day?  Observant Catholics who have been civilly divorced cannot remarry.  Can they impose the same strictures on their Protestant neighbors?

Note that this system is not a hierarchy.  All three of these legs of the tripod are necessary, and equally necessary, to civilized life within a 21st-century culture.  Trying to figure out which heading a particular rule or law or regulation belongs under is a great college-bull-session or cocktail party game with occasionally useful results.  But right now, the impact of morality on law and ethics is the most salient and dangerous aspect of this system.

There are actually two sets of questions. First, how can a multi-cultural or multi-religious polity decide whose morality (if any) to enforce on all of its citizens?  The First Amendment of the US Constitution says, essentially, it can’t, and shouldn’t try.  The Supreme Court has done an end run around this rubric, by allowing certain provisions of moral codes to reinvent themselves as ethical codes.  Thus, it has ruled repeatedly that local jurisdictions can prohibit or restrict businesses from operating on Sunday, not because Sunday is the Christian day of rest (that, obviously, is a moral code), but because everybody should be able to get one day a week off from work, and it’s more convenient for everybody if we use the one that is common to the largest number of people.  That’s an ethical rule. Therefore, says the Supreme Court, a civil society can impose it on everybody.  So far, the Supremes have felt free to ignore the collateral damage done to members of groups that are obliged to refrain from working on some other day too, and therefore have to compete with workers and businesses who can put in one more day of work per week.  At the moment, in most of the US, it doesn’t matter too much.  Most employees get two days off per week, and most localities have given up on Sunday closings anyway.

And second, how does a religious or cultural group, governed by its own basic legal code, become part of a larger polity and allow its code to become one of many group moralities within that polity?  There are scholars who claim that Islam, with a few isolated exceptions, has never made that transition at all.  Certainly there are schools of Islamic law that forbid a Muslim to live in a state ruled by non-Muslims.  There are also orthodox Hindus who claim that any Hindu who leaves India automatically becomes unclean.  On the other hand, the Bible (both the Hebrew and the Greek scriptures) presumes that its protagonists live in a world full of Others.  Much of its legislation deals (often in contradictory ways) with proper relations with those outsiders.  Arguably, that means that most of our models for manageable relations between basic legal codes and subgroup moralities come from Christian and Jewish history.  Which leaves Muslims free to complain that “we” are imposing “our” legal models on “them.”

You may reasonably ask where the “natural law” of the title comes in.  The phrase turns up in most philosophical writing beginning as far back as classical Greece.  We Americans are most familiar with it in the Declaration of Independence, where Jefferson alludes to “the laws of Nature and of Nature’s God.”  But probably the other place it turns up most often these days is in Catholic theology.  It is the purported source of the Catholic prohibition on birth control, and on abortion.  Natural law is another way of saying “everybody knows.”  As Mark Twain pointed out long since, much of what “everybody knows” isn’t true.  But as long as the Catholics can claim that their version of natural law is obvious to everybody, including non-Catholics and even non-Christians, then they can also argue that everybody is bound by it.  It is not merely the moral code of a religious subgroup, but governs all of humankind.  They came to this conclusion back when anything derived from Aristotle (like the notion of natural law) was believed to be universally applicable.  In fairness, it was easier to believe this back in the late Middle Ages, when Aristotelian texts found their way into European universities by way of Jewish and Muslim translations and commentaries.

The battle over birth control has subsided to a minor skirmish over the last 40 years. But the battle over abortion, as Dr. Tiller’s murder at the door of his own church tragically underlines, is still raging.  Equating abortion with murder works only if you equate the fetus with human life.  But what kind of decision is that?  It isn’t a scientific decision. A scientist can tell you what stage of life a fetus is in at pretty much any given moment, and can tell you that its genome is human. That’s not the same as being able to tell you whether, or when, it is a “human life” with all of the rights and obligations included in that definition.  That’s a political decision, and varies from one religion or sub-culture to another.  But a scientist can certainly tell you, without a moment’s hesitation, that an acorn is not an oak, and an egg—even a fertilized egg—is not a chicken.  Anyone who tries to define an acorn as an oak, or an egg as a chicken, is operating on some agenda other than science.

Hardline feminists have concluded that the “pro-life” agenda is mostly about oppressing women.  Certainly that is one consequence of enforcing anti-abortion moral rules as if they were part of the basic legal code. There are plenty of other ways to look at the pro-birth program.  Arguably, it’s anti-sex, with all the psychological and artistic corollaries of that lifestyle.  And pro-overpopulation, with all of the economic and ecological consequences that entails. And, judging from what happened in Romania during Nicolae Ceauşescu’s reign, when the laws forbidding contraception and abortion were very strictly enforced, but the economy made child-rearing prohibitively expensive for most working people, it is anti-child.  Most of the children produced by this evil confluence of political and economic forces ended up vegetating and ultimately dying in orphanages.

An increasing number of liberals, especially among religious thinkers, are trying to raise the anti-abortion rules to the level of an ethical code.  President Obama seems to take this view.  Reduce the demand for abortions, by reducing unintended pregnancies and by making childrearing less costly to working parents, he suggests.  I might even consider going further, turning the ethical approach into a basic legal one:  thou shalt not unduly burden the parents of the next generation, if thou art at all serious about having one.  Give it a thought.

CynThesis