According to the Richard Dawkins Foundation for Reason & Science, some brilliant new scientific research has demolished the Christian Right and the Creationists.
The Christian right’s obsessive hatred of Darwin is a wonder to behold, but it could someday be rivaled by the hatred of someone you’ve probably never even heard of. Darwin earned their hatred because he explained the evolution of life in a way that doesn’t require the hand of God. Darwin didn’t exclude God, of course, though many creationists seem incapable of grasping this point. But he didn’t require God, either, and that was enough to drive some people mad.
Hatred is perhaps too strong a word to apply to many Christians’ feelings about Charles Darwin, but many Christians certainly do not approve of his theory, judging it to be a direct attack on their faith. They shouldn’t feel that way. Darwin’s theory of evolution is an attempt to explain the development and adaptation of organisms to their environment. Like every other scientific hypothesis, evolution has nothing to say about any deities. Questions about the existence of God belong to the realm of metaphysics, not physics. To say evolution dispenses with the hand of God is a metaphysical rather than a scientific statement. One might just as well say that Newton’s theory of gravity or Einstein’s theory of relativity dispenses with the hand of God. An atheist may believe this, but a theist would see the hand of God behind evolution or gravity.
Darwin also didn’t have anything to say about how life got started in the first place — which still leaves a mighty big role for God to play, for those who are so inclined. But that could be about to change, and things could get a whole lot worse for creationists because of Jeremy England, a young MIT professor who’s proposed a theory, based in thermodynamics, showing that the emergence of life was not accidental, but necessary. “[U]nder certain conditions, matter inexorably acquires the key physical attribute associated with life,” he was quoted as saying in an article in Quanta magazine early in 2014, that’s since been republished by Scientific American and, more recently, by Business Insider. In essence, he’s saying, life itself evolved out of simpler non-living systems.
I hope that there are not many Christians who would make the argument that because we do not know, at present, precisely what natural processes were responsible for the beginning of life on Earth, no natural process could have begun life and therefore life had to have a supernatural origin. Attributing to divine intervention things that we do not understand is called the God of the gaps argument. God is held to be active in areas scientific research has not yet penetrated. This is a very bad argument because the gaps are always shrinking. It also does not give God enough credit. God is not active just in matters that we cannot explain, but is present and active in the whole world. The observations we make about the motions and relationships of the objects in the universe, which we call natural laws, are all ultimately manifestations of the divine will. The hand of God is everywhere and there are no gaps in His providence.
For this reason, I have never been very impressed with the argument that the origin of life on Earth is so statistically unlikely that only divine intervention could explain it. When God created the universe He also created the natural laws by which the universe operates. If God wanted life in the universe, why would He design it in such a way that the formation of life would be very unlikely, even impossible? It seems to me that the idea that God had to step in to correct the natural course of events makes for a rather clumsy and bumbling God. I believe, rather, that God specifically designed the universe to make the formation of life not just possible but likely and even inevitable. Thus, I do not see Jeremy England’s hypothesis, for it cannot yet rise to the status of a theory, as any particular challenge to my faith, but as a sort of confirmation of how I believe God interacts with the natural world, provided that the hypothesis is found to be supported by data and research.
Peter Beinart defends President Obama’s use of the term violent extremism rather than Islamic terrorism in an article in The Atlantic. I think he makes a few good points, but he misses the real reason Obama’s refusal to name the source of the problem is a problem.
Sometimes we overlook the obvious. For weeks now, pundits and politicians have been raging over President Obama’s insistence that America is fighting “violent extremism” rather than “radical Islam.” Rudy Giuliani calls the president’s refusal to utter the ‘I’ word “cowardice.” The president’s backers defend it as a savvy refusal to give ISIS the religious war it desperately wants. But, for the most part, both sides agree that when Obama says “violent extremists” he actually means “violent Muslim extremists.” After all, my Atlantic colleague David Frum argues, “The Obama people, not being idiots, understand very well that international terrorism possesses an overwhelmingly Muslim character.”
For Obama’s critics, and even some of his defenders, this is the president being “politically correct,” straining to prove that terrorists, and their victims, hail from every group and creed in order to avoid stigmatizing Muslims. But the president’s survey is fairly representative. Peruse the FBI’s database of terrorist attacks in the United States between 1980 and 2005 and you’ll see that radical Muslims account for a small percentage of them. Many more were committed by radical environmentalists, right-wing extremists, and Puerto Rican nationalists. To be sure, Muslims account for some of the most deadly incidents: the 1993 attack on the World Trade Center, Egyptian immigrant Hesham Mohamed Ali Hedayat’s shooting spree at the El Al counter at LAX in 2002, and of course 9/11. But non-Muslims account (or at least appear to account) for some biggies too: the Unabomber, the Oklahoma City bombing, the explosions at the 1996 Olympics in Atlanta, and the 2001 anthrax attacks.
If you look more recently, the story is much the same. Between 2006 and 2013, the University of Maryland’s Global Terrorism Database (GTD) logged 14 terrorist incidents in the United States in which at least one person died. Of these, Muslims committed four: a 2006 attack on the Jewish Federation of Greater Seattle, a 2009 assault on a Little Rock recruiting station, the 2009 Fort Hood shooting, and the 2013 Boston Marathon attack (which the GTD counts as four separate incidents but I count as only one). Non-Muslims committed 10, including an attack on a Unitarian church in Knoxville in 2008, the murder of abortion doctor George Tiller in Wichita in 2009, the flying of a private plane into an IRS building in Austin in 2010, and the attack on the Sikh temple that same year.
Not all European terrorists are Muslim either. According to the Center for American Progress’s analysis of data from Europol, the European Union’s equivalent of the FBI, less than 2 percent of terrorist attacks in the EU between 2009 and 2013 were religiously inspired. Separatist or ultra-nationalist groups committed the majority of the violent acts. Of course, jihadists have perpetrated some of the most horrific attacks in Europe in recent memory: the 2004 Madrid train bombings, the 2005 attacks in the London subway, and, of course, last month’s murders at Charlie Hebdo and Hypercacher. But there have been gruesome attacks by non-Muslims too. Right-wing extremist Anders Behring Breivik’s 2011 assault on a summer camp near Oslo, for instance, killed far more people than the recent, awful attacks in France.
Why does this matter? Because the U.S. government has finite resources. If you assume, as conservatives tend to, that the only significant terrorist threat America faces comes from people with names like Mohammed and Ibrahim, then that’s where you’ll devote your time and money. If, on the other hand, you recognize that environmental lunatics and right-wing militia types kill Americans for political reasons too, you’ll spread the money around.
We’ve already seen the consequences of a disproportionate focus on jihadist terrorism. After 9/11, the Bush administration so dramatically shifted homeland-security resources toward stopping al-Qaeda that it left FEMA hideously unprepared to deal with an attack from Mother Nature, in the form of Hurricane Katrina. The Obama administration is wise to avoid that kind of overly narrow focus today. Of course it’s important to stop the next Nidal Malik Hasan or Dzhokhar Tsarnaev. But it’s also important to stop the next Timothy McVeigh or Wade Michael Page. And by calling the threat “violent extremism” rather than “radical Islam,” Obama tells the bureaucracy to work on that too.
Instead of assuming that these threats are the same, we should be debating the relative danger of each. By using “violent extremism” rather than “radical Islam,” Obama is staking out a position in that argument. It’s a position with which reasonable people can disagree. But cowardice has nothing to do with it.
I think that Mr. Beinart is correct in saying that it would be unwise to concentrate on the threat from Islamic radicals to the exclusion of any other potential threat. There are many sources of danger in the world, both natural and man-made, and it is prudent to maintain at least some vigilance in as many ways as possible. I think, however, that he does not understand that the terrorist threat from radical Islam is greater than that from any other source, either foreign or domestic. Beinart concedes that the attacks from Islamic terrorists, while fewer in overall numbers, have been more deadly, but the greater danger is not because attacks by violent Muslims tend to kill more people.
Timothy McVeigh, Anders Breivik, the Unabomber, and others like them were demented loners. While their actions were dangerous and deadly, they acted alone or with one or two accomplices. They had no large network of supporters to give them aid and no one applauded their actions. The environmentalist and right-wing terrorists Beinart mentioned are very much isolated and marginalized, even among supporters of the causes they espouse. While there may be some few people who approve of their violent actions, the number of people willing to give any sort of material support is very low. These sorts of demented loners and extremist splinter cells can be handled by law enforcement.
Islamic terrorists such as the late and unlamented Osama bin Laden and the Islamic State are not demented loners or small groups of isolated extremists, and we practice a dangerous self-delusion if we believe that they play as insignificant a role in the Islamic world as Earth First! does in the West. These militants are not a small group of extremists who have perverted a peaceful religion. Their actions and ideology are far closer to the mainstream of Islam than our political leaders are willing to admit.
Consider the numbers. There are something like 1.6 billion Muslims in the world. If only one percent are willing to give at least moral support to terrorists, that is 16 million supporters. If only one percent of that number is willing to support the cause materially, then there are 160,000 people in the world willing to help with acts of terrorism against the West. There are not hundreds of thousands of people willing to actually commit acts of terrorism; even most Muslims who think that such acts are justified would rather live their lives in peace, but this should suggest the size of any potential base of support an Islamic terrorist group might be able to exploit. This is a base far greater than that of any other cause a terrorist might support. Law enforcement is not enough to handle this problem. We must be willing to admit that we are at war. They certainly believe that they are at war with us and, unlike us, they are fighting to win, while we do not even want to name the enemy.
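The arithmetic behind this back-of-the-envelope estimate is easy to check. A minimal sketch follows; note that the one-percent fractions are illustrative assumptions for the sake of argument, not measured data:

```python
# Back-of-the-envelope version of the estimate in the text.
# The one-percent fractions are illustrative assumptions, not data.
muslim_population = 1_600_000_000

moral_supporters = muslim_population // 100    # 1% offering at least moral support
material_supporters = moral_supporters // 100  # 1% of those offering material support

print(f"{moral_supporters:,}")     # 16,000,000
print(f"{material_supporters:,}")  # 160,000
```

Integer division is used here only to keep the round numbers of the original estimate; the point is the order of magnitude, not the precision.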
I do not want to suggest that military action is the only, or even the best, option for dealing with the problem of radical Islam. I do not know what the best option is, but I have a feeling that it will require a variety of approaches including military action, law enforcement, diplomacy, and others, just as we used a wide variety of tactics to bring down the Soviet Union. But first we have to admit to ourselves the nature of the threat we face. We cannot defeat an enemy we make no effort to understand.
It is a commonly held viewpoint in our times that history moves in only one direction, from the benighted past to the enlightened present. This viewpoint is justified in the fields of science and technology. We obviously have much greater knowledge of the natural world and far better tools and machines than our ancestors could have dreamed of. This progressive view of history is less justified in politics and culture. In those fields it is less clear what really constitutes progress and whether history is really moving in a straight line toward some end. What I am trying to get at is that our ideas about what is right and wrong, or true and untrue, or desirable and undesirable are not necessarily superior to the ideas of our ancestors nor is it certain that we are forever moving in a certain direction toward the truth or the good, etc.
I mentioned, in passing, in a recent post that the idea of our time being uniquely liberated in its sexual mores while all past ages were repressed and puritanical is not really true. These sorts of cultural movements seem to go in cycles. A similar idea is held about the status of religion in society. It is often believed that religion is a relic of past ages in which people were ignorant and superstitious. In our more enlightened times, in which we have solved many of the mysteries of the universe, religion is no longer needed. As people become more educated, the influence of religion must fade. Europe is held up as an example of this phenomenon. The continent has become steadily more secular over the last two centuries, and surely before long the people of Europe will be entirely free of religion. The fact that the United States is just as advanced as Europe in science and technology but has remained consistently more religious than Europe may seem to disprove the rule that societies become more secular as they advance, but the US is, in some ways, culturally backward compared to Europe, especially in the Red States. After all, those ignorant Americans still don’t have nationalized medicine or strict gun control. In twenty years, the US will be just as secular as Europe. After all, that is the way history is moving. So goes the argument.
But, perhaps not. Religious observance too tends to run in cycles. Periods of great fervor, even fanaticism, in religion alternate with periods of laxity and skepticism. Atheism is by no means a new phenomenon. There were atheists in ancient Greece and Rome, and curiously enough, they used the very same arguments against religion that the so-called New Atheists use. The current period of secularism in Europe may be followed by a religious period, and there is no reason to believe that the US must inevitably follow in Europe’s footsteps.
On a recent Sunday, my family and I only showed up 10 minutes early for Mass. That meant we had to sit in fold-out chairs in the spillover room, where the Mass is relayed on a large TV screen. During the service, my toddler had to go to the bathroom. To get there, we had to step over a dozen people sitting in hallways and corners. This is business as usual for my church in Paris, France.
I point this out because one of the most familiar tropes in social commentary today is the loss of Christian faith in Europe in general, and France in particular. The Wall Street Journal recently fretted about the sale of “Europe’s empty churches.”
Could it be, instead, that France is in the early stages of a Christian revival?
Yes, churches in the French countryside are desperately empty. There are no young people there. But then, there are no young people in the French countryside, period. France is a modern country with an advanced economy, and that means its countryside has emptied, and that means that churches built in an era when the country’s sociological makeup was quite different go empty. In the cities — which is where people are, and where cultural trends gain escape velocity — the story is quite different.
This is not an isolated phenomenon. My wife and I now live in an upper-crust neighborhood with all the churches full of upwardly-mobile professionals. When we were penniless grad students, we lived in a working class neighborhood and on Sunday our church was packed with immigrant families and hipster gentrifiers.
It was only recently that I was struck by the fact that, imperceptibly, the majority of my college and grad school friends who were Christmas-and-Easter-Catholics when we met now report going to Church every Sunday and praying regularly. On social media, they used to post about parties; now they’re equally likely to post prayers for persecuted Middle East Christians or calls to help the homeless over the holidays.
My friends live all over town; some of them are young singles who move around a lot; all of them report looking for those mythical “empty churches” we hear so much about — and failing to find them. In fact, it’s closer to the other way around: If you don’t show up early, you might have to sit on the floor — and people are happy to do it.
The massive rallies in France, underwritten by the Catholic Church, against the recent same-sex marriage bill stunned the world: Isn’t France the poster child for sexually-easygoing secularism? Perhaps more than a million people took to the streets, and disproportionately young ones, too. (Compare Britain’s “whatever” response to its own same-sex marriage act, passed around the same time.) But they forgot that a century of militant secularism didn’t kill the Old Faith — it merely drove it underground. And perhaps by privatizing faith, the secularists unwittingly strengthened it; after all, the catacombs have always been good to Christianity.
There is more.
I hope that this is really the case, that there is a revival of Christianity in France and ultimately Europe, with the difference that there will be no more state-sponsored churches. The melding of church and state that took place in the late Roman Empire and afterwards has been very bad for Christianity. Most of the bad behavior attributed to Christianity, which has served to discredit the church in the eyes of many, has been the result of an institution backed by the state and employing coercion. Whatever form a possible revival of Christianity in Europe might take, it would certainly be better than the alternatives. I believe that secularism is a dead end. Man does not live by bread alone. He needs something higher to believe in. If people do not have religion, they will find something else, or they will cease to live. As it is, Europe is dying.
There are many who believe that the future of Europe is in Islam. They project a future in which, thanks to a higher birthrate and conversions, the Muslim population of Europe will come to be a majority and impose its culture and values on Europe. I am not so certain of this, myself. It is unwise to take current demographic trends and project them in a straight line into the indefinite future. People do react to events, and it may be that the Europeans will wake up to the threat of Islamization. Whatever happens, the influence of Islam is not a good one, and the less influence Islam has on Europe and the world, the better. Secularism cannot really counter Islam. You can’t fight something with nothing. If the Europeans do not want to descend into the poverty and barbarism of the Islamic world, they will have to find a competing ideology, and what better than their Christian heritage?
I read this open letter on the rejection of the proven, life-saving technologies of vaccination and genetically modified organisms.
Dear Every American Who Doesn’t Believe in Science:
I know you are smart. I know you care about your kids, your family, your pets. I know you are a basically decent human being who wants to do right and contribute to society. And because I know these things, I’m going to try very hard to understand why you refuse to believe in scientific fact, rather than berate you and call you names.
The funny thing is, I actually think I’m reasonably good at seeing the other side of any issue. There are a few issues where I struggle, but even then, if I’m honest with myself, I can intellectually understand the other side of the issue and why my friend or colleague has positioned himself on that side.
Regarding immunizations and genetically modified organisms, I can’t.
Yes, I view these two issues – though they are definitely in different industries – as intertwined. Why? Because the people who are anti either of them have a blatant disregard for science and I just don’t understand that.
Scientific consensus on both of these issues is that both are safe. Immunizations are safe for the vast majority of people. GMOs are safe for everyone.
Do you understand what scientific consensus is, my friend? That means that most of the scientists (maybe even those who don’t usually agree) believe the safety of GMOs and immunizations to be fact. It’s beyond dispute. The data has proven safety beyond a shadow of a doubt so that scientists no longer squabble over this issue.
I appreciate what this writer is trying to do and agree with her positions, yet I cannot help but consider that her arguments are somewhat flawed, or perhaps insufficient is a better way to put it. Basically, her argument is that Science has decreed that vaccines and GMOs are safe because there is a consensus and all the scientists say they are safe. In my view, this is a misunderstanding of what science really is and how it should work.
Science is not a body of lore handed down on stone tablets at Mount Sinai by God or some famous scientist. Science is a method of inquiry used to learn facts about the natural world. It does not matter what Einstein or Newton or some other famous scientist says, no matter how great their contributions to science. They can be wrong. It does not matter what the consensus is. The consensus could be mistaken. Not so very long ago, the scientific consensus was that disease was caused by imbalances of bodily humors and that bleeding was the most effective treatment. The only thing that matters, or should matter, in science is the observations that are made and the logical inductions drawn from those observations. Ideally, scientists should be interested in “just the facts.” I think the best arguments on any subject are those based on just the facts.
So, what are the facts about vaccination? Before the widespread introduction of vaccination, people fell sick and even died from a variety of infectious, contagious diseases: smallpox, measles, whooping cough, and diphtheria, to name just the ones that spring immediately to mind. These diseases have been virtually wiped out since vaccines for them were developed. Smallpox, a deadly disease that people once feared, is now extinct. Only in backward regions, filled with ignorant and superstitious people, such as the darkest regions of California, do these diseases continue to plague humanity.
There have been no credible studies linking vaccination with autism or any other chronic illness. The one study that did propose such a link has been discredited and retracted. This does not mean that there isn’t such a link. There could well be one that has not yet been discovered. But consider the fact that millions of children have been vaccinated with no ill effects. There may be some danger in being vaccinated, nothing in this world is completely safe, but the dangers associated with not being vaccinated are far greater and more certain. Any rational consideration of the risks and benefits of vaccination must come to the conclusion that the benefits outweigh the risks. If you do not get your children immunized, you are putting them at risk of catching preventable diseases that could cause permanent damage to their health, or even death. Those are just the facts.
I have noticed that US history textbooks tend not to spend a lot of time on the Colonial Period. Generally, there is a chapter on Columbus and the Spanish Conquistadores, followed by a chapter on Jamestown and the Pilgrims. By the third or fourth chapter, they are at the Boston Tea Party and the Revolution, effectively skipping over the hundred and seventy or so years of the English colonies in North America. At least that was the situation when I was in school. Today, I suppose the textbooks teach about the evil whites who oppressed and exterminated the innocent Native Americans who lived in harmony with the Earth and each other.
This habit of skipping over so much of the Colonial Period is unfortunate, I think, since quite a lot happened during that time. The almost two centuries before Independence were the time in which the English colonists became Americans and learned the arts of self-government that served them so well during and after the Revolution. The colonists were forced to learn to govern themselves because England mostly neglected its North American colonies until the French and Indian War. Unlike the Spanish and the French, the English government did not exert much control over the internal affairs of its colonies and didn’t limit colonisation to approved populations. The English thought of their colonies as a source of resources, a place for adventurers to get rich, and a dumping ground for undesirables. The royal governors who were appointed tended not to be the best and brightest of the English aristocracy.
The colony of New York seemed to have the worst luck with its governors. Probably the worst of the lot was Edward Hyde, the Third Earl of Clarendon. Hyde was reputed to be corrupt, incompetent, dissolute, and a cross-dresser. He served as Royal Governor of the colonies of New York and New Jersey from 1701 to 1708, for most of that time under Queen Anne. He was not a popular governor. According to some accounts, Hyde took bribes and stole from the public treasury, and he dressed in women’s clothes.
There are several stories about Hyde’s cross-dressing. According to one, a constable noticed a woman loitering in one of the seediest parts of New York and arrested her on suspicion of being a prostitute, only to discover he had arrested the governor. Another story has Hyde addressing the New York Assembly in 1702 in a gown reminiscent of the style Queen Anne preferred. When questioned about his choice of attire, he replied that in his capacity as Royal Governor he represented the Queen, a woman, and so he ought to represent her as faithfully as possible. When his wife died in 1707, Hyde is said to have attended her funeral dressed as a woman. There is even a portrait purported to be of the governor in drag.
There is, of course, some question over whether this is really a portrait of Hyde. One might think that if no politician in our more liberated times would allow himself to be photographed in drag, then surely no one in the more restrictive eighteenth century would have sat before a painter to have his portrait done while wearing a dress.
Actually, the idea that our times are more sexually liberated while all past eras were prudish and puritanical is not really true. The truth is that periods of relatively liberal sexual mores alternate with more restrained times. The eighteenth century happened to be one of the more libertine centuries, at least among the aristocracy. The more prudish Victorian nineteenth century was a reaction against the looser morals of the previous century, just as much of the twentieth century has been a reaction against the Victorians. In fact, there was even a lively gay subculture in London, and perhaps other large cities of Britain, complete with gay bars, which they called “molly houses.” In eighteenth-century slang, a “molly” was an effeminate, perhaps homosexual, man, and a molly house was where they could congregate for companionship and sex with their more masculine lovers. They would dress as women and take on feminine identities. They even held mock marriages, just as homosexuals today have mock marriages. These marriages were, of course, not recognized by the state, as such mock marriages often are today. In that respect, the people of the eighteenth century were saner and had a better grip on reality. You must not think that homosexuality, or cross-dressing, was in any sense tolerated, though. Sodomy was a crime punishable by death. Most of what historians know about the molly houses comes from the court documents of trials of persons captured in raids and from the testimony of undercover police.
So, was Edward Hyde a molly? Did he frequent the colonial equivalent of a molly house, if any existed? Probably not. There is no reason to believe that he was a homosexual, and really no reason to believe the stories of his cross-dressing. Upon closer investigation, the stories seem to have originated with his political enemies, of whom he had made many, and to date from some time after his tenure as governor. They always seem to have been something someone else had seen or heard about the governor. Even the supposed portrait of the governor is more likely to have been a painting of a woman with masculine features; the label on the frame of the portrait may only date to 1867. Even if Hyde did wear women’s clothing, he was probably heterosexual. Contrary to what is still often believed, most cross-dressers are straight, and Hyde seems to have been genuinely fond of his wife. The stories of his corruption may also have been exaggerated by his enemies.
Edward Hyde was recalled to England in 1708 and promptly put into debtor’s prison, until his father died the following year and he inherited the title and properties of the Earl of Clarendon. He died in obscurity in 1723 and since his son had already died, the title passed to a cousin, Henry Hyde, Fourth Earl of Clarendon. The title died with his son, Henry Hyde, who died childless in 1753, but it was revived in 1776 with a son of a daughter of the Fourth Earl. Edward Hyde’s descendants include the present Earl of Clarendon, Sarah, Duchess of York, and the actor Cary Elwes. Edward Hyde himself is only remembered for his alleged cross dressing, perhaps not the legacy he might have wanted, but how many colonial Royal Governors are remembered at all?
Today is Valentine’s Day, or St. Valentine’s Day. Who was Valentine, and why does he get a day named after him? The truth is, nobody really knows. Valentine, or Valentinus, was the name of an early Christian saint and martyr. The trouble is that nothing is known of him except his name. He may have been a Roman priest who was martyred in 269. There was a Valentine who was bishop of Terni who may have been the same man. St. Valentine was dropped from the Roman calendar of saints in 1969 because of these uncertainties, but local churches may still celebrate his day.
It is also not certain how Valentine’s Day became associated with love. Some have speculated that the holiday was a Christian substitute for the Roman festival of Lupercalia. However, there is no hint of any association of Valentine’s Day with romance until the time of Chaucer. The holiday seems to have really taken off with the invention of greeting cards.
The First World War was the single most important event of the twentieth century. Every event that followed that war, all the other wars, the great movements and revolutions, and even the scientific discoveries and inventions, flowed in some way, directly or indirectly, from the great and terrible happenings of the years 1914-1918. A world in which that war had not occurred would be a very different and perhaps better world.
Despite the importance of World War One, I have never known very much about it. I had some knowledge of the general outlines, which countries fought on which side, and which side won. I knew the names of some of the battles, the Somme, Verdun, but nothing in detail. I had some familiarity with the conditions of the Western Front but knew almost nothing at all about the Eastern Front, save that Russia ended up losing. I do not think that I am alone in knowing so little about World War One. The First World War tends to be overshadowed, in contemporary minds, by the still greater and more catastrophic Second World War. Yet, had the first war not been fought, it is very unlikely the second war would have broken out. At the very least, the combatants would have been different: no Nazis in Germany and no Communist Soviet Union. In the United States, World War One tends to get little attention because we only entered the war in its last year. While the US contribution was crucial to the Allied victory, the war did not hurt us as badly as it did the European powers that fought it. Unlike France, Germany or Britain, America did not lose much of a generation in the fighting.
To learn more about this war, I turned to The First World War by the eminent military historian John Keegan. I am happy to report that Mr. Keegan does a truly marvelous job in relating the course of the war, from its beginnings, in the plans by the military staffs of the various combatants to fight the next war, to the assassination of Franz Ferdinand that sparked the war, through the years of trench warfare when massive armies butted heads to no avail, all the way to the last desperate attempt by the Germans to knock Britain and France out of the war before fresh American soldiers arrived to reinforce them. He pays equal attention to both the Western and Eastern fronts. I learned quite a lot about the fighting between Russia and the German-Austrian alliance, not to mention the fighting in the Balkans where the war started.
Keegan mostly dwells on the military aspects of the war and has relatively little to say about the domestic politics of the European nations. He does go into some detail about the diplomatic maneuverings the nations of Europe engaged in during the Balkan crisis that led up to the war. It is somewhat poignant to learn that neither side really wanted a general war in Europe, but no one seemed strong enough to end the crisis. Keegan speculates that if Austria-Hungary had launched an immediate invasion of Serbia in retaliation for its support of terrorist activities, the crisis would have ended before it had grown out of control. As it happened, Austria-Hungary waited for support from Germany, and the wait proved fatal for Europe.
Keegan challenges some myths and ideas that have grown up about the war. He argues that the various generals were not as incompetent or unconcerned about casualties as is often supposed. As he points out, they tried to fight the war as best they could, but technological development was at an awkward phase for fighting a war. Barbed wire and the machine gun made defended positions nearly impregnable, while the technologies that would have aided the offensive, tanks and airplanes, were only beginning to be developed. Improvements in transportation, especially trains, made it possible to send many thousands of men into battle, but the generals had no way to keep in contact with their armies once battle had begun. It was no longer possible for generals to lead their men in person; the battles were too large for that. Telephone and telegraph wires were easily cut. Radio was still in its infancy. The generals were removed from the battlefields because they had no choice. They sent their men to be slaughtered because wars cannot be won without attacking the enemy, and attacking the enemy’s positions killed thousands.
I enjoyed learning about World War One from John Keegan’s book and I think it serves as an excellent introduction to the war. It covers all the major battles and aspects of the war without getting bogged down in details. Best of all, it can be understood easily even by the reader not familiar with military affairs. I can highly recommend The First World War.
At a prayer breakfast recently, President Obama admonished his audience that any religion, including our own, can be twisted into promoting the most atrocious behavior. As an example, the President cited the Crusades and the Inquisition.
I don’t think that there can be much doubt among those who have actually studied history that holy war has been much more typical of Islam than Christianity. The Crusades were a response to centuries of Islamic aggression against Christendom and the idea of a holy warrior was always somewhat controversial among Christians. In Islam, on the other hand, jihad is an integral part of the faith. Inquisitions seem to have been more of a Christian problem. One never hears of any Muslim Inquisition in history. Yet there was an organized Inquisition at least once in Islamic history.
In general Christianity lends itself more to the formation of something like an Inquisition and the punishment of heretics than Islam. In part this is because in Christianity salvation is not obtained by correct behavior, as in Islam, but by holding correct doctrine. What a Christian believes about God can have eternal consequences. This is not as unreasonable as it might seem to the modern mind. A doctor with incorrect information about the practice of medicine might kill his patient. A lawyer mistaken about the law cannot serve his client. In like manner, to the Medieval Christian, a priest or preacher who taught incorrect theology placed the souls of his flock in danger. We do not punish heretics, but we do prosecute people who make fraudulent claims and we can punish professionals for malpractice. The Medieval Inquisition, then, pursued cases of theological malpractice. I do not want to defend the Inquisition here, but I do think it is important to try to understand why such an institution was thought to be necessary during the Middle Ages.
The Muslims never absorbed the Greek passion for hair-splitting philosophical discussion to the extent that the Christians did, so there is no equivalent in Islamic history to the furious debates over to what extent Christ was God or man, or the precise relationship of the persons of the Trinity to one another. Islamic theology is relatively simple and straightforward compared to Christian theology, so there is less scope for heresy in Islam. Most disputes between Muslims have involved differences in legal jurisprudence or the correct succession to the Caliphate rather than fine points of doctrine. This is not to say that Islamic authorities were more tolerant of heresy. Denying a fundamental doctrine of Islam such as the existence of God, or proclaiming oneself to be a new prophet with revelations that supersede those of Mohammed, was always a good way to lose your head.
Another reason why there haven’t been Inquisitions in Islam is that, unlike Christendom, church and state have never been separate entities. In the Islamic world there has been no organized, institutional church with a hierarchy of clergy serving as a separate source of political power and moral authority, to a greater or lesser extent opposed to the state. The Caliph was always a religious leader as well as a political leader, and laws were made by religious scholars based on Koranic principles. Among the functions of the state were the promotion of virtuous behavior and the propagation of the faith. Heresy could be punished by the state and there was no need for a separate ecclesiastical institution for that purpose. Nevertheless, as I said, there has been at least one Inquisition in Islamic history, though it was sponsored by the Caliph and its purpose was as much the suppression of his political opponents as the eradication of heresy. This Islamic Inquisition was called the Mihna and only lasted from AD 833 until 848.
This Mihna, the word means trial or testing in Arabic, was instituted by the Caliph al-Ma’mun for the purpose of imposing the beliefs of the Mutazilite school of philosophy on his government officials and judges. The Mutazilites or Rationalists were those philosophers and scholars who had studied Greek philosophy and sought to reconcile the teachings of such philosophers as Plato and Aristotle to the precepts of Islam. In particular, they adopted the Greek idea that the world is a rational place ruled by natural, logical laws that could be discovered through the use of reason. They even went so far as to teach that the nature of God could be discovered by reason, supplemented by His revelations. To more orthodox or conservative Muslim thinkers, already suspicious of pagan learning, the idea that God could be known through human reason at all seemed close to blasphemy. A world ruled by natural laws seemed to infringe on the divine sovereignty of God.
The particular issue on which the Mutazilites and their opponents contended was whether the Koran was created by God or is the uncreated, eternal Word of God. This may seem to be a trivial cause for argument, but the controversy helped to determine the course of Islamic theology and philosophy. If the Koran was created by God, then it does not necessarily possess the entirety of God’s perfection. Not every word of the Koran need be the literal Word of God. Some verses could be allegorical or influenced by some historical or cultural context. If the Mutazilites had prevailed, it is possible that the Islamic view of the Koran would be closer to the view held by many Christians on the Bible, inspired by God but with not every verse interpreted literally. On the other hand, if the Koran is uncreated and eternal, then, in a sense, it partakes of the essence of God. There can be no historical or cultural context. Verses which seem to relate to Mohammed’s life existed before Mohammed was born or the world created. Divine laws promulgated in the Koran are for all times and places.
The Mutazilite school was a movement of the intellectual elite rather than a popular movement, and much of its influence came from the support of the Caliphs, especially al-Ma’mun, who reigned from AD 813 to 833. In the year 827, al-Ma’mun, using his authority as Caliph, proclaimed that the Koran was created. In 833, al-Ma’mun instituted the Mihna to compel acceptance of his proclamation. The Mihna continued after al-Ma’mun’s death the same year, through the reigns of his successors al-Mu’tasim and al-Wathiq. The Caliph al-Mutawakkil ended the Mihna two years into his reign in the year 848. The Mihna, then, was not a permanent institution as the various European Inquisitions were, nor were its effects as immediately far-reaching. The Mihna was primarily directed at government officials and Islamic scholars in the Caliph’s capital of Baghdad. Muslims out in the provinces and among the common people were not affected by this inquisition. The Mihna was still unpopular, however, since the men targeted by it were widely respected religious scholars and jurists, including Ahmad ibn-Hanbal, one of the most famous Islamic theologians and founder of the Hanbali school of jurisprudence. Like many such persecutions, the Mihna was a failure. The men targeted became martyrs and heroes of the faith. The Caliphs responsible were reviled as tyrants.
In the longer run, the effects of the Mihna were devastating for the Mutazilites and perhaps for the Islamic world as a whole. The Mutazilites were seen, somewhat unfairly, as the sponsors of the Mihna, and their faction and its teachings were increasingly discredited afterwards. By the year 1000, the Mutazilites were universally viewed as heretics, a judgment that has continued to this day. More unfortunately, Greek philosophy, with its emphasis on the use of reason, was also discredited, which may have been a leading cause in the decline of science in the Islamic world after around 1000. In order to do science, the thinker must believe that the world is a rational place, governed by rational laws that can be discovered by the human mind. If one believes that the world is governed by the arbitrary dictates of a deity beyond human understanding, then it may be possible to make chance empirical discoveries, but there is less motivation to try to fit such discoveries into a consistent, logical world view.
It is strange that the Islamic Inquisition ended up doing more damage to Islamic progress than the longer lasting and more extensive Christian Inquisitions did to progress in Europe. The history of the Islamic world seems to be full of these sorts of wrong turns, and I have to wonder whether there is something in Islam that is less tolerant of free thought than Christianity has ever been, even at its worst. Or perhaps the backlash against the intolerance of the Islamic Inquisition ended up being greater intolerance, while the backlash against the Christian Inquisitions was to ultimately discredit the idea of religious coercion. Such questions, perhaps, are unanswerable.
Growing up, you might have heard your mother or father saying something like that when you wanted some expensive toy. Maybe you listened to them and learned something about where money does come from. The progressives who are pushing for minimum wage increases do not seem to have listened to their parents. At least it doesn’t seem to occur to them that if the government creates an increase in the cost of business, such as raising taxes or requiring higher wages, the money to pay for the increased costs has to come from somewhere. Either a business must pass on the increased cost to its customers by increasing prices, adjust its practices to reduce the impact of the higher costs, perhaps by employing fewer workers, or accept a reduction in profits. For many of the unthinking, the last option is the most desirable, since it is all too commonly believed that profits are somehow selfish and evil. They do not realize that a business’s profit is what the owners of that business get to meet their own expenses and is the repayment for the expenses and risks of starting and running the business. This is especially true for the small business person who is the sole owner of his business, but it is also true for the stockholders of a major corporation. If a business cannot make a profit, it must eventually cease to operate and close its doors. It really doesn’t require a PhD in economics or business administration to understand all of this, only the ability to think things through, an ability sadly lacking in all too many. Consider this example, reported by ABC News, of a bookstore in San Francisco closing due to an increase in the city’s minimum wage.
Independent bookstores have faced tough times for quite a while. In San Francisco, neighborhood businesses have been passionately protected, so it’s hard to believe that an initiative passed by voters to raise the minimum wage is driving a Mission District bookstore out of business.
San Francisco’s minimum wage is currently $11.05 an hour. By July of 2018, the minimum wage in San Francisco will be $15 an hour. That increase is forcing Borderlands Bookstore to write its last chapter now.
When actor Scott Cox took a job at Borderlands Books he didn’t do it for the money.
“I’ve been a longtime customer of the store,” he said. “I love the people, I love the books.”
The work let him squeak by while nourishing his passion for sci-fi and fantasy.
“Everyone who works here does this because they love books, they love stories, and they love being booksellers,” said book store owner Alan Beatts.
That’s why store owner Beatts found it so tough to post a sign in the front window that the store is closing. “We’re going to be closing by the end of March,” he said.
Borderlands was turning a small profit, about $3,000 last year. Then voters approved a hike in the minimum wage, a gradual rise from $10.75 up to $15 an hour.
“And by 2018 we’ll be losing about $25,000 a year,” he said.
Money doesn’t grow on trees. Alan Beatts cannot simply go to his money tree and shake off a few extra bills. He must come up with the money to pay the higher wages somehow. He cannot increase his prices. Small, independent bookstores have long been squeezed by large chains such as Barnes & Noble, which are now themselves being squeezed by Amazon, so any increase in prices will simply drive customers away. I doubt his bookstore is so overstaffed that he can afford to let many employees go. He cannot continue to run his bookstore if it loses money, so the bookstore must close.
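The arithmetic behind those numbers is worth making explicit. A short back-of-envelope sketch (the staffing figures here are purely hypothetical illustrations; only the wage rates and the $3,000 profit come from the article) shows how a few dollars an hour compound into a five-figure annual loss:

```python
# Back-of-envelope: how a minimum-wage increase can turn a small
# profit into a loss. Only the wage figures and the $3,000 profit
# come from the article; the staffing numbers are hypothetical.

old_wage = 10.75        # $/hour before the increase
new_wage = 15.00        # $/hour mandated by 2018
profit_before = 3_000   # last year's profit, per the article

# Assume roughly 3.2 full-time-equivalent minimum-wage employees,
# each working about 2,080 hours a year (40 hours x 52 weeks).
fte_employees = 3.2
hours_per_year = 2_080

extra_payroll = (new_wage - old_wage) * fte_employees * hours_per_year
profit_after = profit_before - extra_payroll

print(f"Added annual payroll cost: ${extra_payroll:,.0f}")   # $28,288
print(f"Profit after the increase: ${profit_after:,.0f}")    # $-25,288
```

Under these assumed staffing levels, a $4.25-an-hour raise adds roughly $28,000 a year in payroll, which lines up closely with Beatts’ estimate of losing about $25,000 a year.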
The next part of this article is priceless.
It’s an unexpected plot twist for loyal customers.
“You know, I voted for the measure as well, the minimum wage measure,” customer Edward Vallecillo said. “It’s not something that I thought would affect certain specific small businesses. I feel sad.”
I would say that Mr. Vallecillo wasn’t thinking at all, but then neither were the members of San Francisco’s Board of Supervisors when they decided to let people vote on increasing the minimum wage.
Though it’s caught a lot of people off guard, one group that wasn’t completely surprised was the Board of Supervisors. In fact, they say they debated this very topic before sending the minimum wage to the voters.
“I know that bookstores are in a tough position, and this did come up in the discussions on minimum wage,” San Francisco supervisor Scott Wiener said.
Wiener knows a lot of merchants will pass the wage increases on to their customers, but not bookstores.
“I can’t increase the prices of my products because books, unlike many other things, have a price printed on them.”
Wiener says it’s the will of the voters. Seventy-seven percent of them voted for this latest wage hike.
“Borderlands Books is a phenomenal bookstore, I was just in it yesterday,” Wiener said. “I hope they don’t close. It’s an amazing resource.”
But Alan Beatts said he can’t see a way to avoid it.
Mr. Wiener should have thought of that before. Unless the city repeals the increase in the minimum wage, Borderlands Books will have to close. The voters voted for the increase. Now, they will have to deal with the consequences.
Anyone who shaves on a regular basis owes King Camp Gillette a debt of gratitude. King Camp Gillette, yes that was actually his name, was the founder of the Gillette Safety Razor Company in 1901 and the inventor of the disposable safety razor. Before this invention, men shaved using a straight razor that had to be sharpened on a leather strop. These razors were expensive, needed sharpening often, and were not especially safe or easy to use. There had been attempts to create safety razors out of forged steel in the nineteenth century, but they were also expensive and hardly disposable.
King Camp Gillette was a salesman for the Crown Cork and Seal Company, which made bottle caps for soft drink bottles, and he noticed that people would throw away the bottle caps after opening the bottles. He thought that if bottle caps could be disposable, why not razors? Working with two machinists, Steven Potter and William Emery Nickerson, Gillette designed a cheap, disposable safety razor using stamped steel. The razor was an immediate success, and since Gillette’s portrait was on the packets of the razor blades, he became recognizable all over the world. Gillette’s big break came with America’s entry into World War I. Gillette contracted with the government to provide razor kits for American servicemen. Despite his success, King Camp Gillette died in poverty in 1932. He lost control of his company to a fellow director, John Joyce, though the company retained the Gillette name. Gillette spent much of the money he gained from the sale on property, and when the Great Depression struck, the shares of the company lost their value.
The story of King Camp Gillette could be read as a great American success story or a rags-to-riches-to-rags story. What I find most intriguing about Gillette, however, are his social and political views. The Wikipedia article about Gillette describes him as a “Utopian Socialist” who wrote a book in 1894 advocating that industries should be nationalized and controlled by a single corporation owned by the public. It may seem incongruous for a capitalist to argue for socialism, but Gillette believed that capitalists were the natural choice to run the nationalized industries, since they already had the necessary experience. Gillette, then, was a democratic socialist rather than a Marxist. He wanted a socialism that benefited everyone in the nation, not a class struggle and revolution.
Gillette’s views may seem radical, but this kind of democratic, corporatist socialism was very popular at the time. In 1888, Edward Bellamy (cousin of the Francis Bellamy who had devised the Pledge of Allegiance) had published a utopian novel titled Looking Backward, in which a man from 1887 falls asleep Rip van Winkle style and wakes up in the socialist utopia of 2000. In his novel, Bellamy had advocated the same sort of corporatist socialism as Gillette and many others. Looking Backward was a best seller, and almost immediately after its publication “Nationalist” clubs sprang up all over the country hoping to enact such policies. Ultimately some form of this National Socialism was adopted by Benito Mussolini in Fascist Italy and influenced certain aspects of Franklin Roosevelt’s New Deal.
I have to wonder how otherwise intelligent men could imagine that creating a publicly owned monopoly to control an entire nation’s industry could possibly be a good idea or in any way compatible with any idea of a free country. One of the major concerns of the late nineteenth and early twentieth centuries was the growth of monopolies and trusts owned by such men as John D. Rockefeller or Andrew Carnegie. Many observers believed that such men practiced unfair and anti-competitive policies which gave them a disproportionate influence over the American economy and ultimately over the government. It seemed obvious that economic power should not be concentrated in the hands of a few men. Why, then, was the solution to this concern considered to be the concentration of economic, political, and legal power in the hands of a few? The National Corporation that Gillette and others envisaged would be owned by the public, but the public wouldn’t be administering the corporation on a daily basis. There would have to be some sort of committee of directors with perhaps a sort of CEO. Such directors would have far more control over the economy and the government than any private businessman. They would effectively own the whole country, even if nominally it was owned by the public. Even the most benevolent saint would be tempted to abuse such power to benefit his friends, and the people who would aspire to such positions would not likely be saints.
I have similar reservations about the people who seem to believe that a bigger, more expansive government is the solution to all the nation’s problems, the sort of people who are always proposing new laws and regulations and believe that a new government program is always the answer. I can understand that giving more power to the state will allow it to do more good for everyone, but why can they not see that it will also allow the state to do more evil? Given the defects of human nature, which inclines more to evil than to good, my personal preference is to leave the good undone rather than risk the evil that will certainly be done.