Tuesday, July 24, 2007

On the Limits of Science

Before I make it to bed, I have a few random comments on the limits of science and scientists.

Religion and Science: Are they Irreconcilable?

Jeff Jacoby in the Boston Globe wrote a clever piece here. The relevant quote:
DID YOU hear about the religious fundamentalist who wanted to teach physics at Cambridge University? This would-be instructor wasn't simply a Christian; he was so preoccupied with biblical prophecy that he wrote a book titled "Observations on the Prophecies of Daniel and the Apocalypse of St. John." Based on his reading of Daniel, in fact, he forecast the date of the Apocalypse: no earlier than 2060. He also calculated the year the world was created. When Genesis 1:1 says "In the beginning," he determined, it means 3988 BC.

Not many modern universities are prepared to employ a science professor who espouses not merely "intelligent design" but out-and-out divine creation.
The teacher turns out to be Sir Isaac Newton. Jacoby is making the case that if Dawkins and Harris have their way, scientists with religious beliefs will be dismissed or not hired, and that science itself could suffer. Predictably, the article calls for each side to acknowledge the other.
To be sure, religious dogma can be a blindfold, blocking truths from those who refuse to see them. Scientific dogma can have the same effect. Neither faith nor reason can answer every question. As Newton knew, the surer path to wisdom is the one that has room for both.
The question I want to ask is: what does faith bring to the table? How did Sir Isaac Newton's religious writings advance religious knowledge? How was Christianity bettered by his life and work? In what ways did Newton's faith in the Bible improve our understanding of the age of the earth? When the religious answer questions about our world using ancient texts they accept on faith, how do we objectively evaluate one text or one interpretation against others if science isn't to interfere?

I will guess that "scientific dogma" refers to the assumption of materiality underlying scientific methodology. When has this failed science or humanity? When has religion needed to correct scientists in their models of reality? From my admittedly limited knowledge of the history of scientific thought, it seems that the fact-checking scientists have done for each other has yielded greater results than any feedback received from religious thinkers.

What Jacoby appears to want is for science to leave religion alone, to leave some questions to religious faith. He's not the only one asking for this.

Science and Mystery

Over at ScienceBlogs, Wyatt Galusky has written about science and mystery.
Perhaps few would credit a mystical explanation over a more antiseptic scientific one, especially if one had designs on reproducing or controlling such a phenomenon. But, still, don't we stand to gain with the retention of mystery? Or, rather, don't we lose when we forget that, no matter how powerful our conceptual schemes and how finely parsed our analysis, mystery remains? Let me point to some coalescence of thought on the subject.
The post includes an anecdote about a doctor who was struck by lightning and afterwards experienced an increased appreciation of and desire for music. The post concludes:
In Sacks' article, he notes that, when offered the chance to have neurological tests done on his brain to suss out a neurological basis for his musicophilia, Cicoria (an orthopedic surgeon by training) demurred, preferring to see his new found musical love as a mystery, and an act of grace.
What was gained by leaving this mystery alone (if it is indeed a mystery)? Did Cicoria or Galusky consider that submitting to the tests might one day help a patient with the opposite problem, the inability to perceive or be moved by music? Or is that a mystery that shouldn't be touched as well? If scientists are to purposely leave gaps in our knowledge so that we can step back and admire the mystery and/or let religion, mysticism, or aesthetics find answers unhindered, how is knowledge or the human condition improved?

If I disagree with Harris and Dawkins, it is in their methodology. There is excellent work being done in the sociology and psychology of religion, and society could only be improved, I think, were this knowledge to spread. If Newton were to apply for a job in astrophysics today and express the views he held during his life, he would rightly be denied a position. The evidence for the time frame of our universe's birth and growth is too overwhelming to treat such radical alternative theories as standing on equal footing. Likewise, I hope respectable universities would not hire a philosopher of religion who believed that all religions except their own particular faith are subject to cultural change and environmental pressures.

In my personal experience - and this appears to be the case for some other scientists - my appreciation of the world only increases as I learn more about it. Many of my fellow linguists are full of anecdotes about their young children struggling with language. This doesn't interfere with their love for their children (I hope!); it is simply an expression of the amazement each linguist has for the complexities of human language. Despite two centuries of work in linguistics, the field has enough mystery left that I doubt I'll ever be out of a career.

I suspect that one reason many psychologists and sociologists studying religion have not joined the campaign to eliminate respect for religious beliefs in the public sphere is that these scientists picked their subject because of a deep fascination with it. They likely recognize that religion and society interact in such complex ways that the broad generalizations made by both sides are inherently faulty.

Monday, July 23, 2007

Cain't Got No Time: Grammaticality Judgments at the Edge of Acceptability

After avoiding listening to her for months (I might read Rolling Stone when I have a free copy lying around, but I tend to be selective in acting on its recommendations), I eventually found myself in a situation where I was listening to Amy Winehouse. Specifically, her song "Rehab." While the music pleased my musical ear, something flagged down my linguist's ear. See if you can hear it just after 0:36 in the YouTube link, and again at 1:23.

Unless I'm mistaken, Winehouse sings
I cain't got seventy days.
And later
I cain't got the time
All of the lyrics printed online that I can find transcribe both lines with ain't, but I hear a velar obstruent before the ain't. I wasn't aware that the "southernism" cain't was recognized as an "Americanism" across the pond. If any British English speakers have insight on this, please let me know.

Now I know those of us from Northern Indiana like to think we speak "plain" English, but the fact of the matter is my dialect has as much in common with southern US dialects as it does with more eastern dialects. So it's not the word cain't that caught my ear but its use. For those unfamiliar with the word, a quick Google search on "cain't" produced the following examples.
I'm just a girl who cain't say no.
Oh you cain't getta man with a booooook!
Poor Bill, He Cain't Help It.
I CAIN'T QUIT YEW!!!
Cain't you do nothing 'bout them weeds?
Make 'em an offer they cain't refuse.
I Cain't Get No Wireless.
Cain’t yew afford no gas?
It just ain't fair if you cain't cheat!
Ya cain't get thar from hee-yah!
Why cain't we get the FDA to label food made in China?

Prussian Blue: Them thare girls cain't sing.
The word means essentially the same thing as can't, though perhaps with a more forcefully negative flavor. Some of the results returned by the search clearly mock the word (and by extension, its users), and some results are even references to other song lyrics (Rodgers and Hammerstein's "I Cain't Say No"). Did Winehouse appropriate the word cain't to add authenticity to her fake soul vocals, only to use it erroneously? I thought a few Google searches might confirm that hunch.

The only search result I could find of "cain't got" was on a German LiveJournal page.
you cain't got no chance with cupid
In contrast, "cain't get" returns 715 results as of this posting, one of which was quoted above. Why the difference? A speaker of a more prestigious dialect might point out that got is not an infinitive whereas get is, but the got here is being used as a verb denoting possession, as in "I got five weeks left." In at least my dialect, some negative auxiliary verbs can appear before this verb: don't got, ain't got, haven't got. Others cannot: *can't got, *won't got. Could a statistical effect be intervening, so that the construction gets flagged as ungrammatical when the judgment really means "I haven't heard this before"?

I gathered some quick Google stats on the likelihood of cain't appearing with verbs of possession, in comparison to other negative auxiliaries.


             don't       can't      ain't  cain't      didn't   couldn't    wouldn't       Totals
got        428,000      13,000  2,240,000       1      96,700        687         630    2,779,018
  %          15.40        0.47      80.60    0.00        3.48       0.02        0.02
get     84,100,000  46,700,000     28,800     715  17,500,000  6,420,000   2,250,000  156,999,515
  %          53.57       29.75       0.02    0.00       11.15       4.09        1.43
have   340,000,000   2,930,000     49,600  45,900  44,300,000  3,070,000  17,100,000  407,495,500
  %          83.44        0.72       0.01    0.01       10.87       0.75        4.20
own      2,130,000     142,000    523,000       5     444,000     25,300      47,000    3,311,305
  %          64.33        4.29      15.79    0.00       13.41       0.76        1.42
hold     2,370,000   2,020,000  1,990,000   4,910     727,000  1,120,000     433,000    8,664,910
  %          27.35       23.31      22.97    0.06        8.39      12.93        5.00
buy      7,060,000   2,160,000  2,870,000   3,930   2,070,000    443,000   1,050,000   15,656,930
  %          45.09       13.80      18.33    0.03       13.22       2.83        6.71
find     2,170,000  51,000,000  6,340,000  13,400   2,580,000  4,060,000     547,000   66,710,400
  %           3.25       76.45       9.50    0.02        3.87       6.09        0.82
Avg %        41.78       21.25      21.03    0.02        9.20       3.92        2.80

Based on these numbers, there appears to be a dispreference for cain't got, cain't get, and cain't own. Whether that difference is significant remains to be determined with better data. Only one of these strikes me as sounding ungrammatical. This raises interesting questions about the ways in which statistical feedback informs grammaticality judgments. I doubt the differences in the chart can be accounted for solely by semantics.

One interesting result is that ain't got appears much more frequently than would be predicted from the other patterns. This might be a matter of informal style favoring ain't over don't; the possessive meaning of got is less likely in more formal styles. Another factor could be that ain't got is preferred over ain't have, perhaps for formality reasons.
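
To put a rough number on "more frequently than would be predicted," one possible back-of-the-envelope check (mine, not part of the original analysis) is to compare the observed ain't got count with the count expected if auxiliary and verb were independent: row total times column total, divided by the grand total. Reusing the auxiliaries list and counts dictionary from the sketch above:

    # Expected count under independence: (row total * column total) / grand total.
    # Reuses `auxiliaries` and `counts` from the sketch above.
    grand_total = sum(sum(row) for row in counts.values())
    col_totals = [sum(counts[verb][i] for verb in counts)
                  for i in range(len(auxiliaries))]

    verb, aux = "got", "ain't"
    i = auxiliaries.index(aux)
    observed = counts[verb][i]
    expected = sum(counts[verb]) * col_totals[i] / grand_total

    print("observed '%s %s': %d" % (aux, verb, observed))
    print("expected under independence: %.0f" % expected)
    print("observed / expected: %.1f" % (observed / expected))

With the figures above, the observed count comes out to roughly 38 times the independence baseline, which squares with the impression that ain't got is specially favored.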

Sunday, July 15, 2007

Principles and Faith

Recently, other blogs have attempted to define faith and scientific knowledge in ways that demonstrate the two are not the same. (See "You don't need faith to believe the principle of evolution") An example of how scientific principles are developed occurred to me just a few minutes ago, which I thought I'd share.

We begin with a proposition, principle, theory, assumption, or even a belief if you like, since we'll assume the speaker believes it: "Whatever one puts into a refrigerator will be there when one opens it again."

This statement would be an excellent hypothesis for a scientific investigation because it is readily testable. One can open the door and see that the items are either there or not. The hypothesis is readily generalizable as well: one can test with a variety of items, a variety of refrigerators, and a variety of individuals. Several experiments can be done to test for all sorts of conditions and combinations.

Now let's say the hypothesis is shown to be wrong in at least one experiment. The refrigerator is opened and a specific item is missing. Possible hypotheses might include the item turning invisible or disappearing on its own, but these would require significant alterations of well-established principles. After doing some investigative work, we might determine that a flatmate removed the item. When more items go missing, we might discover that every time it was another human agent who removed the item, and that they always removed it between our placing the item in the refrigerator and opening it again (assuming, for the sake of argument, that we have extraordinarily honest or sloppy flatmates). Eventually we will amend our hypothesis: "Whatever one puts into a refrigerator will be there when one opens it again unless an agent has removed it during the interval."

Now suppose we place green leaf lettuce in the refrigerator and forget about it for a month. When we open the vegetable drawer again, we will find black-spotted and wilting lettuce instead of the delicious and crisp lettuce that we placed there. More experiments will demonstrate that many items left unattended for long periods of time will slowly undergo change, and that the rate of change depends on the temperature setting and type of item. We can amend our hypothesis again: "Whatever one puts into a refrigerator will be there when one opens it again unless an agent has removed it during the interval. Items are subject to continual degradation depending on conditions including temperature, the type of item, and the air-tightness of the item's container."
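
As a toy illustration of how each amendment adds another testable clause, here is how the current version of the hypothesis might be written as a small predicate. This is purely my own sketch: the hypothetical shelf_life parameter collapses temperature, item type, and container air-tightness into a single number for brevity.

    from datetime import timedelta

    def predict_contents(item, time_in_fridge, removed_by_agent, shelf_life):
        """Toy version of the amended hypothesis: an item placed in the
        refrigerator will still be there unless an agent removed it, and it
        degrades if left longer than its (condition-dependent) shelf life."""
        if removed_by_agent:                 # first amendment
            return "gone"
        if time_in_fridge > shelf_life:      # second amendment
            return "present, but degraded"
        return "present"

    # The lettuce example: left for a month, never removed, and assumed
    # (hypothetically) to keep for about two weeks at this temperature.
    print(predict_contents("green leaf lettuce",
                           time_in_fridge=timedelta(days=30),
                           removed_by_agent=False,
                           shelf_life=timedelta(days=14)))

Each revision of the hypothesis shows up as one more condition the prediction has to check.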

This theorizing sounds painfully obvious to adults, but remember that most humans learn this theory of conservation at a young age. You can actually watch children progress through stages as they first realize that objects are permanent (do not disappear when they cannot be seen) and later that the volume of a liquid does not change with the shape of its container. (Many adults still have trouble thinking in terms of conservation as the term is used in the physical sciences, but that's another matter.)

The progression of our hypothesis demonstrates why scientific theories are subject to continual revision and why a good hypothesis is one that makes predictions that can be tested. If our hypothesis had been "whatever is found in the refrigerator will be a subset of the items placed therein," it would have lacked explanatory power. That is, it fails to explain the hows and whys of the changes taking place. If our original hypothesis had been "Whenever we open the refrigerator, what will be inside will be what Zeus wills to be inside," we would never have progressed beyond that formulation unless we were given unmediated access to Zeus's will and then began to investigate that (but, of course, Zeus's will is beyond human comprehension). Even if our flatmate had taken some food, we could still argue that it had been Zeus's will and would not need to add additional clauses. Or we could argue with equal validity that our flatmate had violated Zeus's will and then feel justified in taking punitive measures.

This is the problem many of us have with beliefs held on faith alone. The religious begin with a proposition that they accept on faith (Yeshua was the legitimate messiah, Paul was divinely inspired, Mohammad was divinely inspired, the Hadith of the Cloak is valid, Zeus causes lightning, Kuan Yin refused to enter Nirvana). When required, evidence is selected to prove the statement of faith and contrary evidence is explained away (Zeus willed Melissa to take the last pita. Zeus only wills milk to spoil after a few weeks.). Very rarely will the proposition itself be modified (e.g., the emergence of Deists who believed Jesus was wise but not a miracle-worker).

When some IDers/Creationists argue against evolution, they often display a lack of understanding of the difference between a hypothesis and a statement of faith. No living biologist would expect everything Darwin wrote about evolution to be true. Scientists do not end debates by quoting him (though they may look toward his writings in search of inspiration, or to wonder at how much of what he predicted on weak evidence was actually proven with the development of genetics), nor are there active schools of philosophers debating the proper interpretation of his writings. If a fossil or living creature were found that was half-dog/half-grass and did not fit into our current biological classification morphologically or genetically, then the current theory of evolution would be thrown for a loop. It's possible that explanations could be found that accounted for the odd hybrid and preserved much of the current theory, but it would not be without considerable effort, testing, and modification of our knowledge about the process we call evolution.

On the other hand, religious faith is not subject to the same kind of revision (though it certainly changes, those changes often resemble other cultural changes rather than changes in scientific knowledge). If a religion claimed to be the only valid method for approaching an absolute source of morality, one might predict that its adherents should be more moral, by their own standards, than adherents of other religions. If that observation were not borne out in the data, the faithful could invent all sorts of justifications that may or may not have been elements of the original faith. From Christianity, one frequently hears that even Christians are sinners and therefore won't necessarily be more moral, yet the same believers insist that they have personally felt the Holy Spirit's touch at important moments or that atheists are by nature immoral.

The same is true for historical events that the religious accept on faith. That there is no independent evidence corroborating a story would not shatter their faith. They would argue (quite logically) that one cannot prove a negative in such a situation. But they might hypocritically deny that similar stories originating in other religions occurred as described, on the basis of "common sense" or their own faith.

This is the point where those of us who approach the world from a non-religious perspective reach an impasse when we argue with the religious. A proposition held on faith and not subject to revision or review based on evidence is a proposition that is difficult to disprove. One can argue that the proposition is unlikely, or unnecessary to explain the data (as most "evangelical" atheists are content to do), but that is not the same thing, and the faithful know it. When the interpretation of Zeus's Will is subject to extreme disagreements, it becomes even more difficult for the non-religious to suggest tests of those principles, as there is likely to be some group that disagrees with the interpretation selected for testing.

In the scientific community, when two groups support theories that contradict one another, the outcome is determined by the strength of each theory in predicting new evidence. More than occasionally, scientific communities adopt theories that turn out to be erroneous, but it is most often those same communities who discover the faults in the theory. Among the faithful, conflicts between contradictory beliefs are typically settled by the strength of each side in converting others, military or colonial campaigns, trade radiating from key economic or cultural centers, and other means of cultural dispersion. Religious faiths have had difficulty spreading when there is no social or economic advantage to their adoption. (To be fair, the adoption of science as a methodology frequently spreads in a similar fashion, but because science is based on logic applied to the natural world, there is nothing preventing scientific methodology from being developed twice except time and the dwindling number of pre-industrial societies. It is less likely that a religion identical to Christianity would develop independently of old-world denominations.)

The fault the faithful (sometimes) find in scientific principles is that they are always an incomplete picture. In the real world, we would have to assume that a human agent stole our food, because human agents are not always honest. When it comes to matters like this, scientists must assume what Dr. House is fond of arguing: patients lie, but symptoms never do. This is why scientists in many fields often pursue a variety of lines of inquiry into a problem, to determine independently that some principle is true. For example, we would use video recordings of the refrigerator or an analysis of the kitchen trash can's contents, in addition to conducting interviews of flatmates. We cannot date fossils through direct observation, but by using a wide variety of independent dating methods and finding that the majority converge on similar dates, we can establish the likeliest time frame.