Archive for the ‘Understanding Modernity’ Category

Frank Raymond Leavis 1 — Anon

July 23, 2014
‘In his youth’, noted The Times’ obituarist, ‘he had shown prowess at cross-country racing and the loneliness of the long-distance runner adhered.

Frank Raymond (F R) Leavis (1895-1978) is now recognized as one of the most influential literary critics and teachers of his time and among the major intellectual figures of the 20th century.

**************************************

Frank Raymond Leavis was born in Cambridge on 14th July 1895 and attended the Perse School there. He went up to Emmanuel College in 1914, where (resuming studies after the Great War) he read History and English, the latter being then new as a university discipline at Cambridge. He would recall those early years of the English tripos in his 1967 Clark Lectures (published in 1969 as English Literature in our Time and the University), evoking vividly the pioneering spirit of the new venture.

He served in the war in the Friends’ Ambulance Unit, carrying a pocket Milton throughout the ordeal. Though he rarely spoke of them, his wartime experiences affected him deeply and remained with him for the rest of his life. He would much later recall carrying buckets of cocoa along the roofs of ambulance trains (without corridors) ‘to men who would have died without it’ and ‘the innumerable boy subalterns who … had climbed out and gone forward, playing their part in the attacking wave, to be mown down with the swathes that fell to the uneliminated machine guns.’

Early Intellectual Influences
In another autobiographical passage he remembered ‘those early years after the great hiatus’ when he had ‘struggled to achieve the beginnings of articulate thought about literature’. The figures who ‘really counted’ then were George Santayana (though ‘not fundamentally congenial’) and Matthew Arnold, to be followed soon by T. S. Eliot: he bought The Sacred Wood when it came out in 1920. (Eliot’s paradoxical distinction would preoccupy him for much of his life.) Along with these went the influence of Ford Madox Ford’s (or Hueffer’s) English Review to which Leavis had subscribed as a schoolboy in 1912.

It was here that he first came on the writing of D. H. Lawrence (‘the necessary opposite’, as he would later call him, in relation to Eliot). Leavis was impressed by Ford’s recognition that in the ‘irreversible new conditions’ of modern industrial civilisation the concern for ‘the higher cultural values’ must reside with a small minority, while at the same time that concern must concede nothing to ‘the preciousness, fatuity or spirit of Aestheticism’. That view was to be a cornerstone of his own periodical Scrutiny (1932-53). An important aspect of the Scrutiny ‘manifesto’ also, in a Marxising era, would be its freedom from organised ideology: a ‘space’ for disinterested intellectual enquiry founded in the ‘autonomy of the human spirit’.

In Mansfield Forbes, one of the early lecturers for the tripos, Leavis found an inspiring example of critical and teaching method. He also found stimulation in the early work of I. A. Richards (though he would part company with him when Richards developed interests in semiology). In 1924 he took one of the earliest PhDs in the School with a thesis on the periodical literature of the eighteenth century with particular reference to Addison’s Spectator. He retained a lifelong interest in the sociology of literature and a profound concern for cultural continuity. His wife would exemplify similar interests in her classic study (which grew out of her PhD thesis), Fiction and the Reading Public (1932). He collaborated with Denys Thompson on a small primer for schools aimed at encouraging critical awareness: Culture and Environment (1933).

He also admired The Calendar of Modern Letters edited by Edgell Rickword, a quarterly which ran from 1925 to 1927. Leavis was to see its failure to win a sufficient public as an index of cultural decline. Its concern with the maintenance of critical standards was to be an important inspiration behind Scrutiny. The Calendar ran a series of intelligent deflations of what it saw as the exaggerated reputations of such contemporary figures as H. G. Wells, J. M. Barrie, G. K. Chesterton and John Galsworthy (the Galsworthy critique was written by D. H. Lawrence): these articles were later collected by Edgell Rickword under the title Scrutinies.

In 1933 Leavis published a selection from The Calendar, with an appreciative introduction, under the title Towards Standards of Criticism (re-printed in 1976 with both the original and a new introduction – in effect a retrospect – by Leavis). It contains one of his most important and original formulations: a reference point for the many subsequent assaults he made on the problem of value-judgement:

Literary criticism provides the test for life and concreteness; where it degenerates, the instruments of thought degenerate too, and thinking, released from the testing and energizing contact with the full living consciousness, is debilitated, and betrayed to the academic, the abstract and the verbal. It is of little use to discuss values if the sense for value in the concrete – the experience and perception of value – is absent.

Teaching at Cambridge
By 1925 he was doing some part-time teaching at Emmanuel. D. W. Harding, who was later to be a fellow editor of Scrutiny, recalled his qualities as a teacher when, looking back fifty years in a broadcast symposium in 1975, he said:

He was really superb. I remember the feelings with which this other man and I would come away. We would be partly exhilarated and partly a bit subdued and rueful, perhaps. Exhilarated because of the new insights and the fine discriminations he had made, and sobered because he kept such extremely high standards in insight and one just realised how unskilled one was as a reader. At the same time, there was no feeling that he belittled you in any way – if you had difficulties or raised objections, then he met you on those. He could scrap what he was going to say and just meet you on whatever you were interested in.

Another pupil, William Walsh, recalled:

One always had the feeling that one wasn’t simply discussing what was there on the page. This was taking place, of course, but the discussion was deeply rooted and far-reaching, dealing with all that one felt was really important in life … Leavis’s teaching always seemed to engage both these facets: one’s personal life, and the life of the mind – the search for the significance of life itself.

In 1929 he married the vivacious and prodigiously clever Queenie Dorothy Roth, whom he had supervised at Girton. The next few years brought a wonderful harvest of critical work culminating in the annus mirabilis of 1932 when Leavis published New Bearings in English Poetry (with its perceptive discussions of Yeats, Pound and Eliot), Q.D.L. published Fiction and the Reading Public, and the quarterly periodical Scrutiny was founded.

It is sometimes suggested that Scrutiny in its later years was indifferent to contemporary literature, but it is worth recalling that Leavis in his earlier years was in the vanguard. He incurred the displeasure of the public authorities by lecturing on the banned Ulysses in the mid-1920s. As to the teaching of contemporary work in the 1930s, Muriel Bradbrook recalled Leavis’s interest in the poetry of I. A. Richards’s pupil, the ex-student of mathematics, William Empson. She recalled: ‘It cannot be very often that undergraduates are taught the poetry of a fellow undergraduate, but we were taught about some of Empson’s poems by Leavis.’ He was also writing on Eliot and on Lawrence in the 1920s and early ’30s.

Leavis had enemies in the English Faculty, however; his outstanding abilities and the Scrutiny project did not enable him to obtain a permanent Faculty post (the latter may even have militated against him). In 1936 (the year in which Revaluation appeared) he was made a Lecturer (though on a part-time salary), at the age of 41, after having been a Probationary (or Assistant) Lecturer since 1927.

This situation continued until 1947 when, at the age of 52, he achieved a full-time Lectureship. He had seen younger and less able candidates given precedence. All this (and the lack of academic recognition accorded his wife) was to be a source of bitterness to him both at the time and in later years: a bitterness contained by his high intelligence and powers of self-sufficiency. ‘In his youth’, noted The Times’ obituarist, ‘he had shown prowess at cross-country racing and the loneliness of the long-distance runner adhered.’

Book Review of ‘Philology’ by James Turner — Tom Shippey

July 18, 2014
Comparative philology, tracing the history and development of especially the Indo-European languages, rapidly gained immense prestige, most of all in Germany. No discipline, declared Jacob Grimm, doyen of philologists and fairy-tale collector, “is haughtier, more disputatious, or more merciless to error.” It was a hard science in every sense, like math or physics, with a ruthless ethic of finicky detail.

The skeptical approach of modern scholarship, and its insistence on scrutinizing all forms of evidence, reflects the legacy of a nearly forgotten discipline. Mr. Shippey is the editor of The Shadow-Walkers: Jacob Grimm’s Mythology of the Monstrous and the author of The Road to Middle-Earth: How J.R.R. Tolkien Created a New Mythology. You’ll find a chapter by Dr. Shippey dealing with evil and the ring on payingattentiontothesky.

***********************************

James Turner’s book on “philology” must be the most wide-ranging work of intellectual history for many years. But what is it about? As Mr. Turner declares in his prologue, “philology has fallen on hard times in the English-speaking world.” It may be the foundation of all the humanities, with one significant exception, but “many college-educated Americans no longer recognize the word.”

Its original meaning, “love of words,” is unhelpful. “Tough love” would be a better description: a critical attitude toward words, their roots and their meanings — one that admits no exceptions. It could well be said that a readiness to scrutinize anything, treating even the Bible “like any other book,” is still one of the distinctive marks of Western civilization, seen in every discipline, from literary criticism to theology, history to anthropology.

The first philologists, back in the pre-Christian era, took that attitude with Homer’s epics, which were already deeply venerated and formed the basis of young men’s education. But “The Iliad” and “The Odyssey” were centuries old by the time of the great librarians of Alexandria, Eratosthenes and Zenodotus. The poems’ texts had been passed on first by word of mouth and then by scribes prone to error or deliberate meddling. The early philologists, then, compared different versions of texts, noted repetitions and struck out dubious lines, such as those added to cover up the non-participation of Athens in the Trojan War.

Textual scrutiny became even more vital in the early Christian era. For one thing, a New Testament canon had to be formed. “The Gospel of John” was in. “The Gospel of Nicodemus” remained popular for centuries, for it followed Jesus down to Hell, but its provenance was dubious, so it was out. The Old Testament presented problems, too, notably the issue of translation, from Hebrew to Greek to Latin. Christian aristocrats built up great research libraries, like Cassiodorus, and brooded on etymology, like Isidore of Seville.

It didn’t last. Modern historians do not like the term “Dark Ages,” but as far as philology goes there was then a centuries-long hiatus, beginning with the fall of Rome in 410. Libraries were destroyed, manuscripts lost or scrubbed down and used again for pious purposes. Medieval scholasticism was deeply intellectual, but it was logical rather than philological.

Only in the 15th century did textual study and textual recovery become once again a passion. Critical moments include the foundation in 1498 of the first “trilingual college” (Hebrew, Greek and Latin) at Alcalá near Madrid, where Cervantes’s house still stands; the literal unearthing in 1546 of old Roman inscriptions, with dates, from the Roman Forum; and in 1519 the publication of what would be the most influential “lost text” of the Renaissance, Tacitus’ “Germania,” an ethnography of German tribes that delighted German scholars with its positive image of their ancestors.

Spanish, Italian and German scholars would soon be joined by the great humanist and Bible scholar from the Netherlands, Erasmus, who established the principle of rejecting the facilior lectio: If you have two readings in different manuscripts, reject the easier one. That’s the one put in by some dumb copy editor. A hasty or absent-minded scribe is more likely to substitute a simpler, more familiar phrasing for a more complicated original than vice-versa. Erasmus’s clever rule of thumb remains one of the foundations of textual criticism.
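
The rule can even be mechanized, after a fashion. The toy sketch below, with invented variant readings and made-up familiarity scores (nothing from Mr. Turner's book), keeps whichever of two readings is harder, on the principle that a scribe more readily simplifies than complicates:

```python
# Toy "lectio difficilior" selector: prefer the harder (less familiar)
# reading, on the theory that a scribe is likelier to replace a rare
# word with a common one than the reverse. The familiarity scores and
# the variant readings below are invented for illustration.

FAMILIARITY = {"happy": 9, "horse": 9, "blithe": 2, "palfrey": 2}

def harder_reading(variant_a: str, variant_b: str) -> str:
    """Return whichever variant reads as less familiar on average."""
    def ease(text: str) -> float:
        words = text.lower().split()
        # Words missing from the table get a middling default of 5.
        return sum(FAMILIARITY.get(w, 5) for w in words) / len(words)
    return variant_a if ease(variant_a) < ease(variant_b) else variant_b

print(harder_reading("the happy horse", "the blithe palfrey"))
# -> "the blithe palfrey": the rarer, harder reading is retained
```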

Not for the last time, however, English scholarship hung back, short of libraries and affected by “cultural cringe” — though after the execution of Charles I in 1649, Milton won the ensuing war of words over classical and biblical precedents for regicide against the royalist French philologist Salmasius. In the following century, Richard Bentley, “the greatest scholar that England or perhaps Europe ever bred,” as A.E. Housman called him, returned to philology’s origins with his discovery of the “Homeric digamma.” Certain irregularities in the Homeric poems, he realized, could be explained by assuming that copyists had consistently omitted a w-like letter that had been part of early Greek but had dropped out of the classical language. “Oinos” (“wine”), for example, was originally “woinos.” Once Bentley put the digamma back in, these lines became metrically regular.

But how did this textual philology affect other disciplines? Mr. Turner shows with great force and erudition how the habit of skeptical scrutiny affected, first, the writing of history. No longer could you claim to study nations and civilizations without first studying texts. The philological revolution swept away, among others, P.F. Suhm, a Danish historian from the 18th century.

Determined to recover the history of Scandinavia from ancient times, Suhm had put together many volumes and a gigantic “spreadsheet” that gathered every bit of data from all the sagas and chronicles available. But he had not studied the texts, only read them. Most of his data was fiction, his harmonization of it so much work wasted. “Garbage in, garbage out,” as we now say.

What you needed was critically certified and preferably contemporary documentary evidence, as exemplified by the sources Edward Gibbon drew on for his “Decline and Fall” (1776-89) and those that the Scottish historian William Robertson used for his “History of America” (1777). The principle was, once again, check everything, spare nothing.

Well-meaning Americans cleaned up George Washington’s spelling and vulgar idioms; philological historians put them back again. Noah Webster’s 1828 “American Dictionary” piously traced etymologies back to the biblical language Aramaic: After Webster’s death a German philologist removed them. J.M. Kemble swallowed Suhm hook, line and sinker in his first 1833 edition of “Beowulf” but repudiated his mistake in a panic only four years later.

What was happening from about 1800 on was the coming of “comparative philology,” best described as the Darwinian event for the humanities as a whole. Like “The Origin of Species,” it was powered by wider horizons and new knowledge. By the late 18th century, conscientious British colonial administrators, who had had Latin and Greek drummed into them at school, found that they needed classical Persian, and even Sanskrit, to do their jobs properly. They could not help noticing the similarities between the Eastern languages and their classical counterparts. But what did these mean, and what was the origin, not of species, but of language differentiation?

Comparative philology, tracing the history and development of especially the Indo-European languages, rapidly gained immense prestige, most of all in Germany. No discipline, declared Jacob Grimm, doyen of philologists and fairy-tale collector, “is haughtier, more disputatious, or more merciless to error.” It was a hard science in every sense, like math or physics, with a ruthless ethic of finicky detail.

The new model spread almost immediately to mythology, most notably in the work of Max Müller, the great student of the sacred books of India, and was extended to anthropology, the philology of the preliterate, by scholars such as J.G. Frazer, author of “The Golden Bough” (1890). Simultaneously, though, the philological insistence on using and scrutinizing all forms of evidence had revolutionized classical studies, which became in Germany the combined discipline called “Altertumswissenschaft,” the science of antiquity.

Of this transformation archaeology was a vital part, including the rediscovery in modern Turkey of the Mausoleum, one of the Seven Wonders of the ancient world, in 1855, and the shattering later discoveries in Egypt, Crete and Troy. Art history, too. Kenneth Clark later stated explicitly that to identify a painting as by Bellini or Botticelli needed the skills of a Bentley or a Housman. But the most iconoclastic effect of philology may well have been on Bible study, with the coming, once again from Germany, of the “höhere Kritik” or “higher criticism”: looking at words, yes, but also authorship, literary forms and, as Mr. Turner says, “meanings in light of deep history.”

David Strauss’s “Life of Jesus, Critically Examined” (1835-36), written in German, put the cat among the pigeons (it was later translated by George Eliot). It declared that the Gospels must be seen as mythic expressions of faith, not reliable historical narratives. Once again, British scholarship hung back or responded with spluttering outrage. When a rather timid attempt at the new approach appeared in English in 1860, in the form of the overview anthology “Essays and Reviews,” a German reviewer noted that it did not inaugurate a new era in scholarship, as the authors seemed to think; it showed that Oxford “had not yet learned what scholarship was.”

The result, however, as Mr. Turner shows with indisputable range and force, is the structure of arts faculties in modern universities. All but one of our modern disciplines have philology in their ancestry — all except philosophy, which, he declares, “arrives at universally valid generalizations” rather than scrutinizing individual cases.

One may quibble with him here about how much philological influence persists. In the English-speaking world, the last century has seen a determined war of extermination fought against comparative philologists by their deadly enemies, the literary critics. J.R.R. Tolkien in particular, who said of himself “I am a pure philologist,” fought all his career to promote and then to save a remnant of philology within the Oxford syllabus. The critics won within academia, only to find their aces trumped by Tolkien’s success in the wider world. Tolkien nevertheless used his 1959 Oxford “Valedictory Lecture” to lambaste the colleagues he called “misologists.”

Philology played little part in the 20th-century rise of New Criticism, now very old, with its emphasis on context-free close reading, and even less in the craze for literary theory, which has wandered off in the direction of philosophical speculation. Mr. Turner, perhaps wisely, says nothing about the present “bonfire of the humanities,” as seen in declining enrollments. But turning a collective back on Grimm is like ditching Darwin — though their significance does not mean that either they or anyone else should be immune to scrutiny. “Tough love” applies to everybody.

Including Mr. Turner. J.M. Kemble’s 1836 pamphlet, written in German to show the author was a proper “Philolog,” is not a study of the West Saxon dialect, as Mr. Turner says, but of the genealogy of the West Saxon dynasty: “der Westsachsen” is plural, not singular; and the dialect is called “Westsächsisch,” not “Westsachsen.” Picky, picky, picky. That’s the way philologists work. Bless their cold and stony hearts.

Quantum Physics: The Multiverse of Parmenides 1 — Heinrich Pas

July 9, 2014
Heisenberg traveled to Copenhagen, Denmark, in the fall of 1941 to visit his fatherly friend and mentor Niels Bohr. According to Heisenberg, his intention was to inform Bohr that the construction of a nuclear bomb was possible but that the German physicists would not try to build it and to suggest that physicists in the allied nations should follow the same policy. This epic conversation, however, only resulted in a lasting breakdown of their friendship. Bohr, the son of a Jewish mother and the citizen of an occupied country, could not have much sympathy for any agreement with the German physicist. From left to right: Enrico Fermi, godfather of the neutrino; Werner Heisenberg, a creator of quantum mechanics; and Wolfgang Pauli, the father of the neutrino.

A major breakthrough in the story of quantum physics begins with a young man holed up in a rain pipe in order to find a quiet place for reading. It is the year 1919, in Munich, shortly after the end of World War I. The chaotic rioting in the streets that followed the revolution driving the German emperor out of office has finally calmed down, and now eighteen-year-old Werner Heisenberg can find some leisure time again.

He had been working as a local guide, assisting a vigilante group that was trying to reestablish order in the city, but now he could retreat, after the night watch on the command center’s hotline, onto the roof of the old seminary where his cohort was accommodated. There he would lie, in the warm morning sun, in the rain pipe, reading Plato’s dialogues.

And on one of these mornings, while Ludwig Street below him and the university building across the way with the small fountain in front slowly came to life, he came across that part in Timaeus where Plato philosophizes about the smallest constituents of matter, and the idea that the smallest particles can finally be resolved into mathematical structures and shapes, that one would encounter symmetries as the basic pillar of nature — an idea that would fascinate him so deeply that it would capture him for the rest of his life.

Werner Heisenberg was to become one of the most important physicists of his generation. When he had just turned forty, he was the head of the German nuclear research program, which in World War II examined the possibilities for utilizing nuclear power, including the feasibility of nuclear weapons. In this position he was on the assassination list of the US Office of Strategic Services, but a special agent who had permission to kill Heisenberg in a lecture hall decided against it, after he heard Heisenberg’s lecture on abstract S-matrix theory and concluded that the practical usefulness of Heisenberg’s research was marginal.

Even today, historians debate Heisenberg’s role in Nazi Germany. His opponents criticize his remaining in Germany and his commitment to the nuclear research project, the so-called Uranverein, which, according to these critics, failed to build a nuclear weapon for Hitler only because Heisenberg was unable to do it. Extreme admirers, such as Thomas Powers in Heisenberg’s War, argue that Heisenberg used his position to prevent the construction of a German nuclear bomb by exaggerating its difficulties when questioned by officials, bestowing on Heisenberg a moral mantle he had never claimed for himself.

What is well documented is that Heisenberg traveled to Copenhagen, Denmark, in the fall of 1941 to visit his fatherly friend and mentor Niels Bohr. According to Heisenberg, his intention was to inform Bohr that the construction of a nuclear bomb was possible but that the German physicists would not try to build it and to suggest that physicists in the allied nations should follow the same policy. This epic conversation, however, only resulted in a lasting breakdown of their friendship. Bohr, the son of a Jewish mother and the citizen of an occupied country, could not have much sympathy for any agreement with the German physicist.

In 1998, the British author Michael Frayn wove different perceptions of this meeting into a play that essentially deals with the parallel existence of different realities, both in psychology and in quantum mechanics. After all, among all his other activities, Heisenberg was famous for one thing: he was one of the masterminds of a revolutionary new theory.

Just six years after the sunny morning in the rain pipe, Heisenberg, now twenty-three years old and a postdoc at the University of Gottingen, was forced by his hay fever to leave his institute for two weeks, and he spent some sleepless time on Helgoland, a tiny and once holy red rock off Germany’s coast in the North Sea — days that would shatter the most basic grounds of physics. One-third of the day the young man climbed in the famous cliffs; one-third he memorized the works of Goethe, the poet who served as a national idol in Germany and who followed the classical paradigm of the ancient Greeks; and the last third he worked on his calculations.

In these calculations he developed a formalism that would be the bedrock of modern quantum physics and would do nothing less than change the world: “In Helgoland there was one moment when it came to me just as a revelation … It was rather late at night. I had finished this tedious calculation and at the end it came out correct. Then I climbed a rock, saw the sun rise and was happy.”

Nowadays the technical applications of quantum physics account for about one-third of the US gross domestic product. Nevertheless, Richard P. Feynman commented some forty years after Heisenberg’s work that the theory is so crazy that nobody can actually comprehend it, and Einstein had earlier declared bluntly that it was obvious nonsense. What makes quantum physics special is that this theory breaks radically with the concept of causality. In our daily lives we are used to ordered sequences of cause and effect: You and a friend clink your glasses with just a little bit too much verve; one glass breaks; beer runs down to the floor; your significant other/roommate/parents cry out.

One event causes the next one. This is exactly where quantum physics is different, where this strict connection between cause and effect no longer exists. For example, how a particle reacts to an influence can be predicted only in terms of probabilities. But this is not the end of the story: Unless the effect on the particle is actually observed, all possible consequences seem to be realized simultaneously. Particles can reside in two different locations at once! And particles exhibit properties of waves while waves behave in certain situations like particles.

An object thus has both properties of a particle and of a wave, depending on how it is observed. The particle corresponds to an indivisible energy portion of the wave, a so-called quantum. On the other hand, the wave describes the probability for the particle to be located at a certain place. This property of quantum mechanics can be depicted most easily with the famous double-slit experiment (Figure below).

Figure 3.2. Double-slit experiment. As long as no measurement determines which slits the particles are passing through, they behave like interfering waves, which pass simultaneously through both slits (left side). Where two wave crests coincide, the probability of detecting a particle is largest; where a crest coincides with a trough, the probability is very small or zero. The resulting image is called an interference pattern. As soon as an external measurement disturbs the system — for example, if one uses irradiation with light to determine which path the electrons take through the slits — the wave collapses into single particles, which accumulate in narrow bands behind the slits they were flying through (right side).

When a particle beam hits a thin wall with two narrow slits in it, the corresponding wave penetrates both slits and spreads out on the other side as a circular wave. On a screen situated behind the wall, in accordance with the wave nature of the electrons, an interference pattern appears, resulting from the superposition of the waves originating from the two slits in the wall.

Where a crest meets another crest or a trough meets another trough the wave gets amplified. A crest encountering a trough, on the other hand, results in little or no amplitude (left side). This pattern appears, however, only as long as it is unknown through which slit a single electron passed. As soon as this is determined, for example by blocking one of the slits or by irradiating the electrons with light, the two-slit interference pattern gets destroyed and the electrons behave just like classical particles. To be more accurate, a new wave emanates from the slit, and the pattern exhibited on the screen is the one for a wave passing through a single slit, which resembles a smooth probability distribution (right side).

Heisenberg and Bohr interpreted this as a collapse of the wave function due to the measurement process, in which one gets a result with the probability given by the amplitude squared of the wave. This is the so-called Copenhagen interpretation of quantum physics, which is still taught at universities around the globe. According to this interpretation, a particle is located in many places simultaneously until finally a measurement assigns it a concrete location. And this is true not only for position; it applies to other measurable quantities such as momentum, energy, the instant of a nuclear decay, and other properties as well.
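
The amplitude-squared rule is easy to check numerically. The sketch below, with arbitrary illustrative geometry and units (it is a toy, not real experimental data), adds the two slit amplitudes before squaring to produce the interference pattern, and squares them separately to reproduce the flat distribution of the "which-path measured" case:

```python
import numpy as np

# Two-slit sketch of the amplitude-squared rule. The geometry and units
# are arbitrary illustrative choices, not real experimental values.
wavelength = 1.0          # de Broglie wavelength of the particles
slit_separation = 5.0     # distance between the two slits
screen_distance = 100.0   # distance from the slits to the screen

x = np.linspace(-40, 40, 9)                    # positions on the screen
path_difference = slit_separation * x / screen_distance
phase = 2 * np.pi * path_difference / wavelength

psi1 = np.exp(+1j * phase / 2)                 # amplitude via slit 1
psi2 = np.exp(-1j * phase / 2)                 # amplitude via slit 2

# No which-path information: add amplitudes first, then square.
p_interference = np.abs(psi1 + psi2) ** 2      # fringes: 4, 0, 4, 0, ...
# Which-path measured: square each amplitude separately, then add.
p_measured = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # flat: 2 everywhere

print(np.round(p_interference, 2))
print(p_measured)
```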

Erwin Schrodinger, both collaborator with and competitor of Heisenberg in the development of quantum physics, carried this idea to an extreme: “One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat).”

In Schrodinger’s experiment the death or life of the cat depends on whether a radioactive isotope does or doesn’t decay in a particular time period. As long as we do not check whether the isotope did decay or not, nor how the cat is doing, Schrodinger’s cat is simultaneously dead and alive, or as Schrodinger phrased it: “[The wave function of the system would have] in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts.”

There are two reasons why we don’t observe such bizarre phenomena in our daily lives: One is that the wavelengths of ordinary objects around us are tiny compared with the sizes of the objects themselves. The other is that the objects we deal with every day are always interacting with their environment and thus are being measured all the time. A beer bottle, for example, may very well be situated in two different locations, but only for an extremely short time and for an extremely small separation (too short and too small to measure).
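
Rough numbers bear out the first point. The back-of-envelope sketch below, with masses and speeds chosen here purely for illustration, computes the de Broglie wavelength λ = h/(mv) for an electron and for a beer bottle:

```python
# Back-of-envelope de Broglie wavelengths, lambda = h / (m * v).
# Masses and speeds are rough illustrative choices.
h = 6.626e-34  # Planck's constant in joule-seconds

def de_broglie(mass_kg: float, speed_m_per_s: float) -> float:
    """Return the de Broglie wavelength in meters."""
    return h / (mass_kg * speed_m_per_s)

# An electron at about one percent of the speed of light:
print(de_broglie(9.11e-31, 3e6))   # ~2.4e-10 m, about the size of an atom

# A half-kilogram beer bottle drifting at 1 m/s:
print(de_broglie(0.5, 1.0))        # ~1.3e-33 m, hopelessly unobservable
```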

From Locke to Hume 2 — Richard Tarnas

July 3, 2014
Hume concluded that the mind itself was only a bundle of disconnected perceptions, with no valid claims to substantial unity, continuous existence, or internal coherence, let alone to objective knowledge. All order and coherence, including that giving rise to the idea of the human self, were understood to be mind-constructed fictions. Human beings require such fictions to live, but the philosopher could not substantiate them.

A continuation of a reading selection from The Passion of the Western Mind

*******************************

But Berkeley in turn was followed by David Hume, who drove the empiricist epistemological critique to its final extreme, making use of Berkeley’s insight while turning it in a direction more characteristic of the modern mind — more reflective of that secular skepticism increasingly visible from Montaigne through Bayle and the Enlightenment.

As an empiricist who grounded all human knowledge in sense experience, Hume agreed with Locke’s general orientation, and he agreed too with Berkeley’s criticism of Locke’s theory of representation; but he disagreed with Berkeley’s idealist solution. Human experience was indeed of the phenomenal only, of sense impressions, but there was no way to ascertain what was beyond the sense impressions, spiritual or otherwise. Like Berkeley, Hume could not accept Locke’s views on representative perception, but neither could he accept Berkeley’s identification of external objects with internal ideas, rooted ultimately in the mind of God.

To begin his analysis, Hume made a distinction between sensory impressions and ideas: Sensory impressions are the basis of any knowledge, and they come with a force and liveliness that make them unique. Ideas are faint copies of those impressions. One can experience through the senses an impression of the color blue, and on the basis of this impression one can have an idea of that color recalled.

The question therefore arises, what causes the sensory impression? If every valid idea has a basis in a corresponding impression, then to what impression can the mind point for its idea of causality? None, Hume answered. If the mind analyzes its experience without preconception, it must recognize that in fact all its supposed knowledge is based on a continuous chaotic volley of discrete sensations, and that on these sensations the mind imposes an order of its own. The mind draws from its experience an explanation that in fact derives from the mind itself, not from the experience.

The mind cannot really know what causes the sensations, for it never experiences “cause” as a sensation. It experiences only simple impressions, atomized phenomena, and causality per se is not one of those simple impressions. Rather, through an association of ideas — which is only a habit of the human imagination — the mind assumes a causal relation that in fact has no basis in a sensory impression. All that man has to base his knowledge on is impressions in the mind, and he cannot assume to know what exists beyond those impressions.

Hence the presumed basis for all human knowledge, the causal relation, is never ratified by direct human experience. Instead, the mind experiences certain impressions that suggest they are caused by an objective substance existing continuously and independently of the mind; but the mind never experiences that substance, only the suggestive impressions. Similarly, the mind may perceive that one event, A, is repeatedly followed by another event, B, and on that basis the mind may project that A causes B.

But in fact all that is known is that A and B have been regularly perceived in close association. The causal nexus itself has never been perceived, nor can it be said to exist outside of the human mind and its internal habits. Cause must be recognized as merely the accident of a repeated conjunction of events in the mind. It is the reification of a psychological expectation, apparently affirmed by experience but never genuinely substantiated.

Even the ideas of space and time are ultimately not independent realities, as Newton assumed, but are simply the result of experiencing the coexistence or succession of particular objects. From repeated experiences of this kind, the notions of time and space are abstracted by the mind, but actually time and space are only ways of experiencing objects. All general concepts originate in this way, with the mind moving from an experience of particular impressions to an idea of relationship between those impressions, an idea that the mind then separates and reifies. But the general concept, the idea, is only the result of the mind’s habits of association. At bottom, the mind experiences only particulars, and any relation between those particulars is woven by the mind into the fabric of its experience. The intelligibility of the world reflects habits of the mind, not the nature of reality.

Part of Hume’s intention was to refute the metaphysical claims of philosophical rationalism and its deductive logic. In Hume’s view, two kinds of propositions are possible, one based purely on sensation and the other purely on the intellect. A proposition based on sensation concerns obvious matters of concrete fact (e.g., “it is a sunny day”), which are always contingent (they could have been different, though in fact they were not).

By contrast, a proposition based purely on intellect concerns relations between concepts (e.g., “all squares have four equal sides”), and these are always necessary — that is, their denial leads to self-contradiction. 

But the truths of pure reason, such as those of mathematics, are necessary only because they exist in a self-contained system with no mandatory reference to the external world. They are true only by logical definition, by making explicit what is implicit in their own terms, and they can claim no necessary relation to the nature of things. Hence the only truths of which pure reason is capable are tautological. Reason alone cannot assert a truth about the ultimate nature of things.

Moreover, not only does pure reason have no direct insight into metaphysical matters, neither can reason pronounce on the ultimate nature of things by inference from experience. One cannot know the supersensible by analyzing the sensible, because the only principle upon which one can base such a judgment — causality — is finally grounded only in the observation of particular concrete events in temporal succession. Without the elements of temporality and concreteness, causality is rendered meaningless. Hence all metaphysical arguments, which seek to make certain statements about all possible reality beyond temporal concrete experience, are vitiated at their basis. Thus for Hume, metaphysics was just an exalted form of mythology, of no relevance to the real world. 

But another and, for the modern mind, more disturbing consequence of Hume’s critical analysis was its apparent undermining of empirical science itself, for the latter’s logical foundation, induction, was now recognized as unjustifiable. The mind’s logical progress from many particulars to a universal certainty could never be absolutely legitimated: no matter how many times one observes a given event-sequence, one can never be certain that that event-sequence is a causal one and will always repeat itself in subsequent observations.

Just because event B has always been observed to follow event A in the past cannot guarantee it will always do so in the future. Any acceptance of that “law,” any belief that the sequence represents a true causal relationship, is only an ingrained psychological persuasion, not a logical certainty. The apparent causal necessity in phenomena is the necessity only of subjective conviction, of the human imagination controlled by its regular association of ideas. It has no objective basis. 

One can perceive the regularity of events, but not their necessity. The latter is no more than a subjective feeling induced by the experience of apparent regularity. In such a context, science is possible, but it is a science of the phenomenal only, of appearances registered in the mind, and its certainty is a subjective one, determined not by nature but by human psychology.

Paradoxically, Hume had begun with the intention of applying rigorous Newtonian “experimental” principles of investigation to man, to bring the successful empirical methods of natural science to a science of man. But he ended by casting into question the objective certainty of empirical science altogether. If all human knowledge is based on empiricism, yet induction cannot be logically justified, then man can have no certain knowledge. 

With Hume, the long-developing empiricist stress on sense perceptions, from Aristotle and Aquinas to Ockham, Bacon, and Locke, was brought to its ultimate extreme, in which only the volley and chaos of those perceptions exist, and any order imposed on those perceptions was arbitrary, human, and without objective foundation.

In terms of Plato’s fundamental distinction between “knowledge” (of reality) and “opinion” (about appearances), for Hume all human knowledge had to be regarded as opinion. Where Plato had held sensory impressions to be faint copies of Ideas, Hume held ideas to be faint copies of sensory impressions. In the long evolution of the Western mind from the ancient idealist to the modern empiricist, the basis of reality had been entirely reversed: Sensory experience, not ideal apprehension, was the standard of truth — and that truth was utterly problematic. Perceptions alone were real for the mind, and one could never know what stood beyond them.

Locke had retained a certain faith in the capacity of the human mind to grasp, however imperfectly, the general outlines of an external world by means of its combining operations. But for Hume, not only was the human mind less than perfect, it could never claim access to the world’s order, which could not be said to exist apart from the mind. That order was not inherent in nature, but was the result of the mind’s own associating tendencies.

If nothing was in the mind that did not intimately derive from the senses, and if all valid complex ideas were based on simple ideas derived from sensory impressions, then the idea of cause itself, and thus certain knowledge of the world, had to be critically reconsidered, for cause was never so perceived. It could never be derived from a simple direct impression. Even the experience of a continuous existing substance was only a belief produced by many impressions recurring in a regular way, producing the fiction of an enduring entity.

Pursuing this psychological analysis of human experience still further, Hume concluded that the mind itself was only a bundle of disconnected perceptions, with no valid claims to substantial unity, continuous existence, or internal coherence, let alone to objective knowledge. All order and coherence, including that giving rise to the idea of the human self, were understood to be mind-constructed fictions.

Human beings require such fictions to live, but the philosopher could not substantiate them. With Berkeley, there had been no necessary material basis for experience, though the mind had retained a certain independent spiritual power derived from God’s mind, and the world experienced by the mind derived its order from the same source. But with the more secular skepticism of Hume, nothing could be said to be objectively necessary: not God, not order, not causality, nor substantial existents, nor personal identity, nor real knowledge. All was contingent.

Man knows only phenomena, chaotic impressions; the order he perceives therein is imagined, for reasons of psychological habit and instinctual need, and then projected. Thus did Hume articulate philosophy’s paradigmatic skeptical argument, one that in turn was to stimulate Immanuel Kant to develop the central philosophical position of the modern era.

From Locke to Hume 1 — Richard Tarnas

July 2, 2014
The reason that objectivity exists, that different individuals continually perceive a similar world, and that a reliable order inheres in that world, is that the world and its order depend on a mind that transcends individual minds and is universal — namely, God’s mind.

Another reading selection from The Passion of the Western Mind

**************************

With Newton’s synthesis, the Enlightenment began with an unprecedented confidence in human reason, and the new science’s success in explicating the natural world affected the efforts of philosophy in two ways: first, by locating the basis of human knowledge in the human mind and its encounter with the physical world; and second, by directing philosophy’s attention to an analysis of the mind that was capable of such cognitive success.

It was above all John Locke, Newton’s contemporary and Bacon’s heir, who set the tone for the Enlightenment by affirming the foundational principle of empiricism: There is nothing in the intellect that was not previously in the senses (Nihil est in intellectu quod non antea fuerit in sensu). Stimulated to philosophy by reading Descartes, yet also influenced by the contemporary empirical science of Newton, Boyle, and the Royal Society, and affected as well by Gassendi’s atomistic empiricism, Locke could not accept the Cartesian rationalist belief in innate ideas.

In Locke’s analysis, all knowledge of the world must rest finally on man’s sensory experience. Through the combining and compounding of simple sensory impressions or “ideas” (defined as mental contents) into more complex concepts, through reflection after sensation, the mind can arrive at sound conclusions. Sense impressions and inner reflection on these impressions: “These two are the fountains of knowledge, from whence all the ideas we have, or can naturally have, do spring.”

The mind is at first a blank tablet, upon which experience writes. It is intrinsically a passive receptor of its experience, and receives atomistic sensory impressions that represent the external material objects causing them. From those impressions, the mind can build its conceptual understanding by means of its own introspective and compounding operations. The mind possesses innate powers, but not innate ideas. Cognition begins with sensation.

The British empiricist demand that sensory experience be the ultimate source of knowledge of the world set itself in opposition to the Continental rationalist orientation, epitomized in Descartes and variously elaborated by Spinoza and Leibniz, which held that the mind alone, through its recognition of clear, distinct, and self-evident truths, could achieve certain knowledge.

For the empiricists, such empirically ungrounded rationalism was, as Bacon had said, akin to a spider’s producing cobwebs out of its own substance. The characteristic imperative of the Enlightenment (soon to be carried by Voltaire from England to the Continent and the French Encyclopedists) held that reason requires sensory experience to know anything about the world other than its own concoctions.

The best criterion of truth was henceforth its genetic basis — in sense experience — not just its apparent intrinsic rational validity, which could be spurious. In subsequent empiricist thought, rationalism was increasingly delimited in its legitimate claims: The mind without sensory evidence cannot possess knowledge of the world, but can only speculate, define terms, or perform mathematical and logical operations.

Similarly, the rationalist belief that science could attain certain knowledge of general truths about the world was increasingly displaced by a less absolutist position, suggesting that science cannot make known the real structure of things but can only, on the basis of hypotheses concerning appearances, discover probable truths.

This nascent skepticism in the empiricist position was already visible in Locke’s own difficulties with his theory of knowledge. For Locke recognized there was no guarantee that all human ideas of things genuinely resembled the external objects they were supposed to represent. Nor was he able to reduce all complex ideas, such as the idea of substance, to simple ideas or sensations.

There were three factors in the process of human knowledge: the mind, the physical object, and the perception or idea in the mind that represents that object. Man knows directly only the idea in the mind, not the object. He knows the object only mediately, through the idea. Outside man’s perception is simply a world of substances in motion; the various impressions of the external world that man experiences in cognition cannot be absolutely confirmed as belonging to the world in itself.

Locke, however, attempted a partial solution to such problems by making the distinction (following Galileo and Descartes) between primary and secondary qualities — between those qualities that inhere in all extended material objects as objectively measurable, like weight and shape and motion, and those that inhere only in the subjective human experience of those objects, like taste and odor and color. While primary qualities produce ideas in the mind that genuinely resemble the external object, secondary qualities produce ideas that are simply consequences of the subject’s perceptual apparatus. By focusing on the measurable primary qualities, science can gain reliable knowledge of the material world.

But Locke was followed by Bishop Berkeley, who pointed out that if the empiricist analysis of human knowledge is carried through rigorously, then it must be admitted that all qualities that the human mind registers, whether primary or secondary, are ultimately experienced as ideas in the mind, and there can be no conclusive inference whether or not some of those qualities “genuinely” represent or resemble an outside object. 

Indeed, there can be no conclusive inference concerning even the existence of a world of material objects outside the mind producing those ideas. For there is no justifiable means by which one can distinguish between objects and sensory impressions, and thus no idea in the mind can be said to be “like” a material thing so that the latter is “represented” to the mind. Since one can never get outside of the mind to compare the idea with the actual object, the whole notion of representation is groundless. The same arguments Locke used against the representational accuracy of secondary qualities were equally applicable to primary qualities, for in the end both types of qualities must be regarded as experiences of the mind.

Locke’s doctrine of representation was therefore untenable. In Berkeley’s analysis, all human experience is phenomenal, limited to appearances in the mind. Man’s perception of nature is his mental experience of nature, and consequently all sense data must finally be adjudged as “objects for the mind” and not representations of material substances. In effect, while Locke had reduced all mental contents to an ultimate basis in sensation, Berkeley now further reduced all sense data to mental contents. 

The Lockean distinction between qualities that belong to the mind and qualities that belong to matter could not be sustained, and with this breakdown Berkeley, a bishop of the church, sought to overcome the contemporary tendency toward “atheistic Materialism” which he felt had unjustifiably arisen with modern science. The empiricist rightly affirms that all knowledge rests on experience.

But in the end, Berkeley pointed out, all experience is nothing more than experience — all mental representations of supposed material substances are finally ideas in the mind — and therefore the existence of a material world external to the mind is an unwarranted assumption.

All that can be known with certainty to exist is the mind and its ideas, including those ideas that seem to represent a material world. From a rigorously philosophical point of view, “to be” does not mean “to be a material substance”; rather, “to be” means “to be perceived by a mind” (esse est percipi).

Yet Berkeley held that the individual mind does not subjectively determine its experience of the world, as if the latter were a fantasy susceptible to any person’s whim of the moment. The reason that objectivity exists, that different individuals continually perceive a similar world, and that a reliable order inheres in that world, is that the world and its order depend on a mind that transcends individual minds and is universal — namely, God’s mind.

That universal mind produces sensory ideas in individual minds according to certain regularities, the constant experience of which gradually reveals to man the “laws of nature.” It is this situation that allows the possibility of science. Science is not hampered by the recognition of sense data’s immaterial basis, for it can continue its analysis of objects just as well with the critical knowledge that they are objects for the mind — not external material substances but recurrent groups of sense qualities.

The philosopher does not have to worry about the problems created by Locke’s representation of an external material reality that evaded certain corroboration, because the material world does not exist as such. The ideas in the mind are the final truth. Thus Berkeley strove to preserve the empiricist orientation and solve Locke’s representation problems, while also preserving a spiritual foundation for human experience and natural science.

Understanding Inequality — John Steele Gordon

June 11, 2014
To see how fundamental the microprocessor — a dirt-cheap computer on a chip — is, do a thought experiment. Imagine it’s 1970 and someone pushes a button causing every computer in the world to stop working. The average man on the street wouldn’t have noticed anything amiss until his bank statement failed to come in at the end of the month. Push that button today and civilization collapses in seconds. Cars don’t run, phones don’t work, the lights go out, planes can’t land or take off. That is all because the microprocessor is now found in nearly everything more complex than a pencil.

Extreme leaps in innovation, like the invention of the microprocessor, bring with them staggering fortunes. Mr. Gordon is the author of “An Empire of Wealth: The Epic History of American Economic Power” (HarperCollins, 2004). This is a reblog from a recent WSJ article.

****************************

Judging by the Forbes 400 list, the richest people in America have been getting richer very quickly. In 1982, the first year of the list, there were only 13 billionaires on it. A net worth of $75 million was enough to earn a spot. The 2013 list has nothing but billionaires, with $1.3 billion as the cutoff. Sixty-one American billionaires aren’t rich enough to make the list.

Many regard this as a serious problem, seeing the development of a plutocracy dominating the American economy through the sheer power of its wealth. The French economist Thomas Piketty, in his new book “Capital in the 21st Century,” calls for an 80% tax on incomes over $250,000 and a 2% annual tax on net worth in order to prevent an excessive concentration of wealth.
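(A note on the arithmetic, since 2% a year can sound modest: an annual tax on net worth compounds against the stock of wealth itself. The Python sketch below is purely illustrative; the starting fortune is hypothetical, and investment returns, spending, and avoidance are all ignored.)

    # Illustrative only: how a flat 2% annual net-worth tax compounds,
    # ignoring investment returns and all other flows.
    worth = 1_000_000_000          # hypothetical $1 billion fortune
    for _ in range(30):
        worth *= 1 - 0.02          # the levy takes 2% of whatever remains
    print(round(worth))            # about 545,000,000 after 30 years

On these assumptions the levy alone takes roughly half the fortune in about 34 years.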

That is a monumentally bad idea.

The great growth of fortunes in recent decades is not a sinister development. Instead it is simply the inevitable result of an extraordinary technological innovation, the microprocessor, which Intel brought to market in 1971. Seven of the 10 largest fortunes in America today were built on this technology, as have been countless smaller ones. These new fortunes unavoidably result in wealth being more concentrated at the top.

But no one is poorer because Bill Gates, Larry Ellison, et al., are so much richer. These new fortunes came into existence only because the public wanted the products and services — and lower prices — that the microprocessor made possible. Anyone who has found his way home thanks to a GPS device or has contacted a child thanks to a cellphone appreciates the awesome power of the microprocessor. All of our lives have been enhanced and enriched by the technology.

This sort of social transformation has happened many times before. Whenever a new technology comes along that greatly reduces the cost of a fundamental input to the economy, or makes possible what had previously been impossible, there has always been a flowering of great new fortunes – often far larger than those that came before. The technology opens up many new economic niches, and entrepreneurs rush to take advantage of the new opportunities.

The full-rigged ship that Europeans developed in the 15th century, for instance, was capable of reaching the far corners of the globe. Soon gold and silver were pouring into Europe from the New World, and a brisk trade with India and the East Indies sprang up. The Dutch exploited the new trade so successfully that the historian Simon Schama entitled his 1987 book on this period of Dutch history “The Embarrassment of Riches.”

Or consider work-doing energy. Before James Watt’s rotary steam engine, patented in 1781, only human and animal muscles, water mills and windmills could supply power. But with Watt’s engine it was suddenly possible to input vast amounts of very-low-cost energy into the economy. Combined with the factory system of production, the steam engine sparked the Industrial Revolution, causing growth — and thus wealth as well as job creation — to sharply accelerate.

By the 1820s so many new fortunes were piling up that the English social critic John Sterling was writing, “Wealth! Wealth! Wealth! Praise to the God of the 19th century! The Golden Idol! The mighty Mammon!” In 1826 the young Benjamin Disraeli coined the word millionaire to denote the holders of these new industrial fortunes.

Transportation is another fundamental input. But before the railroad, moving goods overland was extremely, and often prohibitively, expensive. The railroad made it cheap. Such fortunes as those of the railroad-owning Vanderbilts, Goulds and Harrimans became legendary for their size.

The railroad also made possible many great fortunes that had nothing, directly, to do with railroads at all. The railroads made national markets possible and thus huge economies of scale — to the benefit of everyone at every income level. Many merchandising fortunes, such as F.W. Woolworth’s five-and-dime, could not have happened without the cheap and quick transportation of goods.

Many of the new fortunes in America’s Gilded Age in the late 19th century were based on petroleum, by then inexpensive and abundant thanks to Edwin Drake’s drilling technique. Steel, suddenly made cheap thanks to the Bessemer converter, could now have a thousand new uses. Oil and steel, taken together, made the automobile possible. That produced still more great fortunes, not only in car manufacturing, but also in rubber, glass, highway construction and such ancillary industries.

Today the microprocessor, the most fundamental new technology since the steam engine, is transforming the world before our astonished eyes and inevitably creating huge new fortunes in the process.

To see how fundamental the microprocessor — a dirt-cheap computer on a chip — is, do a thought experiment. Imagine it’s 1970 and someone pushes a button causing every computer in the world to stop working. The average man on the street won’t have noticed anything amiss until his bank statement failed to come in at the end of the month. Push that button today and civilization collapses in seconds. Cars don’t run, phones don’t work, the lights go out, planes can’t land or take off. That is all because the microprocessor is now found in nearly everything more complex than a pencil.

The number of new economic niches created by cheap computing power is nearly limitless. Opportunities in software and hardware over the past 30 years have produced many billionaires — but they’re not all in Silicon Valley. The Walton family collectively is worth, according to Forbes, $144.7 billion, thanks to the world’s largest retail business. But Wal-Mart couldn’t exist without the precise inventory controls that the microprocessor makes possible.

The “income disparity” between the Waltons and the patrons of their stores is as pronounced as critics complain, but then again the lives of countless millions of Wal-Mart shoppers have been materially enriched by the stores’ staggering array of affordable goods.

Just as the railroad, the most important secondary technology of the steam engine, produced many new fortunes, the Internet is producing enormous numbers of them, from the likes of Amazon, Facebook and Twitter. When Twitter went public last November, it created about 1,600 newly minted millionaires.

Any attempt to tax away new fortunes in the name of preventing inequality is certain to have adverse effects on further technology creation and niche exploitation by entrepreneurs — and harm job creation as a result. The reason is one of the laws of economics: Potential reward must equal the risk or the risk won’t be taken.

And the risks in any new technology are very real in the highly competitive game that is capitalism. In 1903, 57 automobile companies opened for business in this country, hoping to exploit the new technology. Only the Ford Motor Co. survived the Darwinian struggle to succeed. As Henry Ford’s fortune grew to dazzling levels, some might have decried it, but they also should have rejoiced as he made the automobile affordable for everyman.

**********************

Some readers took exception:

John Steele Gordon’s post is short on facts. For example, he could have cited figures showing how the share of the country’s wealth held by the top 1% has gone from approximately 25% in 1981 to approximately 35% in 2010.

He could also have mentioned that the Gini coefficient, a broad-based, widely accepted measure of income inequality, for the U.S. was higher than in Sweden, Norway, Austria, Germany, Denmark, Italy, Canada, France, Switzerland, the United Kingdom, Japan, Israel, Iran, etc.
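(For readers who don’t know the measure: the Gini coefficient runs from 0, perfect equality, to 1, where one person holds everything. A minimal Python sketch of the standard closed-form computation over a sorted sample, offered only to make the statistic concrete:)

    def gini(incomes):
        # Gini coefficient of a sample: 0 = perfect equality,
        # values near 1 = extreme concentration.
        xs = sorted(incomes)
        n = len(xs)
        total = sum(xs)
        # Closed form over a sorted sample:
        #   G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
        weighted = sum(i * x for i, x in enumerate(xs, start=1))
        return 2 * weighted / (n * total) - (n + 1) / n

    print(gini([10, 10, 10, 10]))   # 0.0  (perfect equality)
    print(gini([0, 0, 0, 100]))     # 0.75 (one of four holds everything)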

I agree with Mr. Gordon that capitalism has benefited more people than any other widely practiced economic system. But the question is, “Have its benefits been fairly distributed?” I think not. Personally, I tend to agree with a recent pope who condemned what he called “rapacious capitalism.” This is what it seems to me we now have in the U.S.

Bernard Schrautemeier in the WSJ

St. Joseph, Mo.

Mr. Gordon contends that allowing a few members of society to accumulate massive wealth does no harm and is the only way to sustain the incentive to excel. This seems to be a widely held view, but it is flat wrong in my opinion.

If society claimed, in support of the common good, another 5% of GDP, and in doing so, raised marginal income-tax rates enough to claim 75% of all annual incomes above $25 million, is there really anyone out there who believes that entrepreneurs, inventors, hedge-fund managers, executives, rock stars and star athletes would stop working, inventing, playing and performing?
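(One mechanical point worth making concrete: a 75% marginal rate above $25 million claims 75% only of the slice of income above that threshold, not of the whole income. A small Python sketch; the 35% lower-bracket rate and the $100 million income are hypothetical numbers chosen purely for illustration:)

    def tax_due(income, brackets):
        # Marginal schedule: each rate applies only to the slice
        # of income that falls inside its own bracket.
        tax = 0.0
        for lower, upper, rate in brackets:
            if income > lower:
                tax += (min(income, upper) - lower) * rate
        return tax

    # Hypothetical schedule: 35% up to $25M, then 75% on the excess.
    schedule = [(0, 25_000_000, 0.35), (25_000_000, float("inf"), 0.75)]
    print(tax_due(100_000_000, schedule))   # 65,000,000.0, so the earner keeps $35M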

Unusually talented people want to be the leaders and deciders in society and will strive to be recognized as such. U.S. taxes are much lower than those in most other developed countries and could rise substantially without denying high achievers the princely incomes they desire. However, I don’t think we need to create a society unable to fund its infrastructure, educate its young, care for its sick and protect its environment to permit successful people to realize breathtakingly large annual incomes and accumulate wealth that is almost beyond imagination.

R.L. Crandall in the WSJ

I think what doesn’t get dealt with in any of these opinions is the benefit of massive wealth to a society like the United States. While European governments may claw back a great deal of the wealth of their citizenry, the US, as is being pointed out, allows this great disparity in wealth to pass through almost untouched.

All in all I think I prefer to have my fellow citizens dispense with their wealth themselves. I mean, what do you do with such wealth after you have taken care of yourself and your offspring? Usually it goes to some foundation that works at disposing of it to the lasting memory of your name. The work of great foundations seems to me far preferable to the kind of government waste and inefficiency we are so familiar with. But that’s just me.

dj

h1

CRAFTSMAN: Raymond Carver – George Packer

April 17, 2014
That was the end of Bad Ray and the beginning of Good Raymond. He had ten more years before a lifetime of smoking finally caught up with him and he died at fifty, in 1988. During that decade he found happiness with a poet. He wrote some of his best stories and escaped the trap of self-parody that had begun to be called minimalism, turning to more fullness of expression in the service of a more generous vision. He became famous and entered the middle class. He received prestigious appointments and won major prizes, a literary hero redeemed from hell. He walked with the happy carefulness of someone pardoned on the verge of execution.

A few pages from George Packer’s The Unwinding, a nonfiction work that plumbs the dissolution of American lives over the past thirty years and the gradual decline of what Charles Krauthammer calls the social compact of family, Church, and community – be it the schools, the Boy Scouts, the Lions Club, the Grange, whatever: neighbors-helping-neighbors. There has been nothing left in the ravaged remains of secular America except the government. Raymond Carver was the chronicler of much of that in his short stories of people set adrift. One of my most popular posts on payingattentiontothesky was a retelling of his short story What We Talk About When We Talk About Love.

*************************************

Ray was a drinker. He picked it up from C.R., his father. C.R. was a saw filer at a lumber mill in the Yakima Valley and a good storyteller. Ray picked that up, too. C.R. could go for months without sipping a beer, then he would disappear from home for a while, and Ray and his mother and younger brother would sit down to dinner with a sense of doom. That was how Ray drank: once he started, he couldn’t stop.

Ray grew up in the 1940s and ’50s. He was a tall, fat boy. He stood hunched over, with an arm or leg bent at a bad angle, and his eyes had a fat boy’s hooded squint even after he lost the weight. His pants and shirts looked like gabardine, what an unemployed forty-year-old would wear. He spoke in a faint mumble so you had to listen close, but it often turned out that he had said something funny or sharp.

The Carvers lived in four rooms in a seven-hundred-square-foot box of a house on a concrete slab. There was nowhere to be alone and they lived together like strangers.

Ray loved to shoot geese and fish for trout along the Columbia River. He liked to read the pulps and outdoor magazines. One day, he told the man who took him along hunting that he had sent a story to one of the magazines and it had come back. That was why Ray had looked nervous all morning.

“Well, what did you write?” the man said.

“I wrote a story about this wild country,” Ray said, “the flight of the wild geese and hunting the geese and everything in this remote country down here. It’s not what appeals to the public, they said.”

But he didn’t give up.

Ray saw an ad in Writer’s Digest for the Palmer Institute of Authorship in Hollywood. It was a correspondence course. C.R. paid the twenty-five-dollar enrollment fee and Ray started doing the sixteen installments, but he ran out of money for the monthly payments. After he received his high school diploma, his parents expected him to go to work in the sawmill. That wasn’t how things went.

Ray got a pretty girl named Maryann pregnant. She was going to study at the University of Washington, but Ray and Maryann were crazy about each other, so they got married instead. In 1957 their daughter was born in a hospital two floors below the psychiatric ward where C.R. was being treated for a nervous breakdown. A year later a baby boy arrived. Ray was twenty and Maryann was eighteen, and that was their youth.

They began to wander. They had great dreams and believed that hard work would make those dreams come true. Ray was going to be a writer. Everything else would come after that.

They moved around the West and they never stopped. They lived in Chico and Paradise and Eureka and Arcata and Sacramento and Palo Alto and Missoula and Santa Cruz and Cupertino. Every time they started to settle in, Ray would get restless and they would move on to somewhere else. The family’s main support was Maryann. She packed fruit, waited tables, sold encyclopedias door-to-door. Ray worked at a drugstore, a sawmill, a service station, and a stockroom, and as a night janitor at a hospital. The work was not ennobling. He would come home too wiped out to do anything.

Ray wanted to write a novel. But a man who was trying to wash six loads of clothes in a Laundromat while his wife was serving food somewhere and the kids were waiting for him to come pick them up somewhere else and it was getting late and the woman ahead of him kept putting more dimes in her dryer — that man could never write a novel. To do that, he would need to be living in a world that made sense, a world that stayed fixed in one place so that he could describe it accurately. That wasn’t Ray’s world.

In Ray’s world the rules changed every day, and he couldn’t see past the first of next month, when he would have to find money for rent and school clothes. The most important fact of his life was that he had two children, and he would never get out from under the baleful responsibility of having them. Hard work, good intentions, doing the right things — these would not be enough, things would not get better. He and Maryann would never get their reward. That was the other thing he understood in the Laundromat. And somewhere along the way, his dreams started to go bust.

Without the heart to write anything long, which might have brought in real money, and with the deep frustration of seeing no way out, he could write only poems, and very short stories. Then he rewrote them again and again, sometimes over many years.

The stories were about people who did not succeed. That had been Ray’s experience, and those were his people. His characters were unemployed salesmen, waitresses, mill hands. They lived nowhere in particular, in bedrooms and living rooms and front yards where they couldn’t get away from one another or themselves and everyone was alone and adrift.

Their names weren’t fancy — Earl, Arlene, L.D., Rae — and they seldom had more than one, if that. Nothing like religion or politics or community surrounded them, except the Safeway and the bingo hall. Nothing happening anywhere in the world, there was only a boy fighting a fish, a wife selling a used car, two couples talking themselves into paralysis. Ray left almost everything out.

In one story, a wife learns that her husband, just back from a fishing trip with his buddies, left the brutalized corpse of a girl lying in the river for three days before reporting it.

My husband eats with good appetite but he seems tired, edgy. He chews slowly, arms on the table, and stares at something across the room. He looks at me and looks away again. He wipes his mouth on the napkin. He shrugs and goes on eating. Something has come between us though he would like me to believe otherwise.

“What are you staring at me for?” he asks. “What is it?” he says and puts his fork down.

“Was I staring?” I say and shake my head stupidly, stupidly.

His characters spoke a language that sounded ordinary, except that every word echoed with the strange, and in the silences between words a kind of panic rose. These lives were trembling over a void.

“Most of my characters would like their actions to count for something,” Ray once said. “But at the same time they’ve reached the point — many people do — that they know it isn’t so. It doesn’t add up any longer. The things you once thought important or even worth dying for aren’t worth a nickel now. It’s their lives they’ve become uncomfortable with, lives they see breaking down. They’d like to set things right, but they can’t.”

Ray was doing things the long, hard way, going against every trend of the period. In those years, the short story was a minor literary form. Realism seemed played out. The writer Ray brought most quickly to mind, Hemingway, was at the start of a posthumous eclipse. In the sixties and seventies, the most discussed writers — Mailer, Bellow, Roth, Updike, Barth, Wolfe, Pynchon — reached for overstatement, not restraint, writing sprawling novels of intellectual, linguistic, or erotic excess, and high-octane journalism. There was a kind of competition to swallow American life whole — to mirror and distort in prose the social facts of a country that had a limitless capacity for flux and shock.

Ray, whose hero was Chekhov, moved in the opposite direction from literary trends and kept faith with a quieter task, following Ezra Pound’s maxim that “fundamental accuracy of statement is the one sole morality of writing.” By paying close attention to the lives of marginal, lost people, people who scarcely figured and were rarely taken seriously in contemporary American fiction (if they appeared anywhere, it was in the paintings of Edward Hopper), Ray had his fingers on the pulse of a deeper loneliness. He seemed to know, in the unintentional way of a fiction writer, that the country’s future would be most unnerving in its very ordinariness, in the late-night trip to the supermarket, the yard sale at the end of the line. He sensed that beneath the surface of life there was nothing to stand on.

In the early seventies, Maryann got her degree and began to teach high school English. That freed Ray to put his effort into writing and finding a college teaching job. He began publishing stories in big East Coast magazines. The Carvers bought their first house, in the future Silicon Valley. There was a nonstop party scene with other working-class writers and their wives in the area. Things were looking up for the Carvers. That was when everything went to pieces.

The children became teenagers, and Ray felt that they now held the reins. Ray and Maryann each had an affair. They went into bankruptcy twice. He was convicted of lying to the state of California on his unemployment claim and almost sent to prison. Instead, he went in and out of detox. His drinking turned poisonous, with long blackouts. Maryann tried to keep up in order not to lose him. Ray was a quiet, spooked-looking man, but with the scotch he grew menacing, and one night, after Maryann flirted with a friend, Ray hit her with a wine bottle. She lost 60 percent of her blood from the severed artery by her ear and was taken to the emergency room while Ray hid in the kitchen.

A few months later, in 1976, his first book of stories, Will You Please Be Quiet, Please? — written over nearly two decades — was published in New York. The dedication page said: THIS BOOK IS FOR MARYANN.

Ray was a drinker and a writer. The two had always gone along separate tracks. What the first self fled or wrecked or rued or resented, the second stared into high art. But now his writing dwindled to nothing.

“The time came and went when everything my wife and I held sacred or considered worthy of respect, every spiritual value, crumbled away,” he later wrote. “Something terrible had happened to us.” He never intended to become an alcoholic, a bankrupt, a cheat, a thief, and a liar. But he was all those. It was the 1970s, and a lot of people were having a good time, but Ray knew ahead of the years that the life of partying and drinking poor was a road into darkness.

In the middle of 1977 he went to live by himself on the remote California coast near Oregon. It was fear for his writing, not for his own life or the life of his family, that made him take his last drink there. Sober, he began to write again. In 1978 he and Maryann split.

That was the end of Bad Ray and the beginning of Good Raymond. He had ten more years before a lifetime of smoking finally caught up with him and he died at fifty, in 1988. During that decade he found happiness with a poet. He wrote some of his best stories and escaped the trap of self-parody that had begun to be called minimalism, turning to more fullness of expression in the service of a more generous vision. He became famous and entered the middle class. He received prestigious appointments and won major prizes, a literary hero redeemed from hell. He walked with the happy carefulness of someone pardoned on the verge of execution.

The turn to flash and glitz in the eighties worked in his favor. During the Reagan years he was named the chronicler of blue-collar despair. The less articulate his characters, the more his many new readers loved the creator. If the sinking working class fascinated and frightened them, they could imagine that they knew its spirit through his stories, and so they fetishized him.

The New York literary scene, hot and flush again, took him to its heart. He became a Vintage Contemporary alongside writers in their twenties who had learned to mimic the austere prose without having first forged it in personal fires. He posed for jacket portraits with some of the old menace, like a man who had wandered into a book party from the scary part of town.

“They sold his stories of inadequate, failed, embarrassed and embarrassing men, many of them drunkards, all of them losers, to yuppies,” one of his old friends said. “His people confirmed the yuppies in their sense of superiority.”

But every morning, Good Raymond got up, made coffee, sat at his desk, and did exactly what Bad Ray had always done. After all, they were the same craftsman. The distractions were different now, but he was still trying to set down what he saw and felt with utmost accuracy, and in the American din, that small thing was everything.

h1

SEX 2 From Roger Scruton’s An Intelligent Person’s Guide to Philosophy

April 10, 2014
Life in the actual world is difficult and embarrassing. Most of all it is difficult and embarrassing in our confrontation with other people who, by their very existence as subjects, rearrange things in defiance of our will. It requires a great force, such as the force of sexual desire, to overcome the self-protection that shields us from intimate encounters. It is tempting to take refuge in substitutes, which neither embarrass us nor resist the impulse of our spontaneous cravings. The habit grows of creating a compliant world of desire, in which unreal objects become the focus of real emotions, and the emotions themselves are rendered incompetent to participate in the building of personal relations. The fantasy blocks the passage to reality, which becomes inaccessible to the will. In this process the fantasy Other, since he is entirely the instrument of my will, becomes an object for me, one among many substitutes defined purely in terms of a sexual use.

The intentionality of desire is the topic for a book, and since I have written that book, I shall confine myself here to a few remarks. My hope is to put philosophy to its best use, which is that of shoring up the human world against the corrosive seas of pseudo-science. In true sexual desire, the aim is union with the other, where ‘the other’ denotes a particular person, with a particular perspective on my actions.

The reciprocity which is involved in this aim is achieved in a state of mutual arousal, and the interpersonal character of arousal determines the nature of the ‘union’ that is sought. All desire is compromising, and the choice to express it or to yield to it is an existential choice, in which the self is, or may be, in danger.

Not surprisingly, therefore, the sexual act is surrounded by prohibitions; it brings with it a weight of shame, guilt and jealousy, as well as the heights of joy and happiness. It is inconceivable that a morality of pure permission should issue from the right conception of such a compromising force, and, as I argue in Sexual Desire, the traditional morality, in which monogamous heterosexual union, enshrined in a vow rather than a contract, is the norm, shows far more sensitivity to what is at stake than any of the known alternatives.

If it is so difficult now to see the point of that morality, it is in part because human sexual conduct has been redescribed by the pseudo-science of sexology, and as a result not only robbed of its interpersonal intentionality, but also profoundly demoralized. In redescribing the human world in this way, we also change it. We introduce new forms of sexual feeling – shaped by the desire for an all-comprehending permission. The sexual sacrament gives way to a sexual market; and the result is a fetishism of the sexual commodity.

Richard Posner, for example, in his worthless but influential book entitled Sex and Reason (but which should have been called Sex and Instrumental Reason), opens his first chapter with the following sentence: ‘There is sexual behavior, having to do mainly with excitation of the sexual organs.’ In reality, of course, sexual behaviour has to do with courtship, desire, love, jealousy, marriage, grief, joy and intrigue. Such excitement as occurs is excitement of the whole person. As for the sexual organs, they can be as ‘excited’ (if that is the word) by a bus journey as by the object of desire. Nevertheless, Posner’s description of desire is necessary, if he is to fulfil his aim of deriving a morality of sexual conduct from the analysis of cost and benefit (which, apparently, is what is meant by ‘reason’). So what are the ‘costs’ of sexual gratification?

One is the cost of search. It is zero for masturbation, considered as a solitary activity, which is why it is the cheapest of practices. (The qualification is important: ‘mutual masturbation’, heterosexual or homosexual, is a form of nonvaginal intercourse, and its search costs are positive.)

Posner proceeds to consider hypothetical cases: for example, the case where a man sets a ‘value’ of ‘twenty’ on ‘sex’ with a ‘woman of average attractiveness’, and a ‘value’ of ‘two’ on ‘sex’ with a ‘male substitute’. If you adopt such language, then you have made woman (and man too) into a sex object and sex into a commodity. You have redescribed the human world as a world of things; you have abolished the sacred, the prohibited and the protected, and presented sex as a relation between aliens: ‘Th’expence of spirit in a waste of shame’, in Shakespeare’s famous words. Posner’s language is opaque to what is wanted in sexual desire; it reduces the other person to an instrument of pleasure, a means of obtaining something that could have been provided equally by another person, by an animal, by a rubber doll or a piece of Kleenex.

Well, you might say, why not, if people are happier that way? In whose interest is it, to retain the old form of desire, with its individualizing intentionality, its hopeless yearnings, its furies and jealousies, its lifelong commitments and lifelong griefs?

Modern philosophers shy away from such questions, although they were much discussed in the ancient world. Rather than consider the long-term happiness and fulfillment of the individual, the modern philosopher tends to reduce the problem of sexual morality to one of rights — do we have a right to engage in, or to forbid, this or that sexual practice?

From such a question liberal conclusions follow as a matter of course; but it is a question that leaves the ground of sexual morality unexplored. This ground is not to be discovered in the calculus of rights and duties, but in the theory of virtue. What matters in sexual morality is the distinction between virtuous and vicious dispositions. I have already touched on this distinction in the last chapter, when considering the basis of our moral thinking. I there emphasized the role of virtue in creating the foundations of moral order. But it is also necessary, if we are to give objective grounds for the pursuit of virtue, to show how the happiness and fulfilment of the person are furthered by virtue and jeopardized by vice.

This, roughly speaking, is the task that Aristotle set himself in the Nicomachean Ethics, in which he tried to show that the deep questions of morality concern the education of the moral being, rather than the rules governing his adult conduct. Virtue belongs to character, rather than to the rules of social dialogue, and arises through an extended process of moral development. The virtuous person is disposed to choose those courses of action which contribute to his flourishing – his flourishing, not just as an animal, but as a rational being or person, as that which he essentially is. In educating a child I am educating his habits, and it is therefore clear that I shall always have a reason to inculcate virtuous habits, not only for my sake, but also for his own.

At the same time, we should not think of virtue as a means only. The virtuous person is the one who has the right choice of ends. Virtue is the disposition to want, and therefore to choose, certain things for their own sakes, despite the warring tendency of appetite. Courage, for example, is the disposition to choose the honorable course of action, in face of danger. It is the disposition to overcome fear, for the sake of that judged to be right. All rational beings have an interest in acquiring courage, since without it they can achieve what they really want only by luck, and only in the absence of adversity.

Sexual virtue is similar: the disposition to choose the course of action judged to be right, despite temptation. Education should be directed towards the special kind of temperance which shows itself, sometimes as chastity, sometimes as fidelity, sometimes as passionate desire, according to the ‘right judgement’ of the subject. The virtuous person desires the person whom he may also love, who can and will return his desire, and to whom he may commit himself. In the consummation of such a desire there is neither shame nor humiliation, and the ‘nuptiality’ of the erotic impulse finds the space that it needs in order to flourish.

The most important feature of traditional sexual education is summarized in anthropological language as the ‘ethic of pollution and taboo’. The child was taught to regard his body as sacred, and as subject to pollution by misperception or misuse. The sense of pollution is by no means a trivial side-effect of the ‘bad sexual encounter’: it may involve a penetrating disgust, at oneself, one’s body, one’s situation, such as is experienced by the victim of rape. Those sentiments express the tension contained within our experience of embodiment.

At any moment we can become ‘mere body’, the self driven from its incarnation, and its habitation ransacked. The most important root idea of sexual morality is that I am in my body, not as a ‘ghost in the machine’, but as an incarnate self. My body is identical with me: subject and object are merely two aspects of a single thing, and sexual purity is the guarantee of this.

Sexual virtue does not forbid desire: it simply ensures the status of desire as an interpersonal feeling. The child who learns ‘dirty habits’ detaches his sex from himself, sets it outside himself as something curious and alien in the world of objects. His fascinated enslavement to the body is also a withering of desire, a scattering of erotic energy and a loss of union with the other. Sexual virtue sustains the subject of desire, making him present as a self in the very act which overcomes him.

Traditional sexual education also involved a sustained war against fantasy. Fantasy plays an important part in our sexual doings, and even the most passionate and faithful lover may, in the act of love, rehearse to himself other scenes of sexual abandon than the one in which he is engaged. Nevertheless, there is truth in the Freudian contrast between fantasy and reality, and in the belief that the first is in some way destructive of the second. Fantasy replaces the real, resistant, objective world with a pliant substitute – and that, indeed, is its purpose.

Life in the actual world is difficult and embarrassing. Most of all it is difficult and embarrassing in our confrontation with other people who, by their very existence as subjects, rearrange things in defiance of our will. It requires a great force, such as the force of sexual desire, to overcome the self-protection that shields us from intimate encounters. It is tempting to take refuge in substitutes, which neither embarrass us nor resist the impulse of our spontaneous cravings.

The habit grows of creating a compliant world of desire, in which unreal objects become the focus of real emotions, and the emotions themselves are rendered incompetent to participate in the building of personal relations. The fantasy blocks the passage to reality, which becomes inaccessible to the will. In this process the fantasy Other, since he is entirely the instrument of my will, becomes an object for me, one among many substitutes defined purely in terms of a sexual use.

The sexual world of the fantasist is a world without subjects, in which others appear as objects only. And should the fantasy take possession of him so far as to require that another person submit to it, the result is invariably indecent, tending to rape. The words that I quoted from Richard Posner are indecent in just the way that one must expect, when people no longer see the object of desire as a subject, wanted as such.

Sexual morality returns us, then, to the great conundrum around which these chapters have revolved: the conundrum of the subject, and his relation to the world of space and time.

h1

SEX 1 From An Intelligent Person’s Guide to Philosophy — Roger Scruton

April 9, 2014
Precisely because desire proposes a relation between subjects, it forces both parties to account for themselves. Unwanted advances are therefore also forbidden by the one to whom they might be addressed, and any transgression is felt as a contamination. That is why rape is so serious a crime: it is an invasion of the sanctuary which harbors the victim’s freedom, and a dragging of the subject into the world of things. If you describe desire in the scientistic terms used by Freud and his followers, the outrage and pollution of rape become impossible to explain. In fact, just about everything in human sexual behavior becomes impossible to explain – and it is only what might be called the ‘charm of disenchantment’ that leads people to receive these daft descriptions as the truth.

I read a lot of Roger Scruton, simply because he makes such great sense. Nowhere does modern liberal philosophy tank into meaninglessness more completely than over sex-related issues, from abortion to women’s issues to gay marriage; you can’t spend more than three minutes with these masters of the universe without hearing arguments that a well-reasoned piece by Peter Kreeft or Roger Scruton wouldn’t easily demolish. Read on.

*********************************

Sex is the sphere in which the animal and the personal meet, and where the clash between the scientific and the personal view of things is felt most keenly. It therefore provides the test of any serious moral philosophy, and of any viable theory of the human world.

Until the late nineteenth century it was almost impossible to discuss sex, except as part of erotic love, and even then convention required that the peculiarities of sexual desire remain unmentioned. When the interdiction was finally lifted – by such writers as Krafft-Ebing and Havelock Ellis – it was through offering a ‘scientific’ approach to a widespread natural phenomenon. Such was the prestige of science that any investigation conducted in its name could call on powerful currents of social approval, which were sufficient to overcome the otherwise crippling reluctance to face the realities of sexual experience.

As a result, modern discussions of this experience have been conducted in a ‘scientized’ idiom which, by its very nature, removes sex from the sphere of interpersonal relations, and remodels it as a relation between objects. Freud’s shocking revelations, introduced as neutral, ‘scientific’ truths about the human condition, were phrased in the terms which are now more or less standard.

According to Freud, the aim of sexual desire is ‘union of the genitals in the act known as copulation, which leads to a release of the sexual tension and a temporary extinction of the sexual instinct – a satisfaction analogous to the sating of hunger’. This scientistic image of sexual desire gave rise, in due course, to the Kinsey report, and is now part of the standard merchandise of disenchantment. It seems to me that it is entirely false, and could become true only by so affecting our sexual emotions, as to change them into emotions of another kind.

What exactly is sexual pleasure? Is it like the pleasure of eating and drinking? Like that of lying in a hot bath? Like that of watching your child at play? Clearly it is both like and unlike all of these. It is unlike the pleasure of eating, in that its object is not consumed. It is unlike the pleasure of the bath, in that it involves taking pleasure in an activity, and in the other person who joins you. It is unlike that of watching your child at play, in involving bodily sensations and a surrender to physical desire.

Sexual pleasure resembles the pleasure of watching something, however, in a crucial respect: it has intentionality. It is not just a tingling sensation; it is a response to another person, and to the act in which you are engaged with him or her. The other person may be imaginary: but it is towards a person that your thoughts are directed, and pleasure depends on thought.

This dependency on thought means that sexual pleasure can be mistaken, and ceases when the mistake is known. Although I would be a fool not to jump out of the soothing bath after being told that what I took for water is really acid, this is not because I have ceased to feel pleasurable sensations in my skin. In the case of sexual pleasure, the discovery that it is an unwanted hand that touches me at once extinguishes my pleasure. The pleasure could not be taken as confirming the hitherto unacknowledged sexual virtues of some previously rejected person.

A woman who makes love to the man who has disguised himself as her husband is no less the victim of rape, and the discovery of her mistake can lead to suicide. It is not simply that consent obtained by fraud is not consent; it is that the woman has been violated, in the very act which caused her pleasure.

What makes a pleasure into a sexual pleasure is the context of arousal. And arousal is not the same as tumescence. It is a ‘leaning towards’ the other, a movement in the direction of the sexual act, which cannot be separated, either from the thoughts on which it is founded, or from the desire to which it leads. Arousal is a response to the thought of the other as a self-conscious agent, who is alert to me, and who is able to have ‘designs’ on me. This is evident from the caress and the glance of desire.

A caress of affection is a gesture of reassurance – an attempt to place in the consciousness of the other an image of one’s own tender concern for him. Not so, however, the caress of desire, which outlines the body of the recipient; its gentleness is not that of reassurance only, but that of exploration. It aims to fill the surface of the other’s body with a consciousness of your interest – interest, not only in the body, but in the person as embodied. This consciousness is the focal point of the other’s pleasure. Sartre writes (Being and Nothingness) of the caress as ‘incarnating’ the other: as though, by your action, you bring the soul into the flesh (the subject into the object) and make it palpable.

The caress is given and received with the same awareness as the glance is given and received. They each have an epistemic component (a component of anticipation and discovery). It is hardly surprising, given this, that the face should have such supreme and overriding importance in the transactions of sexual desire. On the scientistic view of sex it is hard to explain why this should be so – why the face should have the power to determine whether we will, or will not, be drawn to seek pleasure in another part.

But of course, the face is the picture of the other’s subjectivity: it shines with the light of self, and it is as an embodied subject that the other is wanted. Perversion and obscenity involve the eclipse of the subject, as the body and its mechanism are placed in frontal view. In obscenity flesh becomes opaque to the self which lives in it: that is why there is an obscenity of violence as well as an obscenity of sex.

A caress may be either accepted or rejected: in either case, it is because it has been ‘read’ as conveying a message sent from you to me. I do not receive this message as an explicit act of meaning something, but as a process of mutual discovery, a growing to awareness in you which is also a coming to awareness in me. In the first impulse of arousal, therefore, there is the beginning of that chain of reciprocity which is fundamental to interpersonal attitudes. She conceives her lover conceiving her conceiving him … not ad infinitum, but to the point of mutual recognition of the other, as fully present in his body.

Sexual arousal has, then, an epistemic and interpersonal intentionality. It is a response to another individual, based in revelation and discovery, and involving a reciprocal and co-operative heightening of the common experience of embodiment. It is not directed beyond the other, to the world at large; nor is it transferable to a rival object who might ‘do just as well’. Of course, arousal may have its origin in highly generalized thoughts, which flit libidinously from object to object.

But when these thoughts have concentrated into the experience of arousal their generality is put aside; it is then the other who counts, and his particular embodiment. Not only the other, but I myself, and the sense of my bodily reality in the other’s perspective. Hence arousal, in the normal case, seeks seclusion in a private place, where only the other is relevant to my attention. Indeed, arousal attempts to abolish what is not private – in particular to abolish the perspective of the onlooker, of the ‘third person’ who is neither you nor I.

I explored some of the ways in which the subject is realized in the world of objects, and placed great emphasis on intention, and the distinction between predicting and deciding for the future. But it should not be supposed that the subject is revealed only through voluntary activity.

On the contrary, of equal importance are those reactions which cannot be willed but only predicted, but which are nevertheless peculiar to self-conscious beings. Blushing is a singular instance. Although an involuntary matter, and – from the physiological point of view – a mere rushing of blood to the head, blushing is the expression of a complex thought, and one that places the self on view. My blush is an involuntary recognition of my accountability before you for what I am and what I feel. It is an acknowledgement that I stand in the light of your perspective, and that I cannot hide in my body. A blush is attractive because it serves both to embody the perspective of the other, and also at the same time to display that perspective as responsive to me.

The same is true of unguarded glances and smiles, through which the other subject rises to the surface of his body and makes himself visible. In smiling, blushing, laughing and crying, it is precisely my loss of control over my body, and its gain of control over me, that create the immediate experience of an incarnate person. The body ceases at these moments to be an instrument, and reasserts its natural rights as a person. In such expressions the face does not function merely as a bodily part, but as the whole person: the self is spread across its surface, and there ‘made flesh’.

The concepts and categories that we use to describe the embodied person are far removed from the science of the human body. What place in such a science for smiles as opposed to grimaces, for blushes as opposed to flushes, for glances as opposed to looks? In describing your color as a blush, I am seeing you as a responsible agent, and situating you in the realm of embarrassment and self-knowledge. If we try to describe sexual desire with the categories of human biology, we miss precisely the intentionality of sexual emotion, its directedness towards the embodied subject.

The caricature that results describes not desire but perversion. Freud’s description of desire is the description of something that we know and shun – or ought to shun. An excitement which concentrates on the sexual organs, whether of man or of woman, which seeks, as it were, to bypass the complex negotiation of the face, hands, voice and posture, is perverted. It voids desire of its intentionality, and replaces it with a pursuit of the sexual commodity, which can always be had for a price.

It is part of the intentionality of desire that a particular person is conceived as its object. To someone agitated by his desire for Jane, it is ridiculous to say, ‘Take Henrietta, she will do just as well.’ Thus there arises the possibility of mistakes of identity. Jacob’s desire for Rachel seemed to be satisfied by his night with Leah, only to the extent that, and for as long as, Jacob imagined it was Rachel with whom he was lying. (Genesis 29, v. 22-25; and see the wonderful realization of this little drama in Thomas Mann’s Joseph and his Brothers.)

Our sexual emotions are founded on individualizing thoughts: it is you whom I want and no other. This individualizing intentionality does not merely stem from the fact that it is persons (in other words, individuals) whom we desire. It stems from the fact that the other is desired as an embodied subject, and not just as a body. You can see the point by drawing a contrast between desire and hunger (a contrast that is expressly negated by Freud). Suppose that people were the only edible things; and suppose that they felt no pain on being eaten and were reconstituted at once.

How many formalities and apologies would now be required in the satisfaction of hunger! People would learn to conceal their appetite, and learn not to presume upon the consent of those whom they surveyed with famished glances. It would become a crime to partake of a meal without the meal’s consent. Maybe marriage would be the best solution.

Still, this predicament is nothing like the predicament in which we are placed by desire. It arises from the lack of anything impersonal to eat, but not from the nature of hunger. Hunger is directed towards the other only as object, and any similar object will serve just as well. It does not individualize the object, or propose any other union than that required by need.

When sexual attentions take such a form, they become deeply insulting. And in every form they compromise not only the person who addresses them, but also the person addressed. Precisely because desire proposes a relation between subjects, it forces both parties to account for themselves. Unwanted advances are therefore also forbidden by the one to whom they might be addressed, and any transgression is felt as a contamination.

That is why rape is so serious a crime: it is an invasion of the sanctuary which harbors the victim’s freedom, and a dragging of the subject into the world of things. If you describe desire in the scientistic terms used by Freud and his followers, the outrage and pollution of rape become impossible to explain. In fact, just about everything in human sexual behavior becomes impossible to explain – and it is only what might be called the ‘charm of disenchantment’ that leads people to receive these daft descriptions as the truth.

h1

Nietzsche and Emerson 2 – Jennifer Ratner-Rosenhagen

April 8, 2014
Nietzsche understood what it meant to travel imaginatively through time and space in order to find a thinker to think with. Just as he had to travel to the mental and moral world of a mid-nineteenth-century American philosopher enroute to himself, twentieth-century American readers would now turn to him for the same. They would look across the Atlantic for an example of the perils and possibilities of the aboriginal intellect. They would look to a nineteenth-century German thinker in order to feel at home.

A few pages from the prologue of American Nietzsche. If you ever asked yourself how Nietzsche happened, how the “mad, mustachioed Teutonic philosopher of the hammer” ever found his way into our cultural bloodstream, this is the book for you.

American Nietzsche is neither a biography nor a formal analysis of philosophical concepts. Professor Ratner-Rosenhagen is a historian, and the subject of her book is presented through the lens of her discipline. It is, in short, an insightful and skillfully written treatment of the influence of Friedrich Nietzsche’s ideas and image on American culture. Refreshingly, I detected no axes being ground, no hidden agendas skulking in the shadows. The author has simply identified an important story that needed to be told, and has done so in a thorough, well-organized, and interesting manner. Whatever your level of familiarity with Nietzsche the person or his work, or your opinions about either, if you have an interest in the events, ideas, and people that shaped 20th century American culture then you will very likely find this book engaging.

************************************

And so it was that in 1862 Nietzsche discovered in Emerson a thinker to think with. While the American author impressed on his young German pupil that the life of the philosopher is a life on the open sea, he also taught him that no other thinker can tell him where he’s heading or where to find firm land. He simply works by “provocation” along the way.

And provoke Nietzsche, Emerson did. Nietzsche continued to read Emerson intensively throughout 1863, later noting that of all the books he “read the most,” Emerson’s topped the list. And this was just the beginning. From the age of seventeen up until his mental breakdown at the age of forty-four; from his days as a gymnasium student through his graduate studies, his professorship, and then his years as an itinerant writer; and from the safe harbor of Christian faith to the tumultuous seas of indeterminacy, Friedrich Nietzsche turned repeatedly to Emerson, who then pushed him forward. In time, many others would propel Nietzsche’s thinking — Plato, Kant, Goethe, Lange, Schopenhauer, and Wagner — but none survived his penchant for slaying his own intellectual gods.

He never sought to slay Emerson, however; the enthusiasm he expressed for him as a teenager reappeared in his essays, journals, and letters, over the course of his entire intellectual career. Emerson’s influence on Nietzsche was unmistakable even to Nietzsche himself. As he thought about himself while writing an early draft of his autobiography, he couldn’t help but think of Emerson. Indeed, it was a rereading of Emerson’s “Spiritual Laws” (1841) that suggested “Ecce Homo” as an appropriate title for his autobiography. As he reflected on his intellectual path, he couldn’t help but reflect warmly on Emerson’s company along the way: “Emerson, with his Essays, has been a good friend and someone who has cheered me up even in dark times: he possesses so much skepsis, so many ‘possibilities,’ that with him even virtue becomes spiritual.”

Yet Nietzsche’s ideas are not carbon copies of Emerson’s. If they were, his uses of Emerson would be a lot less interesting than they are. The sheer fact that he read Emerson in translation reminds us that Nietzsche had a lifelong relationship with a highly mediated Emerson. Even accounting for linguistic variations, though, the similarities are striking enough that the additional awareness that Nietzsche “loved Emerson from first to last,” as Walter Kaufmann put it, has made many, like Kaufmann himself, insist that nevertheless, “one would never mistake a whole page of Emerson for a page of Nietzsche.”

Perhaps. One might take Kaufmann up on the challenge and place a Nietzsche quotation, image, or broad concern alongside its Emersonian counterpart and see how easy or difficult it is to drive a wedge between the two. One could juxtapose their criticism of barren scholarship; their concern that excessive reverence for the past makes us “fatalists,” as Emerson believed, and makes the past our “gravedigger,” as Nietzsche had it; or their anxiety over belatedness, which fostered a longing in Emerson to be “born again,” and a fear in Nietzsche of being “late-born.” One could examine how both authors expressed an abiding interest in power. While Emerson averred that “life is a search after power,” Nietzsche came to believe that “life simply is will to power.” Both emphasized a conception of power as something striving, pressing onward.

For Emerson, “Life only avails, not the having lived. Power ceases in the instant of repose; it resides in the moment of transition from a past to a new state.” Nietzsche celebrated “plastic power,” which he described as “the capacity to develop out of oneself in one’s own way, to transform and incorporate into oneself what is past and foreign, to heal wounds, to replace what has been lost, to recreate broken molds.”

It might be of no consequence that Nietzsche was rereading Emerson in 1881-82 while preparing The Gay Science and Thus Spoke Zarathustra (1883-85). What is noteworthy, nonetheless, is the philosophers’ shared aversion to the view of revelation as something historical, rather than ongoing, and to any belief in a divinity outside the self. Emerson believed this created a bankrupt spirituality, “as if God were dead,” to which Nietzsche had his madman announce in the affirmative that “God is dead.”

Someone well versed in Emerson and Nietzsche might never mistake Emerson’s line from “Compensation,” “In general, every evil to which we do not succumb, is a benefactor,” for Nietzsche’s from Twilight of the Idols, “What does not kill me makes me stronger.” But at least it is worth noting that Emerson’s line in Nietzsche’s personal copy is heavily underlined.

Whether we look for affinities or influences, the parallels between Emerson and Nietzsche mount. But we miss what Emerson meant to Nietzsche if we fail to consider how Nietzsche used Emerson not to get closer to him but to get closer to himself. For Nietzsche, Emerson provided an image of the philosopher willing to go it alone without inherited faith, without institutional affiliation, without rock or refuge for his truth claims.

As Nietzsche made his way from spiritually adrift teenager, to philology professor, to freelance philosopher, Emerson’s image of the philosopher, and his approach to philosophy as a way of life, proved essential to his self-definition. Emerson gave Nietzsche a way of describing himself to himself, as we see in his letter of 1866 to an old friend, Carl von Gersdorff. In it, Nietzsche dreamily imagined himself “as Emerson so excellently describes [it] … pure, contemplative, impartial eye.”

It was Emerson who imparted to Nietzsche the image of philosophy as a spirit of play, laughter, and dancing. Nietzsche repeatedly employed this image of levity and joyousness when he considered his own thinking. In the aphorism “Learning to think,” Nietzsche complained, “our schools no longer have any idea what this means…. Thinking has to be learned … as a form of dancing…. Who among Germans still knows from experience that subtle thrill … of intellectual light feet?”

It was Emerson’s characterization of the liberated thinker as an “intellectual nomad” that helped Nietzsche to imagine himself as a “free spirit” in a quest for truths of his own making. Likewise it was Emerson who impressed on Nietzsche the power of the oppositional intellect to make the world anew. “Let an American tell them what a great thinker who arrives on this earth signifies as a new centre of tremendous forces,” affirmed Nietzsche in “Schopenhauer as Educator” (1874). Quoting this American’s essay “Circles” (1841), Nietzsche continued: “‘Beware,’ says Emerson, ‘when the great God lets loose a thinker on this planet. Then all things are at risk. It is as when a conflagration has broken out in a great city, and no man knows what is safe, or where it will end.’”

But of all the uses Nietzsche had for Emerson, it was his notion that a philosopher without foundations works by provocation, not instruction, as an “exemplar,” not a guide, which most vividly suggested to Nietzsche the possibilities of his own philosophy. The philosopher is useful insofar as he helps carry one to one’s self. “No one can construct for you the bridge upon which precisely you must cross the stream of life, no one but you yourself alone,” Nietzsche insisted. “There exists in the world a single path along which no one can go except you: whither does it lead? Do not ask, go along it.”

Nietzsche found confirmation in another quotation from Emerson’s “Circles”: “A man never rises higher than when he does not know whither his path [will] lead him.” If Emerson sent Nietzsche on the path of philosophy without absolutes, on a path to become who he was, he also reminded him that he would not be waiting for him upon his arrival.

Throughout the 1880s Nietzsche sent manuscript after manuscript to his publisher, and his publisher, in turn, sent them off as books to a German reading public as yet indifferent to his ideas. Nietzsche never forgave his German contemporaries for leaving him in the lurch. Undaunted, he spent most of the final year of his productive intellectual life, though struggling with illness, swept up in a euphoric mood. It was during what would become his final sprint of productivity that a third fan letter arrived from America, this time from Karl Knortz, a Prussian-born freelance writer in New York, who wrote to express his admiration for Thus Spoke Zarathustra.

Nietzsche now had reason to believe that the praise it contained truly signaled that his dawn was finally breaking, for just a few months earlier the prominent Danish literary critic Georg Brandes had delivered a series of high-profile lectures on him in Copenhagen, at long last drawing attention to his genius. In his letter, Knortz, a translator of American authors into German and a promoter of German literature for American readers, also relayed his desire to promote Nietzsche to American audiences.

But in order to do that, Knortz would need Nietzsche’s help. So he asked the German author for a description of himself and a characterization of his oeuvre. Nietzsche gladly obliged. In a letter of reply dated June 21, 1888, he sketched a portrait of his work and himself for his would-be American audience:

The task of giving you some picture of myself, as a thinker, or as a writer and poet, seems to me extraordinarily difficult…. The thought of advertising myself is utterly alien to me personally; I have not lifted a finger with that end in view. Of my Zarathustra, I tend to think that it is the profoundest work in the German tongue, also the most perfect in its language. But for others to feel this will require whole generations to catch up with the inner experiences from which that work could arise.

Nietzsche may have thought that his philosophy awaited an audience of readers yet unborn, but given Knortz’s enthusiasm, he had reason to suspect that he might first find that audience in America. In a letter to his publisher asking for his assistance in facilitating Knortz’s propaganda, he speculated about the value of securing a readership across the Atlantic: “In principle all my experiences show that my influence begins on the periphery and only from there will the currents ripple back to the ‘Fatherland.’” That summer, Nietzsche sent off a flurry of letters to friends telling them that he had “admirers in North America.” Soon Americans would learn, he enthused, that “I am the most independent spirit of Europe and the only German writer — that’s something!”

Though Nietzsche liked the image of himself as an intellectual nomad, and though he had long ago decided that the thinker without foundations must go not only without compass or guide, but also without a final destination, his desire for freedom never fully subdued his longing for an intellectual home. He knew from his own experiences that a feeling of refuge — while fleeting — is necessary even for the free spirit. He likewise knew from his own experiences reading Emerson that sometimes it is abroad that the aboriginal intellect finds a home.

A home in America for Nietzsche’s philosophy? After almost three decades with Emerson’s writings, the prospect seemed likely indeed. After all, it was America that had created the thinker with whom he thought as he came to terms with himself and his world. It was the American Emerson who showed Nietzsche the possibilities of thought beyond the good and evil of Christian piety.

It was the American Emerson who critiqued sterile ideas and made philosophy a friend to life. It was the American Emerson who understood that philosophical inquiry in a world without absolutes works by example and provocation only. And it was the American Emerson who Nietzsche believed never could have been produced within the suffocating philistinism of his native German culture. Nietzsche did not know much about America, but he did know — or at least he believed — that with one exception (himself), Germany could never have given birth to such a dynamic thinker. He summed up his feelings for Emerson this way: “Emerson. Never have I felt so much at home in a book, and in my home, as — I shouldn’t praise it, it is too close to me.”

Nietzsche understood what it meant to travel imaginatively through time and space in order to find a thinker to think with. Just as he had to travel to the mental and moral world of a mid-nineteenth-century American philosopher en route to himself, twentieth-century American readers would now turn to him for the same. They would look across the Atlantic for an example of the perils and possibilities of the aboriginal intellect. They would look to a nineteenth-century German thinker in order to feel at home.
