This year I was honored to present the 2018 John W. Campbell Award for Best New Writer at Worldcon’s Hugo Awards Ceremony, and several people have asked me to post my presentation speech, in which I used Japanese examples to talk about the invaluable impact of new authors expanding the breadth of what gets explored in genre fiction’s long conversation. Here is the speech, followed by some expanded comments:
First awarded in 1973, this award was named for John W. Campbell, the celebrated editor of Astounding and Analog who introduced many beloved new authors to the field. This is not a Hugo award, but is sponsored by Dell Magazines, and administered by Worldcon. Spring Schoenhuth of Springtime Studios created the Campbell pin, and the tiara made by Amanda Downum was added in 2005/2006. This award is unusual for considering short fiction and novels together, providing a cross-section of innovation in the field, and, often, offering a first personal welcome to new writers unfamiliar with the social world of fandom.
I’m currently curating an exhibit on the history of censorship around the world, and one section of the exhibit keeps coming to mind as I consider the Campbell Award. Immediately after World War II, authors and journalists in Japan were effectively forbidden to talk about the war, due to censorship exercised by both the reformed Japanese government and American occupation forces. This left a generation of kids desperate to understand the events which had shattered their world and families, but with no one willing to have that conversation, and no books to turn to. Enter Osamu Tezuka, whose Astro Boy (Tetsuwan Atomu, 1952-68) bypassed censors who saw it as merely a kids’ science fiction story, while it depicted a civil rights movement for robot A.I.s, including anti-robot hate crimes, hate-motivated international wars, nuclear bombs, and the rise of the robot-hating dictator “Hitlini.”
Tezuka’s science fiction became the tool a generation used to understand the roots of World War II and how to work toward a more peaceful and cooperative future, but what makes this relevant to the Campbell Award is the next step. Many autobiographies of those who were kids in Japan in the 1950s describe reading and re-reading Tezuka’s early science fiction until the cheap paperbacks fell apart, but by the later 1960s these same young readers became young authors, like Yoshihiro Tatsumi, Keiji Nakazawa, and their peers. They in turn led a movement to push the envelope of what could be depicted in popular genre fiction in Japan, writing grittier, more adult works, battling censorship and backlash, and ultimately opening a space for more serious genre fiction. These new voices didn’t just contribute their works; they changed speculative fiction, making space for Tezuka and the other authors they had long looked up to to write new works as well, finally depicting the war directly, and producing some of the best works of their careers, including Tezuka’s Buddhist science fiction masterpiece Phoenix.
These authors I’m discussing are all manga authors, comic book authors, but the difference between prose and comics doesn’t matter here: their world, like ours, was and is a self-conscious community of speculative fiction readers and writers dedicated to imagining different presents, pasts, and futures, and thereby advancing a conversation which injects imagination, hope, and caution into our real-world efforts to build the best future possible. It is in that spirit that the John W. Campbell Award welcomes to our field not only today’s new voices but the ways that these voices will change the field, stimulating new responses from everybody, from those like John Varley and George R. R. Martin who were Campbell finalists more than forty years ago, to next year’s finalists. This year’s finalists are Katherine Arden, Sarah Kuhn, Jeannette Ng, Vina Jie-Min Prasad, Rebecca Roanhorse, and Rivers Solomon.
The examples I discussed in this speech come from my exhibit’s case on the censorship of comic books and graphic novels, which are targeted by censorship more often than text fiction because of their visual format (which makes obscenity charges easier to advance), their association with children, and the power of political cartoons.
I discuss Tezuka’s manga in the exhibit section with the chilling title “Childhood Without Books,” since during World War II a generation of Japanese kids grew up in a broken school system which had all but shut down or been transformed into a military pre-training program, while censored presses produced only war propaganda, and Japan even had a ban on “frivolous literature,” which generally meant anything that wasn’t for the war. In effect, a generation of kids grew up with no access to literature, and plunged straight from that to the new era of post-war censorship. Numerous autobiographies by members of this generation vividly recount the arrival of the first bright, colorful books by “God of Manga” Osamu Tezuka, such as New Treasure Island, Lost World, Nextworld, and above all Astro Boy, whose depictions of anti-robot voter suppression tactics are very powerful today, while its repeated engagement with nuclear bombs and other weapons of mass destruction was, for adults and kids alike, often the first and only available literary discussion of nuclear warfare. Tezuka also made a point of discussing racism as a global issue, and Astro Boy depicts lynch mobs in America, the Cambodian genocide, and post-colonial exploitation in Africa.
Thus, while being perceived as “for kids” often brings comics under extra fire, in the case of Astro Boy, censors ignored a mere science fiction comic, which let Tezuka kick-start the conversation about the mistakes of the past and the possibilities of a better future.
Making Room for Adults: One young reader who read and reread Tezuka’s early manga until they fell apart was Yoshihiro Tatsumi, whose autobiography A Drifting Life begins with Tezuka’s impact on him in his early post-war years. As Tatsumi himself began to publish manga in the 1950s-70s, Japan experienced its own wave of public and parental outrage about comics harming children, similar to that which had affected the English-speaking world slightly earlier. Since the Japanese word for comic books, manga, literally means “whimsical pictures,” critics argued that manga must by definition be light and funny. Tatsumi coined the alternate term gekiga (“dramatic pictures”), adopted by a wave of serious and provocative authors who set out to depict serious dramatic topics, such as crime stories, suicide, sexuality, prostitution, the debt crisis, alienation, the psychology of evil, and the dark and uncomfortable social issues and tensions affecting Japanese society.
By the 1970s, the efforts of Tatsumi and his peers to make space for mature manga helped to expand the range of what artists dared to depict, contributing to the loosening of censorship and social pressure, which in turn let the authors Tatsumi and others had looked up to as children finally treat the war directly. Thus Tatsumi’s efforts moving forward from his childhood model Osamu Tezuka in turn paved the way for Tezuka to finally create mature works of his own, including Message to Adolf, which depicts how racism gradually poisons individuals and society, Ayako, which depicts the degeneration of traditional Japanese society during the post-war occupation, MW, which depicts government corruption and the human impact of weapons of mass destruction, sections of his beloved medical drama Black Jack which treat war and exploitation, Ode to Kirihito, which treats medical dehumanization and apartheid in South Africa, Alabaster, which treats ideas of race and beauty in the USA, and his epic Phoenix, considered one of the great masterpieces of the manga world.
Another of Tezuka’s avid early readers was Hiroshima survivor Keiji Nakazawa, who found in art and manga hope for a universal medium which could let his pleas for peace and nuclear disarmament cross language barriers. Many of the grotesque images of gory melting faces in Nakazawa’s harrowing autobiography Barefoot Gen are indistinguishable from the imagery in the violent horror comics that advocates of comics censorship so often denounce as harmful to children.
Our impulse to place political works like Barefoot Gen in a separate category from graphic horror or pornography despite their identical visual content is reflected in many governments’ obscenity laws, which ban vaguely-defined “obscene” or “indecent” content and often demand that works accused of obscenity prove they have “artistic merit” to refute the charge, a rare situation where even legal systems with “innocent until proven guilty” standards put the burden of proof on the defendant. Some modern democracies which have state censorship, such as New Zealand, have worked to improve this by creating legislation which defines very clearly what can be censored (for example depictions of sexual exploitation of minors, or of extreme torture) rather than banning “indecent” content in the abstract. (I strongly recommend the New Zealand Chief Censor’s endlessly fascinating ratings office blog, which offers a vivid portrait of the trends in modern censorship, and what censorship would probably look like in the USA without the First Amendment.)
If you’re interested in looking at some of these works, beyond Astro Boy, my top recommendations are Tezuka’s Message to Adolf and the work of another giant of the early post-war, Shigeru Mizuki, best known for his earlier Kitaro series which collects Japanese oral tradition yokai ghost stories. After the efforts of Tatsumi and others broadened the scope of what manga was allowed to depict, Mizuki published his magnificent Showa: a History of Japan, recently published in English by Drawn & Quarterly.
The first volume depicts the lead-up to WWII in the 1920s-30s, and is fascinating to compare to the current political world, since it shows how Japanese society became gradually more militarized and toxic through tiny incremental short-term political and social decisions which feel very much like ones we see today, but paralleled by severe restrictions on speech and suppression of active resistance different from what one sees today. Ferociously critical of Japan’s government and warmongers, Mizuki’s history is also autobiography, depicting himself as a child, and how the day-to-day games kids played on the street became more violent and military, playing soldier instead of house, as the society drifted toward fascism.
It’s an extraordinarily powerful read, and particularly captures how, parallel to political events, moments of celebrity controversy and sensational news reflect and propel cultural shifts – think of how, 100 years from now, someone writing a history of the rise of America’s alt-right movement would probably not include Milo Yiannopoulos, who had no demonstrable direct political role, yet for those living on the ground in this era he was clearly a factor/indicator/ingredient in the tensions of the times. Mizuki includes incidents and figures like that which parallel the political events and his family’s experiences, recreating the on-the-ground experience in a way unlike any other history I’ve read. I can’t recommend it enough to anyone interested in what fascism’s rise can teach us about today, and about how cultures change.
The idea: Revolutions in information technology always trigger innovations in censorship and information control, so we’re bringing together 25 experts on information revolutions past and present to create a filmed series of discussions (which we will post online for all to enjoy!) which we hope will help people understand the new forms of censorship and information control that are developing as a result of the digital revolution. And we’ve put together a museum exhibit on the history of censorship, a printed catalog with 200+ pages of full color images of banned and censored books, which you can get as a Kickstarter thank-you. More publications will follow.
For those who’ve wondered why there haven’t been many Ex Urbe posts recently, the work for this project has been a big part of it, though other real reasons include my chronic pain, and the tenure scramble (victory!), and racing to finish Terra Ignota book 4, and female faculty being put on way too many committees (12! seriously?!). But now that the preparatory work of the project is done, I should be able to share more here over the coming weeks and months.
The project was born out of Cory Doctorow and me sitting down at conventions from time to time and chatting about our work, and over and over something he was seeing current corporations or governments try out with digital regulation would be jarringly similar to something I saw guilds or city-states try during the print revolution. One big issue in both eras, for example, was/is the difference between systems that try to regulate content before it is released, i.e. requiring books to have licenses before they could be printed, or content to be vetted before it is published (think the Inquisition, the Comics Code Authority, or movie ratings in places like New Zealand where it’s illegal to screen unrated films), vs. systems that allow things to be released without oversight but create apparatus for policing/removing/prosecuting them after release if they’re found objectionable (like England in the 16th century, or online systems that have users flag content). Past information revolutions–from the printing press, to radio and talkies–give us test cases that show us what effects different policies had, so by looking, for example, at where the book trade fared better, Paris or Amsterdam, we can also look at what effects different regulations are likely to have on current information economies, and artistic output. We’ve got people who work on the Inquisition, digital music, the birth of copyright, ditto machines, Google, banned plays, burnings of Jewish books, comic book censorship, an amazing list!
There will be more to share over the next months as the videos go online, but today I want to share one of the fun little pieces I wrote for the exhibit on Book Burning. Writing for exhibits is always an extra challenge, since only so much can fit on a museum wall or item label, so, 2+ millennia of book burning… can I do it justice in 550 words?
We can divide book burnings into three kinds: eradication burnings which seek to destroy a text, collection burnings which target a library or archive, and symbolic burnings which primarily aim to send a message.
The earliest known book burnings are the one mentioned in the Hebrew Bible (Jeremiah 36), followed by the burning of Confucian works (and execution of Confucian scholars) in Qin Dynasty China, 213-210 BC. Christian book burning began after the Council of Nicaea, when Emperor Constantine ordered the burning of works of Arian (non-Trinitarian) Christianity. In the manuscript era eradication burnings could destroy all copies of a text—as in 1073 when Pope Gregory VII ordered the burning of Sappho—but after 1450 the movable type printing press made eradication burnings of published material effectively impossible unless one seized the whole print run before copies were dispersed. This was difficult for even the Inquisition, but it still practiced frequent symbolic book burning, especially in the Enlightenment, when a condemnation from Rome required Paris to publicly burn one or a few copies of a book, while all knew many more remained. When the beloved Encyclopédie was condemned, the French authorities tasked to burn it burned Jansenist theological writings in its place, a symbolic act two steps removed from harming the original.
Since print’s advent eradication burnings have diminished, though collection burnings continue, often targeting communities such as Protestant or Jewish communities, language groups such as indigenous texts in Portuguese-held Goa (India), universities whose organized collections are unique even if individual items are not, or state or institutional archives which contain unique content even in an age of print. Regime changes and political unrest have long been triggers for archive burnings, such as the burning of the National Archives of Bosnia and Herzegovina in 2014. Some book burnings result from smaller scale conflicts, as in 1852 when Armand Dufau, in charge of the school for the blind in Paris, ordered the burning of all books in the newly-invented braille system, of which he disapproved. Nazi burnings of Jewish and “un-German” material employed eradication rhetoric but were mainly collection burnings, as when youth groups burned 25,000 books from university libraries in 1933, or symbolic burnings, performing destruction to spread fear among foes and excitement among supporters while many party members retained or sold valuable books stolen from Jewish collections rather than destroying them.
Today, archived documents and historic manuscript collections remain most vulnerable to eradication burning, such as those burned in Iraq’s National Library in 2003, in two libraries in Timbuktu in 2013, and others recently burned by ISIS. Large-scale book burnings in America have included the activities of the New York Society for the Suppression of Vice (founded 1873), which boasted of burning 15 tons of books and nearly 4 million “lewd” pictures, burnings of comic books in 1948, and burning of communist material during the Second Red Scare of the 1950s. Since then, most book burnings in America have been small-scale symbolic burnings of works such as Harry Potter, books objected to in schools or college classrooms, or of Bibles or Qur’ans. In a rare 2010 case of an attempted eradication burning, the Pentagon bought and burned nearly the whole print run of Anthony Shaffer’s Operation Dark Heart, which—authorities said—contained classified information.
In the comments to the Progress post, a reader asked for clarification on what was so awful about Hobbes, and this was Ada’s response, which I am reposting as a post so that it doesn’t stay buried down there:
The Hobbes reference referred, not to my opinion of him or modern opinions of him, but to contemporary opinions of him, how hated and feared he was by his peers in the mid-17th century. I’ll treat him more in the next iteration(s) of my skepticism series, but in brief Hobbes was a student of Bacon (he was actually Bacon’s amanuensis for a while) and used Bacon’s new techniques of observation and methodical reasoning with absolute mastery, BUT used them to come to conclusions that were absolutely terrifying to his peers, attacking the dignity of the human race, the foundations of government, the pillars of morality of his day, in ways whose true terror is hard for us to feel when we read Leviathan in retrospect, having accepted many of Hobbes’s ideas and being armored against the others by John Locke. But among his contemporaries, “The Beast of Malmesbury,” as he was called, held an unmatched status as the intellectual terror of his day. In fact there are few thinkers ever in history who were so universally feared and hated–it’s only a slight exaggeration to say that for the two decades after the publication of Leviathan, the sole goal of western European philosophy was to find some way to refute Thomas Hobbes WITHOUT (here’s the tricky part) undermining Bacon. Because Bacon was light, hope, progress, the promise of a better future, and Hobbes was THE BEST wielder of Bacon’s techniques. So they couldn’t just DISMISS Hobbes without undermining Bacon; they had to find a way to take Hobbes on on his own terms and wield Bacon better than Hobbes did. It took 20 years and John Locke to achieve that, but in the meantime Hobbes so terrified his peers that they literally rewrote the laws of England more than once to extend censorship enough to silence Hobbes.
Also the man Just. Wouldn’t. Die. They wanted him dead and gone so they could forget him and move on but he lived to be 91, a constant reminder of the intellectual terror whose shadow had loomed so long over all of Europe. To give a sample of a contemporary articulation of the fear and amazement Hobbes caused in his peers, here is a satirical broadside published to celebrate his death:
My favorite verse from it is:
“Leviathan the Great is dead! But see
The small Behemoths of his progeny
Survive to battle all divinity!”
So I chose Hobbes as an example because he’s really the first “backfire” of Bacon, the first unexpected, unintended consequence of the new method. Hobbes’s book didn’t cause any atrocities, didn’t result in wars or massacres, but it did spread terror through the entire intellectual world, and was the first sniff of the scarier places that thought would go once Bacon’s call to examine EVERYTHING genuinely did examine everything… even things people did NOT want anyone to doubt. So while Hobbes is wonderful, from the perspective of his contemporaries he was the first warning sign that progress cannot be controlled, and that, while it will change parts of society we think are bad, it will change the parts we value too.
Hope that helps clear it up? I’ll discuss Hobbes more in later works.
Is progress inevitable? Is it natural? Is it fragile? Is it possible? Is it a problematic concept in the first place? Many people are reexamining these kinds of questions as 2016 draws to a close, so I thought this would be a good moment to share the sort-of “zoomed out” discussions of the subject that historians like myself are always having.
There is a strange doubleness to experiencing an historic moment while being a historian oneself. I feel the same shock, fear, overload, and emotional exhaustion that so many do, but at the same time another me is analyzing, dredging up historical examples, bigger crises, smaller crises, elections that set the fuse to powder-kegs, elections that changed nothing. I keep thinking about what it felt like during the Wars of the Roses, or the French Wars of Religion, during those little blips of peace, a decade long or so, that we, centuries later, call mere pauses, but which were long enough for a person to be born and grow to political maturity in seeming-peace, which only hindsight would label ‘dormant war.’ But eventually the last flare ended, and the peace was real. Yet on the ground it must have felt exactly the same, the real peace and those blips. That’s why I don’t presume to predict — history is a lesson in complexity not predictability — but what I do feel I’ve learned to understand, thanks to my studies, are the mechanisms of historical change, the how of history’s dynamism rather than the what next. So, in the middle of so many discussions of the causes of this year’s events (economics, backlash, media, the not-so-sleeping dragon bigotry), and of how to respond to them (petitions, debate, fundraising, art, despair), I hope people will find it useful to zoom out with me, to talk about the causes of historical events and change in general.
Two threads, which I will later bring together. Thread one: progress. Thread two: historical agency.
Part 1: The Question of Progress As Historians Ask It
“How do you discuss progress without getting snared in teleology?” a colleague asked during a teaching discussion. This is a historian’s succinct if somewhat technical way of asking a question which lies at the back of a lot of the questions people are wrestling with now. Progress — change for the better over historical time. The word has many uses (social progress, technological progress), but the reason it raises red flags for historians is the legacy of Whig history, a school of historical thought whose influence still percolates through many of our models of history. Wikipedia has an excellent opening definition of Whig history:
Whig history… presents the past as an inevitable progression towards ever greater liberty and enlightenment, culminating in modern forms of liberal democracy and constitutional monarchy. In general, Whig historians emphasize the rise of constitutional government, personal freedoms, and scientific progress. The term is often applied generally (and pejoratively) to histories that present the past as the inexorable march of progress towards enlightenment… Whig history has many similarities with the Marxist-Leninist theory of history, which presupposes that humanity is moving through historical stages to the classless, egalitarian society to which communism aspires… Whig history is a form of liberalism, putting its faith in the power of human reason to reshape society for the better, regardless of past history and tradition. It proposes the inevitable progress of mankind.
In other words, this approach presumes a teleology to history, that human societies have always been developing toward some pre-set end state: apple seeds into apple trees, humans into enlightened humans, human societies into liberal democratic paradises.
Some of the problems with this approach are transparent, others familiar to those of my readers who have been engaging with current discourse about the problems/failures/weaknesses of liberalism. But let me unpack some of the other problems, the ones historians in particular worry about.
Developed in the earlier 20th century, Whig history presents a particular set of values and political and social outcomes as the (A) inevitable and (B) superior end-points of all historical change — political and social outcomes that arise from the Western European tradition. The Eurocentric distortions this introduces are obvious, devaluing all other cultures. But even for a Europeanist like myself, who’s already studying Europe, this approach has a distorting effect by focusing our attentions onto historical moments or changes or people that were “right” or “correct,” that took a step “forward.” When one attempts to write a history using this kind of reasoning, the heroes of this process (the statesman who founded a more liberal-democratic-ish state, the scientist whose invention we still use today, the poet whose pamphlet forwards the cause) loom overlarge in history, receiving too much attention. On the one hand, yes, we need to understand those past figures who are keystones of our present — I teach Plato, and Descartes, and Machiavelli with good reason — but if we study only the keystones, and not the other less conspicuous bricks, we wind up with a very distorted idea of the whole edifice.
Whig history also makes it dangerously easy to stray into placing moral value on those things which advanced the teleologically-predetermined future. Such things seem to be “correct” thus “good” thus “better,” while those elements which did not contribute to this teleological development were “dead ends” or “mistakes” or “wrong,” which quickly becomes “bad.” In such a history whole eras can be dismissed as unworthy of study for failing to forward progress (The Middle Ages did great stuff, guys!) while other eras can be disproportionately celebrated for advancing it (The Renaissance did a lot of dumb stuff too!). And, of course, whole regions can be dismissed for “failing” to progress (Africa, Asia) as can sub-regions (Poland, Spain).
To give an example within the realm of intellectual history, teleological intellectual histories very often create the false impression that the only figures involved in a period’s intellectual world were heroes and villains, i.e. thinkers we venerate today, or their nasty bad backwards-looking enemies. This makes it seem as if the time period in question was already just previewing the big debates we have today. Such histories don’t know what to do with thinkers whose ideas were orthogonal to such debates, and if one characterizes the Renaissance as “Faith!” vs. “Reason!” and Marsilio Ficino comes along and says “Let’s use Platonic Reason to heal the soul!” a Whig history doesn’t know what to do with that, and reads it as a “dead end” or “detour.” Only heroes or villains fit the narrative, so Ficino must either become one or the other, or be left out. Teleological intellectual histories also tend to give the false impression that the figures we think are important now were always considered important, and if you bring up the fact that Aristotle was hardly read at all in antiquity and only revived in the Middle Ages, or that the most widely owned author in the Enlightenment was the now-obscure fideist encyclopedist Pierre Bayle, the narrative has to scramble to adapt.
Teleological history is also prone to “presentism” (a bad thing, but a very useful term!). Presentism is when one’s reading of history is distorted by one’s modern perspective, often through projecting modern values onto past events, and especially past people. An essay about the Magna Carta which projects Enlightenment values onto its Medieval authors would be presentist. So are histories of the Renaissance which want to portray it as a battle between Reason and religion, or say that only Florence and/or Venice had the real Renaissance because they were republics, and only the democratic spirit of republics could foster fruitful, modern, forward-thinking people. Presentism is also rearing its head when, in the opening episodes of the new Medici: Masters of Florence TV series, Cosimo de Medici talks about bankers as the masterminds of society, and describes himself as a job-creator, not the conceptual space banking occupied in 1420. Presentism is sometimes conscious, but often unconscious, so mindful historians will pause whenever we see something that feels revolutionary, or progressive, or proto-modern, or too comfortable, to check for other readings, and make triple sure we have real evidence. Sometimes things in the past really were more modern than what surrounded them. I spent many dissertation years assembling vast grids of data which eventually painstakingly proved that Machiavelli’s interest in radical Epicurean materialism was exceptional for his day, and more similar to the interests of peers seventy years in his future than his own generation — that Machiavelli was exceptional and forward-thinking may be the least surprising conclusion a Renaissance historian can come to, but we have to prove such things very, very meticulously, to avoid spawning yet another distorted biography which says that Galileo was fundamentally an oppressed Bill Nye. Hint: Galileo was not Bill Nye; he was Galileo.
These problems, in brief, are why discussions of progress, and of teleology, are red flags now for any historian.
Unfortunately, the bathwater here is very difficult to separate from an important baby. Teleological thinking distorts our understanding of the past, but the Whig approach was developed for a reason. (A) It is important to have ways to discuss historical change over time, to talk about the question of progress as a component of that change. (B) It is important to retain some way to compare societies, or at least to assess when people try to compare societies, so we can talk about how different institutions, laws, or social mores might be better or worse than others on various metrics, and how some historical changes might be positive or negative. While avoiding dangerous narratives of triumphant [insert Western phenomenon here] sweeping through and bringing light to a superstitious and backwards [era/people/place], we also want to be able to talk about things like the eradication of smallpox, and our efforts against malaria and HIV, which are undeniably interconnected steps in a process of change over time — a process which is difficult to call by any name but progress.
So how do historians discuss progress without getting snared in teleology?
And how do I, as a science fiction writer, as a science fiction reader, as someone who tears up every time NASA or ESA posts a new picture of our baby space probes preparing to take the next step in our journey to the stars, how do I discuss progress without getting snared in teleology?
I, at least, begin by being a historian, and talking about the history of progress itself.
Part 2: A Brief History of Progress
In the early seventeenth century, Francis Bacon invented progress.
Let me unpack that.
Ideas of social change over time had existed in European thought since antiquity. Early Greek sources talk about a Golden Age of peaceful, pastoral abundance, followed by a Silver Age, when jewels and luxuries made life more opulent but also more complicated. There followed a Bronze Age, when weapons and guards appeared, and also the hierarchy of have and have-nots, and finally an Iron Age of blood and war and Troy. Some ancients added more detail to this narrative, notably Lucretius in his Epicurean epic On the Nature of Things. In his version the transition from simple, rural living to luxury-hungry urbanized hierarchy was explicitly developmental, caused, not by divine planning or celestial influences, but by human invention: as people invented more luxuries they then needed more equipment–technological and social — to produce, defend, control, and war over said luxuries, and so, step-by-step, tranquil simplicity degenerated into sophistication and its discontents.
Lucretius’s developmental model of society has several important components of the concept of progress, but not all of them. It has the state of things vary over the course of human history. It also has humanity as the agent of that change, primarily through technological innovation and social changes which arise in reaction to said innovation. It does not have (A) intentionality behind this change, (B) a positive arc to this change, (C) an infinite or unlimited arc to this change, or–perhaps most critically–(D) the expectation that any more change will occur in the future. Lucretius accounts for how society reached its present, and the mythological eras of Gold, Silver, Bronze and Iron do the same. None of these ancient thinkers speculate — as we do every day — about how the experiences of future generations might continue to change and be fundamentally different from their own. Quantitatively things might be different — Rome’s empire might grow or shrink, or fall entirely to be replaced by another — but fundamentally cities will be cities, plows will be plows, empires will be empires, and in a thousand years bread will still be bread. Even if Lucan or Lucretius speculate, they do not live in our world where bread is already poptarts, and will be something even more outlandish in the next generation.
Medieval Europe came to the realization — and if you grant their starting premises they’re absolutely right — that if the entire world is a temporary construct designed by an omnipotent, omniscient Creator God for the purpose of leading humans through their many trials toward eternal salvation or damnation, then it’s madness to look to Earth history for any cause-to-effect chains; there is one Cause of all effects. Medieval thought is no more monolithic than modern, but many excellent examples discuss the material world as a sort of pageant play being performed for us by God to communicate his moral lessons, and if one stage of history flows into another — an empire rises, prospers, falls — that is because God had a moral message to relate through its progression. Take Dante’s obsession with the Emperor Tiberius, for example. According to Dante, God planned the Crucifixion and wanted His Son to be lawfully executed by all humanity, so the sin and guilt and salvation would be universal, so He created the Roman Empire in order to have there be one government large enough to rule and represent the whole world (remember Dante’s maps have nothing south of Egypt except the Mountain of Purgatory). The empire didn’t develop, it was crafted for God’s purposes: Act II scene iii the Roman Empire Rises, scene v it fulfills its purpose, scene vi it falls. Applause.
Did the Renaissance have progress? No. Not conceptually, though, as in all eras of history, constant change was happening. But the Renaissance did suddenly get much closer to the concept. The Renaissance invented the Dark Ages. Specifically the Florentine Leonardo Bruni invented the Dark Ages in the 1420s-1430s. Following on Petrarch’s idea that Italy was in a dark and fallen age and could rise from it again by reviving the lost arts that had made Rome glorious, Bruni divided history into three sections: good Antiquity, bad Dark Ages, and good Renaissance, when the good things lost in antiquity returned. Humans and God were both agents in this, God who planned it and humans who actually translated the Greek, and measured the aqueducts, and memorized the speeches, and built the new golden age. Renaissance thinkers, fusing ideas from Greece and Rome with those of the Middle Ages, added to old ideas of development the first suggestion of a positive trajectory, but not an infinite one, and not a fundamental one. The change the Renaissance believed in lay in reacquiring excellent things the past had already had and lost, climbing out of a pit back to ground level. That change would be dramatic, but finite, and when Renaissance people talk about “surpassing the ancients” (which they do) they talk about painting more realistic paintings, sculpting more elaborate sculptures, perhaps building more stunning temples/cathedrals, or inventing new clever devices like Leonardo’s heated underground pipes to let you keep your potted lemon tree roots warm in winter (just like ancient Roman underfloor heating!). But cities would be cities, plows would be maybe slightly better plows, and empires would be empires. Surpassing the ancients lay in skill, art, artistry, not fundamentals.
Then in the early seventeenth century, Francis Bacon invented progress.
If we work together — said he — if we observe the world around us, study, share our findings, collaborate, uncover as a human team the secret causes of things hidden in nature, we can base new inventions on our new knowledge which will, in small ways, little by little, make human life just a little easier, just a little better, warm us in winter, shield us in storm, make our crops fail a little less, give us some way to heal the child on his bed. We can make every generation’s experience on this Earth a little better than our own. There are — he said — three kinds of scholar. There is the ant, who ranges the Earth and gathers crumbs of knowledge and piles them, raising his ant-mound, higher and higher, competing to have the greatest pile to sit and gloat upon–he is the encyclopedist, who gathers but adds nothing. There is the spider, who spins elaborate webs of theory from the stuff of his own mind, spinning beautiful, intricate patterns in which it is so easy to become entwined — he is the theorist, the system-weaver. And then there is the honeybee, who gathers from the fruits of nature and, processing them through the organ of his own being, produces something good and useful for the world. Let us be honeybees, give to the world, learning and learning’s fruits. Let us found a new method — the Scientific Method — and with it dedicate ourselves to the advancement of knowledge of the secret causes of things, and the expansion of the bounds of human empire to the achievement of all things possible.
Bacon is a gifted wordsmith, and he knows how to make you ache to be the noble thing he paints you as.
“How, Chancellor Bacon, do we know that we can change the world with this new scientific method thing, since no one has ever tried it before so you have no evidence that knowledge will yield anything good and useful, or that each generation’s experience might be better than the previous?”
It is not an easy thing to prove science works when you have no examples of science working yet.
Bacon’s answer — the answer which made kingdom and crown stream passionate support and birthed the Academy of Sciences–may surprise the 21st-century reader, accustomed as we are to hearing science and religion framed as enemies. We know science will work–Bacon replied–because of God. There are a hundred thousand things in this world which cause us pain and suffering, but God is Good. He gave the cheetah speed, the lion claws. He would not have sent humanity out into this wilderness without some way to meet our needs. He would not have given us the desire for a better world without the means to make it so. He gave us Reason. So, from His Goodness, we know that Reason must be able to achieve all He has us desire. God gave us science, and it is an act of Christian charity, an infinite charity toward all posterity, to use it.
They believed him.
And that is the first thing which, in my view, fits every modern definition of progress. Francis Bacon died from pneumonia contracted while experimenting with using snow to preserve chickens, attempting to give us refrigeration, by which food could be stored and spread across a hungry world. Bacon envisioned technological progress, medical progress, but also the small social progresses those would create, not just Renaissance glories for the prince and the cathedral, but food for the shepherd, rest for the farmer, little by little, progress. As Bacon’s followers reexamined medicine from the ground up, throwing out old theories and developing…
I’m going to tangent for a moment. It really took two hundred years for Bacon’s academy to develop anything useful. There was a lot of dissecting animals, and exploding metal spheres, and refracting light, and describing gravity, and it was very, very exciting, and a lot of it was correct, but–as the eloquent James Hankins put it–it was actually the nineteenth century that finally paid Francis Bacon’s I.O.U., his promise that, if you channel an unfathomable research budget, and feed the smartest youths of your society into science, someday we’ll be able to do things we can’t do now, like refrigerate chickens, or cure rabies, or anesthetize. There were a few useful advances (better navigational instruments, Franklin’s lightning rod) but for two hundred years most of science’s fruits were devices with no function beyond demonstrating scientific principles. Two hundred years is a long time for a vastly-complex society-wide project to keep getting support and enthusiasm, fed by nothing but pure confidence that these discoveries streaming out of the Royal Society papers will eventually someday actually do something. I just think… I just think that keeping it up for two hundred years before it paid off, that’s… that’s really cool.
…okay, I was in the middle of a sentence: As Bacon’s followers reexamined medicine from the ground up, throwing out old theories and developing new correct ones which would eventually enable effective advances, it didn’t take long for his followers to apply his principle (that we should attack everything with Reason’s razor and keep only what stands) to social questions: legal systems, laws, judicial practices, customs, social mores, social classes, religion, government… treason, heresy… hello, Thomas Hobbes. In fact the scientific method that Bacon pitched, the idea of progress, proved effective in causing social change a lot faster than genuinely useful technology. Effectively the call was: “Hey, science will improve our technology! It’s… it’s not doing anything yet, so… let’s try it out on society? Yeah, that’s doing… something… and — Oh! — now the technology’s doing stuff too!” Except that sentence took three hundred years.
We know now, as Bacon’s successors learned, with harsher and harsher vividness in successive generations, that attempts at progress can also cause negative effects, atrocious ones. Like Thomas Hobbes. And the Terror phase of the French Revolution. And the life-expectancy in cities plummeting as industrialization spread soot, and pollutants, and cholera, and mercury-impregnated wallpaper, and lead-whitened bread, Mmmmm lead-whitened bread… And just as technological discoveries had their monstrous offspring, like lead-whitened bread, the horrors of colonization were some of the monstrous offspring of the social applications of Reason. Monstrous offspring we are still wrestling with today.
Part 3: Progresses
We now use the word “progress” in many senses, many more than Bacon and his peers did. There is “technological progress.” There is “social progress.” There is “economic progress.” We sometimes lump these together, and sometimes separate them.
Thus the general question “Has progress failed?” can mean several things. It can mean, “Have our collective efforts toward the improvement of the human condition failed to achieve their desired results?” This is being asked frequently these days in the context of social progress, as efforts toward equality and tolerance are facing backlash.
But “Has progress failed?” can also mean “Has the development of science and technology, our application of Reason to things, failed to make the lived experience of people better, happier, less painful? Have the changes been bad or neutral instead of good?” In other words, was Bacon right that humans using Reason and science can change our world, but wrong that we can make it better?
I want to stress that it is no small intellectual transformation that “progress” can now be used in a negative sense as well as a positive one. The concept as Bacon crystallized it, and as the Enlightenment spread it, was inherently positive, and to use it in a negative sense would be nonsensical, like using “healing” in a negative sense. But look at how we actually use “progress” in speech today. Sometimes it is positive (“Great progress this year!”) and sometimes negative (“Swallowed up by progress…”). This is a revolutionary change from Bacon’s day, enabled by two differences between ourselves and Bacon.
First we have watched the last several centuries. For us, progress is sometimes the first heart transplant and the footprints on the Moon, and sometimes it’s the Belgian Congo with its Heart of Darkness. Sometimes it’s the annihilation of smallpox and sometimes it’s polio becoming worse as a result of sanitation instead of better. Sometimes it’s Geraldine Roman, the Philippines’ first transgender congresswoman, and sometimes it’s Cristina Calderón, the last living speaker of the Yaghan language. Progress has yielded fruits much more complex than honey, which makes sentences like “The prison of progress” sensical to us.
We have also broadened progress. For Bacon, progress was the honey and the honeybees, hard, systematic, intentional human action creating something sweet and useful for mankind. It was good. It was new. And it was intentional. In its nascent form, Bacon’s progress did not differentiate between progress the phenomenon and progress the concept. If you asked Bacon “Was there progress in the Middle Ages?” he would have answered, “No. We’re starting to have progress right now.” And he’s correct about the concept being new, about intentional or self-aware progress, progress as a conscious effort, being new. But if we turn to Wikipedia it defines “Progress (historical)” as “the idea that the world can become increasingly better in terms of science, technology, modernization, liberty, democracy, quality of life, etc.” Notice how agency and intentionality are absent from this. For there was technological and social change before 1600; there were even technological and social changes that undeniably made things better, even if they came less frequently than they do in the modern world. So the phenomenon of progress is one we can study through the whole of history, long before the maturation of the concept.
As “progress” broadened to include unsystematic progress as well as the modern project of progress, that was the moment we acquired the questions “Is progress natural?” and “Is progress inevitable?” Because those questions require progress to be something that happens whether people intend it or not. In a sense, Bacon’s notion of progress wasn’t as teleological as Whig history. Bacon believed that human action could begin the process of progress, and that God gave Reason to humanity with this end in mind, but Bacon thought humans had to use a system, act intentionally, gather the pollen to make the honey; he didn’t think the honey just flowed. Not until progress is broadened to include pre-modern progress, and non-systematic, non-intentional modern progress, can the fully teleological idea of an inescapable momentum, an inevitability, join the manifold implications of the word “progress.”
Now I’m going to show you two maps.
This is a map of global population, rendered to look like a terrain. It shows the jagged mountain ranges of south and east Asia, the vast, sweeping valleys of forest and wilderness. The most jagged spikes may be a little jarring, the intensity of India and China, but even those are rich brown mountains, while the whole thing has the mood of a semi-untouched world, much more pastoral wilderness than city, and almost everywhere a healthy green. This makes progress, or at least the spread of population, feel like a natural phenomenon, a neutral phenomenon.
This is the Human Ooze Map. This map shows exactly the same data, reoriented to drip down instead of spiking up, and to be a pus-like yellow against an ominous black background. Instantly the human metropolises are not natural spikes within a healthy terrain, but an infection clinging to every oozing coastline, with the densest mega-cities seeming to bleed out amidst the goop, like open pustules.
Both these maps show one aspect of ‘progress’. Whether the teeming cities of our modern day are an apocalyptic infection, or a force as natural as the meandering of shores and tree-lines, depends on how we present the narrative, and the moral assumptions that underlie that presentation. Presentism and the progress narrative in general have very similar distorting effects. When we examine past phenomena, institutions, events, people, ideas, some feel viscerally good or viscerally bad, right or wrong, forward-moving or backward-moving, values they acquire from narratives which we ourselves have created, and which orient how we analyze history, just as these mapmakers have oriented population up, or down, resulting in radically different feelings. Jean-Jacques Rousseau’s model of the Noble Savage, happier in the rural simplicity of Lucretius’s Golden Age than in the stressful ever-changing urban world of progress, is itself an image of progress presented like the Human Ooze Map, reversing the moral presentation of the same facts.
Realizing that the ways we present data about progress are themselves morally charged can help us clarify questions that are being asked right now about liberalism, and nationalism, and social change, and opposition to social change. Because when we ask whether the world is experiencing a “failure” or a “revolution” or a “regression” or a “backlash” or a “last gasp” or a “pendulum swing” or a “prelude to triumph” etc., all these characterizations reorient data around different facets of the concept of progress, positive or negative, natural or intentional, just as these two maps reorient population around different morally-charged visualizations.
In sum: post colonialism, post industrialization, post Hobbes, we can no longer talk about progress as a unilateral, uncomplicated good, not without distorting history, and ignoring the terrible facets of the last several centuries. Bacon thought there would be only honey; he was wrong. But we can’t not discuss progress, because, during these same centuries, each generation’s experience has been increasingly different from the last generation’s, and science and human action are propelling this change. And there has been some honey. We need ways to talk about that.
But not without bearing in mind how we invest progress with different kinds of moral weight (the terrain or the ooze…)
And not without a question Bacon never thought to ask, because he did not realize (as we do) that technological and social change had been going on for many centuries before he made the action conscious. So Bacon never thought to ask: Do we have any power over progress?
Part 4: Do Individuals Have the Power to Change History?
Feelings of helplessness and despair have also been big parts of the shock of 2016. Helplessness and despair are questions, as well as feelings. They ask: Am I powerless? Can I personally do anything to change this? Do individuals have any power to shape history? Are we just swept along by the vast tides of social forces? Are we just cogs in the machine? What changes history?
Within a history department this divide often manifests methodologically.
Economic historians, and social historians, write masterful examinations of how vast social and economic forces, and their changes, whether incremental or rapid, have shaped history. Let’s call that Great Forces history. Whenever you hear people comparing our current wealth gap to the eve of the French Revolution, that is Great Forces history. When a Marxist talks about the inevitable interactions of proletariat and bourgeoisie, or when a Whig historian talks about the inevitable march of progress, those are also kinds of Great Forces history.
Great Forces history is wonderful, invaluable. It lets us draw illuminating comparisons, and helps us predict, not what will happen but what could happen, by looking at what has happened in similar circumstances. I mentioned earlier the French Wars of Religion, with their intermittent blips of peace. My excellent colleague Brian Sandberg of NIU (a brilliant historian of violence) recently pointed out to me that France during the Catholic-Protestant religious wars was about 10% Protestant, somewhat comparable to the African American population of the USA today which is around 13%. A striking comparison, though with stark differences. In particular, France’s Protestant/Calvinist population fell disproportionately in the wealthy, politically-empowered aristocratic class (comprising 30% of the ruling class), in contrast with African Americans today who fall disproportionately in the poorer, politically-disempowered classes. These similarities and differences make it very fruitful to look at the mechanisms of civil violence in 16th and 17th century France (how outbreaks of violence started, how they ended, who against whom) to help us understand the similar-yet-different ways civil violence might operate around us now. That kind of comparison is, in my view, Great Forces history at its most fruitful. (You can read more by Brian Sandberg on this issue in his book, on his blog, and on the Center for the Study of Religious Violence blog; more citations at the end of this article.)
But are we all, then, helpless water droplets, with no power beyond our infinitesimal contribution to the tidal forces of our history? Is there room for human agency?
History departments also have biographers, and intellectual historians, and micro-historians, who churn out brilliant histories of how one town, one woman, one invention, one idea reshaped our world. Readers have seen me do this here on Ex Urbe, describing how Beccaria persuaded Europe to discontinue torture, how Petrarch sparked the Renaissance, how Machiavelli gave us so much. Histories of agents, of people who changed the world. Such histories are absolutely true — just as the Great Forces histories are — but if Great Forces histories tell us we are helpless droplets in a great wave, these histories give us hope that human agency, our power to act meaningfully upon our world, is real. I am quite certain that one of the causes of the explosive response to the Hamilton musical right now is its firm, optimistic message that, yes, individuals can, and in fact did, reshape this world — and so can we.
This kind of history, inspiring as it is, is also dangerous. The antiquated/old-fashioned/bad version of this kind of history is Great Man history, the model epitomized by Thomas Carlyle’s On Heroes, Hero-Worship, and the Heroic in History (a gorgeous read) which presents humanity as a kind of inert but rich medium, like agar ready for a bacterial culture. Onto this great and ready stage, Nature (or God or Providence) periodically sends a Great Man, a leader, inventor, revolutionary, firebrand, who makes empires rise, or fall, or leads us out of the black of ignorance. Great Man history is very prone to erasing everyone outside a narrow elite, erasing women, erasing the negative consequences of the actions of Great Men, justifying atrocities as the collateral damage of greatness, and other problems which I hope are familiar to my readers.
But when done well, histories of human agency are valuable. Are true. Are hope.
So if Great Forces history is correct, and useful, and Human Agency history is also correct, and useful… how do we balance that? They are, after all, contradictory.
Part 5: The Papal Election of 2016
Every year in my Italian Renaissance class, here at the University of Chicago, I run a simulation of a Renaissance papal election, circa 1490-1500. Each student is a different participant in the process, and they negotiate, form coalitions, and, eventually, elect a pope. And then they have a war, and destroy some chunk of Europe. Each student receives a packet describing that student’s character’s goals, background, personality, allies and enemies, and a packet of resources, cards representing money, titles, treasures, armies, nieces and nephews one can marry off, contracts one can sign, artists or scholars one can use to boost one’s influence, or trade to others as commodities: “I’ll give you Leonardo if you send three armies to guard my city from the French.”
Some students in the simulation play powerful Cardinals wielding vast economic resources and power networks, with clients and subordinates, complicated political agendas, and a strong shot at the papacy. Others are minor Cardinals, with debts, vulnerabilities, short-term needs tied to some personal crisis in their home cities, or long-term hopes of rising on the coattails of others and perhaps being elected three or four popes from now. Others, locked in a secret chamber in the basement, are the Crowned Heads of Europe — the King of France, the Queen of Castile, the Holy Roman Emperor — who smuggle secret orders (text messages) to their agents in the conclave, attempting to forge alliances with Italian powers, and gain influence over the papacy so they can use Church power to strengthen their plans to launch invasions or lay claim to distant thrones. And others are not Cardinals at all but functionaries who count the votes, distribute the food, the guard who keeps watch, the choir director who entertains the churchmen locked in the Sistine, who have no votes but can hear, and watch, and whisper.
There are many aspects to this simulation, which I may someday discuss here at greater length (for now you can read a bit about it on our History Department blog), but for the moment I just want to talk about the outcomes, and what structures the outcomes. I designed this simulation not to have any pre-set outcome. I looked into the period as best I could, and gave each historical figure the resources and goals that I felt accurately reflected that person’s real historical resources and actions. I also intentionally moved some characters in time, including some Cardinals and political issues which do not quite overlap with each other, in order to make this an alternate history, not a mechanical reconstruction, so that students who already knew what happened to Italy in this period would know they couldn’t have the “correct” outcome even if they tried, which frees everyone to pursue goals, not “correct” choices, and to genuinely explore the range of what could happen without being too locked in to what did. I set up the tensions and the actors to simulate what I felt the situation was when the election began, then left it free to flow.
I have now run the simulation four times. Each time some outcomes are similar, similar enough that they are clearly locked in by the greater political webs and economic forces. The same few powerful Cardinals are always leading candidates for the throne. There is usually also a wildcard candidate, someone who has never before been one of the top contenders, but circumstances bring a coalition together. And, usually, perhaps inevitably, a juggernaut wins, one of the Cardinals who began with a strong power-base, but it’s usually very, very close. And the efforts of the wildcard candidate, and the coalition that formed around that wildcard, always have a powerful effect on the new pope’s policies and first actions, who’s in the inner circle and who’s out, what opposition parties form, and that determines which city-states rise and which city-states burn as Italy erupts in war.
And the war is Always. Totally. Different.
Because as the monarchies race to make alliances and team up against their enemies, they get pulled back-and-forth by the ricocheting consequences of small actions: a marriage, an insult, a bribe traded for a whisper, someone paying off someone else’s debts, someone taking a shine to a bright young thing. Sometimes France invades Spain. Sometimes France and Spain unite to invade the Holy Roman Empire. Sometimes England and Spain unite to keep the French out of Italy. Sometimes France and the Empire unite to keep Spain out of Italy. Once they made a giant pan-European peace treaty, with a set of marriage alliances which looked likely to permanently unify all four great Crowns, but it was shattered by the sudden assassination of a crown prince.
So when I tell people about this election, and they ask me “Does it always have the same outcome?” the answer is yes and no. Because the Great Forces always push the same way. The strong factions are strong. Money is power. Blood is thicker than promises. Virtue is manipulable. In the end, a bad man will be pope. And he will do bad things. The war is coming, and the land — some land somewhere — will burn. But the details are always different. A Cardinal needs to gather fourteen votes to get the throne, but it’s never the same fourteen votes, so it’s never the same fourteen people who get papal favor, whose agendas are strengthened, whose homelands prosper while their enemies fall. And I have never once seen a pope elected in this simulation who did not owe his victory, not only to those who voted, but to one or more of the humble functionaries, who repeated just the right whisper at just the right moment, and genuinely handed the throne to Monster A instead of Monster B. And from that functionary flow the consequences. There are always several kingmakers in the election, who often do more than the candidate himself to get him on the throne, but what they do, who they help, and which kingmaker ends up most favored, most influential, can change a small war in Genoa into a huge war in Burgundy, a union of thrones between France and England into another century of guns and steel, or determine which decrees the new pope signs. That sometimes matters more than whether war is in Burgundy or Genoa, since papal signatures resolve questions such as: Who gets the New World? Will there be another crusade? Will the Inquisition grow more tolerant or less toward new philosophies? Who gets to be King of Naples? These things are different every time, though shaped by the same forces.
Frequently the most explosive action is right after the pope is elected, after the Great Forces have thrust a bad man onto Saint Peter’s throne, and set the great and somber stage for war, often that’s the moment that I see human action do most. That’s when I get the after-midnight message on the day before the war begins: “Secret meeting. 9AM. Economics cafe. Make sure no one sees you. Sforza, Medici, D’Este, Dominicans. Borgia has the throne but he will not be master of Italy.” And together, these brave and haste-born allies, they… faicceed? Fail and succeed? They give it all they have: diplomacy, force, wealth, guile, all woven together. They strike. The bad pope rages, sends forces out to smite these enemies. The kings and great thrones take advantage, launch invasions. The armies clash. One of the rebel cities burns, but the other five survive, and Borgia (that year at least) is not Master of Italy.
We feel it, the students and I, coming out of the simulation. The Great Forces were real, and were unstoppable. The dam was about to break. No one could stop it. But the human agents — even the tiniest junior clerk who does the paperwork — the human agents shaped what happened, and every action had its consequences, imperfect, entwined, but real. The dam was about to break, but every person there got to dig a channel to try to direct the waters once they flowed, and that is what determined the real shape of the flood, its path, its damage. No one controlled what happened, and no one could predict what happened, but those who worked hard and dug their channels, most of them succeeded in diverting most of the damage, achieving many of their goals, preventing the worst. Not all, but most.
And what I see in the simulation I also see over and over in real historical sources.
This is how both kinds of history are true. There are Great Forces. Economics, class, wealth gaps, prosperity, stagnation, these Great Forces make particular historical moments ripe for change, ripe for war, ripe for wealth, ripe for crisis, ripe for healing, ripe for peace. But individuals also have real agency, and our actions determine the actual consequences of these Great Forces as they reshape our world. We have to understand both, and study both, and act on the world now remembering that both are real.
So, can human beings control progress? Yes and no.
Part 6: Ways to Talk About Progress in the 21st Century
Few things have taught me more about the world than keeping a fish tank.
You get some new fish, put them in your fish tank, everything’s fine. You get some more new fish, and the next morning one of them has killed almost all the others. Another time you get a new fish and it’s all gaspy and pumping its gills desperately, because it’s from alkaline waters and your tank is too acidic for it. So you put in a little pH-adjusting powder and… all the other fish get sick from the ammonia it releases and die. Another time you get a new fish and it’s sick! So you put fish antibiotics in the water, aaaand… they kill all the symbiotic bacteria in your filter system and the water gets filled with rotting bacteria, and the fish die. Another time you do absolutely nothing, and the fish die.
What’s happening? The same thing that happened in the first two centuries after Francis Bacon, when science was learning tons, but achieving little that actually improved daily life. The system is more complex than it seems. A change which achieves its intended purpose also throws out of whack vital forces you did not realize were connected to it. The acidity buffer in the fish tank increases the nutrients in the water, which causes an algae bloom, which uses up the oxygen and suffocates the catfish. The marriage alliance between Milan and Ferrara makes Venice friends with Milan, which makes Venice’s rival Genoa side with Spain, which makes Spain reluctant to anger Portugal, which makes them agree to a marriage alliance, and then Spain is out of princesses and can’t marry the Prince of Wales, and the next thing you know there are soldiers from Scotland attacking Bologna. A seventeenth-century surgeon realizes that cataracts are caused by something white and opaque appearing at the front of the eye, so he removes it, not yet understanding that it’s the lens and you really need it.
So when I hear people ask “Has social progress failed?” or “Has liberalism failed?” or “Has the Civil Rights Movement failed?” my zoomed-in self, my scared self, the self living in this crisis feels afraid and uncertain, but my zoomed-out self, my historian self answers very easily. No. These movements have done wonders, achieved tons! But they have also done what all movements do in a dynamic historical system: they have had large, complicated consequences. They have added something to the fish tank. Because the same Enlightenment impulse to make a better, more rational world, where everyone would have education and equal political empowerment, BOTH caused the brutalities of the Belgian Congo AND gave me the vote. And that’s the sort of thing historians look at, all day.
But if the consequences of our actions are completely unpredictable, would it be better to say that change is real but progress controlled by humans is just an idea which turned out to be wrong? No. I say no. Because I gradually got better at understanding the fish tank. Because the doctors gradually figured out how the eye really does function. Because some of our civil rights have come by blood and war, and others have come through negotiation and agreement. Because we as humans are gradually learning more about how our world is interconnected, and how we can take action within that interconnected system. And by doing so we really have achieved some of what Francis Bacon and his followers waited for through those long centuries: we have made the next generation’s experience on this Earth a little better than our own. Not smoothly, and not quickly, but actually. Because, in my mock papal election, the dam did break, but those students who worked hard to dig their channels did direct the flood, and most of them managed to achieve some of what they aimed at, though they always caused some other effects too.
Is it still blowing up in our faces?
Is it going to keep blowing up in our faces, over and over?
Is it going to blow up so much, sometimes, that it doesn’t seem like it’s actually any better?
Is that still progress?
Because there was a baby in the bathwater of Whig history. If we work hard at it, we can find metrics for comparing times and places which don’t privilege particular ideologies. Metrics like infant mortality. Metrics like malnutrition. Metrics like the frequency of massacres. We can even find metrics for social progress which don’t irrevocably privilege a particular Western value system. One of my favorite social progress metrics is: “What portion of the population of this society can be murdered by a different portion of the population and have the murderer suffer no meaningful consequences?” The answer, for America in 2017, is not 0%. But it’s also not 90%. That number has gone down, and is now far below the geohistorical norm. That is progress. That, and infant mortality, and the conquest of smallpox. These are genuine improvements to the human condition, of the sort that Bacon and his followers believed would come if they kept working to learn the causes and secret motions of things. And they were right. While Whig history privileges a very narrow set of values, metrics which track things like infant mortality, or murder with impunity, still privilege particular values — life, justice, equality — but aim to be compatible with as many different cultures, and even time periods, as possible. They are metrics which stranded time travelers would find it fairly easy to explain, no matter where they were dumped in Earth’s broad timeline. At least that’s our aim. And such metrics are the best tool we have at present to make the comparisons, and have the discussions about progress, that we need to have to grapple with our changing world.
Because progress is both a concept and a phenomenon.
The concept is the hope that collective human effort can make every generation’s experience on this Earth a little better than the previous generation’s. That concept has itself become a mighty force shaping the human experience, like communism, iron, or the wheel. It is a valuable thing to look at the effects that concept has had, to talk about how some have been destructive and others constructive, and to study, from a zoomed-out perspective, the consequences, successes, and failures of different movements or individuals who have acted in the name of progress.
The phenomenon is also real. My own personal assessment of it is just that, a personal assessment, with no authority beyond some years spent studying history. I hope to keep reexamining and improving this assessment all the days of my life. But here at the beginning of 2017 I would say this:
Progress is not inevitable, but it is happening.
It is not transparent, but it is visible.
It is not safe, but it is beneficial.
It is not linear, but it is directional.
It is not controllable, but it is us. In fact, it is nothing but us.
Progress is also natural, in my view, not in the sense that it will inevitably triumph over its doomed opposition, but in the sense that the human animal is part of nature, so the Declaration of the Rights of Man is as natural as a bird’s nest or a beaver dam. There is no teleology, no inevitable correct ending locked in from time immemorial. But I personally think there is a certain outcome to progress, gradual but certain: the decrease of pain in the human condition over time. Because there is so much desire in this world to make a better one. Bacon was right that we ache for it. And the real measurable changes we have made show that he was also right that we can use Reason and collective effort to meet our desires, even if the process is agonizingly slow, imperfect, and dangerous. But we know now how to go about learning the causes and secret motions of things. And how to use that knowledge.
We are also learning to understand the accidental negative consequences of progress, looking out for them, mitigating them, preventing them, creating safety nets. We’re getting better at it. Slowly, but we are.
Zooming back in hurts. It’s easy to say “the French Wars of Religion” and erase the little blips of peace, but it’s hard to feel fear and pain, or watch a friend feel fear and pain. Sometimes I hear people say they think that things today are worse than they’ve ever been, especially the hate, or the race relations in the USA, that they’re worse now than ever. That we’ve made no progress, quite the opposite. Similarly, I think a person who grew up during one of the peaceful pauses in the French Wars of Religion might say, when the violence restarted, that the wars were worse now than they had ever been, and farther than ever from real peace. They aren’t actually worse now. They genuinely were worse before. But they are really, really bad right now, and it does really, really hurt.
The slowness of social progress is painful, I think, especially because it’s the aspect of progress that seemed like it would come fastest. During that first century, when Bacon’s followers were waiting in maddening impatience for their better medical knowledge to result in any actual increase in their ability to save lives, social progress was already working wonders. The Enlightenment extended the franchise, ended torture on an entire continent, achieved much, and had this great, heady, explosive feeling of victory and momentum. It seemed like social progress was already halfway done before tech even got started. But Charles Babbage kicked off programmable computing in 1833, and now my pocket contains 100x the computing power needed to get Apollo XI to the Moon, so why, if Olympe de Gouges wrote the Declaration of the Rights of Woman and the Citizen in 1791, do we still not have equal pay?
Because society is a very complicated fish tank. Because we still have a lot to learn about the causes and secret motions of society.
But if there is a dam right now, ready to break and usher in a change, Great Forces are still shaped by human action. Our action.
Studying history has proved to me, over and over, that things used to be worse. That they are better now. Progress is real. That’s a consolation, but a hollow one while we’re still here facing the pain. What fills its hollowness, for me at least, is remembering that secret meeting in the Economics cafe, that hasty plan, diplomacy, quick action — not a second chance after the disaster, but a next chance. And a next. And a next, to take actions that really did achieve things, even if not everything. Human action combining with the flood is not powerlessness. And that’s how I think progress really works.
And as promised, more citations on the demographics of religious violence in France, with thanks to Brian Sandberg:
Brian Sandberg, Warrior Pursuits: Noble Culture and Civil Conflict in Early Modern France (Baltimore, MD: Johns Hopkins University Press, 2010).
Philip Benedict, “The Huguenot Population of France, 1600-85,” in The Faith and Fortunes of France’s Huguenots, 1600-85 (Aldershot: Ashgate, 2001), 39-42, 92-95.
Arlette Jouanna, La France du XVIe siècle, 1483-1598 (Paris: Presses Universitaires de France, 1996), 325-340.
Jacques Dupâquier, ed., De la Renaissance à 1789, vol. 2 of Histoire de la population française (Paris: Presses Universitaires de France, 1988), 81-94.
Off to Italy again. This seems like a good time to share a link to a video of an illustrated talk Ada gave at the Lumen Christi institute in Chicago in February. It’s a fascinating overview of the place of San Marco in Florence, with lots of excellent pictures. It’s like an audio version of an Ex Urbe post, with Fra Angelico, the meaning of blue, the Magi, the Medici, Savonarola, confraternities, and the complexities of Renaissance religious and artistic patronage.
And here’s one of the pictures mentioned but not shown in the presentation, a nine-panel illustration by Filippo Dolciati, “The History of Antonio Rinaldeschi.” It depicts the real historical fate of Rinaldeschi, who became drunk while gambling and threw manure at an icon of the Virgin Mary. A fascinating incident for demonstrating the functions of confraternities, and how seriously the people of Florence took the protection offered by saints and icons.
Second, due to a recent policy change in Italy’s national museums I was able to finally take literally thousands of photos of artifacts and spaces in museums that have been forbidden to cameras for years. I’ve started sharing the photos on Twitter (#historypix) so follow me on Twitter if you would enjoy random photos of cool historical artifacts twice a day.
Meanwhile I don’t yet have another full essay ready to post here, but I’m happy to say the reason is that I’m working away on the page proofs of Too Like the Lightning, the final editing step before the books go to press. I’ve even received a photo from my editor of the Advanced Release Copies for book reviewers sitting in a delicious little pile! It’s fun seeing how many different baby steps the book is taking on its long path to becoming real: cover art, page count, typography, physicality in many stages, first the pre-copy-edit Advanced Bound Manuscripts, then the post-copy-edit but pre-page-proof Advanced Release Copies, evolving toward the final hardcover transformation by transformation. My biggest point of suspense at this point is wondering how fat it will be, how heavy in the hand…
And now, a quick piece of history fun:
There is a dimly-lit hallway halfway through the Vatican museum (after you’ve looked at 2,000 Roman marbles, 1,000 Etruscan vases, and enough overwhelming architecture to make you start feeling slightly punchy), hung on the left-hand side with stunning tapestries of scenes from the life of Christ based on cartoons by Raphael. But on the right-hand side of the same hallway, largely ignored by the thousands of visitors who stumble through, is my favorite Renaissance tapestry cycle, a sequence of images of The Excessively Exciting Life of Pope Urban VIII. My best summary of these images is that, when I showed them to my excellent friend Jonathan (author of our What Color is Pluto? guest post), he scratched his chin and said, “I think the patronage system may have introduced some bias.” And it’s very true: these are an amazing example of Renaissance art whose sole purpose is excessive flattery of the patron, a genre common in all media: histories, biographies, dedications, sculptures, paintings, verses, and, in this case, thread.
These tapestries are fragile and quite faded, and the narrow hallway thronging with Raphael-admirers makes it awkward to get a good angle, but with much effort I think these photos capture the over-the-top absurdity which makes the tapestries such a delight. Urban VIII is now best known for engaging in unusually complicated military and political maneuvering, expanding and fortifying the papal territories, pushing fiercely against Hapsburg expansion into Italy, finishing the canonization of St. Ignatius of Loyola, persecuting Galileo, commissioning a lot of Bernini sculptures, and spending so much on military and artistic expenses that he plunged the papacy so head-over-heels into debt that the Roman people hated him, the Cardinals conspired to depose him (note: it usually takes a few high-profile murders and/or orgies to get them to do that, so this was a LOT of debt), and his successor was left spending 80% of the Vatican’s annual income on interest payments alone. But let’s see what scenes from his life he himself wanted us to remember:
My favorite is the first: Angels and Muses descend from Heaven to attend the college graduation of young Maffeo Barberini (not yet pope Urban VIII) and give him a laurel crown. If all graduation ceremonies were this exciting, we’d never miss them! Also someone there has a Caduceus, some weird female version of Hermes? Hard to say. And look at the amazing fabric on the robe of the man overseeing the ceremony.
Second, Maffeo Barberini receives the Cardinal’s Hat, attended by an angel, while Pope Paul V who is giving him the hat points in a heavy-handed foreshadowing way to his own pope hat nearby. What could it mean?!
Next, the fateful election! Heavenly allegories of princely virtues come to watch as the wooden slips are counted and the vote counter is astonished by the dramatic result! Note how, propaganda aside, this is useful for showing us what the slips looked like.
In the one above I particularly like the guy who’s peering into the goblet to make absolutely sure no slips are stuck there:
On the other side of the same scene, our modest Urban VIII is so surprised to be elected he practically swoons! And even demands a recount, while the nice acolyte kneels before him with the (excessively heavy) papal tiara on a silver platter.
Now Urban’s adventures as pope! He breaks ground for new construction projects in Rome, attended by some floating cupid creature holding a book for the flying allegorical heart of the city:
He builds new fortresses to defend Rome:
He makes peace between allegorical ladies representing Rome and Etruria (the area right next to Rome: note, if there is strife between Rome and Etruria in the first place, things in Italy are VERY VERY BAD! But the tapestries aren’t going into that):
And finally, Urban VIII defends Rome from Famine and Plague by getting help from St. Peter, St. Paul, Athena, and St. Sebastian. Well done, your Holiness!
How about that for the exciting life of a late Renaissance pope? You get to hang out with lots of allegorical figures, and vaguely pagan deities as well as saints, and everyone around you is always gesturing gracefully! No wonder they fought so hard for the papal tiara. Also, no bankers or moneylenders or interest payments to be found!
More seriously, another century’s propaganda rarely makes it into our canon of what art is worth reproducing, teaching, and discussing, but I often find this kind of artifact much more historically informative than most: we can learn details of clothing, spaces, and items, like how papers are folded or what voting slips looked like. We can learn which acts a political figure wanted to be remembered for, what seemed important at the time, so different from what we remember. A tapestry of him canonizing St. Ignatius of Loyola would certainly be popular now, but in his day people cared more about immediate military matters, and he had no way to predict how important St. Ignatius would eventually become. Pieces like this are also a good way to remind ourselves that the Renaissance art we usually see on calendars and cell phone cases isn’t representative; it’s our own curated selection, the tiny Venn-diagram intersection of art that fits the tastes of BOTH then AND now. And a good reminder that we should always attend graduation ceremonies, since you never know when Angels and Muses might descend from Heaven to attend.
My own period I will treat the most briefly in this survey. This may seem like a strange choice, but I can either do a general overview, or get sidetracked discussing individual philosophers, theologians and commentators and their uses of skepticism for another five posts. So, in brief:
In the later Middle Ages, within the philosophical world, the breadth of disagreement within scholarship, how different the far extreme theories were on any given topic, was rather circumscribed. A good example of a really fractious fight is the question of, within your generally Aristotelian tripartite rational immortal soul, which of the two decision-making principles is more powerful, the Intellect or the Will? It’s a big and important question – without it we will starve to death like Buridan’s ass, and be unable to decide whether to send our second sons to a Franciscan or a Dominican monastery, plus we need it to understand how Original Sin, Grace, and salvation work. But the breadth of answers is not that big, and the question itself presumes that everyone involved already believes 90% the same thing.
Enter Petrarch, “Let’s read the classics! They’ll make us great like the Romans!” Begin 250 years of working really hard to find, copy, correct, translate, edit, print and proliferate every syllable surviving from antiquity. Now we discover that Epicurus says there’s no afterlife and the universe is made of atoms; Stoics say the universe is one giant contiguous object without motion or individual existence; Plato says there’s reincarnation (What? The Plato we used to have didn’t say that!); and Aristotle totally doesn’t say what we thought he said, it turns out the Organon was a terrible translation (Sorry, Boethius, you did your best, and we love you, but it was a terrible translation.) Suddenly the palette of questions is much broader, and the degree to which people disagree has opened exponentially wider. If we were charting a solar system before, now we’re charting a galaxy. But the humanists still tried hard to make them all agree, much as the scholastics and Peter Abelard had, since the ancients were ALL wonderful and ALL brilliant and ALL right, right? Even the stuff that contradicts the other stuff? Hence Renaissance Syncretism, attempts by philosophers like Marsilio Ficino and Giovanni Pico della Mirandola to take all the authors of antiquity, and Aquinas and a few others in the mix, and show how they were all really saying the same thing, in a roundabout, hidden, glorious, elusive, poetic, we-can-make-like-Abelard-and-make-it-all-make-sense way.
Before you dismiss these syncretic experiments as silly, or as slavish toadying, there is a logic to it if you can zoom out from modern pluralistic thinking for a minute and look at what Renaissance intellectuals had to work with.
To follow their logic chain you must begin–as they did–by positing that Christianity is true, and there is a single monotheistic God who is the source of all goodness, virtue, and knowledge. Wisdom, being wise and good at judgment, helps you tell true from false and right from wrong, and what is true and right will always agree with and point toward God. Therefore all wise people in history have really been aiming toward the same thing–one truth, one source. Plato and Aristotle and their Criteria of Truth are in the background of this, Plato’s description of the Good which is one divine thing that all reasoning minds tend toward, and Aristotle’s idea that reasoning people (philosophers, scientists) working without error will come to identical conclusions even if they’re on opposite sides of the world, because the knowable categories (fish, equilateral triangle, good) are universal. Thus, as Plato and Aristotle say we use reason to gradually approach knowledge, all philosophers in history have been working toward the same thing, and differ only in the errors they make along the way. This is the logic, but they also have evidence, and here you have to remember that Renaissance scholars did not have our modern tools for evaluating chronology and influence. They looked at early Christian writings, and they looked at Plato and Aristotle, and they said, as we do, “Wow, Plato and Aristotle have a lot of ideas in common with these early Christians!” but while we conclude, “Early Christians sure were influenced by Plato and Aristotle,” they instead concluded, “This proves that Plato and Aristotle were aiming toward the same things as Christianity!” And they had further evidence from how tangled their chronologies were. There were certain key texts like the Chaldean Oracles which they thought were much much older than we now think they are, which made it look like ideas we attribute to Plato had independently existed well before Plato. 
They looked at Plotinus and other late antique Neoplatonists who mixed Plato and Aristotle but claimed the Aristotelian bits were really hidden inside Plato the whole time, and they concluded, “See, Plato and Aristotle were basically saying the same thing!” Similarly confusing were the works of the figure we now call Pseudo-Dionysius, who we think was a late antique Neoplatonist voicing a mature hybrid of Platonism and Aristotelianism with some Stoicism mixed in, but who Renaissance scholars believed was a disciple of Saint Paul, leading them to conclude that Saint Paul believed a lot of this stuff, and making it seem even more like Plato, Aristotle, Stoics, ancient mystics, and Christianity were all aiming at one thing. So any small differences are errors along the way, or resolvable with “sic et non.”
The problem came when they translated more and more texts, and found more contradictions than they could really handle. Ideas much wilder and more out there than they expected suddenly had authoritative possibly-sort-of-proto-Christian authors endorsing them. Settled questions were unsettled again, sleeping dragons woken. For example, it wasn’t until the Fifth Lateran Council in 1513 that the Church officially made belief in the immortality of the soul a required doctrine for all Christians, which does not mean that lots of Christians before 1513 didn’t believe in the afterlife, but that Christians in 1513 were anxious about belief in the afterlife, feeling that it and many other doctrines were suddenly in doubt which had stood un-threatened throughout the Middle Ages. The intellectual landscape was suddenly bigger and stranger.
Remember how I said Cicero would be back? All these humanists read Cicero constantly, including the philosophical dialogs with his approach of presenting different classical sects in dialog, all equally plausible but incompatible, leading to… skepticism. And as they explored those same sects more and more broadly, Cicero the skeptic became something of the wedge that started to expand the crack, not overtly stating “Hey, guys, these people don’t agree!” but certainly pressing the idea that they don’t agree, in ways which humanists had more and more trouble ignoring as more texts came back.
Aaaaaand the Reformation made this more extreme, a lot more extreme, by (A) generating an enormous new mass of theological claims made by contradictory parties, adding another arm to our galactic spiral, and (B) developing huge numbers of fierce and damning counter-arguments to all these claims, which in turn meant developing new tools for countering and eroding belief. Thus, as we reach the 1570s, the world of philosophy is a lot bigger, a lot deadlier (as the Reformation and Counter-Reformation killed many more people for their ideas than the Middle Ages did), and a lot scarier, with vast swarms of arguments and counter-arguments, many of them powerful, persuasive, beautifully reasoned, and completely incompatible. And when you make a beautiful yes-and-no attempt to make Plato and Epicurus agree, you don’t have the men themselves on hand to say “Excuse me, in fact, we don’t agree.” But you did have real live Reformation and Counter-Reformation theologians running around responding to each other in real time, which made syncretic reconciliation all the more impossible.
Remember how Abelard, who was able to make St. Jerome and St. Augustine seem to agree, drew followers like Woodstock? Well, now his successors–Scholastic and Humanist, since the Humanists were all ALSO reading Scholasticism all the time–have a thousand times as many authorities to reconcile. You think Jerome and Augustine is hard? Try Calvin and Epicurus! St. Dominic and Zwingli! Thomas Aquinas is a saint now, let’s see if you can Yes-and-No the entire Summa Theologica into agreeing with Epictetus, Pseudo-Dionysius, and the Council of Trent at the same time! And remember, in the middle of all this, that most if not all of our Renaissance protagonists still believe in Hell and damnation (or at least something similar to it), and that if you’re wrong you burn in Hellfire forever and ever and ever, and so do all your students, and it’s your fault. Result: FEAR. And its companion, freethought. Contrary to what we might assume, this is not a case where fear stifled inquiry, but one where it stimulated more, firing Renaissance thinkers with the burning need to find a solution to all these contradictions, some way to sort out the safe path amid a thousand pits of Hellfire. New syntheses were proposed, new taxonomies of positions and heresies outlined, and old beliefs reexamined and refined or reaffirmed. And this period of intellectual broadening and competition brought with it an increasing inability to believe that any one of these options is the only right way, when there are so many, and they are so good at tearing each other down.
And in the middle of this, experimental and observational science is advancing rapidly, and causing more doubt. We discover new continents that don’t fit on a T-O map (Ptolemy is wrong), new plants that don’t fit existing plant taxonomy (Theophrastus is wrong), details about animals which don’t match Aristotle (we’d better hope he’s not wrong!), the circulation of the blood, which turns the four humors theory on its head (Not Galen! We really needed him!), and magnification lets us finally see the complexity of a flea and realize there is a whole unexplored micro-universe of detail too small for the naked eye to experience, raising the question: “If God made the Earth for humans, why did God bother to make things humans can’t even perceive?”
Youth: “But, Socrates, why did experimental and observational science advance in that period? Discovering new stuff that isn’t in the classics doesn’t have anything to do with reconstructing antiquity, or with the Reformation, does it?”
Good question. A long answer would be a book, but I can make a quick stab at a short one. I would point at several factors. First, after 1300, and increasingly as we approach 1600, European rulers began competing in new ways, many of them cultural. As more and more nobles were convinced by the humanist claim that true nobility and power came from the lost arts of the ancients, so scholarship and unique knowledge, including knowledge of ancient sciences, became mandatory ornaments of court, and politically valuable as ways of advertising a ruler’s wealth and power. Monarchs and newly-risen families who had seized power through war or bribery could add a veneer of nobility by surrounding themselves with libraries, scholars, poets, and scientists, who studied the ancient scientific sources of Greece and Rome but, in order to understand them more fully, also studied newer sources coming from the Middle East, and did new experiments of their own. A new astronomical model of the heavens proclaimed the power of the patron who had paid for it, just as much as a fur-lined cloak or a diamond-studded scepter.
Add to this the increasing scale of wars, caused by increased wealth which could raise larger armies, generating a situation in which new tools for warfare, and especially fortress construction, were increasingly in demand (when you read Leonardo’s discussions of his abilities, more than 75% of the inventions he mentions are tools of war). Add to that the printing press, which makes it possible for novelties–whether a rediscovered manuscript or a newly-discovered muscle–to spread exponentially faster, and which makes books much more affordable, so that if only one person in 50,000 could afford a library before, now it is one in 5,000, and even merchants could afford a few texts. Education was easier, and educated men were in demand at courts eager to fill themselves with scholars, and to advertise their greatness with discoveries.
These are the main facilitators, but I would also cite another fundamental shift. I have talked before about Petrarch, and the humanist project to improve the world by reconstructing a lost golden age. This is the first philosophical movement since ancient stoicism that has had anything to do with the world, since medieval theology’s (perfectly rational in context!) desire to study the Eternal instead of the ephemeral meant that most scholars for many centuries had considered natural philosophy, the study of impermanent natural phenomena, as useless as studying the bathwater instead of the baby. Humanism generated a lot of arguments about why Earth and earthly things were worth more than nothing, even if they agreed Heaven and eternal things were more important, and I think the mindset which said it was a pious and worthwhile thing to translate Livy or write a treatise on good government contributed to the mindset which said it was a pious and worthwhile thing to measure mountains or write a treatise on metallurgy. Thought turned, just a little bit, toward Earth.
There, that’s the Renaissance and Reformation, oversimplified by necessity, but Descartes is chomping at the bit for what comes next. For those who want more, I shall do the crass thing here and say: for more detail, see my book Reading Lucretius in the Renaissance, or Popkin’s History of Skepticism, or wait.
At last, Montaigne!
Like the world which basked in his writings, and shuddered in his “crisis,” I love Montaigne. I love his sentences, his storytelling, his sincerity, his quips, his authorial voice. Reading Montaigne is like slowly enjoying a glass of whatever complex, rich and subtle beverage you most enjoy a glass of (wine for many, fresh goat milk for me!). Especially because, at the end, your glass is empty. (I see a contented Descartes nodding). When I set about writing this series, getting to Montaigne was, in fact, my secret end goal, since, if there is a founder of modern skepticism, it is Michel Eyquem de Montaigne.
Montaigne was unique, an experiment, the natural one to follow from the maturation of the Renaissance classical project, but still a singular child, raised according to an overt pedagogical program outlined by his father: Montaigne grew up speaking only Latin. He was exposed to French in his first three years by country nurses, but from three on he was only allowed contact with people–his tutor, parents and servants–speaking Latin. He was a literal attempt to raise a Cicero or Caesar, formed exclusively by classical ideas, the ideal man the humanists had been hoping to create. Greek was later added, not with textbooks and the rod as was usual in those days but with games and music, and his studies were always made to seem pleasant and wonderful (the child was even woken every morning with delightful live music). He grew up to be about as perfect a Platonic Philosopher King as one could hope to imagine, studying law and entering politics, as his father wished, and achieving the highest honors, but preferring life alone in his library, and frequently retiring to do just that, only to be dragged back into politics by the literal popular demand of people who would come bang on his library door, insisting that he come out to take up office and rule them. I think often about what it must have been like to be Montaigne, to be so immersed, to enjoy these things so much, and only later to discover that he was alone in a world with literally no other native speaker of his language. It must have been as difficult as it was wonderful to be Montaigne. But I think I understand why, when he lost his best friend Étienne de la Boétie, Montaigne wrote of his grief, his loss, the pain of solitude, with an intensity rarely approached in the history of human literature. He also wrote the Essais, meandering writings, the source of the modern word “essay,” for which every schoolchild has the right to playfully curse him.
I will now go about explaining why Montaigne was so wonderful by describing Voltaire. Yes, it is an odd way to go about it, but the Voltaire example is clearer and more concise than any Montaigne example I have on hand, and, in this, Voltaire was a student of Montaigne, and Montaigne will only smile to see such a beautiful development of his art, as Bacon smiles on Newton, and Socrates on all of us.
At the beginning of this sequence, I outlined two potential sources of knowledge: either (A) Sense Perception, i.e. Evidence, or (B) Logic/Reason. The classical skeptics were born when the reliability of these two sources of knowledge was drawn into doubt, Sense Perception by the stick in water, Logic by Zeno’s Paradoxes of Motion. Responses included the skeptics’ conclusion “We can’t know anything if we can’t trust Reason or the Senses,” and the various other classical schools’ Criteria of Truth (Plato’s Ideas, Aristotle’s Categories, Epicurus’s weak empiricism, etc.). All refutations we have seen along our long path have been based on undermining one of these two types of knowledge source: so when Duns Scotus fights with Aquinas, he picks on his logic, and when Ockham fights with him he, often, picks on his material sensory evidence. (“Where is the phantasm? Huh? Huh?”)
Everybody, I’d like to introduce you to Leibniz. Leibniz, this is everybody. “Hello!” says Leibniz, “Very nice to meet you all.” We are going to viciously murder Leibniz in about three minutes. “It’s no trouble,” says Leibniz, “I’m quite used to it.” Thank you, Leibniz, we appreciate it.
Leibniz here made many great contributions to philosophy and mathematics, but one in particular was extraordinarily popular, I would go so far as to say faddy, a fad argument which swept Europe in the first half of the 18th century. You have almost certainly heard it before in mocking form, but I will do my best to be fair as we line up our target in our sights:
God is Omnipotent, Omniscient and Omnibenevolent. (Given.) “Grrrr,” quoth Socrates.
Given that God is Omniscient, He knows what the best of all possible worlds is.
Given that God is Omnipotent, He can create the best of all possible worlds.
Given that God is Omnibenevolent, He wants to create the best of all possible worlds.
Any world such a God would make must logically be the best of all possible worlds.
This is the best of all possible worlds.
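For those who like to see the skeleton laid bare, the steps above can be compressed into schematic form. The notation below is my own shorthand, a sketch for illustration only, not anything Leibniz wrote:

```latex
% Schematic of the argument. W = the set of possible worlds,
% b = the best member of W. (Notation mine, for illustration.)
\begin{align*}
&\text{1. Omniscience:}     && \text{God knows which } w \in W \text{ is } b.\\
&\text{2. Omnipotence:}     && \text{God can actualize } b.\\
&\text{3. Omnibenevolence:} && \text{God wills to actualize } b.\\
&\text{4. Therefore:}       && \text{any world God actualizes is } b.\\
&\text{5. Therefore:}       && \text{the actual world is } b.
\end{align*}
```

Notice that each premise leans on exactly one of the three Omni-attributes, so denying the conclusion seems to require denying one of the attributes themselves, which is part of why the argument felt so airtight to its admirers.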
Now, this was a proof written, just like Anselm’s and Aquinas’s, by a philosopher expecting a readership who all believe, both in God, and in Providence. It is a comfortable proof of the logical certainty that there is Providence, that this universe is perfect (as the Stoics first theorized), and anything in it that seems to be bad or evil must, in fact, be part of a greater long-term good that we fail to see because of our limited human perspective. The proof made a huge number of people delighted to have such an elegant and simple argument for something they enthusiastically believed.
But the proof also had the side-effect that arguments about Providence often have: it made people start to try to reason out what the good was behind hidden evils. “Oh, that guy was struck with disease because he did X bad thing.” “Wolves exist to make us live in villages.” “That plague happened because those people were bad.” It was (much like Medieval proofs of the existence of God) a way philosophers could show off their cleverness to an appreciative audience, make themselves known, and put forward theories about right and wrong and what God might want.
In 1755 an enormous earthquake struck the great port city of Lisbon (Portugal), wiping out tens of thousands of people (some estimate up to 100,000) and leveling one of the great gems of European civilization. It remains to this day one of the deadliest earthquakes in recorded history, and parts of Lisbon still lie in ruins more than 250 years later. The shock and horror, to a progressive, optimistic Europe, was stunning. And immediately thereafter, fans of Leibniz started publishing essays about how it was GOOD that this had happened, because of XYZ reason. For example, one argument was that the Portuguese had been persecuting people for their religion, and this was God saying he disapproved <= REAL argument. (Note: Leibniz himself is innocent of all this, having died years before the earthquake – we are speaking of his followers.) Others argued that it was a bad minor effect of God’s general laws, that the physical rules of the Earth which make everything wonderful for humankind also make earthquakes sometimes happen, but that the suffering they cause is negligible against the greater goods that Providence achieves. And if one person in Europe could not stand these noxious, juvenile, pompous, inhumane, self-serving, condescending, boastful, heartless, self-congratulatory responses to unprecedented human suffering, that person was the one pen mightier than any sword, Voltaire.
Would words like these to peace of mind restore
The natives sad of that disastrous shore?
Grieve not, that others’ bliss may overflow,
Your sumptuous palaces are laid thus low;
Your toppled towers shall other hands rebuild;
With multitudes your walls one day be filled;
Your ruin on the North shall wealth bestow,
For general good from partial ills must flow;
You seem as abject to the sovereign power,
As worms which shall your carcasses devour.
No comfort could such shocking words impart,
But deeper wound the sad, afflicted heart.
When I lament my present wretched state,
Allege not the unchanging laws of fate;
Urge not the links of the eternal chain,
’Tis false philosophy and wisdom vain.
The God who holds the chain can’t be enchained;
By His blest Will are all events ordained:
He’s Just, nor easily to wrath gives way,
Why suffer we beneath so mild a sway:
This is the fatal knot you should untie,
Our evils do you cure when you deny?
Men ever strove into the source to pry,
Of evil, whose existence you deny.
If he whose hand the elements can wield,
To the winds’ force makes rocky mountains yield;
If thunder lays oaks level with the plain,
From the bolts’ strokes they never suffer pain.
But I can feel, my heart oppressed demands
Aid of that God who formed me with His hands.
Sons of the God supreme to suffer all
Fated alike; we on our Father call.
No vessel of the potter asks, we know,
Why it was made so brittle, vile, and low?
Vessels of speech as well as thought are void;
The urn this moment formed and that destroyed,
The potter never could with sense inspire,
Devoid of thought it nothing can desire.
The moralist still obstinate replies,
Others’ enjoyments from your woes arise,
To numerous insects shall my corpse give birth,
When once it mixes with its mother earth:
Small comfort ’tis that when Death’s ruthless power
Closes my life, worms shall my flesh devour.
This (in the William F. Fleming translation) is an excerpt from the middle of Voltaire’s Poem on the Lisbon Earthquake, which I heartily encourage you to read in its entirety. The poem summarizes the arguments of Camp Leibniz, and juxtaposes them with heart-wrenching descriptions of the sufferings of the victims, and with Voltaire’s own earnest and passionate expression of exactly why these kinds of arguments about Providence are so difficult to choke down when one is really on the ground suffering and feeling. The human is not a senseless pottery vessel, it is a thinking thing, it feels pain, it asks questions, it feels the special kind of pain that unanswered questions cause, the same pain the skeptics have been trying to help us escape for 3,000 years. But we don’t escape, and the poem captures it. The poem swept across Europe like a firestorm. People read it, people felt it, people recognized in Voltaire’s words the cries of anger in their own hearts. And they agreed. He won. The Leibniz fad ended. An entire continent-wide philosophical movement, slain.
And he used neither Logic nor Evidence.
Did you feel it? The poem persuaded, attacked, undermined, eroded away the respectability of Leibniz, but it did it without using EITHER of the two pillars of argument. There was no chain of reasoning. And there was no empirical observation. You could say there was some logic in the way he juxtaposed claims “God is a kind Maker” with counter-claims “I am not a potter’s jar, I am a thinking thing! I need more!”. You could say there was some empiricism or evidence-based argument in his descriptions of things he saw, or things he felt, since feelings too are sense-perceptions in a way, so reporting how one feels is reporting a sensory fact. But there was nothing in this so rigorous or so real that any of our ancient skeptics would recognize it as the empiricism they were attacking. Those people Voltaire describes – he did not see them, he just imagined them, reaching across the breadth of Europe with the strength of empathy. That potter’s wheel is a metaphor, not a syllogism. Voltaire has used a third thing, neither Reason nor Evidence, as a tool of skepticism.
What do we name this Third Thing? I have heard people propose “common sense” but that’s a terribly vexed term, going back to Cicero at least, which has been used by this point to mean 100 things that are not this thing, so even if you could also call this thing “common sense” it would just create confusion (we don’t need Aristotle looming with a lecture on the dangers of unclear vocabulary). I have heard people propose “sentiment” and I like how galling it feels to try to suggest that “sentiment” should enjoy coequal respect and power with Reason and Evidence, but it isn’t quite that either. I am not yet happy with any name for this Third Thing, and am playing around with many. All I will say is that it is real, it is powerful, it is as effective at persuading one to believe or disbelieve as Reason and Evidence are. And, even if there were shadows of this Third Thing earlier in human history, Montaigne was the smith who sharpened the blade and handed it to Voltaire, and to the rest of us.
Montaigne’s Essais are lovely, meandering, personal, structure-less, rambling musings in which topics flow one upon another, he summarizes an argument made for or against some heresy, then, rather than voicing an opinion, tells you a story about his grandmother that one time, or retells a bit of one of Virgil’s pastorals, or an anecdote about some now-obscure general, and then flows on to a different topic, never stating his opinion on the first but having shaped your thinking, through his meanders, until you feel an answer, a belief or, more often, disbelief, even if he never voiced one. And then he keeps going, taking up another argument, making it feel silly with an allegory about two bakers, another and–have you heard the news from Spain?–another, and another, and oh, the loves of Alexander, another, and another. And as it flows along you get to know him, feel you’re having a conversation with him, and somewhere toward the end you no longer believe any of the philosophical arguments he has just summarized are plausible at all, but he never once argued directly against any of them. It is a little bit like our skeptical Cicero, juxtaposing opposing views and leaving us convinced by none, but it is one level less structured, not actually a dialog with arguments and refutations. Skepticism, without Reason, without Evidence, just with the human honesty that is Montaigne, his doubts, his friendship, his communication to you, dear reader, across the barrier of page, and time, and language, this strange French-Roman, this only native Latin speaker born in a millennium, this alien, has made you realize all the philosophical convictions, everything in that broad spectrum that scholasticism plus the Renaissance plus the Reformation and Counter-Reformation ferocity have laid before you, none of it is what a person really feels deep down inside, not Montaigne, and not you. 
And so he leaves you a skeptic, in a completely different way from how the ancient skeptics did it, not with theses, or exercises, or lists, or counterarguments, just with… humanity?
Montaigne did it. His contemporaries found it… odd at first, a bit self-centered, this autobiographical meandering, but it was so beautiful, so entrancing, so powerful. It reared a new generation, armed with Reason and Evidence and This Third Thing, and deeply skeptical. Students at universities started raising their hands in class to ask the teachers to prove the school existed. Theologians advising princes started saying maybe it didn’t matter that much what the difference was between the different Christian faiths if they were close enough. A new age of philosophy was born, not a new school, but a new tool for dogmatism’s ancient symbiotic antagonist: doubt.
And, where doubt grows stronger and richer, so does dogmatic philosophy, having that much more to test itself against. Just as, in antiquity, so many amazing schools and ideas were born from trying to respond to Zeno and the Stick in Water, so Montaigne’s new tools of Skepticism, his revival and embellishment of skepticism, the birth, as we call it, of Modern Skepticism, were also the final ingredient necessary for an explosion of new ideas, new schools, new universes described by new philosophers trying to build systems which could stand up against a new skepticism armed, not just with attacks on Reason and Evidence, but with That Third Thing.
Thus, as 1600 approaches, the breakneck proliferation of new ideas and factions makes Montaigne’s skepticism so popular that students in scholastic and Jesuit schools are starting to raise their hands and demand that the professor prove the existence of the classroom before expecting them to attend class. A “skeptical crisis” takes center stage in Europe’s great intellectual conversation, and multiplying doubt seems to have all the traditional Criteria of Truth in flight. It is onto this stage that Descartes will step, and craft, alongside his contemporaries, the first new systems which will have to cope, not with two avenues of attacking certainty, but, thanks to Montaigne, three. And he will fight back against them with Montaigne’s arts as well. Next time.
For now, I will leave you with one more little snippet of the future: I lied to you, about a simple happy ending to Voltaire’s quarrel with Leibniz. Oh, Leibniz was quite dead, not just because the man himself had died but because no philosopher could take his argument seriously after the poem. Ever. Again. In fact, a few years ago I went to a talk at a philosophy department in which a young scholar was taking on Leibniz’s Best of All Possible Worlds thesis, and picking it apart using beautiful logical argumentation, and at the end everyone applauded and congratulated him, but when the Q&A started the first Q was “Well, um, this was all quite fascinating, but, isn’t Leibniz, I mean, no one takes that argument seriously anymore…” But the young philosopher was correct to point out that, in fact, no one had ever actually directly refuted it with logic. No one saw the need. But if Voltaire’s victory over logical Leibniz was complete, Leibniz was not the most dangerous of foes. Voltaire had contemporaries, after all, armed with Montaigne’s Third Thing just as Voltaire was. Rousseau will fire back, sweet, endearing, maddening Rousseau, not in defense of Leibniz, but against the poem which he sees as an attack on God. But this battle of two earnest and progressive deists must wait until we have brought about the brave new world that has such creatures in it. For that we need Descartes, Francis Bacon, grim Hobbes, John Locke, and the ambidextrous Bayle.
Socrates, Sartre, Descartes and our Youth have, among them, consumed twelve thousand, six hundred and forty-two hypothetical eclairs in the fourteen months since we left them contemplating skepticism on the banks of a cheerily babbling imaginary brook. Much has changed in the interval, not in the land of philosophical thought-experiments (which is ever peaceful unless someone scary like Ockham or Nietzsche gets inside), but in a world two layers of reality removed from theirs. The changes appear in the world of material circumstances which shape and foster this author, who in turn shapes and fosters our philosophical picnickers. Now, having recovered from my transplant shock of being moved to the new and fertile country of the University of Chicago, and with my summer work done, and Too Like the Lightning fully revised and on its way toward its May 10th release date (YES!), it is time at last to return to our hypothetical heroes, and to my sketches of the history of philosophical skepticism.
When last we saw them, Socrates, Sartre, Descartes and our Youth had rescued themselves from the throes of absolute doubt by developing Criteria of Truth, which allowed them to differentiate arenas of knowledge where certainty is possible from arenas of knowledge where it is not. (See their previous dramatic adventures in Sketches of a History of Skepticism Part 1 and Part 2.) To do this, they looked at three systems: Epicureanism, which suggests that we have certain knowledge of the world perceived by the senses, but no certain knowledge of the imperceptible atomic reality beneath; Platonism, which suggests that we have knowledge of the eternal structures that create the material world, i.e. Forms or Ideas, but not of the flawed, corruptible material objects which are the shadows of those eternal structures; and Aristotelianism, which suggests that we can have certain knowledge of logical principles and of categories within Nature, but not of individual objects.
Notably, neither Epicurus nor Aristotle was invited to our picnic, and, while you never know when any given Socrates will turn out to be a Plato in disguise, our particular Socrates seems to be staying safely in the camp of doubt: he knows that he knows nothing. Our object is not to determine which of these classical camps has the correct Criterion of Truth. In fact, our distinguished guests, Descartes and Sartre, aren’t interested in rehashing these three classical systems, all of whose criteria are not only familiar but, to them, long defunct. They have not traveled this great distance in time to watch Socrates open the doors of skepticism to our Youth just to meet antiquity’s familiar dogmatists; the twinkle in Descartes’ eye (and his infinite patience doling out eclairs) tells me he’s waiting for something else.
Descartes and Sartre expect Cicero next — Cicero, whom many might mistake as a voice for the Stoic school (the intellectual party conspicuously missing from the assembly of Plato, Aristotle, and Epicurus) but who is actually more often read by modern scholars as a new and promising kind of Skeptic. Unfortunately, Cicero is currently busy answering a flurry of letters from someone called Petrarch, so has declined to join our little gathering (or possibly he’s just miffed hearing that I’m doing an abbreviated finale to this series, so he’d only get a couple paragraphs, even if he came). So we must do our concise best to cover his contribution on our own. Pyrrho, Zeno and other early skeptical voices argued in favor of doubt by demonstrating the fallibility of the senses and of pure reason: the stick in water that looks bent, the paradoxes of motion which show how logic and reality don’t match. Cicero achieves unbelief (and aims at the eudaimonist tranquility beyond) by a different route, a luxurious one made possible by the fact that he is writing three centuries into the development of philosophy and has many different dogmatic schools to fall back on. In his philosophical dialogs, Cicero presents different interlocutors who put forth different dogmatic positions: Stoic, Platonist, Epicurean; all in dialog with each other, presenting evidence for their own positions and counter-arguments against the conclusions of others. Each interlocutor works strictly by his own Criterion of Truth, and all argue intelligently and well. But they all disagree. When you read them all together, you are left uncertain. No particular voice seems to overtop the others, and the fact that there are so many different equally plausible positions, defended with equally well-defined Criteria of Truth, leaves one with no confidence that any of them is reliable. 
At no point does Cicero say “I am a skeptic, I think there is no certainty,” — but the effect of reading the dialog is to be left with uncertain feelings. Cicero himself does not seem to have been a Pyrrhonist skeptic, and certainly does seem to hold some philosophical positions, especially moral principles, quite strongly. There is certainly a good case to be made that he has strong Stoic leanings, and there is validity to the Renaissance argument that he should be vaguely clustered in with Seneca and Cato, who subscribe to a mixed-together digest of Roman paganism, Stoicism, some Platonic and a few Aristotelian elements. But especially on big questions of epistemology, ontology and physics, Cicero remains solidly, frustratingly, elusive.
There are many important aspects of Cicero’s work, but for our purposes the most important is this: he has achieved doubt without actually making any skeptical arguments, or counter-arguments. He has not attacked the fundamentals of Stoicism, Platonism or Epicureanism. Instead, he has used the strengths of the three schools to undermine each other. All three schools are convincing. All are plausible. All have evidence and/or logic on their side. As a result, none of the three winds up feeling convincing, even though none of the three has been directly undermined. This is not a new achievement of Cicero’s. Epicurus used a similar technique, and Lucretius, his follower, did so too; and we know Cicero read Lucretius. But Cicero is the most important person to use this technique in antiquity, largely because 1,300 years later it will be Cicero who becomes the centerpiece of Renaissance education. And Cicero will have no small Medieval legacy as well.
Medieval Certainty, and the Big Question
Stereotypically for a Renaissance historian, I will move quickly through the Middle Ages, though not for the stereotypical reasons. I don’t think that the Middle Ages were an intellectual stasis; I do think that Medieval philosophy is full of many complex things that I’m just starting to work through seriously in my own studies. I’m not ready to provide a light, fun summary of something which is, for me, still a rich forest to explore. Church Fathers, late Neoplatonists, Chroniclers, theological councils, monastic leaders, rich injections from the Middle East, Maimonides; all intersect with doubt, certainty and Criteria of Truth in rich and fascinating ways that I am not yet prepared to do justice to. So instead I will present an abstraction of one important aspect of Medieval thinking which I hope will help elucidate some overall approaches to doubt, even if I don’t pause to look at individual minds.
When I was in my second year of grad school, I chatted over convenience store cookies in the grad student lounge with a new student entering our program that year, like myself, to study the Renaissance. He poked fun at the philosophers of the Middle Ages. He asked me, “How could anybody possibly be interested in going on and on and on and on like that about God?” And in that moment of politeness, and newness, and fun, I laughed, and nodded. But, happily, we had a good teacher who made us look more at the Medieval, without which we can’t understand the Renaissance, and now I would never laugh at such a comment.
Set aside your modern mindset for a moment, and your modern religious concepts, and see if you can jump into the Medieval mind. To start with, there is a Being of infinite power, Whose existence is known with certainty. (Take that as given — a big given, I know, but it’s a given in this context.) Such a Being created everything that ever has existed or will exist. Everything that happens: events, births, storms, falling objects, thoughts; all were conceived by this Being and exist according to this Being’s script. The Being possesses all knowledge, and all good things are good because they resemble this Being. Everything in the material world is fleeting and imperfect and will someday be destroyed and forgotten, including the entire Earth. But — this Being has access to another universe where all things are eternal and perfect, which will last beyond the end of the material universe, and with this Being’s help there might be some way for us to reach that universe as well. The Being created humans with particular care, and is trying to communicate with us, but direct communication is a difficult process, just as it is difficult for an entomologist to communicate directly with his ants, or for a computer programmer to communicate directly with the artificial intelligences that she has programmed.
Now, the facetious question I laughed at in early grad school comes back, but turned on its head. How could you ever want to study anything other than this Being? It explains everything. You want to know the cause of weather, astronomical events, diseases, time? The answer is this Being. You want to know where the world came from, how thought works, why there is pain? The answer is this Being. History is a script written by this Being, the stars are a diagram drawn by this Being, the suitability and adaptation of animals and plants to their environments is the ingenuity of this Being, and the laws that make rocks sink and wood float and fire burn and rain fall are all decisions made by this Being. If you have any intellectual curiosity at all, wouldn’t it be an act of insanity to dedicate your life to anything other than understanding this Being? And in a world in which there has been, for centuries, effective universal consensus on all these premises, what society would want to fund a school that didn’t study them? Or pay tuition for a child to study something else? Theology dominated other sciences in the Middle Ages, not because people were backward, or closed-minded, or lacked curiosity, but because they were ambitious, keenly intellectual and fixed on a subject from which they had every reason to expect answers, not just to theological questions, but to all questions. They didn’t have blinders, they had their eyes on the prize, and they felt that choosing to study Natural Philosophy (i.e. the world, nature, biology, plants, animals) instead of Theology was like trying to study toenail clippings instead of the being they were clipped from.
To put it another way: have you ever watched a fun, formulaic, episodic genre show like Buffy the Vampire Slayer, or the X-Files? There’ll be one particular episode where the baddie-of-the-day is Christianity-flavored, and at some point a manifest miracle happens, or an angel or a ghost shows up, and then we have to reset the formula and move onto the next episode, but you spend that whole next episode thinking, “You know, they just found proof of the existence of the afterlife and the immortality of the soul. You’d think they’d decide that’s more important than this conspiracy involving genetically-modified corn.” That’s how people in the Middle Ages felt about people who wanted to study things that weren’t God.
Doubt comes into this in important ways, but not the ways that modern rhetoric about the Middle Ages leads most people to expect.
Wikipedia, at the time of writing, defines Scholasticism as “a method of critical thought which dominated teaching by the academics (“scholastics,” or “schoolmen”) of medieval universities in Europe from about 1100 to 1700.” It was “a program of employing that [critical] method in articulating and defending dogma in an increasingly pluralistic context.” It “originated as an outgrowth of, and a departure from, Christian monastic schools at the earliest European universities.” Philosophy students traditionally define Scholasticism as “that incredibly boring hard stuff about God that you have to read between the classics and Descartes”. Both definitions are true. Scholasticism is an incredibly tedious, exacting body of philosophy, intentionally impenetrable, obsessed with micro-detail, and happy to spend three thousand words proving to you that Good is good, or to set out a twenty-step argument that it is better to exist than not to exist (this is presumably why Hamlet still hadn’t graduated at age 30). Scholasticism was also so incredibly exciting that, apart from the ever-profitable medical and law schools, European higher education devoted itself to practically nothing else for the whole late Middle Ages, and, even though the intellectual firebrands of both the Renaissance and the 17th and 18th centuries devoted themselves largely to fiercely attacking the scholastic system, it did not truly crumble until deep into the Enlightenment.
Why was Scholasticism so exciting? Even if people who believed in an omnipotent God had good reason to devote their studies pretty exclusively to Theology, why was this one particularly dense and intentionally difficult method the method for hundreds of years? Why didn’t they write easy-to-read, penetrable treatises, or witty philosophical tales, or even a good old-fashioned Platonic-type dialog?
The answer is that Christianity changes the stakes for being wrong. In antiquity, if you’re wrong about philosophy, and the philosophical end of theology, you’ll make incorrect decisions, possibly lead a sadder or less successful life than you would otherwise, and it might mean your legacy isn’t what you wanted it to be, but that’s it. If you’re really, really wrong you might offend Artemis or something and get zapped, but it’s pretty easy to cover your bases by going to the right festivals. By the logic of antiquity, if you put a Platonist and an Epicurean in a room, one of them will be wrong and living life the wrong way, at least in some ways, but they can both have a nice conversation, and in the end, either they’ll both reincarnate and the Epicurean will have another chance to be right later, or they’ll both disperse into atoms and it won’t matter. OK. In Medieval Christianity, if you’re wrong about theology, your immortal soul goes to Hell forever, where you’ll be tormented by unspeakable devils for the rest of eternity, and everyone else who believes your errors is also likely to lose the chance of eternal paradise and absolute knowledge, and will be plunged into a pit of absolute misery and despair, irrevocably, forever. Error is incredibly dangerous, to you and to everyone around you who might get pulled down with you. If you’re really bad, you might even bring the wrath of God down upon your native city, or if you’re really, really bad then, while you’re still alive, your soul might depart your body and sink down to Hell, leaving your body to be a house for a devil who will use you to visit evil on the Earth (see Inferno Canto 27). But leaving aside those more extreme and superstition-tainted possibilities, error became more pernicious because of eternal damnation. If people who read your theologically incorrect works go to Hell, you’re infinitely culpable, morally, since every student misled to damnation is literally an infinite crime.
So, if you are a Medieval person, Theology is incredibly valuable, the only kind of study worth doing, but also incredibly dangerous. You want to tread very carefully. You want a lot of safety nets and spotters. You want ways to avoid error. And you know error is easy! Errors of logic, errors of failing senses. Enter Aristotle, or more specifically enter Aristotle’s Organon, a translation of the logical works of Aristotle completed by dear Boethius, part of the latter’s efforts to preserve Greek learning when he realized Greek and other relics of antiquity were fading. The Organon explains in great detail how you can go about constructing chains of logic in careful, methodical ways to avoid error. Use only clearly defined unequivocal vocabulary, and strict syllogistic and geometric reasoning. Here it is, foolproof logic in 50 steps, I’ll show you! Sound familiar? This is Aristotle’s old Criterion of Truth, but it’s also the Medieval Theologian’s #1 Christmas Wish List. The Criterion of Truth which was, for Aristotle, a path through the dark woods and a solution to Zeno and the Stick in Water, is, to our theologian, a safety net over a pit of eternal Hellfire. That is why it was so exciting. That is why people who wanted to do theology were willing to train for five years just in logic before even looking at a theological question, just as astronauts train in simulators for a long time before going out into the deadly vacuum of space! That is even why scholastic texts are so hard to read and understand – they were intentionally written to be difficult to read, partly because they’re using an incredibly complicated method, but even more because they don’t want anyone to read them who hasn’t studied their method, because if you read them unprepared you might misunderstand, and then you’d go to Hell forever and ever and ever, and it would be Thomas Aquinas’s fault. And he would be very sad.
When Thomas Aquinas was presented for canonization, after his death, they made the argument that every chapter of the Summa Theologica was itself a miracle. It’s easy to laugh, but if you think about how desperately they wanted perfect logic, and how good Aquinas was at offering it, it’s an argument I understand. If you were dying of thirst in the desert, wouldn’t a glass of water feel like a miracle?
To give credit where credit is due, the mature application of Aristotle’s formal logic to theological questions was not pioneered by Aquinas but by a predecessor: Peter Abelard, the wild rockstar of Medieval Theology. People crowded in thousands and lived in fields to hear Peter Abelard preach; it was like Woodstock, only with more Aristotle. Why were people so excited? Did Abelard finally have the right answer to all things? “Yes and No,” as Peter Abelard would say, “Sic et Non”, that being the title of his famous book, a demonstration of his skill. (Wait, yes AND no, isn’t that even scarier and worse and more damnable than everything else? This is the most dangerous person ever! Bernard of Clairvaux thought so, but the Woodstock crowd at the Paraclete, they don’t.) Abelard’s skill was taking two apparently contradictory statements and showing, by elaborate roundabout logic tricks, how they agree. Why is this so exciting? Any troll on the internet can do that! No, but he did it seriously, and he did it with Authorities. He would take a bit of Plato that seemed to contradict a bit of Aristotle, and show how they actually agree. Even ballsier, he would take a bit of Plato that pretty manifestly DOES contradict another bit of Plato, and show how they both agree. Then, even better, he would take a bit from St. Augustine that seems to contradict a bit from St. Jerome and show how the two actually agree. “OH THANK GOD!” cries Medieval Europe, desperately perplexed by the following conundrum:
The Church Fathers are saints, and divinely inspired; their words are direct messages from God.
If you believe the Church Fathers and act in accordance with their teachings, they will show you the way to Heaven; if you oppose or doubt them, you are a heretic and damned for all eternity.
The Church Fathers often disagree with each other.
Abelard rescued Medieval Europe from this contradiction, not necessarily by his every answer, but by his technique, by which seemingly-contradictory authorities could be reconciled. Plato with Aristotle is handy. Plato with Plato sure is helpful. Jerome with Augustine is eternal salvation. And if he does it with the bits of Scripture that seem to contradict the other bits? He is now the most exciting thing since the last time the Virgin Mary showed up in person.
Abelard had a lover–later, wife, but she preferred ‘lover’–the even more extraordinary Heloise, and I consider it immoral to mention him without mentioning her, but her life, her stunningly original philosophical contributions and her terrible treatment at the hands of history are subjects for an essay of their own. For today, the important part is this: Abelard was exciting for his method, more than his ideas, his way of using Reason to resolve doubts and fears when skepticism loomed. Thus even Scholasticism, the most infamously dogmatic philosophical method in European history, was also in symbiosis with skepticism, responding to it, building from it, developing its vast towers of baby-step elaborate logic because it knew Zeno was waiting.
Proofs of the Existence of God
We are all very familiar with the veins of Christianity which focus on faith without proof as an important part of the divine plan, that God wants to test people, and there is no proof of the existence of God because God wants to be unknowable and elusive in order to test people’s faith. The most concise formula is the facetious one by Douglas Adams, where God says: “I refuse to prove that I exist, because proof denies faith and without faith I am nothing.” It’s a type of argument associated with very traditional, conservative Christianity, and, often, with its more zealous, bigoted, or “medieval” side. I play a game whenever I run into a new scholar who works on Medieval or early modern theological sources, any sources, any period, any place, from pre-Constantine Rome to Renaissance Poland. I ask: “Hey, have you ever run into arguments that God’s existence can’t be proved, or God wants to be known by faith alone, before the Reformation?” Answers: “No.” “Nope.” “Naah.” “No, never.” “Uhhh, not really, no.” “Nope.” “No.” “Nothing like that.” “Hmm… no.” “Never.” “Oh, yeah, one time I thought I found that in this fifth-century guy, but actually it was totally not that at all.” Like biblical literalism, it’s one of those positions that feels old because it’s part of a conservative position now, but it’s actually a very recent development from the perspective of 2,000 years of Christianity plus centuries more of earlier theological conversations. So, that isn’t what the Middle Ages generally does with doubt; it doesn’t rave about faith or God’s existence being elusive. Europe’s Medieval philosophers were so sure of God’s existence that it was considered manifestly obvious, and doubting it was considered a mental illness or a form of mental retardation (“The fool said in his heart ‘there is no God’,” => there must be some kind of brain deficiency which makes people doubt God; for details on this see Alan C. Kors, Atheism in France, vol. 1).
And when St. Anselm and Thomas Aquinas and Duns Scotus work up technical proofs of the existence of God they’re doing it, not because they or anyone was doubting the existence of God, but to demonstrate the efficacy of logic. If you invent a snazzy new metal detector you first aim it at a big hunk of metal to make sure it works. If you design a sophisticated robot arm, you start the test by having it pick up something easy to grab. If you want to demonstrate the power of a new tool of logic, you test it by trying to prove the biggest, simplest, most obvious thing possible: the existence of God.
(PARENTHESIS: Remember, I am skipping many Medieval things of great importance. *cough*Averroes*cough* This is a snapshot, not a survey.)
Three blossoms on the thorny rose of this Medieval trend toward writing proofs of the existence of God are worth stopping to sniff.
The first blossom is the famous William of Ockham (of “razor” fame) and his “anti-proof” of the existence of God. Ockham was a scholastic, writing in response to and in the same style and genre as Abelard, Aquinas, Scotus, and their ilk. But, when one reads along and gets to the bit where one would expect him to demonstrate his mastery of logic by proving the existence of God, he included instead a plea (paraphrase): Please, guys, stop writing proofs of the existence of God! Everyone believes in Him already anyway. If you keep writing these proofs, and then somebody proves your proof wrong by pointing out an error in your logic, reading the disproof might make people who didn’t doubt the existence of God start to doubt Him because they would start to think the evidence for His Existence doesn’t hold up! Some will read into this Anti-Proof hints of the beginning of “God will not offer proof, He requires faith…” arguments, and perhaps it does play a role in the birth of that vein of thinking. (I say this very provisionally, because it is not my area, and I would want to do a lot of reading before saying anything firm). My gut says, though, that it is more that Ockham thought everyone by nature believed in God, that God’s existence was so incredibly obvious, and that God was not trying to hide; rather, Ockham didn’t want the weakness of fractious scholastic in-fighting to erode what he thought was already there in everyone: belief.
Aside: While we are on the subject of Ockham, a few words on his “razor”. Ockham is credited with the principle that the simplest explanation for a thing is most likely to be the correct one. That was not, in fact, a formula he put forward in anything like modern scientific terms. Rather, what we refer to as Ockham’s Razor is a distillation of his approach in a specific argument: Ockham hated the Aristotelian-Thomist model of cognition, i.e. the explanation of how sense perception and thoughts work. Hating it was fair, and anyone who has ever studied Aristotle and labored through the agent intellect, and the active intellect, and the passive intellect, and the will, and the phantasm, and innate ideas, and eternal Ideas, and forms, and categories, and potentialities, shares William of Ockham’s desire to pick Thomas Aquinas up and shake him until all the terminology falls out like loose change, and then tell him he’s only allowed to have a sensible number of incredibly technical terms (like 10, 10 would be a HUGE reduction!). Ockham proposed a new model of cognition which he set out to make much simpler, without most of the components posited by Aristotle and Aquinas, and introduced formal Nominalism. (Here Descartes cheers and sets off a little firecracker he’s been saving). Nominalism is the idea that “concepts” are created by the mind based on sense experience, and exist ONLY in the mind (like furniture in a room, adds Sherlock Holmes) rather than in some immaterial external sense (like Platonic forms). Having vastly simplified and revolutionized cognition, Ockham then proceeded to describe the types of concepts, vocabulary terms and linguistic categories we use to refer to concepts in infuriating detail, inventing fifty jillion more technical terms than Aquinas ever used, and driving everyone who read him crazy.
(If you are ever transported to a dungeon where you have to fight great philosophers personified as Dungeons & Dragons monsters, the best weapon against Ockham is to grab his razor of +10 against unnecessary terminology and use it on the man himself). One takeaway note from this aside: while “Ockham’s Razor” is a popular rallying cry of modern (post-Darwin) atheism, and more broadly of modern rationalism, that is a modern usage entirely unrelated to its creator. He thought that the existence of God was so incredibly obvious, and necessary to explain so many things, from the existence of the universe to the buoyancy of cork, that if you presented him with the principle that the simplest explanation is usually best, he would agree, and happily assume that you believed, along with him, that “God” (being infinitely simple, see Plotinus and Aquinas) is therefore a far simpler answer to 10,000 technical scientific questions than 10,000 separate technical scientific answers. Like Machiavelli, Aristotle and many more, Ockham would have been utterly stunned (and, I think, more than a little scared) if he could have seen how his principles would be used later.
The second blossom (or perhaps thorn?) of this Medieval fad of proving God’s existence was, well, that Ockham was 110% correct. Here again I cite Alan Kors’ masterful Atheism in France; in short, his findings were that, when proving the existence of God became more and more popular as the first field test to make sure your logical system worked (a la metal detector… beep, beep, beep, yup it’s working!), it created an incentive for competing logicians to attack people’s proofs of the existence of God (i.e. if it can’t find a giant lump of iron the size of a house it’s not a very good metal detector, is it?) Thus believers spent centuries writing attacks on the existence of God, not because they doubted, but to prove their own mastery of Aristotelian logic superior to others. This then generated thousands of pages of attacks on the existence of God, and, by a bizarre coincidence *cough*cough*, when, in the 17th and 18th centuries, we finally do start getting writings by actual overt “I really think there is no God!” atheists, they use many of the same arguments, which were waiting for them, readily available in volumes upon volumes of Church-generated books. Dogmatism here fed and enriched skepticism, much as skepticism has always fed and enriched dogmatism, in their ongoing and fruitful symbiosis.
The third blossom is, of course, sitting with us doling out eclairs. Impatient Descartes has been itching, ever since I mentioned Anselm, to leap in with his own Proof of the Existence of God, one which uses a more mature form of Ockham’s Nominalism, coupled with the tools of skepticism, especially doubt of the senses. But Descartes may not speak yet! (Don’t make that angry face at me, Monsieur, you’ll agree when you hear why.) It won’t be Descartes’ turn until we have reviewed a few more details, a little Renaissance and Reformation, and introduced you to Descartes’ great predecessor, the fertile plain on whom Descartes will erect his Cathedral. Smiling now, realizing that we draw near the Illustrious Father of Skeptics whom he has been waiting for, Descartes sits back content, until next time.
But do not fear, the wait will be short this time. Socrates is in more suspense than Descartes, and if I stop writing he’ll start demanding that I define “illustrious” or “next” or “man”, so I’d better plunge straight in. Meanwhile, I hope you will leave this little snapshot with the following takeaways:
Medieval thought was not dominated by the idea that logic and inquiry are bad and Blind Faith should rule; much more often, Medieval thinkers argued that logic and inquiry were wonderful because they could reinforce and explain faith, and protect people from error and eternal damnation. Medieval society threw tons of energy into the pursuit of knowledge (scientia, science), it’s just that they thought theology was 1000x more important than any other topic, so they concentrated their resources there.
When you see theologians discussing whether certain areas of knowledge are “beyond human knowledge” or “unknowable”, before you automatically call this a backwards and closed-minded attitude, remember that it comes from Plato, Epicurus and Aristotle, who tried to differentiate knowledge into areas that could be known with certainty, and areas where our sources (senses/logic) are unreliable, so there will always be doubt. The act of dividing certain from uncertain only becomes closed-minded when “that falls outside what can be known with certainty” becomes an excuse for telling the bright young questioner to shut up. This happened, but not always.
Even when there were not many philosophers we could call “skeptics” in the formal sense, and the great ancient skeptics were not being read much, skepticism continued to be a huge part of philosophy because the tools developed to combat it (Aristotle’s logical methods, for example) continued to be used, expanded and re-purposed in the ongoing search for certainty.
Welcome to a new feature here on Ex Urbe — the promoted comment.
From time to time, Ada makes a long substantive chewy comment, which could almost be its own post. Making it into an actual post would take valuable time. The comment is already written and fascinating — but hidden down in a comment thread where many people may not notice it. From now on, when this happens, I will extract it and promote it. I may even go back and do this with some older especially awesome comments. You’ll be able to tell the difference between this and a real post because it’ll say it’s posted by Bluejo, not by Exurbe, because it will say “a promoted comment”, and also because it won’t be full of beautiful, relevant, carefully selected art but will have just one or two pieces of much more random art.
I thoroughly enjoyed reading this new post. As I am reviewing macroeconomics, especially the different variations of the Solow Model, I cannot help but link “intellectual technology” with a specific endogenous growth model, which attempts to let the model itself generate technological growth without an exogenous “manna from heaven”. In this model, technological growth is generated endogenously by capital through “productive externalities”, and individual workers, through “learning by doing,” obtain more “skills” as capital grows. Of course, the “technology factor” in the model I learned is vaguely defined and does not cover the many definitions and various effects of “intellectual technology” not directly related to economic production.
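[For readers who want the mechanics: the learning-by-doing variant the comment gestures at can be sketched in standard textbook notation (the symbols below are the usual conventions, not the commenter’s own):

```latex
% Firm-level production: capital K_i, labor L_i, labor-augmenting technology A
Y_i = K_i^{\alpha} \, (A L_i)^{1-\alpha}
% "Learning by doing": A is a productive externality of aggregate capital K,
% not exogenous "manna from heaven"
A = K^{\phi}
% With \phi = 1 and total labor L fixed, aggregate output becomes
Y = K^{\alpha} (K L)^{1-\alpha} = L^{1-\alpha} K
% i.e. an AK model: growth is sustained endogenously by capital accumulation
```

The point of the externality term is that no outside force injects technology; workers’ skills accumulate as a by-product of investment, which is what lets growth continue indefinitely inside the model.]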
Your conversation with Michael reminds me of the lectures and seminars I took with you at Texas A&M. By the time I took your Intellectual History from the Middle Ages to the 17th Century, I had already taken some classes on philosophy. Sadly, my fellow philosophy students and I usually fell into anachronism and criticized early thinkers a bit “unfairly” on many issues. That is why your courses were like a beam of light to me, for I was never aware of the fact that we have different logic, concepts, and definitions of words from our predecessors and should hence put those thinkers back into their own historical context.
It seems to me that Prof. Peter E. Gordon’s essay “What is Intellectual History?” captures the different angles from which you and Michael construe Machiavelli: Michael seems more like a philosophy/political science student who attempts to examine how and why early thinkers’ ideas work or do not work for our society based on our modern definitions, concepts, and logic, thus raising more debates on political philosophy and pushing the progress of philosophical innovation; your role as an intellectual historian requires one to be detached from our own understanding of ideas and concepts and to be aware even of logic that seems to be rooted in our subconscious, so as to examine a past thinker fairly without rash judgement. Michael is like the one who attempts to keep building the existing tower upward, while you are examining carefully the foundation below. For me personally, it would be nice to have both of these ways of thinking.
I have a question: I have been attempting to read a bit of Karl Marx whenever time allows. He argues that our thinking and ideology are a reflection of our material conditions. If we accept his point of view, would it be useful to connect intellectual history with economic history?
Nahua, I think you have hit it spot on with your discussion of Peter Gordon’s essay. When I worked with him at Harvard (I had the privilege of having him on my committee, as well as being his teaching assistant for a course) I remember being struck by how, even when we were teaching thinkers far outside my usual scope like Heidegger, I found his presentation of them welcoming and approachable despite my lack of background, because he approached them in the same context-focused way that I did, evaluating, not their correctness or their applicability to the present, but their roots in their contemporary historical contexts and the reasons why they believed what they believed.
As for Marx’s claim that “our thinking and ideology are a reflection of our material conditions,” I think it is often very useful to connect intellectual history with economic history, not in a strictly deterministic way, but by considering economic changes as major environmental or enabling factors that facilitate or deter intellectual change and/or the dissemination of new ideas. I already discussed the example of how I think the dissemination of feminism in the 19th century was greatly facilitated by the economic liberation of female labor because of the development of industrial cloth production, more efficient ways of doing laundry, cleaning, cooking etc. Ideas about female equality existed in antiquity. They enjoyed a large surge in conversation and support from the intellectual firebrands of the Enlightenment, through figures like Montesquieu, Voltaire and Wollstonecraft. But mass movements and substantial political changes, like female suffrage, came when the economic shift had occurred. To use the “intellectual technology” concept, the technology existed in antiquity and was revived and refined in the 18th century, but it required economic shifts as well to help reach a state in which large portions of the population or whole nations/governments could embrace and employ it.
As I work on Renaissance history, I constantly feel the close relationship between economics and the intellectual world as well. Humanism as I understand it began when Petrarch called for a revival of antiquity. Economics comes into this in two ways. First, the reason he thought a revival of antiquity was so desperately necessary was that Italy had become so politically tumultuous and unstable, and was under such threat of cultural or literal invasion from France–these are the consequences, largely, of economic situations, since Italy’s development of banking and its central position as a trade hub for the Mediterranean had filled its small, vulnerable city-states with incomparable wealth, creating situations where powerful families could feud, small powers could hire large mercenary armies, and every king in Europe wanted to invade Italy for a piece of its plump pie. Then after Petrarch, humanism’s ability to spread and succeed was also economically linked. You can’t have a humanist without books, you just can’t, it’s about reading, studying, correcting and living the classics. But in an era when a book cost as much as a house, and more than a year’s salary for a young schoolmaster, a library required a staggering investment of capital. That required wealthy powers–families or governments–to value humanism and have the resources to spend on it. Powers like the Medici, and Florence’s Republican government, were convinced to spend their money on libraries and humanism because they believed it would bring them glory, strength, respect, legitimacy, the love of the people, that it would improve life, heal their souls, bring peace, and make their names ring in posterity, but they couldn’t have made the investment if they hadn’t had the money to invest, and they wouldn’t have believed humanism could yield so much if not for the particular (and particularly tumultuous) economic situation in which Renaissance Italy found itself.
Yesterday I found myself thinking about the history of the book in this light, and comparing it to some comments I heard a scientist make on a panel about space elevators. We all want a space elevator–then space exploration will become much, much less expensive, everyone can afford satellites, space-dependent technologies will become cheap, and we can have a Moon Base, and a Mars program, and all the space stations we want, and all our kids can have field trips to space (slight exaggeration). To have a space elevator, we need incredibly strong cables, probably produced using nanofibers. Developing nanofibers is expensive. What the scientist pointed out is that he has high hopes for nanofiber development, because nanofibers have the ideal demand pattern for a new technology. A new technology like this has the problem that, even if there are giant economic benefits to it later on, the people who pay for its development need a short-term return on that, which is difficult in the new baby stages of a technology when it’s at its most expensive. (Some of you may remember the West Wing episode where they debate the price of a cancer medication, arguing that producing each pill costs 5 cents so it’s unfair to charge more, to which the rebuttal is that the second pill cost 5 cents, but the first pill cost $300 million in research.) Once nanofiber production becomes cheap, absolutely it will be profitable, but while it’s still in the stage of costing $300 million to produce a few yards of thread, that’s a problem, and can be enough to keep a technology from getting support. One of the ways we work around this as a society today is the university system, which (through a form of patronage) supports researchers and gives them liberty to direct research toward avenues expected to be valuable independent of profit. Another is grant funding, which gives money based on arguments for the merit of a project without expecting to be paid back.
A third is NASA, which develops new technologies (like velcro, or pyrex) to achieve a particular project (Moon!), which are then used and reused in society for the benefit of all. But looking at just the private sector, at the odds of a technology getting funding from investors rather than non-profits, what the scientist said is that, for a technology to receive funding, you want it to have a big long-term application which will show that you’ll make a steady profit once you can make lots of the thing, but it also needs to have a short-term application for which a small number of clients will be prepared to pay an enormous amount, so you can sell it while it still costs $300 million, as well as expecting to sell it when it costs 5 cents. Nanofibers, he said, hit this sweet spot because of two demands. The first is body armor, since it looks like nanofibers can create bullet-proof fabric as light as normal fabric, and if we can do that then governments will certainly pay an enormous amount to get bullet-proof clothing for a head of state and his/her bodyguards, and elite military applications. The second is super-high-end lightweight golf clubs, which may seem like a frivolous thing, but there are people who will pay thousands of dollars for an extremely high end golf club, and that is something nanofibers can profit from even while expensive (super lightweight bicycles for racing also qualify). So nanofibers can depend on the excitement of the specific investors who want the expensive version now, and through their patronage develop toward the ability to produce things cheaply.
In this sense the history of the book, especially in the Renaissance, was very similar to the situation with nanofibers. In the early, manuscript stage when each new book cost the equivalent of $50,000 (very rough estimate), libraries were built and humanism was funded because wealthy people like Niccolo Niccoli and Cosimo de Medici believed that humanist libraries would give them and their home city political power and spiritual benefits, helping them toward Heaven. That convinced them to invest their millions. Their investments then created the libraries which could be used later on by larger populations, and reproduced cheaply through printing once it developed, but printing would not have developed if patrons like them weren’t around to create the demand for the volume of books printing could produce. It took Petrarch, Niccoli and Cosimo to fund a library which could raise a generation of people who could read the classics before there was enough demand to sell the 300-1500 copies of a classical book that a printing press could print. And, working within current capitalism, it may take governments who really want bullet-proof suit jackets to give us our space elevator, though universities, NASA, and private patronage of civilian space programs are certainly also big factors pushing us forward.
In sum, I would say that economics sometimes sparks the generation of new ideas, as the economically-driven strife Petrarch experienced enabled the birth of humanism, but it also strongly affects how easily or quickly a new idea can disseminate, whether it gets patronage and support, or whether its champions have to spread it without the support of elites, patrons, or government. Thus, in any given era, an intellectual historian needs to have a sense of funding patterns and patronage systems in order to understand how ideas travel, where, and why.
One more thought from last night, or rather a test comparison showing how the concept “intellectual technology” can work. I was thinking about comparing atomism and steel.
Steel is a precursor for building skyscrapers. Despite urban demand, we didn’t get a transition to huge, towering metropoles until the development of good steel which could raise our towers of glittering glass. Of course, steel is not the ONLY precursor of the skyscraper; it also requires tempered glass, etc. And it isn’t the only way to build skyscrapers: you can use titanium, or nanotech, but you are very unlikely to get either of those things without going through steel first. Having steel does not guarantee that your society will have skyscrapers. Ancient Rome had steel. In the Middle Ages Europe lost it (though pretty much everywhere except Europe still had steel). When steel came back in the Renaissance it still didn’t lead immediately to skyscrapers; it required many other developments first, and steel had to combine with other things, including social changes (the growth of big cities). But when we look at the history of city development, studying steel is extremely important, because the advent of steel-frame construction is a very important phase, and a central enabling factor in the development of modern cities.
My Lucretius book looks at the relationship between atomism and atheism in the same way that this analysis looks at steel and skyscrapers. Atomism was around for a long time, went away, came back, etc. And you can have non-atomic atheism; we have lots of it now. But atomism, as the first fully-developed mechanical model of the workings of Nature (the first not dependent on God or gods to make the world work) was, in my opinion, one of the factors that needed to combine with other developments to reach a situation in which an intellectual could join mechanical models of nature with skepticism and other factors to develop the first fully functional atheistic model of the world. It’s one of the big factors we have to trace to ask “Why did atheism become a major interlocutor in the history of thought when it did, and not before or after?” just as tracing steel helps us answer “Why did skyscrapers start being built when they did?” There had almost certainly been atheisms before and independent of atomism (just as you can make really tall things, like pyramids or cliff-face cities, without steel-frame construction), but they were rare, and didn’t have the infrastructural repeatability necessary to become widespread. Modern atheists don’t use Epicurus; they more frequently use Darwin, just as modern skyscrapers use titanium, but the history of skyscrapers becomes clear when we study the history of steel. Just so, the history of atheism becomes much clearer when we study atomism. Of course, we now use steel for lots of things that aren’t skyscrapers (satellite approaching Pluto!), and similarly atomism has lots of non-atheist applications, but we associate atomism strongly with atheism, just as we think a lot about “towers of glass and steel” and usually think less about the steel bolts in our chairs or the steel spoons we eat with.
All applications of steel, or of Epicureanism, can be worth studying, but skyscrapers and atheism will never stop being among the biggest and most interesting, at least in terms of how they changed the face of our modern world. And finally, while only a minority of buildings are skyscrapers, and only a minority of contemporary people are atheists, the study of both is broadly useful, because the presence of both in the lives of everyone is a defining factor in our current world.
Hello, patient friends. The delight of brilliant and eager students, the siren call of a new university library, the massing threat of conjoining deadlines, and the thousand micro-tasks of moving across the country have caused a very long gap between posts. But I have several pieces of good news to share today, as well as new thoughts on Machiavelli:
The next installment of my Sketches of a History of Skepticism series is 2/3 finished, and I hope to have it up in a week or three, deadlines permitting.
I have an excellent new assistant named Mack Muldofsky, who is helping me with Ex Urbe, music, research and many other projects. So we have him to thank in a big way if the speed of my posting picks up this summer.
Because I have a lot of deadlines this summer, I have asked some friends to contribute guest entries here, and we have a few planned treating science, literature and history, so that’s something we can look forward to together.
For those following my music, the Sundown Kickstarter is complete, and it is now possible to order online the CD and DVD of my Norse Myth song cycle Sundown: Whispers of Ragnarok. In addition to the discs, you can also order two posters, one of my space exploration anthem “Somebody Will” and one which is a detailed map of the Norse mythological cosmos. CD sales go to supporting the costs of traveling to concerts.
I have several concerts and public events lined up for the summer:
At Mythcon (July 31-Aug 2), Lauren Schiller and I, performing as the duo “Sassafrass: Trickster and King,” will join Guest of Honor Jo Walton for “Norse Hour,” in which she will read Norse myth-themed poetry in alternation with our Norse-themed songs.
Sunday August 9th, I have been invited to do a reading of the freshly-polished opening chapters of my novel Too Like the Lightning (due out in Summer 2016) at the Tiptree Award Ceremony event honoring Jo Walton, who couldn’t make it to the initial ceremony but received the Tiptree this year for her novel My Real Children. The event is being held at Borderlands in San Francisco at 3 PM, and will feature readings by local authors, and music performed by myself and Lauren.
Monday August 17th, at 7 PM, I am joining Jo and Lauren again at Powell’s, where Jo will read from her books, Lauren and I will sing, and I will interview Jo and talk about my writing as well as hers.
Finally at Sasquan (Worldcon, Aug 19-23) Lauren and I will have a full concert, I will do another reading from Dogs of Peace, and I will be on several exciting panels.
Meanwhile, I have a little something to share here. I continue to receive frequent responses to my Machiavelli series, and recently one of them sparked such an interesting conversation in e-mail that I wanted to post it here, for others to enjoy and respond to. These are very raw thoughts, and I hope the discussion will gain more participants here in the comment thread (I have trimmed out parts not relevant to the discussion):
In this discussion, I use a term I often use when trying to introduce intellectual history as a concept, and which I have been meaning to write about here for some time, “Intellectual Technology.”
A little conversation about Machiavelli:
I have been reading your blog posts on Machiavelli. You write with tremendous learning, clarity and colour, and really bring past events alive in a brilliant way. But… I think you’re far too soft on Machiavelli!!!
I’m working on a PhD about him and it’s fascinating to see that nearly all present-day academics, and indeed academics during much of the second half of the 20th century, have a largely if not completely uncritical admiration for him and his works. He is lauded, for example, as a forerunner of pluralism and a supporter of republicanism/democracy, yet his clear inspiration of Italian fascism is almost completely overlooked. The fact that Gramsci revered Machiavelli is dealt with by many scholars, but Mussolini’s admiration for him is hurriedly passed over.
Your post on Machiavelli and atheism is really interesting – in that context the 2013 book Machiavelli by Robert Black would be of interest to you…
Best regards, Michael Sanfey, IEP/UCP Lisbon.
Reply from Ada:
Michael, thank you for writing in to express your enjoyment of my blog posts. I think your criticisms of Machiavelli are interesting and largely fair, and my own opinions overlap with yours in many ways, though not in others. I agree with you completely that there is an inappropriate tendency in a lot of scholars to praise Machiavelli as a proto-modern champion of democracy, republicanism, pluralism, modern national pride, etc., characterizations which are deeply inappropriate and also deeply presentist, reading anachronistic values back into him. But there is also a tendency, dominant earlier in the 20th century, to vilify Machiavelli too much in precisely the same anachronistic and presentist way, characterizing him as a fascist or a Nazi and reading back into his work the things that were done in the 20th century by people who used some of his ideas but mixed them with many others. My way of approaching Machiavelli focuses above all on trying to distance him from the present and place him in his context, to show that he is neither a modern hero nor a modern villain, since he isn’t modern at all. There is a separate question, which you bring up, of how much to blame or criticize him for opening up the line of reasoning which led to later consequentialism, and also to fascism, which certainly used him as one of its foundational texts. Here I find myself uncomfortable with the idea of historical blame at all, particularly blame over such a long span of time.
I tend to think of thinkers as toolmakers, or inventors of “intellectual technology”, innovators who have created a new thing which can then be used by many people. New inventions can be used in many ways, both anticipatable and unanticipatable. Just as, for example, carbon steel can be used to raise great towers and send train lines across continents, so it can be used to build weapons and take lives, and it is thus a complex question how much to blame the inventor of carbon steel for its many uses. In this sense, I do believe we can see Machiavelli as a weapon-maker, since the ideas he was generating were directly intended to be used in war and politics. We can compare him very directly to the inventor of gunpowder in this sense. I also see him, and this is much of the heart of my critique, as a defensive weapon maker, i.e. someone working in a period of danger and siege trying to create something with which to defend his homeland. So, imagine now the inventor of gunpowder creating it to defend his homeland from an invasion. Is he responsible for all later uses of gunpowder as well? Is he guilty of criminal negligence for not thinking through the fact that, long-term, many more people will be killed by his invention than live in his home town? Do the lives taken by gunpowder throughout its history balance out against the lives saved, in some kind of (Machiavellian/consequentialist) moral calculus? I don’t think “yes” or “no” are fair answers to such a complex question, but I do think it is important, when we think about Machiavelli and what to hold him responsible for, to remember the circumstances in which he created his gunpowder (i.e. consequentialist ethics), and that he invented other great things too, like political science and critical historical reasoning. The debts are complicated, as is the culpability for how inventions are used after the inventor’s death.
So while I join you wholeheartedly in wanting to fight back against the distortion of Machiavelli the Mythical proto-modern Republican, I also think it’s valuable to battle against the myth of Machiavelli the proto-Fascist, and try to create a portrait of the real man as I see him, Machiavelli the frightened Florentine.
I do know Bob Black’s Machiavelli book, but disagree with some of his fundamental ideas about humanism itself – another fun topic, and one I enjoy discussing with him at conferences. He’s a challenging interlocutor. There is a very good recent paper by James Hankins on Academia.edu now about the “Virtue Politics” of humanists, which I recommend that you look at if you’re interested in responses to Black.
Best, Ada Palmer, University of Chicago
More from Michael:
First, I want to thank you for this fantastically detailed and brilliant response… I’d like to “come back at you” on consequentialism and some other points:
* Regarding your point about Machiavelli not being modern at all, I see what you mean, albeit you do say of Machiavelli in the post on atheism that “he is in other ways so very modern”. Leo Strauss certainly thought he had a lot to do with the introduction of what we know as “modernity”.
* When you seek to balance the need to fight against the Proto-republican myth and against the Proto-fascist myth, the first of those “myths” enjoys immeasurably wider currency than the second, and I ask myself, why is this?
* On the “intellectual technology” point below, and its being essentially neutral: in this case I wouldn’t agree with you, because we are not talking here about an object like gunpowder; it actually concerns something much more important. In ethical terms, Machiavelli took transcendent values out of the equation. As you put it, Machiavelli created “an ethics which works without God” – except that it doesn’t work!!!
* Machiavelli has had a questionable impact in regard to “realism” in International relations. You mention in one of the posts that he backed an alliance with Borgia so as to protect Florence, agreeing to offer money and resources to help Borgia conquer more – a very good example of Machiavelli‘s undoubted sympathy for imperialism.
PPS On the question of Machiavelli being an atheist or not, I really was fascinated by that part of your Ex Urbe writings. I’ve concluded that, whatever about him being an atheist or not, one could certainly describe him as “ungodly”. Would you agree?
Quick response from Ada:
I think “ungodly” does work for Machiavelli depending on how you define it; it has a connotation of being immoral–which does not fit–but if instead you mean it literally as someone who makes his calculations without thinking much about the divine then it fits.
A supplementary comment on “Intellectual Technology”:
I find “intellectual technology” a very useful concept when I try to describe what I study. Broadly my work is “intellectual history” or “the history of ideas” but what I actually study is a bit more specific: how particular kinds of ideas come into existence, disseminate, and come to be regulated at different points in time. The types of ideas I investigate–atomism, determinism, utilitarianism–move through human culture very much the same way technological innovations do. They come into being in a specific place and time, as a result of a single inventor or collaboration. They spread from that point, but their spread is neither inevitable nor simple. Sometimes they are invented separately by independent people in independent places, and sometimes they exist for centuries before having a substantial impact. When a new idea enters a place and comes into common use, it completely changes the situation and makes actions or institutions which worked before no longer viable. I compare Machiavelli’s utilitarianism to gunpowder above, but here are some other examples of famous cases of technological inventions, and ideas which disseminated in similar patterns:
The Bicycle and Atomism
Leonardo da Vinci sketched a design for a bicycle in the Renaissance, and may have seriously tried to construct one, but afterward no one did so for a very long time. Then many other factors changed: the availability of rubber and light-weight strong metals, the growth of large, centralized cities, and a working population in need of inexpensive transit. Suddenly the bicycle was able to combine with these other factors to revolutionize life and society in a huge rush, first across Europe and then well beyond. We have since developed more complex technologies that achieve the same function, but we still use and refine the bicycle, and even where we don’t, cities would not have the shapes they do now without it, and it is still transforming, more slowly, the parts of the world it has touched more recently. Similarly, atomism was developed and used for a little while, then languished in notebooks for a long time, before combining with the right factors to spread and rapidly transform society and culture.
The Unity of All Life and Calculus
Newton and Leibniz developed calculus independently at the same time. Similarly, classical Stoicism in Greece and Buddhism in India, roughly simultaneously and, as far as we can tell, independently, developed the idea that all living things (humans, insects, ancients, people not yet born) are, in fact, parts of one contiguous, interconnected, sacred living thing. This enormously rich and complex concept had a huge number of applications in each society, but seems to have been independently developed to meet the demands for metaphysical and emotional answers of societies at remarkably similar developmental stages. The circumstances were right, and the ideas then went on to be applied in vastly different but still similar ways.
Feminism and the Aztec Wheel
For a long time we thought the Aztecs didn’t have the wheel. More recently we discovered that they had children’s toys which used the wheel, but never developed it beyond that. Which means someone thought of it, and it disseminated a bit and was used in a very narrow way, but it was not developed further, because what we think of as more “advanced” or “industrial” applications (the wagon, the wheelbarrow) just weren’t compatible with the Aztec world. The terrain was incredibly hilly, without the elaborate road system Europe developed, and the Aztecs relied instead on human legs, stairs, and raw terrain, which were sufficient to let them develop a robust and complex economy and empire of their own; the wheel became more useful in the Americas only when European-style city plans and roads were built. Similarly, Plato voiced feminism in his Republic, arguing that women and men were fundamentally interchangeable if educated the same way, and people who read the Republic discussed it as a theory among the many other elements of the book, but didn’t develop it further. Again, I would argue, this was at least in part because the economic and social structures of the classical world depended on the gendered division of labor, particularly for the production of thread in the absence of advanced spinning technology. This is why literally all women in Rome spent enormous amounts of time spinning; spinning quotas were even sometimes required by law of prostitutes, since if a substantial sliver of the female population were employed without spinning, Rome would run out of cloth. Feminism was better able to become revolutionary in Europe when (among other changes) industrialization reduced the number of hours required for the maintenance of a household and the production of cloth, making it more practical to redirect female labor, and to question why it had been locked into that role in the first place.
In sum, there is a concreteness to the ideas whose movements I study, a distinct and recognizable traceability. Interpretive, comparative, and subjective analyses, analyses of technique, aesthetics, authorial intent, and authenticity: such analyses are excellent, but they aren’t intellectual history as I practice and teach it. I trace intellectual technology. Just as the gun, or carbon steel, or the moldboard plow came in at a particular time and had an impact, I study particular ideas whose dissemination changed what it was possible for human beings to do, and what shapes human society can take. It is meaningful to talk about being at an “intellectual tech level,” or at least about being pre- or post- a particular piece of intellectual technology (progress, utilitarianism, the scientific method), just as much as we can talk about being pre- or post-computer, gunpowder, or bronze. Such things cannot be un-invented once they disseminate through a society, though some societies regulate or restrict them, and they can be lost, or spend a long time hidden or undeveloped. Elites often have a legal or practical monopoly on some (intellectual) technologies, but nothing can stop such things from sometimes getting into the hands or minds of the poor or the oppressed. Sometimes historians are sure a piece of (intellectual) technology was present because we have direct records of it: a surviving example, a reference, a drawing, something which was obviously made with it. Other times we have only secondary evidence (they were farming X crop which, as far as we know, probably requires the moldboard plow; they described a strange kind of unknown weapon which we think means a gun; they were discussing heretics of a particular sort which seems to have involved denial of Providence).
I realize that it would be easy to read my use of “intellectual technology” as an attempt to climb on the pro-science-and-engineering bandwagon, presenting intellectual history as quasi-hard-science, much as we joke that if poets started calling themselves “syllabic engineers” they would suddenly be paid more. But it isn’t a term I’m advocating as a label, necessarily. It’s a term I use for thinking, a semantic tool for describing the specific type of idea history I practice, and linking together my different interests into a coherent whole. When I spell out what I’m working on right now as an historian, it’s actually a rather incoherent list: “the history of atheism, atomic science, skepticism, Platonic and Stoic theology, soul theory, homosexuality, theodicy, witchcraft, gender construction, saints and heavenly politics, Viking metaphysics, the Inquisition, utilitarianism, humanist self-fashioning, and what Renaissance people imagined ancient Rome was like. And if you give me an hour, I can sort-of explain what those things have to do with each other.” Or I can say, “I study how particularly controversial pieces of new intellectual technology come into being and spread over time.”
In that light, then, we can think of Machiavelli as the inventor of a piece of intellectual technology, or rather of several pieces of intellectual technology, since consequentialist ethics is one, and his new method of historical analysis (political science) is another. We might compare him to someone who invented both the gun and the calculator. How do we feel about that contribution? Positive? Negative? Critical? Celebratory? I think the only universal answer is: we feel strongly.