I was recently interviewed for a piece in the Times on why the philosophy of stoicism has become very popular in the Silicon Valley tech crowd. Only a sliver of my thoughts made it into the article, but the question from Nellie Bowles was very stimulating so I wanted to share more of my thoughts.
To begin with, like any ancient philosophy, stoicism has a physics and metaphysics–how it thinks the universe works–and separately an ethics–how it advises one to live, and judge good and bad action. The ethics is based on the physics and metaphysics, but can be divorced from it, and the ethics has long been far more popular than the metaphysics. This is a big part of why stoic texts surviving from antiquity focus on the ethics; people transcribing manuscripts cared more about these than about the others. And this is why thinkers from Cicero to Petrarch to today have celebrated stoicism’s moral and ethical advice while following utterly different cosmologies and metaphysical systems. (For serious engagement with stoic ontology & metaphysics you want Spinoza.) The current fad for stoicism, like all past fads for stoicism (except Spinoza’s), focuses on the ethics.
Thinking Spots: Stoic Metaphysics and Ontology
Stoic ontology and metaphysics are sufficiently awesome that I must give them a couple of paragraphs before I move to the ethics, though the ethics are the core of its popularity today. Stoics were monists; if dualists like Plato and Descartes believe there are fundamentally two things (matter and non-matter, for example), monists believe there is fundamentally one thing. Not just one category of thing (Epicurean atomists, for example, think there is only one kind of thing: atoms) but actually one single thing. The stoics posited that the universe is one enormous contiguous single object. Different parts of it manifest different qualities, but all are one and the same. Just as polka-dot fabric may manifest blueness here and whiteness there but remains the same object, so the part of the universe which is your hand manifests firmness and warmth and opacity, and the part which is the air manifests softness and transparency, but they are the same object. And when you seem to move your hand, in fact there is no motion, rather the part of the universe that was manifesting the transparency and softness of air before is manifesting the firmness and opacity of arm now and vice versa. Think of the pixels on a screen: what seem to be objects moving are in fact different parts of the screen changing color (i.e. changing quality) in sequence, creating the illusion of motion whereas in fact there is only variation in the surface of an object. (This is the stoic solution to Zeno’s paradoxes of motion discussed here). The stoic living universe is thus somewhat like the skin of a mimic octopus, able to seem to become a myriad of different things while it remains one.
And in addition to blueness, and whiteness, and opacity, and warmth, other attributes the universe manifests more in some places than others include what we would call in modern terms sentience, self-awareness, and reason–thus the human being is a spot of sentience against a background of less sentient substance, like a white spot on blue. But, the stoics argue, any property which is possessed by the part is possessed by the whole, so while sentience and reason are concentrated in the spots which are living humans, the whole thing is a vast, intelligent, rational whole, and when we die we merge back into it. Thus there is no individual immortality, but we are all part of something greater which is eternal, wise, and infinite.
Stoicism was likely influenced by Buddhism through contact with India during the wars of Alexander the Great, and shares a lot with Buddhism: the whole universe is one vast, living, divine whole. Life is full of suffering, but that suffering is a path to understanding a larger good. And there is a universal justice on the large scale beyond what a human from our limited P.O.V. can understand. In Buddhism this is karma, while for the stoics it is Providence, the same concept of Providence that Christianity later borrowed, which argues that everything in the world which seems bad is actually good in a way we cannot fully understand because of our limited perspective. It is as if we are a fingertip; we cannot understand why we must suffer the evil of being repeatedly banged against a hard, unyielding surface, because we don’t have the means to understand that the larger organism is typing up a blog post about stoicism, but if we did have the means to understand we would recognize that it’s worth it on the large scale. The stoic justification for claiming the universe is perfect is the patterns we see in nature: trees have roots to drink the water they need, woodpeckers that eat bugs have beaks the shape they need to be, woodland animals have woodland camouflage, desert animals have desert camouflage, everything fits together in a vast, functional whole which (without Darwin to offer an alternative) the Greeks agreed implied intelligence, either in a creator (Aristotle’s demiurge), a source (Plato’s Good), or, for the stoics, the universe itself.
The stoics also argued (followed by some Christian thinkers) that there is no self-determination. We will all end up going where the Plan will have us go no matter what, but the one thing we do have power over is our own inner responses to the path fate gives us: do we curse, complain, fight, shake our fists at the heavens, or do we assent, accept, relax, and gaze in happy awe on the vastness of which we are a part? A classic stoic image (and after this I’ll turn to ethics and the tech crowd) is that the human being is like a dog tied behind a cart. The cart is going somewhere, and there is absolutely nothing the dog can do to change the course the cart will take. The dog has free will only in one thing: the dog can fight, snarl, tug at the collar, gnaw on the rope, dig its claws into the dirt until it bleeds, and exhaust itself with fighting, or it can trot along contentedly and trust the driver.
An Action Ethics
The majority of surviving stoic writings focus, not on the metaphysics, but on the actionable conclusion: given all this, how do we teach ourselves to assent? To become the contented dog who trusts in Providence enough to follow where our paths lead without being made miserable by anxiety, fear, and resistance? Stoics therefore teach self-mastery and detachment: you can’t keep terrible things from happening but you can control your own internal reaction to them and work on preventing yourself from being overwhelmed by them. You wake up to some terrible news in the morning: do you brood on it all day and lose your productivity and wellbeing? Or do you take control and carry on?
A lot of the surviving stoic writings are maxims, short pieces for contemplation designed to help you dwell less on bad things that are happening, sometimes more imagery than argument. Imagine–for example–that life is like being a guest at a banquet. Platters are being passed around and people are reaching out and taking what is offered them. Some platters come to you and you take of them–other platters never make it to you, or are empty when they do. But you are a guest, these things were not yours, they were offered as gifts, so you have no reason to be angry that you can only taste some of them–better to enjoy the platters that do reach you, and remember that the host who offered them is kind.
This is where stoicism serves very much like a self-help book, or more generally as philosophical therapy, which is what classical philosophies largely aimed to provide. Stoicism’s recommendations for how to resist pain are exquisite, as in this example from the Meditations. And the metaphysics crops up mainly as a way to justify the advice:
XXV. What a small portion of vast and infinite eternity it is, that is allowed unto every one of us, and how soon it vanisheth into the general age of the world: of the common substance, and of the common soul also what a small portion is allotted unto us: and in what a little clod of the whole earth (as it were) it is that thou doest crawl. After thou shalt rightly have considered these things with thyself; fancy not anything else in the world any more to be of any weight and moment but this, to do that only which thine own nature doth require; and to conform thyself to that which the common nature doth afford.
The ontology here serves the therapy. You lost the election? Got passed over for promotion? Got a bad review? These things are small and fleeting within the larger whole, all wealth will perish in infinity, all fame fade, nothing serious was really lost. You lost your arm to disease? It was not yours to begin with, it was lent to you by a kind universe which has a right to take it back again. You lost your best friend? Again, this was a brief good thing the universe lent you, don’t dwell on it, look instead to the other good things that still surround you. Goods are real, evils an illusion, and if you can believe that it becomes easier–the stoics promise–to let go. The approach works, sometimes. Scientific studies tell us that pain is more emotionally terrible when we know or believe it’s actually damaging us, i.e. that the same number of nerves firing off is more upsetting when we believe it’s permanently damaging a body part than when we know it’s hot wax or an electroshock and the effects won’t linger. So if you can actually convince yourself that nothing really important has been destroyed when something affects your fame, or fortune, it does hurt less. And millions of people over thousands of years have found stoicism a comfort on life’s tumultuous sea.
An Ethics for the Rich and Powerful
At this point I want to remind the reader that I personally love stoicism. It’s gorgeous. It’s brilliant. Revisiting it I find it always challenges assumptions, pushes me to hold myself to high standards, gives me new ideas to chew on. Its major texts, Epictetus, Marcus Aurelius, Seneca, are ones I love to teach, love to revisit, love to grapple with again and again. I will continue to praise, and teach, and write about, and read, and make use of stoicism all the days of my life. But…
The new popularity of stoicism among the tech crowd, and also on Wall Street which is another place that’s been reading and naming things after stoics recently, is strikingly similar to stoicism’s popularity among the powerful elites of ancient Rome. In Hellenistic Greece, stoicism had been one of several different popular philosophical schools, along with Platonism, skepticism, Pythagoreanism, cynicism, Aristotelianism, hedonism etc. (Quick tip: names of philosophical systems are generally capitalized when named after people, not capitalized when named after other stuff, as in cynicism from cynos, dog; stoicism from stoa, porch, where the first stoics held their classes.) And like the rest of these ancient schools, stoicism focused on eudaimonia, i.e. happiness or the good life, the idea that the purpose of philosophical study was not primarily to understand everything, or to achieve power through knowledge, but to achieve personal happiness, usually through inner tranquility and armoring the soul against the slings and arrows of outrageous fortune (see my essay on eudaimonia for more.) Stoicism was one of a number of methods to attempt this, so like all ancient schools it fulfilled the roles of a self-help book and Science 101 textbook rolled into one.
But in Rome stoicism surged in popularity compared to all the other systems, because it was the one Greek ethics which worked well for the rich and powerful. Other schools like Platonism, cynicism, and Epicureanism warned followers that participation in politics and the pursuit of wealth, power, or honor would only lead to stress and risk, and were incompatible with happiness. Epicurus said the happy life was found by leaving the political urban world to sit in a secluded garden, eating a simple meal while conversing with good friends. Cynicism advocated the more extreme step of renouncing personal property and living like a stray dog scrounging beside the road, which has no fear of being robbed or losing its status because it has nothing to lose. The Pythagoreans and many other sects lived in isolated communities not unlike monastic orders, and used strict diets, ascetic dress codes, even vows of silence. Plato too specified that the philosopher kings of the republic are made unhappy by the stress of having to rule, and a number of ancient figures even used the stress of rule to argue that the gods can’t possibly hear and act on human prayers or else the gods would be perpetually harassed and unhappy.
Stoicism, on the other hand, stressed the idea that everyone is part of a large perfect whole and thus that it’s everyone’s duty to fulfill the role Fate allots. In the ordering of nature the woodpecker should peck, the deer should graze, the bee should pollinate, and the wolf should hunt and kill. We too as humans have a duty to fulfill our roles, be that as servant, merchant, slave, or king. Some stoic authors were slaves themselves, like Epictetus, author of the beautiful stoic handbook the Enchiridion, and many stoic writings focus on providing therapies for armoring one’s inner self against such evils as physical pain, illness, losing friends, disgrace, and exile. But other stoic philosophers were great leaders of states, including leading statesmen like Cicero and Seneca, and the Emperor Marcus Aurelius.
Stoicism caught on among Roman elites because it was the one form of philosophical guidance that didn’t urge them to renounce wealth or power. Politics is stressful, but rather than giving it up to live like a monk or a dog, stoicism says you should continue the hard work but seek to attain an inner attitude in which you will not suffer misery when you do fail, when you do lose the election, face the criticism, suffer the setback, feel the blows of fortune. Stoicism alone recommended inner detachment, not walking away. For Roman patrician statesmen with long family traditions of political leadership, walking away from civic participation was a non-starter (especially since Roman ancestor worship meant that achieving a name in politics was also a religious duty which your very afterlife depended on!). Stoicism finally offered a philosophical ethics useful to the statesman, which is why Cicero–a skeptic who engaged with many sects of philosophy–favors it in his dialogs more consistently than any other sect (this may not sound like a strong endorsement, but it is a very high bar for Cicero. And Cicero is a very big deal).
Thus, turning to the questions that Nellie asked me for her article, when I see a fad for stoicism among today’s rising rich, I see a good side and a bad side. The good side is that stoicism, sharing a lot with Buddhism, teaches that the only real treasures are inner treasures–virtue, self-mastery, courage, charity–and that all things in existence are part of one good, divine, and sacred whole, a stance which can combat selfishness and intolerance by encouraging self-discipline and teaching us to love and value every stranger as much as we love our families and ourselves. But on the negative side, stoicism’s Providential claim that everything in the universe is already perfect and that things which seem bad or unjust are secretly good underneath (a claim Christianity borrowed from stoicism) can be used to justify the idea that the rich and powerful are meant to be rich and powerful, that the poor and downtrodden are meant to be poor and downtrodden, and that even the worst actions are actually good in an ineffable and eternal way. Such claims can be used to justify complacency, social callousness, and even exploitative or destructive behavior.
Seen in the best light, a wealthy person excited by stoicism is seeking a philosophy that helps the mind resist greed and the capitalist rat race and offers a wiser perspective and inner happiness; seen in the worst light it can be a tool for justifying keeping one’s wealth and power and not trying to help others. In that sense it reminds me of the profession of wealth therapists who help the uber-wealthy stop feeling guilty about spending $2,000 on bed sheets or millions on a megayacht. Wealth does come with real emotional challenges, but as society calls more and more for fundamental reform to close the wealth gap and reduce the power wielded by the 1%, cultivating a pro-status-quo attitude can also be a way to deflect pressure to try to address society’s ills.
Seneca, an author I absolutely love, wrote exquisite maxims about selflessness and virtue which have been backbones of moral and political education for two millennia. So powerful are his arguments that Petrarch, when comparing the strengths of the ancient Romans and ancient Greeks in different arts (Homer v. Virgil in poetry, Demosthenes v. Cicero in oratory, Thucydides v. Livy in history etc.) concluded that Seneca alone makes the Latins wholly superior to the Greeks in matters of ethics. Seneca also risked his life trying to curb the tyranny of Nero, and eventually died for it. But for all Seneca’s powerful advice about the big picture and the meaninglessness of wealth, he was also a slave-owner who, when alerted that his male slaves were sexually abusing his female slaves, set up a brothel in his estate so he could make his male slaves pay him for the privilege of abusing his female slaves–not quite the behavior we imagine when Seneca says money is meaningless and all living beings are sacred. But stoicism urges us to turn our critical eye inward and improve ourselves, not to turn it outward and improve our worlds. It gave Seneca the courage and resolve to face the danger of Nero’s deadly whims day by day in order to do his duty to the Roman political elite, but it didn’t encourage him to question his world order.
Stoicism is an intellectually rich and stimulating system, and wonderful therapy against grief, against dwelling on setbacks, and against getting caught up in the chase for fame and fortune and the blinders of the rat race. It reminds us to zoom out from a world of praise, and blame, and status, and cruel things people said on Twitter, and the competition to see who made the most sales, or had the most hits, or got the largest raise, all things which can be genuinely emotionally devastating if we let ourselves get too caught up in them. In all those ways stoicism is a great match for Silicon Valley, for Wall Street, also for my world of academia and tenure and their stresses and injustices. It’s also a great match for congresspeople, authors, journalists, actors, entrepreneurs, everyone whose life contains stresses and setbacks and moments when we need help taking a deep breath and letting it go. But Cicero was not Voltaire, and did not look at the evils and injustices around him and conclude that he should wield his power to make a fundamentally better world–he focused only on coping with the world as it already was, and fulfilling his duties within it. Stoicism predates the concept of human-generated progress by more than a millennium. It doesn’t teach us how to change the terrible aspects of the world, it teaches us how to adapt ourselves to them, and to accept them, presuming that they fundamentally cannot be fixed. But we have two millennia more experience than Seneca. We know many of life’s evils can’t be fixed, but we also know, with human teamwork and the scientific method and a dose of Bacon and Voltaire, some of them can.
That’s why when I hear that rich, powerful people are into stoicism I think it’s great that people are excited by the idea that we should hold all life sacred and look for meaning beyond wealth and worldly power. I think it’s a great philosophy for anyone, and certainly for those who need help zooming out from a high-stress, high-competition world to think about the human and humane big picture, and to pay more attention to self-care, and loving others. But it also makes me a little wary. Because I think it’s important that we mingle some Voltaire in with our Seneca, and remember that stoicism’s invaluable advice for taking better care of ourselves inside can–if we fail to mix it with other ideas–come with a big blind spot regarding the world outside ourselves, and whether we should change it. An activist can be a stoic–activism absolutely needs some way to help cope with the pain when we pour our hearts and hours into trying to help someone, or pass new legislation, or resist, and fail. For such moments, stoicism is a precious remedy against despair and burn-out, but it doesn’t in itself offer us the impetus toward activism and resistance in the first place. That we need to get from somewhere else.
It’s spring 2019 and crises are coming thick and fast, but one of them which may have an extra deep, extra wide, extra lasting, and extra invisible impact is the new proposed EU Copyright Directive, whose Article 11 and Article 13 propose, among other things, to (A) radically and permanently change who owns news and has a right to circulate and report it, (B) demand filters to preemptively censor content that will be expensive, automated, easy for trolls to exploit, and difficult for people to appeal, and (C) put huge expensive requirements on creators of online content which will make it basically impossible for individuals or small groups to create and launch new web spaces, making it much harder for anyone but established media giants to create new content.
I wrote a short essay about the issue this morning, “How #Article13 is like the Inquisition: John Milton Against the EU #CopyrightDirective” looking at how this crisis resembles the print revolution, but it ended up being posted on BoingBoing, instead of here, so I hope you enjoy it. I’ll also add that, after a year looking at how information revolutions stimulate new kinds of censorship, my take-away is this: information revolutions democratize speech and thus make marginal voices louder. Whether today or 400 years ago, many of these are radical voices, voices which were marginalized and silenced in traditional fora and thus have an extra incentive to go to the effort to adopt new methods. These radical voices tend to be at all fringes of politics: radical religion, radical conservatism, radical progressivism, radical sexualities and identities, fire & brimstone preachers and civil rights advocates, Calvinist visionaries and GLBT groups, Voltaire and the KKK. This has a thousand consequences, but one is that it scares governments, and makes publics easy to rile up against frightening new voices. And that makes it easy for corporations and other profit-seeking actors to lobby for policies that they claim are to protect speech, or protect journalism, or protect the country, or protect children, etc. but are actually framed to maximize their own profits. Frightened governments and alarmed populaces are very vulnerable to this manipulation. It happened in the 1640s. It’s happening right now, and we in the digital revolution need to look very seriously at John Milton who fought against this (and failed) during the print revolution if we want to learn from earlier mistakes and protect the internet as the most powerful engine of public knowledge and empowerment ever created. So please, read up about the crisis, and, if you can, take action!
This year I was honored to present the 2018 John W. Campbell Award for Best New Writer at Worldcon’s Hugo Awards Ceremony, and several people have asked me to post my presentation speech, in which I used Japanese examples to talk about the invaluable impact of new authors expanding the breadth of what gets explored in genre fiction’s long conversation. Here is the speech, followed by some expanded comments:
First awarded in 1973, this award was named for John W. Campbell, the celebrated editor of Astounding and Analog who introduced many beloved new authors to the field. This is not a Hugo award, but is sponsored by Dell Magazines, and administered by Worldcon. Spring Schoenhuth of Springtime Studios created the Campbell pin, and the tiara made by Amanda Downum was added in 2005/2006. This award is unusual for considering short fiction and novels together, providing a cross-section of innovation in the field, and, often, offering a first personal welcome to new writers unfamiliar with the social world of fandom.
I’m currently curating an exhibit on the history of censorship around the world, and one section of the exhibit keeps coming to mind as I consider the Campbell Award. Immediately after World War II, in Japan authors and journalists were effectively forbidden to talk about the war, due to censorship exercised by both the reformed Japanese government and American occupation forces. This left a generation of kids desperate to understand the events which had shattered their world and families, but with no one willing to have that conversation, and no books to turn to. Enter Osamu Tezuka, whose Astro Boy (Tetsuwan Atomu, 1952-68) bypassed censors who saw it as merely a kids’ science fiction story, while it depicted a civil rights movement for robot A.I.s, including anti-robot hate-crimes, hate-motivated international wars, nuclear bombs, and the rise of the robot-hating dictator “Hitlini.”
Tezuka’s science fiction became the tool a generation used to understand the roots of World War II and how to work toward a more peaceful and cooperative future, but what makes this relevant to the Campbell Award is the next step. Many autobiographies of those who were kids in Japan in the 1950s describe reading and re-reading Tezuka’s early science fiction until the cheap paperbacks fell apart, but by the later 1960s these same young readers became young authors, like Yoshihiro Tatsumi, Keiji Nakazawa, and their peers. They in turn led a movement to push the envelope of what could be depicted in popular genre fiction in Japan, writing grittier more adult works, battling censorship and backlash, and ultimately opening a space for more serious genre fiction. These new voices didn’t just contribute their works, they changed speculative fiction to let Tezuka and other authors they had long looked up to write new works too, finally depicting the war directly, and producing some of the best works of their careers, including Tezuka’s Buddhist science fiction masterpiece Phoenix.
These authors I’m discussing are all manga authors, comic book authors, but the difference between prose and comics doesn’t matter here, their world like ours was and is a self-conscious community of speculative fiction readers and writers dedicated to imagining different presents, pasts, and futures, and thereby advancing a conversation which injects imagination, hope, and caution into our real world efforts to build the best future possible. It is in that spirit that the John W. Campbell Award welcomes to our field not only today’s new voices but the ways that these voices will change the field, stimulating new responses from everybody, from those like John Varley and George R. R. Martin who were Campbell finalists more than forty years ago, to next year’s finalists. This year’s finalists are Katherine Arden, Sarah Kuhn, Jeannette Ng, Vina Jie-Min Prasad, Rebecca Roanhorse, and Rivers Solomon.
The examples I discussed in this speech come from my exhibit’s case on the censorship of comic books and graphic novels, which are targeted by censorship more often than text fiction because of their visual format (which makes obscenity charges easier to advance), their association with children, and the power of political cartoons.
I discuss Tezuka’s manga in the exhibit section with the chilling title “Childhood Without Books,” since during World War II a generation of Japanese kids grew up in a broken school system which had all but shut down or been transformed into a military pre-training program, while censored presses produced only war propaganda, and Japan even had a ban on “frivolous literature” which generally meant anything that wasn’t for the war. In effect, a generation of kids grew up with no access to literature, and plunged straight from that to the new era of post-war censorship. Numerous autobiographies by members of this generation vividly recount the arrival of the first bright, colorful books by “God of Manga” Osamu Tezuka, such as New Treasure Island, Lost World, Nextworld, and above all Astro Boy, whose depictions of anti-robot voter suppression tactics are very powerful today, while its repeated engagement with nuclear bombs and other weapons of mass destruction was, for adults and kids alike, often the first and only available literary discussion of nuclear warfare. Tezuka also made a point of discussing racism as a global issue, and Astro Boy depicts lynch mobs in America, the Cambodian genocide, and post-colonial exploitation in Africa.
Thus, while being perceived as “for kids” often brings comics under extra fire, in the case of Astro Boy, censors ignored a mere science fiction comic, which let Tezuka kick-start the conversation about the mistakes of the past and the possibilities of a better future.
Making Room for Adults: One young reader who read and reread Tezuka’s early manga until they fell apart was Yoshihiro Tatsumi, whose autobiography A Drifting Life begins with Tezuka’s impact on him in his early post-war years. As Tatsumi himself began to publish manga in the 1950s-70s, Japan experienced its own wave of public and parental outrage about comics harming children similar to that which had affected the English-speaking world slightly earlier. Since the Japanese word for comic books, manga, literally means “whimsical pictures,” critics argued that manga must by definition be light and funny. Tatsumi coined the alternate term gekiga (“dramatic pictures”), adopted by a wave of serious and provocative authors who set out to depict serious dramatic topics, such as crime stories, suicide, sexuality, prostitution, the debt crisis, alienation, the psychology of evil, and the dark and uncomfortable social issues and tensions affecting Japanese society.
By the 1970s, the efforts of Tatsumi and his peers to make space for mature manga helped to expand the range of what artists dared to depict, contributing to the loosening of censorship and social pressure, which in turn let the authors Tatsumi and others had looked up to as children finally treat the war directly. Thus Tatsumi’s efforts moving forward from his childhood model Osamu Tezuka in turn paved the way for Tezuka to finally produce mature works of his own, including Message to Adolf, which depicts how racism gradually poisons individuals and society, Ayako, which depicts the degeneration of traditional Japanese society during the post-war occupation, MW, which depicts government corruption and the human impact of weapons of mass destruction, sections of his beloved medical drama Black Jack which treat war and exploitation, Ode to Kirihito, which treats medical dehumanization and apartheid in South Africa, Alabaster, which treats ideas of race and beauty in the USA, and his epic Phoenix, considered one of the great masterpieces of the manga world.
Another of Tezuka’s avid early readers was Hiroshima survivor Keiji Nakazawa, who found in art and manga hope for a universal medium which could let his pleas for peace and nuclear disarmament cross language barriers. Many of the grotesque images of gory melting faces in Nakazawa’s harrowing autobiography Barefoot Gen are indistinguishable from the imagery in violent horror comics advocates of comics censorship so often denounce as harmful to children.
Our impulse to place political works like Barefoot Gen in a separate category from graphic horror or pornography despite their identical visual content is reflected in many governments’ obscenity laws, which ban vaguely-defined “obscene” or “indecent” content and often demand that works accused of obscenity prove they have “artistic merit” to refute the charge, a rare situation where even legal systems with “innocent until proven guilty” standards put the burden of proof on the defendant. Some modern democracies which have state censorship, such as New Zealand, have worked to improve this by creating legislation which defines very clearly what can be censored (for example depictions of sexual exploitation of minors, or of extreme torture) rather than banning “indecent” content in the abstract. (I strongly recommend the endlessly fascinating blog of New Zealand’s Chief Censor’s ratings office, which offers a vivid portrait of the trends in modern censorship, and what censorship would probably look like in the USA without the First Amendment).
If you’re interested in looking at some of these works, beyond Astro Boy, my top recommendations are Tezuka’s Message to Adolf and the work of another giant of the early post-war, Shigeru Mizuki, best known for his earlier Kitaro series which collects Japanese oral tradition yokai ghost stories. After the efforts of Tatsumi and others broadened the scope of what manga was allowed to depict, Mizuki published his magnificent Showa: a History of Japan, recently published in English by Drawn & Quarterly.
The first volume depicts the lead-up to WWII in the 1920s-30s, and is fascinating to compare to the current political world, since it shows how Japanese society became gradually more militarized and toxic through tiny incremental short-term political and social decisions which feel very much like those one sees today, but paralleled by severe restrictions on speech and suppression of active resistance different from what one sees today. Ferociously critical of Japan’s government and warmongers, Mizuki’s history is also autobiography, depicting himself as a child, and how the day-to-day games kids played on the street became more violent and military, playing soldier instead of house, as the society drifted toward fascism.
It’s an extraordinarily powerful read, and particularly captures how, parallel to political events, moments of celebrity controversy and sensational news reflect and propel cultural shifts – think of how, 100 years from now, someone writing a history of the rise of America’s alt-right movement might not include Milo Yiannopoulos, who had no demonstrable direct political role, yet for those living on the ground in this era he was clearly a factor/indicator/ingredient in the tensions of the times. Mizuki includes incidents and figures like that which parallel the political events and his family’s experiences, recreating the on-the-ground experience in a way unlike any other history I’ve read. I can’t recommend it enough to anyone interested in what fascism’s rise can teach us about today, and about how cultures change.
The idea: Revolutions in information technology always trigger innovations in censorship and information control, so we’re bringing together 25 experts on information revolutions past and present to create a filmed series of discussions (which we will post online for all to enjoy!) which we hope will help people understand the new forms of censorship and information control that are developing as a result of the digital revolution. And we’ve put together a museum exhibit on the history of censorship, a printed catalog with 200+ pages of full color images of banned and censored books, which you can get as a Kickstarter thank-you. More publications will follow.
For those who’ve wondered why there haven’t been many Ex Urbe posts recently, the work for this project has been a big part of it, though other real reasons include my chronic pain, and the tenure scramble (victory!), and racing to finish Terra Ignota book 4, and female faculty being put on way too many committees (12! seriously?!). But now that the preparatory work of the project is done, I should be able to share more here over the coming weeks and months.
The project was born out of Cory Doctorow and me sitting down at conventions from time to time and chatting about our work, and over and over something he was seeing current corporations or governments try out with digital regulation would be jarringly similar to something I saw guilds or city-states try during the print revolution. One big issue in both eras, for example, was/is the difference between systems that try to regulate content before it is released, i.e. requiring books to have licenses before they could be printed, or content to be vetted before it is published (think the Inquisition, the Comics Code Authority, or movie ratings in places like New Zealand where it’s illegal to screen unrated films), vs. systems that allow things to be released without oversight but create apparatus for policing/ removing/ prosecuting them after release if they’re found objectionable (like England in the 16th century, or online systems that have users flag content). Past information revolutions–from the printing press, to radio and talkies–give us test cases that show us what effects different policies had, so by looking, for example, at where the book trade fared better, Paris or Amsterdam, we can also look at what effects different regulations are likely to have on current information economies, and artistic output. We’ve got people who work on the Inquisition, digital music, the birth of copyright, ditto machines, Google, banned plays, burnings of Jewish books, comic book censorship, an amazing list!
There will be more to share over the next months as the videos go online, but today I want to share one of the fun little pieces I wrote for the exhibit on Book Burning. Writing for exhibits is always an extra challenge, since only so much can fit on a museum wall or item label, so, 2+ millennia of book burning… can I do it justice in 550 words?
We can divide book burnings into three kinds: eradication burnings which seek to destroy a text, collection burnings which target a library or archive, and symbolic burnings which primarily aim to send a message.
The earliest known book burnings are the one mentioned in the Hebrew Bible (Jeremiah 36) and the burning of Confucian works (and execution of Confucian scholars) in Qin Dynasty China, 213-210 BC. Christian book burning began after the Council of Nicaea, when Emperor Constantine ordered the burning of works of Arian (non-Trinitarian) Christianity. In the manuscript era eradication burnings could destroy all copies of a text—as in 1073 when Pope Gregory VII ordered the burning of Sappho—but after 1450 the movable type printing press made eradication burnings of published material effectively impossible unless one seized the whole print run before copies were dispersed. This was difficult even for the Inquisition, but it still practiced frequent symbolic book burning, especially in the Enlightenment, when a condemnation from Rome required Paris to publicly burn one or a few copies of a book, while all knew many more remained. When the beloved Encyclopédie was condemned, the French authorities tasked to burn it burned Jansenist theological writings in its place, a symbolic act two steps removed from harming the original.
Since print’s advent eradication burnings have diminished, though collection burnings continue, often targeting religious communities such as Protestants or Jews, language groups such as indigenous texts in Portuguese-held Goa (India), universities whose organized collections are unique even if individual items are not, or state or institutional archives which contain unique content even in an age of print. Regime changes and political unrest have long been triggers for archive burnings, such as the burning of the National Archives of Bosnia and Herzegovina in 2014. Some book burnings result from smaller-scale conflicts, as in 1852 when Armand Dufau, in charge of the school for the blind in Paris, ordered the burning of all books in the newly-invented braille system, of which he disapproved. Nazi burnings of Jewish and “un-German” material employed eradication rhetoric but were mainly collection burnings, as when youth groups burned 25,000 books from university libraries in 1933, or symbolic burnings, performing destruction to spread fear among foes and excitement among supporters, while many party members retained or sold valuable books stolen from Jewish collections rather than destroying them.
Today, archived documents and historic manuscript collections remain most vulnerable to eradication burning, such as those burned in Iraq’s National Library in 2003, in two libraries in Timbuktu in 2013, and others recently burned by ISIS. Large-scale book burnings in America have included the activities of the New York Society for the Suppression of Vice (founded 1873), which boasted of burning 15 tons of books and nearly 4 million “lewd” pictures, burnings of comic books in 1948, and burning of communist material during the Second Red Scare of the 1950s. Since then, most book burnings in America have been small-scale symbolic burnings of works such as Harry Potter, of books objected to in schools or college classrooms, or of Bibles or Qur’ans. In a rare 2010 case of an attempted eradication burning, the Pentagon bought and burned nearly the whole print run of Anthony Shaffer’s Operation Dark Heart, which—authorities said—contained classified information.
In the comments to the Progress post, a reader asked for clarification on what was so awful about Hobbes, and this was Ada’s response, which I am reposting as a post so that it doesn’t stay buried down there:
The Hobbes reference referred, not to my opinion of him or modern opinions of him, but to contemporary opinions of him: how hated and feared he was by his peers in the mid-17th century. I’ll treat him more in the next iteration(s) of my skepticism series, but in brief Hobbes was a student of Bacon (he was actually Bacon’s amanuensis for a while) and used Bacon’s new techniques of observation and methodical reasoning with absolute mastery, BUT used them to come to conclusions that were absolutely terrifying to his peers, attacking the dignity of the human race, the foundations of government, the pillars of the morality of his day, in ways whose true terror is hard for us to feel when we read Leviathan in retrospect, having accepted many of Hobbes’s ideas and being armored against the others by John Locke. But among his contemporaries, “The Beast of Malmesbury,” as he was called, held an unmatched status as the intellectual terror of his day. In fact there are few thinkers ever in history who were so universally feared and hated—it’s only a slight exaggeration to say that for the two decades after the publication of Leviathan, the sole goal of western European philosophy was to find some way to refute Thomas Hobbes WITHOUT (here’s the tricky part) undermining Bacon. Because Bacon was light, hope, progress, the promise of a better future, and Hobbes was THE BEST wielder of Bacon’s techniques. So they couldn’t just DISMISS Hobbes without undermining Bacon; they had to find a way to take Hobbes on on his own terms and use Bacon better than Hobbes did. It took 20 years and John Locke to achieve that, but in the meantime Hobbes so terrified his peers that they literally rewrote the laws of England more than once to extend censorship enough to silence Hobbes.
Also the man Just. Wouldn’t. Die. They wanted him dead and gone so they could forget him and move on but he lived to be 91, a constant reminder of the intellectual terror whose shadow had loomed so long over all of Europe. To give a sample of a contemporary articulation of the fear and amazement Hobbes caused in his peers, here is a satirical broadside published to celebrate his death:
My favorite verse from it is:
“Leviathan the Great is dead! But see
The small Behemoths of his progeny
Survive to battle all divinity!”
So I chose Hobbes as an example because he’s really the first “backfire” of Bacon, the first unexpected, unintended consequence of the new method. Hobbes’s book didn’t cause any atrocities, didn’t result in wars or massacres, but it did spread terror through the entire intellectual world, and was the first sniff of the scarier places that thought would go once Bacon’s call to examine EVERYTHING genuinely did examine everything… even things people did NOT want anyone to doubt. So while Hobbes is wonderful, from the perspective of his contemporaries he was the first warning sign that progress cannot be controlled, and that, while it will change parts of society we think are bad, it will change the parts we value too.
Hope that helps clear it up? I’ll discuss Hobbes more in later works.
Is progress inevitable? Is it natural? Is it fragile? Is it possible? Is it a problematic concept in the first place? Many people are reexamining these kinds of questions as 2016 draws to a close, so I thought this would be a good moment to share the sort-of “zoomed out” discussions of the subject that historians like myself are always having.
There is a strange doubleness to experiencing an historic moment while being a historian oneself. I feel the same shock, fear, overload, and emotional exhaustion that so many are, but at the same time another me is analyzing, dredging up historical examples, bigger crises, smaller crises, elections that set the fuse to powder-kegs, elections that changed nothing. I keep thinking about what it felt like during the Wars of the Roses, or the French Wars of Religion, during those little blips of peace, a decade long or so, that we, centuries later, call mere pauses, but which were long enough for a person to be born and grow to political maturity in seeming-peace, which only hindsight would label ‘dormant war.’ But eventually the last flare ended and the peace was real. On the ground, though, it must have felt exactly the same, the real peace and those blips. That’s why I don’t presume to predict — history is a lesson in complexity not predictability — but what I do feel I’ve learned to understand, thanks to my studies, are the mechanisms of historical change, the how of history’s dynamism rather than the what next. So, in the middle of so many discussions of the causes of this year’s events (economics, backlash, media, the not-so-sleeping dragon bigotry), and of how to respond to them (petitions, debate, fundraising, art, despair), I hope people will find it useful to zoom out with me, to talk about the causes of historical events and change in general.
Two threads, which I will later bring together. Thread one: progress. Thread two: historical agency.
Part 1: The Question of Progress As Historians Ask It
“How do you discuss progress without getting snared in teleology?” a colleague asked during a teaching discussion. This is a historian’s succinct if somewhat technical way of asking a question which lies at the back of a lot of the questions people are wrestling with now. Progress — change for the better over historical time. The word has many uses (social progress, technological progress), but the reason it raises red flags for historians is the legacy of Whig history, a school of historical thought whose influence still percolates through many of our models of history. Wikipedia has an excellent opening definition of Whig history:
Whig history… presents the past as an inevitable progression towards ever greater liberty and enlightenment, culminating in modern forms of liberal democracy and constitutional monarchy. In general, Whig historians emphasize the rise of constitutional government, personal freedoms, and scientific progress. The term is often applied generally (and pejoratively) to histories that present the past as the inexorable march of progress towards enlightenment… Whig history has many similarities with the Marxist-Leninist theory of history, which presupposes that humanity is moving through historical stages to the classless, egalitarian society to which communism aspires… Whig history is a form of liberalism, putting its faith in the power of human reason to reshape society for the better, regardless of past history and tradition. It proposes the inevitable progress of mankind.
In other words, this approach presumes a teleology to history, that human societies have always been developing toward some pre-set end state: apple seeds into apple trees, humans into enlightened humans, human societies into liberal democratic paradises.
Some of the problems with this approach are transparent, others familiar to those of my readers who have been engaging with current discourse about the problems/failures/weaknesses of liberalism. But let me unpack some of the other problems, the ones historians in particular worry about.
Developed in the early 20th century, Whig history presents a particular set of values and political and social outcomes as the (A) inevitable and (B) superior end-points of all historical change — political and social outcomes that arise from the Western European tradition. The Eurocentric distortions this introduces are obvious, devaluing all other cultures. But even for a Europeanist like myself, who’s already studying Europe, this approach has a distorting effect by focusing our attentions onto historical moments or changes or people that were “right” or “correct,” that took a step “forward.” When one attempts to write a history using this kind of reasoning, the heroes of this process (the statesman who founded a more liberal-democratic-ish state, the scientist whose invention we still use today, the poet whose pamphlet forwards the cause) loom overlarge in history, receiving too much attention. On the one hand, yes, we need to understand those past figures who are keystones of our present — I teach Plato, and Descartes, and Machiavelli with good reason — but if we study only the keystones, and not the other less conspicuous bricks, we wind up with a very distorted idea of the whole edifice.
Whig history also makes it dangerously easy to stray into placing moral value on those things which advanced the teleologically-predetermined future. Such things seem to be “correct” thus “good” thus “better,” while those elements which did not contribute to this teleological development were “dead ends” or “mistakes” or “wrong,” which quickly becomes “bad.” In such a history whole eras can be dismissed as unworthy of study for failing to forward progress (The Middle Ages did great stuff, guys!) while other eras can be disproportionately celebrated for advancing it (The Renaissance did a lot of dumb stuff too!). And, of course, whole regions can be dismissed for “failing” to progress (Africa, Asia) as can sub-regions (Poland, Spain).
To give an example within the realm of intellectual history, teleological intellectual histories very often create the false impression that the only figures involved in a period’s intellectual world were heroes and villains, i.e. thinkers we venerate today, or their nasty bad backwards-looking enemies. This makes it seem as if the time period in question was already just previewing the big debates we have today. Such histories don’t know what to do with thinkers whose ideas were orthogonal to such debates, and if one characterizes the Renaissance as “Faith!” vs. “Reason!” and Marsilio Ficino comes along and says “Let’s use Platonic Reason to heal the soul!” a Whig history doesn’t know what to do with that, and reads it as a “dead end” or “detour.” Only heroes or villains fit the narrative, so Ficino must either become one or the other, or be left out. Teleological intellectual histories also tend to give the false impression that the figures we think are important now were always considered important, and if you bring up the fact that Aristotle was hardly read at all in antiquity and only revived in the Middle Ages, or that the most widely owned author in the Enlightenment was the now-obscure fideist encyclopedist Pierre Bayle, the narrative has to scramble to adapt.
Teleological history is also prone to “presentism” (a bad thing, but a very useful term!). Presentism is when one’s reading of history is distorted by one’s modern perspective, often through projecting modern values onto past events, and especially past people. An essay about the Magna Carta which projects Enlightenment values onto its Medieval authors would be presentist. So are histories of the Renaissance which want to portray it as a battle between Reason and religion, or say that only Florence and/or Venice had the real Renaissance because they were republics, and only the democratic spirit of republics could foster fruitful, modern, forward-thinking people. Presentism is also rearing its head when, in the opening episodes of the new Medici: Masters of Florence TV series, Cosimo de Medici talks about bankers as the masterminds of society, and describes himself as a job-creator, not the conceptual space banking actually occupied in 1420. Presentism is sometimes conscious, but often unconscious, so mindful historians will pause whenever we see something that feels revolutionary, or progressive, or proto-modern, or too comfortable, to check for other readings, and make triple sure we have real evidence. Sometimes things in the past really were more modern than what surrounded them. I spent many dissertation years assembling vast grids of data which eventually painstakingly proved that Machiavelli’s interest in radical Epicurean materialism was exceptional for his day, and more similar to the interests of peers seventy years in his future than his own generation — that Machiavelli was exceptional and forward-thinking may be the least surprising conclusion a Renaissance historian can come to, but we have to prove such things very, very meticulously, to avoid spawning yet another distorted biography which says that Galileo was fundamentally an oppressed Bill Nye. Hint: Galileo was not Bill Nye; he was Galileo.
These problems, in brief, are why discussions of progress, and of teleology, are red flags now for any historian.
Unfortunately, the bathwater here is very difficult to separate from an important baby. Teleological thinking distorts our understanding of the past, but the Whig approach was developed for a reason. (A) It is important to have ways to discuss historical change over time, to talk about the question of progress as a component of that change. (B) It is important to retain some way to compare societies, or at least to assess when people try to compare societies, so we can talk about how different institutions, laws, or social mores might be better or worse than others on various metrics, and how some historical changes might be positive or negative. While avoiding dangerous narratives of triumphant [insert Western phenomenon here] sweeping through and bringing light to a superstitious and backwards [era/people/place], we also want to be able to talk about things like the eradication of smallpox, and our efforts against malaria and HIV, which are undeniably interconnected steps in a process of change over time — a process which is difficult to call by any name but progress.
So how do historians discuss progress without getting snared in teleology?
And how do I, as a science fiction writer, as a science fiction reader, as someone who tears up every time NASA or ESA posts a new picture of our baby space probes preparing to take the next step in our journey to the stars, how do I discuss progress without getting snared in teleology?
I, at least, begin by being a historian, and talking about the history of progress itself.
Part 2: A Brief History of Progress
In the early seventeenth century, Francis Bacon invented progress.
Let me unpack that.
Ideas of social change over time had existed in European thought since antiquity. Early Greek sources talk about a Golden Age of peaceful, pastoral abundance, followed by a Silver Age, when jewels and luxuries made life more opulent but also more complicated. There followed a Bronze Age, when weapons and guards appeared, and also the hierarchy of have and have-nots, and finally an Iron Age of blood and war and Troy. Some ancients added more detail to this narrative, notably Lucretius in his Epicurean epic On the Nature of Things. In his version the transition from simple, rural living to luxury-hungry urbanized hierarchy was explicitly developmental, caused, not by divine planning or celestial influences, but by human invention: as people invented more luxuries they then needed more equipment–technological and social — to produce, defend, control, and war over said luxuries, and so, step-by-step, tranquil simplicity degenerated into sophistication and its discontents.
Lucretius’s developmental model of society has several important components of the concept of progress, but not all of them. It has the state of things vary over the course of human history. It also has humanity as the agent of that change, primarily through technological innovation and social changes which arise in reaction to said innovation. It does not have (A) intentionality behind this change, (B) a positive arc to this change, (C) an infinite or unlimited arc to this change, or–perhaps most critically–(D) the expectation that any more change will occur in the future. Lucretius accounts for how society reached its present, and the mythological eras of Gold, Silver, Bronze and Iron do the same. None of these ancient thinkers speculate — as we do every day — about how the experiences of future generations might continue to change and be fundamentally different from their own. Quantitatively things might be different — Rome’s empire might grow or shrink, or fall entirely to be replaced by another — but fundamentally cities will be cities, plows will be plows, empires will be empires, and in a thousand years bread will still be bread. Even if Lucan or Lucretius speculate, they do not live in our world where bread is already poptarts, and will be something even more outlandish in the next generation.
Medieval Europe came to the realization — and if you grant their starting premises they’re absolutely right — that if the entire world is a temporary construct designed by an omnipotent, omniscient Creator God for the purpose of leading humans through their many trials toward eternal salvation or damnation, then it’s madness to look to Earth history for any cause-to-effect chains, there is one Cause of all effects. Medieval thought is no more monolithic than modern, but many excellent examples discuss the material world as a sort of pageant play being performed for us by God to communicate his moral lessons, and if one stage of history flows into another — an empire rises, prospers, falls — that is because God had a moral message to relate through its progression. Take Dante’s obsession with the Emperor Tiberius, for example. According to Dante, God planned the Crucifixion and wanted His Son to be lawfully executed by all humanity, so the sin and guilt and salvation would be universal, so He created the Roman Empire in order to have there be one government large enough to rule and represent the whole world (remember Dante’s maps have nothing south of Egypt except the Mountain of Purgatory). The empire didn’t develop, it was crafted for God’s purposes, Act II scene iii the Roman Empire Rises, scene v it fulfills its purpose, scene vi it falls. Applause.
Did the Renaissance have progress? No. Not conceptually, though, as in all eras of history, constant change was happening. But the Renaissance did suddenly get closer to the concept too. The Renaissance invented the Dark Ages. Specifically the Florentine Leonardo Bruni invented the Dark Ages in the 1420s-1430s. Following on Petrarch’s idea that Italy was in a dark and fallen age and could rise from it again by reviving the lost arts that had made Rome glorious, Bruni divided history into three sections, good Antiquity, bad Dark Ages, and good Renaissance, when the good things lost in antiquity returned. Humans and God were both agents in this, God who planned it and humans who actually translated the Greek, and measured the aqueducts, and memorized the speeches, and built the new golden age. Renaissance thinkers, fusing ideas from Greece and Rome with those of the Middle Ages, added to old ideas of development the first suggestion of a positive trajectory, but not an infinite one, and not a fundamental one. The change the Renaissance believed in lay in reacquiring excellent things the past had already had and lost, climbing out of a pit back to ground level. That change would be fundamental, but finite, and when Renaissance people talk about “surpassing the ancients” (which they do) they talk about painting more realistic paintings, sculpting more elaborate sculptures, perhaps building more stunning temples/cathedrals, or inventing new clever devices like Leonardo’s heated underground pipes to let you keep your potted lemon tree roots warm in winter (just like ancient Roman underfloor heating!) But cities would be cities, plows would be maybe slightly better plows, and empires would be empires. Surpassing the ancients lay in skill, art, artistry, not fundamentals.
Then in the early seventeenth century, Francis Bacon invented progress.
If we work together — said he — if we observe the world around us, study, share our findings, collaborate, uncover as a human team the secret causes of things hidden in nature, we can base new inventions on our new knowledge which will, in small ways, little by little, make human life just a little easier, just a little better, warm us in winter, shield us in storm, make our crops fail a little less, give us some way to heal the child on his bed. We can make every generation’s experience on this Earth a little better than our own. There are — he said — three kinds of scholar. There is the ant, who ranges the Earth and gathers crumbs of knowledge and piles them, raising his ant-mound, higher and higher, competing to have the greatest pile to sit and gloat upon–he is the encyclopedist, who gathers but adds nothing. There is the spider, who spins elaborate webs of theory from the stuff of his own mind, spinning beautiful, intricate patterns in which it is so easy to become entwined — he is the theorist, the system-weaver. And then there is the honeybee, who gathers from the fruits of nature and, processing them through the organ of his own being, produces something good and useful for the world. Let us be honeybees, give to the world, learning and learning’s fruits. Let us found a new method — the Scientific Method — and with it dedicate ourselves to the advancement of knowledge of the secret causes of things, and the expansion of the bounds of human empire to the achievement of all things possible.
Bacon is a gifted wordsmith, and he knows how to make you ache to be the noble thing he paints you as.
“How, Chancellor Bacon, do we know that we can change the world with this new scientific method thing, since no one has ever tried it before so you have no evidence that knowledge will yield anything good and useful, or that each generation’s experience might be better than the previous?”
It is not an easy thing to prove science works when you have no examples of science working yet.
Bacon’s answer — the answer which made kingdom and crown stream passionate support and birthed the Academy of Sciences–may surprise the 21st-century reader, accustomed as we are to hearing science and religion framed as enemies. We know science will work–Bacon replied–because of God. There are a hundred thousand things in this world which cause us pain and suffering, but God is Good. He gave the cheetah speed, the lion claws. He would not have sent humanity out into this wilderness without some way to meet our needs. He would not have given us the desire for a better world without the means to make it so. He gave us Reason. So, from His Goodness, we know that Reason must be able to achieve all He has us desire. God gave us science, and it is an act of Christian charity, an infinite charity toward all posterity, to use it.
They believed him.
And that is the first thing which, in my view, fits every modern definition of progress. Francis Bacon died from pneumonia contracted while experimenting with using snow to preserve chickens, attempting to give us refrigeration, by which food could be stored and spread across a hungry world. Bacon envisioned technological progress, medical progress, but also the small social progresses those would create, not just Renaissance glories for the prince and the cathedral, but food for the shepherd, rest for the farmer, little by little, progress. As Bacon’s followers reexamined medicine from the ground up, throwing out old theories and developing…
I’m going to tangent for a moment. It really took two hundred years for Bacon’s academy to develop anything useful. There was a lot of dissecting animals, and exploding metal spheres, and refracting light, and describing gravity, and it was very, very exciting, and a lot of it was correct, but–as the eloquent James Hankins put it–it was actually the nineteenth century that finally paid Francis Bacon’s I.O.U., his promise that, if you channel an unfathomable research budget, and feed the smartest youths of your society into science, someday we’ll be able to do things we can’t do now, like refrigerate chickens, or cure rabies, or anesthetize. There were a few useful advances (better navigational instruments, Franklin’s lightning rod) but for two hundred years most of science’s fruits were devices with no function beyond demonstrating scientific principles. Two hundred years is a long time for a vastly-complex society-wide project to keep getting support and enthusiasm, fed by nothing but pure confidence that these discoveries streaming out of the Royal Society papers will eventually someday actually do something. I just think… I just think that keeping it up for two hundred years before it paid off, that’s… that’s really cool.
…okay, I was in the middle of a sentence: As Bacon’s followers reexamined science from the ground up, throwing out old theories and developing new correct ones which would eventually enable effective advances, it didn’t take long for his followers to apply his principle (that we should attack everything with Reason’s razor and keep only what stands) to social questions: legal systems, laws, judicial practices, customs, social mores, social classes, religion, government… treason, heresy… hello, Thomas Hobbes. In fact the scientific method that Bacon pitched, the idea of progress, proved effective in causing social change long before it yielded genuinely useful technology. Effectively the call was: “Hey, science will improve our technology! It’s… it’s not doing anything yet, so… let’s try it out on society? Yeah, that’s doing… something… and — Oh! — now the technology’s doing stuff too!” Except that sentence took three hundred years.
We know now, as Bacon’s successors learned, with harsher and harsher vividness in successive generations, that attempts at progress can also cause negative effects, atrocious ones. Like Thomas Hobbes. And the Terror phase of the French Revolution. And the life-expectancy in cities plummeting as industrialization spread soot, and pollutants, and cholera, and mercury-impregnated wallpaper, and lead-whitened bread, Mmmmm lead-whitened bread… And just as technological discoveries had their monstrous offspring, like lead-whitened bread, the horrors of colonization were some of the monstrous offspring of the social applications of Reason. Monstrous offspring we are still wrestling with today.
Part 3: Progresses
We now use the word “progress” in many senses, many more than Bacon and his peers did. There is “technological progress.” There is “social progress.” There is “economic progress.” We sometimes lump these together, and sometimes separate them.
Thus the general question “Has progress failed?” can mean several things. It can mean, “Have our collective efforts toward the improvement of the human condition failed to achieve their desired results?” This is being asked frequently these days in the context of social progress, as efforts toward equality and tolerance are facing backlash.
But “Has progress failed?” can also mean “Has the development of science and technology, our application of Reason to things, failed to make the lived experience of people better, happier, less painful? Have the changes been bad or neutral instead of good?” In other words, was Bacon right that humans using Reason and science can change our world, but wrong that we can make it better?
I want to stress that it is no small intellectual transformation that “progress” can now be used in a negative sense as well as a positive one. The concept as Bacon crystallized it, and as the Enlightenment spread it, was inherently positive, and to use it in a negative sense would be nonsensical, like using “healing” in a negative sense. But look at how we actually use “progress” in speech today. Sometimes it is positive (“Great progress this year!”) and sometimes negative (“Swallowed up by progress…”). This is a revolutionary change from Bacon’s day, enabled by two differences between ourselves and Bacon.
First we have watched the last several centuries. For us, progress is sometimes the first heart transplant and the footprints on the Moon, and sometimes it’s the Belgian Congo with its Heart of Darkness. Sometimes it’s the annihilation of smallpox and sometimes it’s polio becoming worse as a result of sanitation instead of better. Sometimes it’s Geraldine Roman, the Philippines’ first transgender congresswoman, and sometimes it’s Cristina Calderón, the last living speaker of the Yaghan language. Progress has yielded fruits much more complex than honey, which makes sentences like “The prison of progress” sensical to us.
We have also broadened progress. For Bacon, progress was the honey and the honeybees, hard, systematic, intentional human action creating something sweet and useful for mankind. It was good. It was new. And it was intentional. In its nascent form, Bacon’s progress did not differentiate between progress the phenomenon and progress the concept. If you asked Bacon “Was there progress in the Middle Ages?” he would have answered, “No. We’re starting to have progress right now.” And he’s correct about the concept being new, about intentional or self-aware progress, progress as a conscious effort, being new. But if we turn to Wikipedia, it defines “Progress (historical)” as “the idea that the world can become increasingly better in terms of science, technology, modernization, liberty, democracy, quality of life, etc.” Notice how agency and intentionality are absent from this definition. There was technological and social change before 1600; there were even technological and social changes that undeniably made things better, even if they came less frequently than they do in the modern world. So the phenomenon is something we can trace through the whole of history, far before the maturation of the concept.
As “progress” broadened to include unsystematic progress as well as the modern project of progress, that was the moment we acquired the questions “Is progress natural?” and “Is progress inevitable?” Because those questions require progress to be something that happens whether people intend it or not. In a sense, Bacon’s notion of progress wasn’t as teleological as Whig history. Bacon believed that human action could begin the process of progress, and that God gave Reason to humanity with this end in mind, but Bacon thought humans had to use a system, act intentionally, gather the pollen to make the honey; he didn’t think the honey just flowed. Not until progress is broadened to include pre-modern progress, and non-systematic, non-intentional modern progress, can the fully teleological idea of an inescapable momentum, an inevitability, join the manifold implications of the word “progress.”
Now I’m going to show you two maps.
This is a map of global population, rendered to look like a terrain. It shows the jagged mountain ranges of south and east Asia, the vast, sweeping valleys of forest and wilderness. The most jagged spikes may be a little jarring, the intensity of India and China, but even those are rich brown mountains, while the whole thing has the mood of a semi-untouched world, much more pastoral wilderness than city, and almost everywhere a healthy green. This makes progress, or at least the spread of population, feel like a natural phenomenon, a neutral phenomenon.
This is the Human Ooze Map. This map shows exactly the same data, reoriented to drip down instead of spiking up, and to be a pus-like yellow against an ominous black background. Instantly the human metropolises are not natural spikes within a healthy terrain, but an infection clinging to every oozing coastline, with the densest mega-cities seeming to bleed out amidst the goop, like open pustules.
Both these maps show one aspect of ‘progress’. Whether the teeming cities of our modern day are an apocalyptic infection, or a force as natural as the meandering of shores and tree-lines, depends on how we present the narrative, and the moral assumptions that underlie that presentation. Presentism and the progress narrative in general have very similar distorting effects. When we examine past phenomena, institutions, events, people, ideas, some feel viscerally good or viscerally bad, right or wrong, forward-moving or backward-moving, values they acquire from narratives which we ourselves have created, and which orient how we analyze history, just as these mapmakers have oriented population up, or down, resulting in radically different feelings. Jean-Jacques Rousseau’s model of the Noble Savage, happier in the rural simplicity of Lucretius’s Golden Age than in the stressful ever-changing urban world of progress, is itself an image of progress presented like the Human Ooze Map, reversing the moral presentation of the same facts.
Realizing that the ways we present data about progress are themselves morally charged can help us clarify questions that are being asked right now about liberalism, and nationalism, and social change, and opposition to social change. Because when we ask whether the world is experiencing a “failure” or a “revolution” or a “regression” or a “backlash” or a “last gasp” or a “pendulum swing” or a “prelude to triumph” etc., all these characterizations reorient data around different facets of the concept of progress, positive or negative, natural or intentional, just as these two maps reorient population around different morally-charged visualizations.
In sum: post colonialism, post industrialization, post Hobbes, we can no longer talk about progress as a unilateral, uncomplicated good, not without distorting history and ignoring the terrible facets of the last several centuries. Bacon thought there would be only honey; he was wrong. But we can’t simply stop discussing progress, because, during these same centuries, each generation’s experience has been increasingly different from the last generation’s, and science and human action are propelling this change. And there has been some honey. We need ways to talk about that.
But not without bearing in mind how we invest progress with different kinds of moral weight (the terrain or the ooze…)
And not without a question Bacon never thought to ask, because he did not realize (as we do) that technological and social change had been going on for many centuries before he made the action conscious. So Bacon never thought to ask: Do we have any power over progress?
Part 4: Do Individuals Have the Power to Change History?
Feelings of helplessness and despair have also been big parts of the shock of 2016. Helplessness and despair are questions, as well as feelings. They ask: Am I powerless? Can I personally do anything to change this? Do individuals have any power to shape history? Are we just swept along by the vast tides of social forces? Are we just cogs in the machine? What changes history?
Within a history department this divide often manifests methodologically.
Economic historians, and social historians, write masterful examinations of how vast social and economic forces, and their changes, whether incremental or rapid, have shaped history. Let’s call that Great Forces history. Whenever you hear people comparing our current wealth gap to the eve of the French Revolution, that is Great Forces history. When a Marxist talks about the inevitable interactions of proletariat and bourgeoisie, or when a Whig historian talks about the inevitable march of progress, those are also kinds of Great Forces history.
Great Forces history is wonderful, invaluable. It lets us draw illuminating comparisons, and helps us predict, not what will happen but what could happen, by looking at what has happened in similar circumstances. I mentioned earlier the French Wars of Religion, with their intermittent blips of peace. My excellent colleague Brian Sandberg of NIU (a brilliant historian of violence) recently pointed out to me that France during the Catholic-Protestant religious wars was about 10% Protestant, somewhat comparable to the African American population of the USA today, which is around 13%. A striking comparison, though with stark differences. In particular, France’s Protestant/Calvinist population fell disproportionately in the wealthy, politically-empowered aristocratic class (comprising 30% of the ruling class), in contrast with African Americans today, who fall disproportionately in the poorer, politically-disempowered classes. These similarities and differences make it very fruitful to look at the mechanisms of civil violence in 16th and 17th century France (how outbreaks of violence started, how they ended, who against whom) to help us understand the similar-yet-different ways civil violence might operate around us now. That kind of comparison is, in my view, Great Forces history at its most fruitful. (You can read more by Brian Sandberg on this issue in his book, on his blog, and on the Center for the Study of Religious Violence blog; more citations at the end of this article.)
But are we all, then, helpless water droplets, with no power beyond our infinitesimal contribution to the tidal forces of our history? Is there room for human agency?
History departments also have biographers, and intellectual historians, and micro-historians, who churn out brilliant histories of how one town, one woman, one invention, one idea reshaped our world. Readers have seen me do this here on Ex Urbe, describing how Beccaria persuaded Europe to discontinue torture, how Petrarch sparked the Renaissance, how Machiavelli gave us so much. Histories of agents, of people who changed the world. Such histories are absolutely true — just as the Great Forces histories are — but if Great Forces histories tell us we are helpless droplets in a great wave, these histories give us hope that human agency, our power to act meaningfully upon our world, is real. I am quite certain that one of the causes of the explosive response to the Hamilton musical right now is its firm, optimistic message that, yes, individuals can, and in fact did, reshape this world — and so can we.
This kind of history, inspiring as it is, is also dangerous. The antiquated/old-fashioned/bad version of this kind of history is Great Man history, the model epitomized by Thomas Carlyle’s Heroes, Hero-Worship and the Heroic in History (a gorgeous read) which presents humanity as a kind of inert but rich medium, like agar ready for a bacterial culture. Onto this great and ready stage, Nature (or God or Providence) periodically sends a Great Man, a leader, inventor, revolutionary, firebrand, who makes empires rise, or fall, or leads us out of the black of ignorance. Great Man history is very prone to erasing everyone outside a narrow elite, erasing women, erasing the negative consequences of the actions of Great Men, justifying atrocities as the collateral damage of greatness, and other problems which I hope are familiar to my readers.
But when done well, histories of human agency are valuable. Are true. Are hope.
So if Great Forces history is correct, and useful, and Human Agency history is also correct, and useful… how do we balance that? They are, after all, contradictory.
Part 5: The Papal Election of 2016
Every year in my Italian Renaissance class, here at the University of Chicago, I run a simulation of a Renaissance papal election, circa 1490-1500. Each student is a different participant in the process, and they negotiate, form coalitions, and, eventually, elect a pope. And then they have a war, and destroy some chunk of Europe. Each student receives a packet describing that student’s character’s goals, background, personality, allies and enemies, and a packet of resources, cards representing money, titles, treasures, armies, nieces and nephews one can marry off, contracts one can sign, artists or scholars one can use to boost one’s influence, or trade to others as commodities: “I’ll give you Leonardo if you send three armies to guard my city from the French.”
Some students in the simulation play powerful Cardinals wielding vast economic resources and power networks, with clients and subordinates, complicated political agendas, and a strong shot at the papacy. Others are minor Cardinals, with debts, vulnerabilities, short-term needs due to some personal crisis in their home cities, or long-term hopes of rising on the coattails of others and perhaps being elected three or four popes from now. Others, locked in a secret chamber in the basement, are the Crowned Heads of Europe — the King of France, the Queen of Castile, the Holy Roman Emperor — who smuggle secret orders (text messages) to their agents in the conclave, attempting to forge alliances with Italian powers, and gain influence over the papacy so they can use Church power to strengthen their plans to launch invasions or lay claim to distant thrones. And others are not Cardinals at all but functionaries who count the votes, distribute the food, the guard who keeps watch, the choir director who entertains the churchmen locked in the Sistine, who have no votes but can hear, and watch, and whisper.
There are many aspects to this simulation, which I may someday discuss here at greater length (for now you can read a bit about it on our History Department blog), but for the moment I just want to talk about the outcomes, and what structures the outcomes. I designed this simulation not to have any pre-set outcome. I looked into the period as best I could, and gave each historical figure the resources and goals that I felt accurately reflected that person’s real historical resources and actions. I also intentionally moved some characters in time, including some Cardinals and political issues which do not quite overlap with each other, in order to make this an alternate history, not a mechanical reconstruction, so that students who already knew what happened to Italy in this period would know they couldn’t have the “correct” outcome even if they tried, which frees everyone to pursue goals, not “correct” choices, and to genuinely explore the range of what could happen without being too locked in to what did. I set up the tensions and the actors to simulate what I felt the situation was when the election began, then left it free to flow.
I have now run the simulation four times. Each time some outcomes are similar, similar enough that they are clearly locked in by the greater political webs and economic forces. The same few powerful Cardinals are always leading candidates for the throne. There is usually also a wildcard candidate, someone who has never before been one of the top contenders, but circumstances bring a coalition together. And, usually, perhaps inevitably, a juggernaut wins, one of the Cardinals who began with a strong power-base, but it’s usually very, very close. And the efforts of the wildcard candidate, and the coalition that formed around that wildcard, always have a powerful effect on the new pope’s policies and first actions, who’s in the inner circle and who’s out, what opposition parties form, and that determines which city-states rise and which city-states burn as Italy erupts in war.
And the war is Always. Totally. Different.
Because as the monarchies race to make alliances and team up against their enemies, they get pulled back-and-forth by the ricocheting consequences of small actions: a marriage, an insult, a bribe traded for a whisper, someone paying off someone else’s debts, someone taking a shine to a bright young thing. Sometimes France invades Spain. Sometimes France and Spain unite to invade the Holy Roman Empire. Sometimes England and Spain unite to keep the French out of Italy. Sometimes France and the Empire unite to keep Spain out of Italy. Once they made a giant pan-European peace treaty, with a set of marriage alliances which looked likely to permanently unify all four great Crowns, but it was shattered by the sudden assassination of a crown prince.
So when I tell people about this election, and they ask me “Does it always have the same outcome?” the answer is yes and no. Because the Great Forces always push the same way. The strong factions are strong. Money is power. Blood is thicker than promises. Virtue is manipulable. In the end, a bad man will be pope. And he will do bad things. The war is coming, and the land — some land somewhere — will burn. But the details are always different. A Cardinal needs to gather fourteen votes to get the throne, but it’s never the same fourteen votes, so it’s never the same fourteen people who get papal favor, whose agendas are strengthened, whose homelands prosper while their enemies fall. And I have never once seen a pope elected in this simulation who did not owe his victory, not only to those who voted, but to one or more of the humble functionaries, who repeated just the right whisper at just the right moment, and genuinely handed the throne to Monster A instead of Monster B. And from that functionary flow the consequences. There are always several kingmakers in the election, who often do more than the candidate himself to get him on the throne, but what they do, who they help, and which kingmaker ends up most favored, most influential, can change a small war in Genoa into a huge war in Burgundy, a union of thrones between France and England into another century of guns and steel, or determine which decrees the new pope signs. That sometimes matters more than whether war is in Burgundy or Genoa, since papal signatures resolve questions such as: Who gets the New World? Will there be another crusade? Will the Inquisition grow more tolerant or less toward new philosophies? Who gets to be King of Naples? These things are different every time, though shaped by the same forces.
Frequently the most explosive action is right after the pope is elected, after the Great Forces have thrust a bad man onto Saint Peter’s throne, and set the great and somber stage for war, often that’s the moment that I see human action do most. That’s when I get the after-midnight message on the day before the war begins: “Secret meeting. 9AM. Economics cafe. Make sure no one sees you. Sforza, Medici, D’Este, Dominicans. Borgia has the throne but he will not be master of Italy.” And together, these brave and haste-born allies, they… faicceed? Fail and succeed? They give it all they have: diplomacy, force, wealth, guile, all woven together. They strike. The bad pope rages, sends forces out to smite these enemies. The kings and great thrones take advantage, launch invasions. The armies clash. One of the rebel cities burns, but the other five survive, and Borgia (that year at least) is not Master of Italy.
We feel it, the students as much as myself, coming out of the simulation. The Great Forces were real, and were unstoppable. The dam was about to break. No one could stop it. But the human agents — even the tiniest junior clerk who does the paperwork — the human agents shaped what happened, and every action had its consequences, imperfect, entwined, but real. The dam was about to break, but every person there got to dig a channel to try to direct the waters once they flowed, and that is what determined the real shape of the flood, its path, its damage. No one controlled what happened, and no one could predict what happened, but those who worked hard and dug their channels, most of them succeeded in diverting most of the damage, achieving many of their goals, preventing the worst. Not all, but most.
And what I see in the simulation I also see over and over in real historical sources.
This is how both kinds of history are true. There are Great Forces. Economics, class, wealth gaps, prosperity, stagnation, these Great Forces make particular historical moments ripe for change, ripe for war, ripe for wealth, ripe for crisis, ripe for healing, ripe for peace. But individuals also have real agency, and our actions determine the actual consequences of these Great Forces as they reshape our world. We have to understand both, and study both, and act on the world now remembering that both are real.
So, can human beings control progress? Yes and no.
Part 6: Ways to Talk About Progress in the 21st Century
Few things have taught me more about the world than keeping a fish tank.
You get some new fish, put them in your fish tank, everything’s fine. You get some more new fish, the next morning one of them has killed almost all the others. Another time you get a new fish and it’s all gaspy and pumping its gills desperately, because it’s from alkaline waters and your tank is too acidic for it. So you put in a little pH adjusting powder and… all the other fish get sick from the ammonia that releases and die. Another time you get a new fish and it’s sick! So you put fish antibiotics in the water, aaaand… they kill all the symbiotic bacteria in your filter system and the water gets filled with rotting bacteria, and the fish die. Another time you do absolutely nothing, and the fish die.
What’s happening? The same thing that happened in the first two centuries after Francis Bacon, when science was learning tons, but achieving little that actually improved daily life. The system is more complex than it seems. A change which achieves its intended purpose also throws out-of-whack vital forces you did not realize were connected to it. The acidity buffer in the fish tank increases the nutrients in the water, which causes an algae bloom, which uses up the oxygen and suffocates the catfish. The marriage alliance between Milan and Ferrara makes Venice friends with Milan, which makes Venice’s rival Genoa side with Spain, which makes Spain reluctant to anger Portugal, which makes them agree to a marriage alliance, and then Spain is out of princesses and can’t marry the Prince of Wales, and the next thing you know there are soldiers from Scotland attacking Bologna. A seventeenth-century surgeon realizes that cataracts are caused by something white and opaque appearing at the front of the eye, so removes it, not yet understanding that it’s the lens and you really need it.
So when I hear people ask “Has social progress failed?” or “Has liberalism failed?” or “Has the Civil Rights Movement failed?” my zoomed-in self, my scared self, the self living in this crisis feels afraid and uncertain, but my zoomed-out self, my historian self, answers very easily. No. These movements have done wonders, achieved tons! But they have also done what all movements do in a dynamic historical system: they have had large, complicated consequences. They have added something to the fish tank. Because the same Enlightenment impulse to make a better, more rational world, where everyone would have education and equal political empowerment, BOTH caused the brutalities of the Belgian Congo AND gave me the vote. And that’s the sort of thing historians look at, all day.
But if the consequences of our actions are completely unpredictable, would it be better to say that change is real but progress controlled by humans is just an idea which turned out to be wrong? No. I say no. Because I gradually got better at understanding the fish tank. Because the doctors gradually figured out how the eye really does function. Because some of our civil rights have come by blood and war, and others have come through negotiation and agreement. Because we as humans are gradually learning more about how our world is interconnected, and how we can take action within that interconnected system. And by doing so we really have achieved some of what Francis Bacon and his followers waited for through those long centuries: we have made the next generation’s experience on this Earth a little better than our own. Not smoothly, and not quickly, but actually. Because, in my mock papal election, the dam did break, but those students who worked hard to dig their channels did direct the flood, and most of them managed to achieve some of what they aimed at, though they always caused some other effects too.
Is it still blowing up in our faces?
Is it going to keep blowing up in our faces, over and over?
Is it going to blow up so much, sometimes, that it doesn’t seem like it’s actually any better?
Is that still progress?
Because there was a baby in the bathwater of Whig history. If we work hard at it, we can find metrics for comparing times and places which don’t privilege particular ideologies. Metrics like infant mortality. Metrics like malnutrition. Metrics like the frequency of massacres. We can even find metrics for social progress which don’t irrevocably privilege a particular Western value system. One of my favorite social progress metrics is: “What portion of the population of this society can be murdered by a different portion of the population and have the murderer suffer no meaningful consequences?” The answer, for America in 2017, is not 0%. But it’s also not 90%. That number has gone down, and is now far below the geohistorical norm. That is progress. That, and infant mortality, and the conquest of smallpox. These are genuine improvements to the human condition, of the sort that Bacon and his followers believed would come if they kept working to learn the causes and secret motions of things. And they were right. While Whig history privileges a very narrow set of values, metrics which track things like infant mortality, or murder with impunity, still privilege particular values — life, justice, equality — but aim to be compatible with as many different cultures, and even time periods, as possible. They are metrics which stranded time travelers would find it fairly easy to explain, no matter where they were dumped in Earth’s broad timeline. At least that’s our aim. And such metrics are the best tool we have at present to make the comparisons, and have the discussions about progress, that we need to have to grapple with our changing world.
Because progress is both a concept and a phenomenon.
The concept is the hope that collective human effort can make every generation’s experience on this Earth a little better than the previous generation’s. That concept has itself become a mighty force shaping the human experience, like communism, iron, or the wheel. It is a valuable thing to look at the effects that concept has had, to talk about how some have been destructive and others constructive, and to study, from a zoomed-out perspective, the consequences, successes, and failures of different movements or individuals who have acted in the name of progress.
The phenomenon is also real. My own personal assessment of it is just that, a personal assessment, with no authority beyond some years spent studying history. I hope to keep reexamining and improving this assessment all the days of my life. But here at the beginning of 2017 I would say this:
Progress is not inevitable, but it is happening.
It is not transparent, but it is visible.
It is not safe, but it is beneficial.
It is not linear, but it is directional.
It is not controllable, but it is us. In fact, it is nothing but us.
Progress is also natural, in my view, not in the sense that it will inevitably triumph over its doomed opposition, but in the sense that the human animal is part of nature, so the Declaration of the Rights of Man is as natural as a bird’s nest or a beaver dam. There is no teleology, no inevitable correct ending locked in from time immemorial. But I personally think there is a certain outcome to progress, gradual but certain: the decrease of pain in the human condition over time. Because there is so much desire in this world to make a better one. Bacon was right that we ache for it. And the real measurable changes we have made show that he was also right that we can use Reason and collective effort to meet our desires, even if the process is agonizingly slow, imperfect, and dangerous. But we know now how to go about learning the causes and secret motions of things. And how to use that knowledge.
We are also learning to understand the accidental negative consequences of progress, looking out for them, mitigating them, preventing them, creating safety nets. We’re getting better at it. Slowly, but we are.
Zooming back in hurts. It’s easy to say “the French Wars of Religion” and erase the little blips of peace, but it’s hard to feel fear and pain, or watch a friend feel fear and pain. Sometimes I hear people say they think that things today are worse than they’ve ever been, especially the hate, or the race relations in the USA, that they’re worse now than ever. That we’ve made no progress, quite the opposite. Similarly, I think a person who grew up during one of the peaceful pauses in the French Wars of Religion might say, when the violence restarted, that the wars were worse now than they had ever been, and farther than ever from real peace. They aren’t actually worse now. They genuinely were worse before. But they are really, really bad right now, and it does really, really hurt.
The slowness of social progress is painful, I think especially because it’s the aspect of progress that seemed it would come fastest. During that first century, when Bacon’s followers were waiting in maddening impatience for their better medical knowledge to result in any actual increase in their ability to save lives, social progress was already working wonders. The Enlightenment extended the franchise, ended torture across an entire continent, achieved much, and had this great, heady, explosive feeling of victory and momentum. It seemed like social progress was already halfway done before tech even got started. But Charles Babbage kicked off programmable computing in 1833 and now my pocket contains 100x the computing power needed to get Apollo XI to the Moon, so why, if Olympe de Gouges wrote the Declaration of the Rights of Woman and the Citizen in 1791, do we still not have equal pay?
Because society is a very complicated fish tank. Because we still have a lot to learn about the causes and secret motions of society.
But if there is a dam right now, ready to break and usher in a change, Great Forces are still shaped by human action. Our action.
Studying history has proved to me, over and over, that things used to be worse. That they are better now. Progress is real. That’s a consolation, but a hollow one while we’re still here facing the pain. What fills its hollowness, for me at least, is remembering that secret meeting in the Economics cafe, that hasty plan, diplomacy, quick action — not a second chance after the disaster, but a next chance. And a next. And a next, to take actions that really did achieve things, even if not everything. Human action combining with the flood is not powerlessness. And that’s how I think progress really works.
And as promised, more citations on the demographics of religious violence in France, with thanks to Brian Sandberg:
Brian Sandberg, Warrior Pursuits: Noble Culture and Civil Conflict in Early Modern France (Baltimore, MD: Johns Hopkins University Press, 2010).
Philip Benedict, “The Huguenot Population of France, 1600-85,” in The Faith and Fortunes of France’s Huguenots, 1600-85 (Aldershot: Ashgate, 2001), 39-42, 92-95.
Arlette Jouanna, La France du XVIe siècle, 1483-1598 (Paris: Presses Universitaires de France, 1996), 325-340.
Jacques Dupâquier, ed., De la Renaissance à 1789, vol. 2 of Histoire de la population française (Paris: Presses Universitaires de France, 1988), 81-94.
Off to Italy again. This seems like a good time to share a link to a video of an illustrated talk Ada gave at the Lumen Christi institute in Chicago in February. It’s a fascinating overview of the place of San Marco in Florence, with lots of excellent pictures. It’s like an audio version of an Ex Urbe post, with Fra Angelico, the meaning of blue, the Magi, the Medici, Savonarola, confraternities, and the complexities of Renaissance religious and artistic patronage.
And here’s one of the pictures mentioned but not shown in the presentation, a nine-panel illustration by Filippo Dolciati, “The History of Antonio Rinaldeschi.” It depicts the real historical fate of Rinaldeschi, who became drunk while gambling and threw manure at an icon of the Virgin Mary. A fascinating incident for demonstrating the functions of confraternities, and for demonstrating how seriously the people of Florence took the protection offered by saints and icons.
Second, due to a recent policy change in Italy’s national museums I was able to finally take literally thousands of photos of artifacts and spaces in museums that have been forbidden to cameras for years. I’ve started sharing the photos on Twitter (#historypix) so follow me on Twitter if you would enjoy random photos of cool historical artifacts twice a day.
Meanwhile I don’t yet have another full essay ready to post here, but I’m happy to say the reason is that I’m working away on the page proofs of Too Like the Lightning, the final editing step before the books go to press. I’ve even received a photo from my editor of the Advanced Release Copies for book reviewers sitting in a delicious little pile! It’s fun seeing how many different baby steps the book is taking on its long path to becoming real: cover art, page count, typography, physicality in many stages, first the pre-copy-edit Advanced Bound Manuscripts, then the post-copy-edit but pre-page-proof Advanced Release Copies, evolving toward the final hardcover transformation by transformation. My biggest point of suspense at this point is wondering how fat it will be, how heavy in the hand…
And now, a quick piece of history fun:
There is a dimly-lit hallway half way through the Vatican museum (after you’ve looked at 2,000 Roman marbles, 1,000 Etruscan vases and enough overwhelming architecture to make you start feeling slightly punchy) hung on the left-hand side with stunning tapestries of scenes from the life of Christ based on cartoons by Raphael. But on the right-hand side in the same hallway, largely ignored by the thousands of visitors who stumble through, is my favorite Renaissance tapestry cycle, a sequence of images of The Excessively Exciting Life of Pope Urban VIII. My best summary of these images is that, when I showed them to my excellent friend Jonathan (author of our What Color is Pluto? guest post) he scratched his chin and said, “I think the patronage system may have introduced some bias.” And it’s very true, these are an amazing example of Renaissance art whose sole purpose is excessive flattery of the patron, a genre common in all media: histories, biographies, dedications, sculptures, paintings, verses, and, in this case, thread.
These tapestries are fragile and quite faded, and the narrow hallway thronging with Raphael-admirers makes it awkward to get a good angle, but with much effort I think these capture the over-the-top absurdity which makes these tapestries such a delight. Urban VIII now is best known for engaging in unusually complicated military and political maneuvering, expanding and fortifying the papal territories, pushing fiercely against Hapsburg expansion into Italy, finishing the canonization of St. Ignatius of Loyola, persecuting Galileo, commissioning a lot of Bernini sculptures, and spending so much on military and artistic expenses that he got the papacy so head over heels in debt that the Roman people hated him, the Cardinals conspired to depose him (note: it usually takes a few high-profile murders and/or orgies to get them to do that, so this was a LOT of debt), and his successor was left spending 80% of the Vatican’s annual income on interest repayments alone. But let’s see what scenes from his life he himself wanted us to remember:
My favorite is the first: Angels and Muses descend from Heaven to attend the college graduation of young Maffeo Barberini (not yet pope Urban VIII) and give him a laurel crown. If all graduation ceremonies were this exciting, we’d never miss them! Also someone there has a Caduceus, some weird female version of Hermes? Hard to say. And look at the amazing fabric on the robe of the man overseeing the ceremony.
Second, Maffeo Barberini receives the Cardinal’s Hat, attended by an angel, while Pope Paul V who is giving him the hat points in a heavy-handed foreshadowing way to his own pope hat nearby. What could it mean?!
Next, the fateful election! Heavenly allegories of princely virtues come to watch as the wooden slips are counted and the vote counter is astonished by the dramatic result! Note how, propaganda aside, this is useful for showing us what the slips looked like.
In the one above I particularly like the guy who’s peering into the goblet to make absolutely sure no slips are stuck there:
On the other side of the same scene, our modest Urban VIII is so surprised to be elected he practically swoons! And even demands a recount, while the nice acolyte kneels before him with the (excessively heavy) papal tiara on a silver platter.
Now Urban’s adventures as pope! He breaks ground for new construction projects in Rome, attended by some floating cupid creature holding a book for the flying allegorical heart of the city:
He builds new fortresses to defend Rome:
He makes peace between allegorical ladies representing Rome and Etruria (the area right next to Rome: note, if there is strife between Rome and Etruria in the first place, things in Italy are VERY VERY BAD! But the tapestries aren’t going into that):
And finally, Urban VIII defends Rome from Famine and Plague by getting help from St. Peter, St. Paul, Athena, and St. Sebastian. Well done, your Holiness!
How about that for the exciting life of a late Renaissance pope? You get to hang out with lots of allegorical figures, and vaguely pagan deities as well as saints, and everyone around you is always gesturing gracefully! No wonder they fought so hard for the papal tiara. Also, no bankers or moneylenders or interest repayment to be found!
More seriously, another century’s propaganda rarely makes it into our canon of what art is worth reproducing, teaching and discussing, but I often find this kind of artifact much more historically informative than most: we can learn details of clothing, spaces and items like how papers are folded, or what voting slips looked like. We can learn which acts a political figure wanted to be remembered for, what seemed important at the time, so different from what we remember. A tapestry of him canonizing St. Ignatius of Loyola would certainly be popular now, but in his day people cared more about immediate military matters, and he had no way to predict how important St. Ignatius would eventually become. Pieces like this are also a good way to remind ourselves that the Renaissance art we usually see on calendars and cell phone cases isn’t representative; it’s our own curated selection of that tiny Venn diagram intersection of art that fits the tastes of BOTH then AND now. And a good reminder that we should always attend graduation ceremonies, since you never know when Angels and Muses might descend from Heaven to attend.
My own period I will treat the most briefly in this survey. This may seem like a strange choice, but I can either do a general overview, or get sidetracked discussing individual philosophers, theologians and commentators and their uses of skepticism for another five posts. So, in brief:
In the later Middle Ages, within the philosophical world, the breadth of disagreement within scholarship, how different the far extreme theories were on any given topic, was rather circumscribed. A good example of a really fractious fight is the question of, within your generally Aristotelian tripartite rational immortal soul, which of the two decision-making principles is more powerful, the Intellect or the Will? It’s a big and important question – without it we will starve to death like Buridan’s ass, and be unable to decide whether to send our second sons to a Franciscan or a Dominican monastery, plus we need it to understand how Original Sin, Grace and salvation work. But the breadth of answers is not that big, and the question itself presumes that everyone involved already believes 90% the same thing.
Enter Petrarch, “Let’s read the classics! They’ll make us great like the Romans!” Begin 250 years of working really hard to find, copy, correct, translate, edit, print and proliferate every syllable surviving from antiquity. Now we discover that Epicurus says there’s no afterlife and the universe is made of atoms; Stoics say the universe is one giant contiguous object without motion or individual existence; Plato says there’s reincarnation (What? The Plato we used to have didn’t say that!); and Aristotle totally doesn’t say what we thought he said, it turns out the Organon was a terrible translation (Sorry, Boethius, you did your best, and we love you, but it was a terrible translation.) Suddenly the palette of questions is much broader, and the degree to which people disagree has opened exponentially wider. If we were charting a solar system before, now we’re charting a galaxy. But the humanists still tried hard to make them all agree, much as the scholastics and Peter Abelard had, since the ancients were ALL wonderful and ALL brilliant and ALL right, right? Even the stuff that contradicts the other stuff? Hence Renaissance Syncretism, attempts by philosophers like Marsilio Ficino and Giovanni Pico della Mirandola to take all the authors of antiquity, and Aquinas and a few others in the mix, and show how they were all really saying the same thing, in a roundabout, hidden, glorious, elusive, poetic, we-can-make-like-Abelard-and-make-it-all-make-sense way.
Before you dismiss these syncretic experiments as silly, or as slavish toadying, there is a logic to it if you can zoom out from modern pluralistic thinking for a minute and look at what Renaissance intellectuals had to work with.
To follow their logic chain you must begin–as they did–by positing that Christianity is true, and there is a single monotheistic God who is the source of all goodness, virtue, and knowledge. Wisdom, being wise and good at judgment, helps you tell true from false and right from wrong, and what is true and right will always agree with and point toward God. Therefore all wise people in history have really been aiming toward the same thing–one truth, one source. Plato and Aristotle and their Criteria of Truth are in the background of this, Plato’s description of the Good which is one divine thing that all reasoning minds tend toward, and Aristotle’s idea that reasoning people (philosophers, scientists) working without error will come to identical conclusions even if they’re on opposite sides of the world, because the knowable categories (fish, equilateral triangle, good) are universal. Thus, as Plato and Aristotle say we use reason to gradually approach knowledge, all philosophers in history have been working toward the same thing, and differ only in the errors they make along the way. This is the logic, but they also have evidence, and here you have to remember that Renaissance scholars did not have our modern tools for evaluating chronology and influence. They looked at early Christian writings, and they looked at Plato and Aristotle, and they said, as we do, “Wow, Plato and Aristotle have a lot of ideas in common with these early Christians!” but while we conclude, “Early Christians sure were influenced by Plato and Aristotle,” they instead concluded, “This proves that Plato and Aristotle were aiming toward the same things as Christianity!” And they had further evidence from how tangled their chronologies were. There were certain key texts like the Chaldean Oracles which they thought were much much older than we now think they are, which made it look like ideas we attribute to Plato had independently existed well before Plato. 
They looked at Plotinus and other late antique Neoplatonists who mixed Plato and Aristotle but claimed the Aristotelian bits were really hidden inside Plato the whole time, and they concluded, “See, Plato and Aristotle were basically saying the same thing!” Similarly confusing were the works of the figure we now call Pseudo-Dionysius, who we think was a late antique Neoplatonist voicing a mature hybrid of Platonism and Aristotelianism with some Stoicism mixed in, but who Renaissance scholars believed was a disciple of Saint Paul, leading them to conclude that Saint Paul believed a lot of this stuff, and making it seem even more like Plato, Aristotle, Stoics, ancient mystics, and Christianity were all aiming at one thing. So any small differences are errors along the way, or resolvable with “sic et non.”
The problem came when they translated more and more texts, and found more contradictions than they could really handle. Ideas much wilder and more out there than they expected suddenly had authoritative possibly-sort-of-proto-Christian authors endorsing them. Settled questions were unsettled again, sleeping dragons woken. For example, it wasn’t until the Fifth Lateran Council in 1513 that the Church officially made belief in the immortality of the soul a required doctrine for all Christians, which does not mean that lots of Christians before 1513 didn’t believe in the afterlife, but that Christians in 1513 were anxious about belief in the afterlife, feeling that it and many other doctrines were suddenly in doubt which had stood un-threatened throughout the Middle Ages. The intellectual landscape was suddenly bigger and stranger.
Remember how I said Cicero would be back? All these humanists read Cicero constantly, including the philosophical dialogs with his approach of presenting different classical sects in dialog, all equally plausible but incompatible, leading to… skepticism. And as they explored those same sects more and more broadly, Cicero the skeptic became something of the wedge that started to expand the crack, not overtly stating “Hey, guys, these people don’t agree!” but certainly pressing the idea that they don’t agree, in ways which humanists had more and more trouble ignoring as more texts came back.
Aaaaaand the Reformation made this more extreme, a lot more extreme, by (A) generating an enormous new mass of theological claims made by contradictory parties, adding another arm to our galactic spiral, and (B) developing huge numbers of fierce and damning counter-arguments to all these claims, which in turn meant developing new tools for countering and eroding belief. Thus, as we reach the 1570s, the world of philosophy is a lot bigger, a lot deadlier (as the Reformation and Counter-Reformation killed many more people for their ideas than the Middle Ages did), and a lot scarier, with vast swarms of arguments and counter-arguments, many of them powerful, persuasive, beautifully reasoned, and completely incompatible. And when you make a beautiful yes-and-no attempt to make Plato and Epicurus agree, you don’t have the men themselves on hand to say “Excuse me, in fact, we don’t agree.” But you did have real live Reformation and Counter-Reformation theologians running around responding to each other in real time, which made syncretic reconciliation all the more impossible.
Remember how Abelard, who was able to make St. Jerome and St. Augustine seem to agree, drew followers like Woodstock? Well, now his successors–Scholastic and Humanist, since the Humanists were all ALSO reading Scholasticism all the time–have a thousand times as many authorities to reconcile. You think Jerome and Augustine is hard? Try Calvin and Epicurus! St. Dominic and Zwingli! Thomas Aquinas is a saint now, let’s see if you can Yes-and-No the entire Summa Theologica into agreeing with Epictetus, Pseudo-Dionysius and the Council of Trent at the same time! And remember, in the middle of all this, that most if not all of our Renaissance protagonists still believe in Hell and damnation (or at least something similar to it), and that if you’re wrong you burn in Hellfire forever and ever and ever and so do all your students and it’s your fault. Result: FEAR. And its companion, freethought. Contrary to what we might assume, this is not a case where fear stifled inquiry, but where it stimulated more, firing Renaissance thinkers with the burning need to have a solution to all these contradictions, some way to sort out the safe path amid a thousand pits of Hellfire. New syntheses were proposed, new taxonomies of positions and heresies outlined, and old beliefs reexamined and refined or reaffirmed. And this period of intellectual broadening and competition brought with it an increasing inability to believe that any one of these options is the only right way when there are so many, and they are so good at tearing each other down.
And in the middle of this, experimental and observational science is advancing rapidly, and causing more doubt. We discover new continents that don’t fit in a T-O map (Ptolemy is wrong), new plants that don’t fit existing plant taxonomy (Theophrastus is wrong), details about animals which don’t match Aristotle (we’d better hope he’s not wrong!), the circulation of the blood which turns the four humors theory on its head (Not Galen! We really needed him!), and magnification lets us finally see the complexity of a flea, and realize there is a whole unexplored micro-universe of detail too small for the naked eye to experience, raising the question “If God made the Earth for humans, why did God bother to make things humans can’t even perceive?”
Youth: “But, Socrates, why did experimental and observational science advance in that period? Discovering new stuff that isn’t in the classics doesn’t have anything to do with reconstructing antiquity, or with the Reformation, does it?”
Good question. A long answer would be a book, but I can make a quick stab at a short one. I would point at several factors. First, after 1300, and increasingly as we approach 1600, European rulers began competing in new ways, many of them cultural. As more and more nobles were convinced by the humanist claim that true nobility and power came from the lost arts of the ancients, so scholarship and unique knowledge, including knowledge of ancient sciences, became mandatory ornaments of court, and politically valuable as ways of advertising a ruler’s wealth and power. Monarchs and newly-risen families who had seized power through war or bribery could add a veneer of nobility by surrounding themselves with libraries, scholars, poets, and scientists, who studied the ancient scientific sources of Greece and Rome but, in order to understand them more fully, also studied newer sources coming from the Middle East, and did new experiments of their own. A new astronomical model of the heavens proclaimed the power of the patron who had paid for it, just as much as a fur-lined cloak or a diamond-studded scepter.
Add to this the increase in the scale of wars, caused by increased wealth which could raise larger armies, generating a situation in which new tools for warfare, and especially fortress construction, were increasingly in demand (when you read Leonardo’s discussions of his abilities, more than 75% of the inventions he mentions are tools of war). Add to that the printing press, which makes it possible for novelties–whether a rediscovered manuscript or a newly-discovered muscle–to spread exponentially faster, and which makes books much more affordable, so that if only one person in 50,000 could afford a library before, now it is one in 5,000, and even merchants could afford a few texts. Education was easier, and educated men were in demand at courts eager to fill themselves with scholars, and advertise their greatness with discoveries.
These are the main facilitators, but I would also cite another fundamental shift. I have talked before about Petrarch, and the humanist project to improve the world by reconstructing a lost golden age. This is the first philosophical movement since ancient stoicism that has had anything to do with the world, since medieval theology’s (perfectly rational in context!) desire to study the Eternal instead of the ephemeral meant that most scholars for many centuries had considered natural philosophy, the study of impermanent natural phenomena, as useless as studying the bathwater instead of the baby. Humanism generated a lot of arguments about why Earth and earthly things were worth more than nothing, even if they agreed Heaven and eternal things were more important, and I think the mindset which said it was a pious and worthwhile thing to translate Livy or write a treatise on good government contributed to the mindset which said it was a pious and worthwhile thing to measure mountains or write a treatise on metallurgy. Thought turned, just a little bit, toward Earth.
There, that’s the Renaissance and Reformation, oversimplified by necessity, but Descartes is chomping at the bit for what comes next. For those who want more, I shall do the crass thing here and say: for more detail, see my book Reading Lucretius in the Renaissance, or Popkin’s History of Skepticism, or wait.
At last, Montaigne!
Like the world which basked in his writings, and shuddered in his “crisis,” I love Montaigne. I love his sentences, his storytelling, his sincerity, his quips, his authorial voice. Reading Montaigne is like slowly enjoying a glass of whatever complex, rich and subtle beverage you most enjoy a glass of (wine for many, fresh goat milk for me!). Especially because, at the end, your glass is empty. (I see a contented Descartes nodding). When I set about writing this series, getting to Montaigne was, in fact, my secret end goal, since, if there is a founder of modern skepticism, it is Michel Eyquem de Montaigne.
Montaigne was unique, an experiment, the natural experiment to follow at the maturation of the Renaissance classical project but still, a unique child, raised as an overt pedagogical experiment outlined by his father: Montaigne grew up speaking only Latin. He was exposed to French in his first three years by country nurses, but from three on he was only allowed contact with people–his tutor, parents and servants–speaking Latin. He was a literal attempt to raise a Cicero or Caesar, formed exclusively by classical ideas, the ideal man that the humanists had been hoping to create. Greek was later added, not with textbooks and the rod as was usual in those days but with games and music, and studies were always made to seem pleasant and wonderful (his father even had the child woken every morning with delightful live music). He grew up to be about as perfect a Platonic Philosopher King as one could hope to imagine, studying law and entering politics, as his father wished, achieving the highest honors, but preferring life alone in his library, and frequently retiring to do just that, only to be dragged back into politics, quite literally by popular demand, by people who would come bang on his library door demanding that he come out to take up office and rule them. I think often about what it must have been like to be Montaigne, to be so immersed, enjoy these things so much, and only later discover that he was alone in a world with literally no other native speaker of his language. It must have been as difficult as it was wonderful to be Montaigne. But I think I understand why, when he lost his best friend Étienne de la Boétie, Montaigne wrote of his grief, his loss, the pain of solitude, with an intensity rarely approached in the history of human literature. He also wrote Essais, meandering writings, the source of the modern word “essay”, for which every schoolchild has the right to playfully curse him.
I will now go about explaining why Montaigne was so wonderful by describing Voltaire. Yes, it is an odd way to go about it, but the Voltaire example is clearer and more concise than any Montaigne example I have on hand, and, in this, Voltaire was a student of Montaigne, and Montaigne will only smile to see such a beautiful development of his art, as Bacon smiles on Newton, and Socrates on all of us.
At the beginning of this sequence, I outlined two potential sources of knowledge: either (A) Sense Perception i.e. Evidence, or (B) Logic/Reason. The classical skeptics were born when the reliability of these two sources of knowledge was drawn into doubt, Sense Perception by the stick in water, Logic by Zeno’s Paradoxes of Motion. Responses included the skeptics’ conclusion “We can’t know anything if we can’t trust Reason or the Senses,” and the various other classical schools’ Criteria of Truth (Plato’s Ideas, Aristotle’s Categories, Epicurus’s weak empiricism, etc.) All refutations we have seen along our long path have been based on undermining one of these types of knowledge sources: so when Duns Scotus fights with Aquinas, he picks on his logic, and when Ockham fights with him he, often, picks on his material sensory evidence. (“Where is the phantasm? Huh? Huh?”)
Everybody, I’d like to introduce you to Leibniz. Leibniz, this is everybody. “Hello!” says Leibniz, “Very nice to meet you all.” We are going to viciously murder Leibniz in about three minutes. “It’s no trouble,” says Leibniz, “I’m quite used to it.” Thank you, Leibniz, we appreciate it.
Leibniz here made many great contributions to philosophy and mathematics, but one in particular was extraordinarily popular, I would go so far as to say faddy, a fad argument which swept Europe in the first half of the 18th century. You have almost certainly heard it before in mocking form, but I will do my best to be fair as we line up our target in our sights:
God is Omnipotent, Omniscient and Omnibenevolent. (Given.) “Grrrr,” quoth Socrates.
Given that God is Omniscient, He knows what the best of all possible worlds is.
Given that God is Omnipotent, He can create the best of all possible worlds.
Given that God is Omnibenevolent, He wants to create the best of all possible worlds.
Any world such a God would make must logically be the best of all possible worlds.
This is the best of all possible worlds.
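For readers who like to see the bones of an argument laid bare, the chain above can be sketched schematically. This is my own modern shorthand, not Leibniz’s notation, and it deliberately flattens his much subtler machinery of compossibility into a bare syllogism:

```latex
% Schematic skeleton of the argument (modern shorthand, not Leibniz's own).
% Let W range over possible worlds, b = the best possible world,
% and a = the world God actually creates.
\begin{align*}
&\text{Omniscience:}    && \text{God knows which } W \text{ is } b.\\
&\text{Omnipotence:}    && \text{God is able to create } b.\\
&\text{Omnibenevolence:}&& \text{God wills to create } b.\\
&\text{Therefore:}      && a = b \quad\text{(the actual world is the best possible world).}
\end{align*}
```

Laid out this way, it is easy to see why the argument felt airtight to its fans: each premise follows from a divine attribute the audience already granted, so the only places to attack it are the hidden assumptions, such as whether “best possible world” is even a coherent, well-defined maximum.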
Now, this was a proof written, just like Anselm’s and Aquinas’s, by a philosopher expecting a readership who all believe, both in God, and in Providence. It is a comfortable proof of the logical certainty that there is Providence, that this universe is perfect (as the Stoics first theorized), and anything in it that seems to be bad or evil must, in fact, be part of a greater long-term good that we fail to see because of our limited human perspective. The proof made a huge number of people delighted to have such an elegant and simple argument for something they enthusiastically believed.
But the proof also had the side-effect that arguments about Providence often do, of making people start to try to reason out what the good was behind hidden evils. “Oh, that guy was struck with disease because he did X bad thing.” “Wolves exist to make us live in villages.” “That plague happened because those people were bad.” It was (much like Medieval proofs of the existence of God) a way philosophers could show off their cleverness to an appreciative audience, make themselves known, and put forward theories about right and wrong and what God might want.
In 1755 an enormous earthquake struck the great port city of Lisbon (Portugal), wiping out tens of thousands of people (some estimate up to 100,000) and leveling one of the great gems of European civilization. It remains to this day one of the deadliest earthquakes in recorded history, and parts of Lisbon are still in ruins more than 250 years later. The shock and horror, to a progressive, optimistic Europe, was stunning. And immediately thereafter, fans of Leibniz started publishing essays about how it was GOOD that this had happened, because of XYZ reason. For example, one argument was that the Portuguese had been persecuting people for their religion, and this was God saying He disapproved <= REAL argument. (Note: Leibniz himself is innocent of all this, having died years before the earthquake – we are speaking of his followers.) Others argued that it was a bad minor effect of God’s general laws, that the physical rules of the Earth which make everything wonderful for humankind also make earthquakes sometimes happen, but that the suffering they cause is negligible against the greater goods that Providence achieves. And if one person in Europe could not stand these noxious, juvenile, pompous, inhumane, self-serving, condescending, boastful, heartless, self-congratulatory responses to unprecedented human suffering, that person was the one pen mightier than any sword, Voltaire.
Would words like these to peace of mind restore
The natives sad of that disastrous shore?
Grieve not, that others’ bliss may overflow,
Your sumptuous palaces are laid thus low;
Your toppled towers shall other hands rebuild;
With multitudes your walls one day be filled;
Your ruin on the North shall wealth bestow,
For general good from partial ills must flow;
You seem as abject to the sovereign power,
As worms which shall your carcasses devour.
No comfort could such shocking words impart,
But deeper wound the sad, afflicted heart.
When I lament my present wretched state,
Allege not the unchanging laws of fate;
Urge not the links of the eternal chain,
’Tis false philosophy and wisdom vain.
The God who holds the chain can’t be enchained;
By His blest Will are all events ordained:
He’s Just, nor easily to wrath gives way,
Why suffer we beneath so mild a sway:
This is the fatal knot you should untie,
Our evils do you cure when you deny?
Men ever strove into the source to pry,
Of evil, whose existence you deny.
If he whose hand the elements can wield,
To the winds’ force makes rocky mountains yield;
If thunder lays oaks level with the plain,
From the bolts’ strokes they never suffer pain.
But I can feel, my heart oppressed demands
Aid of that God who formed me with His hands.
Sons of the God supreme to suffer all
Fated alike; we on our Father call.
No vessel of the potter asks, we know,
Why it was made so brittle, vile, and low?
Vessels of speech as well as thought are void;
The urn this moment formed and that destroyed,
The potter never could with sense inspire,
Devoid of thought it nothing can desire.
The moralist still obstinate replies,
Others’ enjoyments from your woes arise,
To numerous insects shall my corpse give birth,
When once it mixes with its mother earth:
Small comfort ’tis that when Death’s ruthless power
Closes my life, worms shall my flesh devour.
This (in the William F. Fleming translation) is an excerpt from the middle of Voltaire’s Poem on the Lisbon Earthquake, which I heartily encourage you to read in its entirety. The poem summarizes the arguments of Camp Leibniz, and juxtaposes them with heart-wrenching descriptions of the sufferings of the victims, and with Voltaire’s own earnest and passionate expression of exactly why these kinds of arguments about Providence are so difficult to choke down when one is really on the ground suffering and feeling. The human is not a senseless pottery vessel, it is a thinking thing, it feels pain, it asks questions, it feels the special kind of pain that unanswered questions cause, the same pain the skeptics have been trying to help us escape for 3,000 years. But we don’t escape, and the poem captures it. The poem swept across Europe like a firestorm. People read it, people felt it, people recognized in Voltaire’s words the cries of anger in their own hearts. And they agreed. He won. The Leibniz fad ended. An entire continent-wide philosophical movement, slain.
And he used neither Logic nor Evidence.
Did you feel it? The poem persuaded, attacked, undermined, eroded away the respectability of Leibniz, but it did it without using EITHER of the two pillars of argument. There was no chain of reasoning. And there was no empirical observation. You could say there was some logic in the way he juxtaposed claims “God is a kind Maker” with counter-claims “I am not a potter’s jar, I am a thinking thing! I need more!”. You could say there was some empiricism or evidence-based argument in his descriptions of things he saw, or things he felt, since feelings too are sense-perceptions in a way, so reporting how one feels is reporting a sensory fact. But there was nothing in this so rigorous or so real that any of our ancient skeptics would recognize it as the empiricism they were attacking. Those people Voltaire describes – he did not see them, he imagined them, reaching across the breadth of Europe with the strength of empathy. That potter’s wheel is a metaphor, not a syllogism. Voltaire has used a third thing, neither Reason nor Evidence, as a tool of skepticism.
What do we name this Third Thing? I have heard people propose “common sense” but that’s a terribly vexed term, going back to Cicero at least, which has been used by this point to mean 100 things that are not this thing, so even if you could also call this thing “common sense” it would just create confusion (we don’t need Aristotle looming with a lecture on the dangers of unclear vocabulary). I have heard people propose “sentiment” and I like how galling it feels to try to suggest that “sentiment” should enjoy coequal respect and power with Reason and Evidence, but it isn’t quite that either. I am not yet happy with any name for this Third Thing, and am playing around with many. All I will say is that it is real, it is powerful, it is as effective at persuading one to believe or disbelieve as Reason and Evidence are. And, even if there were shadows of this Third Thing earlier in human history, Montaigne was the smith who sharpened the blade and handed it to Voltaire, and to the rest of us.
Montaigne’s Essais are lovely, meandering, personal, structure-less, rambling musings in which topics flow one upon another, he summarizes an argument made for or against some heresy, then, rather than voicing an opinion, tells you a story about his grandmother that one time, or retells a bit of one of Virgil’s pastorals, or an anecdote about some now-obscure general, and then flows on to a different topic, never stating his opinion on the first but having shaped your thinking, through his meanders, until you feel an answer, a belief or, more often, disbelief, even if he never voiced one. And then he keeps going, taking up another argument, making it feel silly with an allegory about two bakers, another and–have you heard the news from Spain?–another, and another, and oh, the loves of Alexander, another, and another. And as it flows along you get to know him, feel you’re having a conversation with him, and somewhere toward the end you no longer believe any of the philosophical arguments he has just summarized are plausible at all, but he never once argued directly against any of them. It is a little bit like our skeptical Cicero, juxtaposing opposing views and leaving us convinced by none, but it is one level less structured, not actually a dialog with arguments and refutations. Skepticism, without Reason, without Evidence, just with the human honesty that is Montaigne, his doubts, his friendship, his communication to you, dear reader, across the barrier of page, and time, and language, this strange French-Roman, this only native Latin speaker born in a millennium, this alien, has made you realize all the philosophical convictions, everything in that broad spectrum that scholasticism plus the Renaissance plus the Reformation and Counter-Reformation ferocity have laid before you, none of it is what a person really feels deep down inside, not Montaigne, and not you. 
And so he leaves you a skeptic, in a completely different way from how the ancient skeptics did it, not with theses, or exercises, or lists, or counterarguments, just with… humanity?
Montaigne did it. His contemporaries found it… odd at first, a bit self-centered, this autobiographical meandering, but it was so beautiful, so entrancing, so powerful. It reared a new generation, armed with Reason and Evidence and This Third Thing, and deeply skeptical. Students at universities started raising their hands in class to ask the teachers to prove the school existed. Theologians advising princes started saying maybe it didn’t matter that much what the difference was between the different Christian faiths if they were close enough. A new age of philosophy was born, not a new school, but a new tool for dogmatism’s ancient symbiotic antagonist: doubt.
And, where doubt grows stronger and richer, so does dogmatic philosophy, having that much more to test itself against. Just as, in antiquity, so many amazing schools and ideas were born from trying to respond to Zeno and the Stick in Water, so Montaigne’s new tools of Skepticism, his revival and embellishment of skepticism, the birth, as we call it, of Modern Skepticism, were also the final ingredient necessary for an explosion of new ideas, new schools, new universes described by new philosophers trying to build systems which could stand up against a new skepticism armed not just with Reason and Evidence, but with That Third Thing.
Thus, as 1600 approaches, the breakneck proliferation of new ideas and factions makes Montaigne’s skepticism so popular that students in scholastic and Jesuit schools are starting to raise their hands and demand that the professor prove the existence of the classroom before expecting them to attend class. A “skeptical crisis” takes center stage in Europe’s great intellectual conversation, and multiplying doubt seems to have all the traditional Criteria of Truth in flight. It is onto this stage that Descartes will step, and craft, alongside his contemporaries, the first new systems which will have to cope, not with two avenues of attacking certainty, but, thanks to Montaigne, three. And will fight back against them with Montaigne’s arts as well. Next time.
For now, I will leave you with one more little snippet of the future: I lied to you, about a simple happy ending to Voltaire’s quarrel with Leibniz. Oh, Leibniz was quite dead, not just because the man himself had died but because no philosopher could take his argument seriously after the poem. Ever. Again. In fact, a few years ago I went to a talk at a philosophy department in which a young scholar was taking on Leibniz’s Best of All Possible Worlds thesis, and picking it apart using beautiful logical argumentation, and at the end everyone applauded and congratulated him, but when the Q&A started the first Q was “Well, um, this was all quite fascinating, but, isn’t Leibniz, I mean, no one takes that argument seriously anymore…” But the young philosopher was correct to point out that, in fact, no one had ever actually directly refuted it with logic. No one saw the need. But if Voltaire’s victory over logical Leibniz was complete, Leibniz was not the most dangerous of foes. Voltaire had contemporaries, after all, armed with Montaigne’s Third Thing just as Voltaire was. Rousseau will fire back, sweet, endearing, maddening Rousseau, not in defense of Leibniz, but against the poem which he sees as an attack on God. But this battle of two earnest and progressive deists must wait until we have brought about the brave new world that has such creatures in it. For that we need Descartes, Francis Bacon, grim Hobbes, John Locke, and the ambidextrous Bayle.
Socrates, Sartre, Descartes and our Youth have, among them, consumed twelve thousand, six hundred and forty-two hypothetical eclairs in the fourteen months since we left them contemplating skepticism on the banks of a cheerily babbling imaginary brook. Much has changed in the interval, not in the land of philosophical thought-experiments (which is ever peaceful unless someone scary like Ockham or Nietzsche gets inside), but in a world two layers of reality removed from theirs. The changes appear in the world of material circumstances which shape and foster this author, who in turn shapes and fosters our philosophical picnickers. Now, having recovered from my transplant shock of being moved to the new and fertile country of University of Chicago, and with my summer work done, and Too Like the Lightning fully revised and on its way toward its May 10th release date (YES!), it is time at last to return to our hypothetical heroes, and to my sketches of the history of philosophical skepticism.
When last we saw them, Socrates, Sartre, Descartes and our Youth had rescued themselves from the throes of absolute doubt by developing Criteria of Truth, which allowed them to differentiate arenas of knowledge where certainty is possible from arenas of knowledge where certainty is not possible. (See their previous dramatic adventures in Sketches of a History of Skepticism Part 1 and Part 2.) To do this, they looked at three systems: Epicureanism, which suggests that we have certain knowledge of the world perceived by the senses, but no certain knowledge of the imperceptible atomic reality beneath; Platonism, which suggests that we have knowledge of the eternal structures that create the material world, i.e. Forms or Ideas, but not of the flawed, corruptible material objects which are the shadows of those eternal structures; and Aristotelianism, which suggests that we can have certain knowledge of logical principles and of categories within Nature, but not of individual objects.
Notably, neither Epicurus nor Aristotle was invited to our picnic, and, while you never know when any given Socrates will turn out to be a Plato in disguise, our particular Socrates seems to be staying safely in the camp of doubt: he knows that he knows nothing. Our object is not to determine which of these classical camps has the correct Criterion of Truth. In fact, our distinguished guests, Descartes and Sartre, aren’t interested in rehashing these three classical systems, all of whose criteria are not only familiar but, to them, long defunct. They have not come this great distance in time to watch Socrates open the doors of skepticism to our Youth just to meet antiquity’s familiar dogmatists; the twinkle in Descartes’ eye (and his infinite patience doling out eclairs) tells me he’s waiting for something else.
Descartes and Sartre expect Cicero next — Cicero, whom many might mistake as a voice for the Stoic school (the intellectual party conspicuously missing from the assembly of Plato, Aristotle, and Epicurus) but who is actually more often read by modern scholars as a new and promising kind of Skeptic. Unfortunately, Cicero is currently busy answering a flurry of letters from someone called Petrarch, so has declined to join our little gathering (or possibly he’s just miffed hearing that I’m doing an abbreviated finale to this series, so he’d only get a couple paragraphs, even if he came). So we must do our concise best to cover his contribution on our own. Pyrrho, Zeno and other early skeptical voices argued in favor of doubt by demonstrating the fallibility of the senses and of pure reason: the stick in water that looks bent, the paradoxes of motion which show how logic and reality don’t match. Cicero achieves unbelief (and aims at the eudaimonist tranquility beyond) by a different route, a luxurious one made possible by the fact that he is writing three centuries into the development of philosophy and has many different dogmatic schools to fall back on. In his philosophical dialogs, Cicero presents different interlocutors who put forth different dogmatic positions: Stoic, Platonist, Epicurean; all in dialog with each other, presenting evidence for their own positions and counter-arguments against the conclusions of others. Each interlocutor works strictly by his own Criterion of Truth, and all argue intelligently and well. But they all disagree. When you read them all together, you are left uncertain. No particular voice seems to overtop the others, and the fact that there are so many different equally plausible positions, defended with equally well-defined Criteria of Truth, leaves one with no confidence that any of them is reliable. 
At no point does Cicero say “I am a skeptic, I think there is no certainty,” — but the effect of reading the dialog is to be left with uncertain feelings. Cicero himself does not seem to have been a Pyrrhonist skeptic, and certainly does seem to hold some philosophical positions, especially moral principles, quite strongly. There is certainly a good case to be made that he has strong Stoic leanings, and there is validity to the Renaissance argument that he should be vaguely clustered in with Seneca and Cato, who subscribe to a mixed-together digest of Roman paganism, Stoicism, some Platonic and a few Aristotelian elements. But especially on big questions of epistemology, ontology and physics, Cicero remains solidly, frustratingly, elusive.
There are many important aspects of Cicero’s work, but for our purposes the most important is this: he has achieved doubt without actually making any skeptical arguments, or counter-arguments. He has not attacked the fundamentals of Stoicism, Platonism or Epicureanism. Instead, he has used the strengths of the three schools to undermine each other. All three schools are convincing. All are plausible. All have evidence and/or logic on their side. As a result, none of the three winds up feeling convincing, even though none of the three has been directly undermined. This is not a new achievement of Cicero’s. Epicurus used a similar technique, and Lucretius, his follower, did so too; and we know Cicero read Lucretius. But Cicero is the most important person to use this technique in antiquity, largely because 1,300 years later it will be Cicero who becomes the centerpiece of Renaissance education. And Cicero will have no small Medieval legacy as well.
Medieval Certainty, and the Big Question
Stereotypically for a Renaissance historian, I will move quickly through the Middle Ages, though not for the stereotypical reasons. I don’t think that the Middle Ages were an intellectual stasis; I do think that Medieval philosophy is full of many complex things that I’m just starting to seriously work through in my own studies. I’m not ready to provide a light, fun summary of something which is, for me, still a rich forest to explore. Church Fathers, late Neoplatonists, Chroniclers, theological councils, monastic leaders, rich injections from the Middle East, Maimonides; all intersect with doubt, certainty and Criteria of Truth in rich and fascinating ways that I am not yet prepared to do justice to. So instead I will present an abstraction of one important aspect of Medieval thinking which I hope will help elucidate some overall approaches to doubt, even if I don’t pause to look at individual minds.
When I was in my second year of grad school, I chatted over convenience store cookies in the grad student lounge with a new student entering our program that year, like myself, to study the Renaissance. He poked fun at the philosophers of the Middle Ages. He asked me, “How could anybody possibly be interested in going on and on and on and on like that about God?” And in that moment of politeness, and newness, and fun, I laughed, and nodded. But, happily, we had a good teacher who made us look more at the Medieval, without which we can’t understand the Renaissance, and now I would never laugh at such a comment.
Set aside your modern mindset for a moment, and your modern religious concepts, and see if you can jump into the Medieval mind. To start with, there is a Being of infinite power, Whose existence is known with certainty. (Take that as given — a big given, I know, but it’s a given in this context.) Such a Being created everything that ever has existed or will exist. Everything that happens: events, births, storms, falling objects, thoughts; all were conceived by this Being and exist according to this Being’s script. The Being possesses all knowledge, and all good things are good because they resemble this Being. Everything in the material world is fleeting and imperfect and will someday be destroyed and forgotten, including the entire Earth. But — this Being has access to another universe where all things are eternal and perfect, which will last beyond the end of the material universe, and with this Being’s help there might be some way for us to reach that universe as well. The Being created humans with particular care, and is trying to communicate with us, but direct communication is a difficult process, just as it is difficult for an entomologist to communicate directly with his ants, or for a computer programmer to communicate directly with the artificial intelligences that she has programmed.
Now, the facetious question I laughed at in early grad school comes back, but turned on its head. How could you ever want to study anything other than this Being? It explains everything. You want to know the cause of weather, astronomical events, diseases, time? The answer is this Being. You want to know where the world came from, how thought works, why there is pain? The answer is this Being. History is a script written by this Being, the stars are a diagram drawn by this Being, the suitability and adaptation of animals and plants to their environments is the ingenuity of this Being, and the laws that make rocks sink and wood float and fire burn and rain fall are all decisions made by this Being. If you have any intellectual curiosity at all, wouldn’t it be an act of insanity to dedicate your life to anything other than understanding this Being? And in a world in which there has been, for centuries, effective universal consensus on all these premises, what society would want to fund a school that didn’t study them? Or pay tuition for a child to study something else? Theology dominated other sciences in the Middle Ages, not because people were backward, or closed-minded, or lacked curiosity, but because they were ambitious, keenly intellectual and fixed on a subject from which they had every reason to expect answers, not just to theological questions, but to all questions. They didn’t have blinders, they had their eyes on the prize, and they felt that choosing to study Natural Philosophy (i.e. the world, nature, biology, plants, animals) instead of Theology was like trying to study toenail clippings instead of the being they were clipped from.
To put it another way: have you ever watched a fun, formulaic, episodic genre show like Buffy the Vampire Slayer, or the X-Files? There’ll be one particular episode where the baddie-of-the-day is Christianity-flavored, and at some point a manifest miracle happens, or an angel or a ghost shows up, and then we have to reset the formula and move onto the next episode, but you spend that whole next episode thinking, “You know, they just found proof of the existence of the afterlife and the immortality of the soul. You’d think they’d decide that’s more important than this conspiracy involving genetically-modified corn.” That’s how people in the Middle Ages felt about people who wanted to study things that weren’t God.
Doubt comes into this in important ways, but not the ways that modern rhetoric about the Middle Ages leads most people to expect.
Wikipedia, at the time of writing, defines Scholasticism as “a method of critical thought which dominated teaching by the academics (“scholastics,” or “schoolmen”) of medieval universities in Europe from about 1100 to 1700.” It was “a program of employing that [critical] method in articulating and defending dogma in an increasingly pluralistic context.” It “originated as an outgrowth of, and a departure from, Christian monastic schools at the earliest European universities.” Philosophy students traditionally define Scholasticism as “that incredibly boring hard stuff about God that you have to read between the classics and Descartes.” Both definitions are true. Scholasticism is an incredibly tedious, exacting body of philosophy, intentionally impenetrable, obsessed with micro-detail, and happy to spend three thousand words proving to you that Good is good, or to set out a twenty-step argument that it is better to exist than not to exist (this is presumably why Hamlet still hadn’t graduated at age 30). Scholasticism was also so incredibly exciting that, apart from the ever-profitable medical and law schools, European higher education devoted itself to practically nothing else for the whole late Middle Ages, and, even though the intellectual firebrands of both the Renaissance and the 17th and 18th centuries devoted themselves largely to fiercely attacking the scholastic system, it did not truly crumble until deep into the Enlightenment.
Why was Scholasticism so exciting? Even if people who believed in an omnipotent God had good reason to devote their studies pretty exclusively to Theology, why was this one particularly dense and intentionally difficult method the method for hundreds of years? Why didn’t they write easy-to-read, penetrable treatises, or witty philosophical tales, or even a good old fashioned Platonic-type dialog?
The answer is that Christianity changes the stakes for being wrong. In antiquity, if you’re wrong about philosophy, and the philosophical end of theology, you’ll make incorrect decisions, possibly lead a sadder or less successful life than you would otherwise, and it might mean your legacy isn’t what you wanted it to be, but that’s it. If you’re really, really wrong you might offend Artemis or something and get zapped, but it’s pretty easy to cover your bases by going to the right festivals. By the logic of antiquity, if you put a Platonist and an Epicurean in a room, one of them will be wrong and living life the wrong way, at least in some ways, but they can both have a nice conversation, and in the end, either they’ll both reincarnate and the Epicurean will have another chance to be right later, or they’ll both disperse into atoms and it won’t matter. OK. In Medieval Christianity, if you’re wrong about theology, your immortal soul goes to Hell forever, where you’ll be tormented by unspeakable devils for the rest of eternity, and everyone else who believes your errors is also likely to lose the chance of eternal paradise and absolute knowledge, and will be plunged into a pit of absolute misery and despair, irrevocably, forever. Error is incredibly dangerous, to you and to everyone around you who might get pulled down with you. If you’re really bad, you might even bring the wrath of God down upon your native city, or if you’re really bad then, while you’re still alive, your soul might depart your body and sink down to Hell, leaving your body to be a house for a devil who will use you to visit evil on the Earth (see Inferno Canto 27). But leaving aside those more extreme and superstition-tainted possibilities, error became more pernicious because of eternal damnation. If people who read your theologically incorrect works go to Hell, you’re infinitely culpable, morally, since every student misled to damnation is literally an infinite crime.
So, if you are a Medieval person, Theology is incredibly valuable, the only kind of study worth doing, but also incredibly dangerous. You want to tread very carefully. You want a lot of safety nets and spotters. You want ways to avoid error. And you know error is easy! Errors of logic, errors of failing senses. Enter Aristotle, or more specifically Aristotle’s Organon, the collection of Aristotle’s logical works translated by dear Boethius, part of the latter’s efforts to preserve Greek learning when he realized Greek and other relics of antiquity were fading. The Organon explains, in great detail, how you can go about constructing chains of logic in careful, methodical ways to avoid error. Use only clearly defined unequivocal vocabulary, and strict syllogistic and geometric reasoning. Here it is, foolproof logic in 50 steps, I’ll show you! Sound familiar? This is Aristotle’s old Criterion of Truth, but it’s also the Medieval Theologian’s #1 Christmas Wish List. The Criterion of Truth which was, for Aristotle, a path through the dark woods and a solution to Zeno and the Stick in Water, is, to our theologian, a safety net over a pit of eternal Hellfire. That is why it was so exciting. That is why people who wanted to do theology were willing to train for five years just in logic before even looking at a theological question, just as astronauts train in simulators for a long time before going out into the deadly vacuum of space! That is even why scholastic texts are so hard to read and understand – they were intentionally written to be difficult to read, partly because they’re using an incredibly complicated method, but even more because they don’t want anyone to read them who hasn’t studied their method, because if you read them unprepared you might misunderstand, and then you’d go to Hell forever and ever and ever, and it would be Thomas Aquinas’s fault. And he would be very sad.
When Thomas Aquinas was presented for canonization, after his death, they made the argument that every chapter of the Summa Theologica was itself a miracle. It’s easy to laugh, but if you think about how desperately they wanted perfect logic, and how good Aquinas was at offering it, it’s an argument I understand. If you were dying of thirst in the desert, wouldn’t a glass of water feel like a miracle?
To give credit where credit is due, the mature application of Aristotle’s formal logic to theological questions was not pioneered by Aquinas but by a predecessor: Peter Abelard, the wild rockstar of Medieval Theology. People crowded in thousands and lived in fields to hear Peter Abelard preach; it was like Woodstock, only with more Aristotle. Why were people so excited? Did Abelard finally have the right answer to all things? “Yes and No,” as Peter Abelard would say, “Sic et Non,” that being the title of his famous book, a demonstration of his skill. (Wait, yes AND no, isn’t that even scarier and worse and more damnable than everything else? This is the most dangerous person ever! Bernard of Clairvaux thought so, but the Woodstock crowd at the Paraclete, they didn’t.) Abelard’s skill was taking two apparently contradictory statements and showing, by elaborate roundabout logic tricks, how they agree. Why is this so exciting? Any troll on the internet can do that! No, but he did it seriously, and he did it with Authorities. He would take a bit of Plato that seemed to contradict a bit of Aristotle, and show how they actually agree. Even ballsier, he would take a bit of Plato that pretty manifestly DOES contradict another bit of Plato, and show how they both agree. Then, even better, he would take a bit from St. Augustine that seems to contradict a bit from St. Jerome and show how the two actually agree. “OH THANK GOD!” cries Medieval Europe, desperately perplexed by the following conundrum:
The Church Fathers are saints, and divinely inspired; their words are direct messages from God.
If you believe the Church Fathers and act in accordance with their teachings, they will show you the way to Heaven; if you oppose or doubt them, you are a heretic and damned for all eternity.
The Church Fathers often disagree with each other.
Abelard rescued Medieval Europe from this contradiction, not necessarily by his every answer, but by his technique by which seemingly-contradictory authorities could be reconciled. Plato with Aristotle is handy. Plato with Plato sure is helpful. Jerome with Augustine is eternal salvation. And if he does it with the bits of Scripture that seem to contradict the other bits? He is now the most exciting thing since the last time the Virgin Mary showed up in person.
Abelard had a lover–later, wife, but she preferred ‘lover’–the even more extraordinary Heloise, and I consider it immoral to mention him without mentioning her, but her life, her stunningly original philosophical contributions and her terrible treatment at the hands of history are subjects for another essay in their own right. For today, the important part is this: Abelard was exciting for his method, more than his ideas, his way of using Reason to resolve doubts and fears when skepticism loomed. Thus even Scholasticism, the most infamously dogmatic philosophical method in European history, was also in symbiosis with skepticism, responding to it, building from it, developing its vast towers of baby-step elaborate logic because it knew Zeno was waiting.
Proofs of the Existence of God
We are all very familiar with the veins of Christianity which focus on faith without proof as an important part of the divine plan: that God wants to test people, and there is no proof of the existence of God because God wants to be unknowable and elusive in order to test people’s faith. The most concise formula is the facetious one by Douglas Adams, where God says: “I refuse to prove that I exist, for proof denies faith, and without faith I am nothing.” It’s a type of argument associated with very traditional, conservative Christianity, and, often, with its more zealous, bigoted, or “medieval” side. I play a game whenever I run into a new scholar who works on Medieval or early modern theological sources, any sources, any period, any place, from pre-Constantine Rome to Renaissance Poland. I ask: “Hey, have you ever run into arguments that God’s existence can’t be proved, or God wants to be known by faith alone, before the Reformation?” Answers: “No.” “Nope.” “Naah.” “No, never.” “Uhhh, not really, no.” “Nope.” “No.” “Nothing like that.” “Hmm… no.” “Never.” “Oh, yeah, one time I thought I found that in this fifth-century guy, but actually it was totally not that at all.” Like biblical literalism, it’s one of these positions that feels old because it’s part of a conservative position now, but it’s actually a very recent development from the perspective of 2,000 years of Christianity plus centuries more of earlier theological conversations. So, that isn’t what the Middle Ages generally does with doubt; it doesn’t rave about faith or God’s existence being elusive. Europe’s Medieval philosophers were so sure of God’s existence that it was considered manifestly obvious, and doubting it was considered a mental illness or a form of mental deficiency (“The fool said in his heart ‘there is no God’” => there must be some kind of brain deficiency which makes people doubt God; for details see Alan C. Kors, Atheism in France, vol. 1).
And when St. Anselm and Thomas Aquinas and Duns Scotus work up technical proofs of the existence of God, they are doing it not because they or anyone else doubted the existence of God, but to demonstrate the efficacy of logic. If you invent a snazzy new metal detector you first aim it at a big hunk of metal to make sure it works. If you design a sophisticated robot arm, you start the test by having it pick up something easy to grab. If you want to demonstrate the power of a new tool of logic, you test it by trying to prove the biggest, simplest, most obvious thing possible: the existence of God.
(PARENTHESIS: Remember, I am skipping many Medieval things of great importance. *cough*Averroes*cough* This is a snapshot, not a survey.)
Three blossoms on the thorny rose of this Medieval trend toward writing proofs of the existence of God are worth stopping to sniff.
The first blossom is the famous William of Ockham (of “razor” fame) and his “anti-proof” of the existence of God. Ockham was a scholastic, writing in response to and in the same style and genre as Abelard, Aquinas, Scotus, and their ilk. But when one reads along and reaches the bit where one would expect him to demonstrate his mastery of logic by proving the existence of God, one finds instead a plea (paraphrase): Please, guys, stop writing proofs of the existence of God! Everyone believes in Him already anyway. If you keep writing these proofs, and then somebody proves your proof wrong by pointing out an error in your logic, reading the disproof might make people who didn’t doubt the existence of God start to doubt Him, because they would start to think the evidence for His existence doesn’t hold up! Some will read into this Anti-Proof hints of the beginning of “God will not offer proof, He requires faith…” arguments, and perhaps it does play a role in the birth of that vein of thinking. (I say this very provisionally, because it is not my area, and I would want to do a lot of reading before saying anything firm.) My gut says, though, that it is more that Ockham thought everyone by nature believed in God, that God’s existence was so incredibly obvious, that God was not trying to hide; rather, he didn’t want fractious scholastic in-fighting to erode what he thought was already there in everyone: belief.
Aside: While we are on the subject of Ockham, a few words on his “razor”. Ockham is credited with the principle that the simplest explanation for a thing is most likely to be the correct one. That was not, in fact, a formula he put forward in anything like modern scientific terms. Rather, what we refer to as Ockham’s Razor is a distillation of his approach in a specific argument: Ockham hated the Aristotelian-Thomist model of cognition, i.e. the explanation of how sense perception and thoughts work. Hating it was fair, and anyone who has ever studied Aristotle and labored through the agent intellect, and the active intellect, and the passive intellect, and the will, and the phantasm, and innate ideas, and eternal Ideas, and forms, and categories, and potentialities, shares William of Ockham’s desire to pick Thomas Aquinas up and shake him until all the terminology falls out like loose change, and then tell him he’s only allowed to have a sensible number of incredibly technical terms (like 10; 10 would be a HUGE reduction!). Ockham proposed a new model of cognition which he set out to make much simpler, without most of the components posited by Aristotle and Aquinas, and introduced formal Nominalism. (Here Descartes cheers and sets off a little firecracker he’s been saving.) Nominalism is the idea that “concepts” are created by the mind based on sense experience, and exist ONLY in the mind (like furniture in a room, adds Sherlock Holmes) rather than in some immaterial external sense (like Platonic forms). Having vastly simplified and revolutionized cognition, Ockham then proceeded to describe the types of concepts, vocabulary terms, and linguistic categories we use to refer to concepts in infuriating detail, inventing fifty jillion more technical terms than Aquinas ever used, and driving everyone who read him crazy. 
(If you are ever transported to a dungeon where you have to fight great philosophers personified as Dungeons & Dragons monsters, the best weapon against Ockham is to grab his razor of +10 against unnecessary terminology and use it on the man himself). One takeaway note from this aside: while “Ockham’s Razor” is a popular rallying cry of modern (post-Darwin) atheism, and more broadly of modern rationalism, that is a modern usage entirely unrelated to the creator himself. He thought that the existence of God was so incredibly obvious, and necessary to explain so many things, from the existence of the universe to the buoyancy of cork, that if you presented him with the principle that the simplest explanation is usually best, he would agree, and happily assume that you believed, along with him, that “God” (being infinitely simple, see Plotinus and Aquinas) is therefore a far simpler answer to 10,000 technical scientific questions than 10,000 separate technical scientific answers. Like Machiavelli, Aristotle and many more, Ockham would have been utterly stunned (and, I think, more than a little scared) if he could have seen how his principles would be used later.
The second blossom (or perhaps thorn?) of this Medieval fad of proving God’s existence was, well, that Ockham was 110% correct. Here again I cite Alan Kors’ masterful Atheism in France; in short, his findings were that, when proving the existence of God became more and more popular as the first field test to make sure your logical system worked (à la metal detector… beep, beep, beep, yup, it’s working!), it created an incentive for competing logicians to attack people’s proofs of the existence of God (i.e. if it can’t find a giant lump of iron the size of a house, it’s not a very good metal detector, is it?). Thus believers spent centuries writing attacks on the existence of God, not because they doubted, but to prove their own mastery of Aristotelian logic superior to others’. This generated thousands of pages of attacks on the existence of God, and, by a bizarre coincidence *cough*cough*, when, in the 17th and 18th centuries, we finally do start getting writings by actual overt “I really think there is no God!” atheists, they use many of the same arguments, which were waiting for them, readily available in volumes upon volumes of Church-generated books. Dogmatism here fed and enriched skepticism, much as skepticism has always fed and enriched dogmatism, in their ongoing and fruitful symbiosis.
The third blossom is, of course, sitting with us doling out éclairs. Impatient Descartes has been itching, ever since I mentioned Anselm, to leap in with his own Proof of the Existence of God, one which uses a more mature form of Ockham’s Nominalism, coupled with the tools of skepticism, especially doubt of the senses. But Descartes may not speak yet! (Don’t make that angry face at me, Monsieur, you’ll agree when you hear why.) It won’t be Descartes’ turn until we have reviewed a few more details, a little Renaissance and Reformation, and introduced you to Descartes’ great predecessor, the fertile plain on whom Descartes will erect his Cathedral. Smiling now, realizing that we draw near the Illustrious Father of Skeptics whom he has been waiting for, Descartes sits back content, until next time.
But do not fear, the wait will be short this time. Socrates is in more suspense than Descartes, and if I stop writing he’ll start demanding that I define “illustrious” or “next” or “man”, so I’d better plunge straight in. Meanwhile, I hope you will leave this little snapshot with the following takeaways:
Medieval thought was not dominated by the idea that logic and inquiry are bad and Blind Faith should rule; much more often, Medieval thinkers argued that logic and inquiry were wonderful because they could reinforce and explain faith, and protect people from error and eternal damnation. Medieval society threw tons of energy into the pursuit of knowledge (scientia, science); it’s just that they thought theology was 1000x more important than any other topic, so they concentrated the resources there.
When you see theologians discussing whether certain areas of knowledge are “beyond human knowledge” or “unknowable”, before you automatically call this a backwards and closed-minded attitude, remember that it comes from Plato, Epicurus, and Aristotle, who tried to differentiate knowledge into areas that could be known with certainty, and areas where our sources (senses/logic) are unreliable, so there will always be doubt. The act of dividing certain from uncertain only becomes closed-minded when “that falls outside what can be known with certainty” becomes an excuse for telling the bright young questioner to shut up. This happened, but not always.
Even when there were not many philosophers we could call “skeptics” in the formal sense, and the great ancient skeptics were not being read much, skepticism continued to be a huge part of philosophy because the tools developed to combat it (Aristotle’s logical methods, for example) continued to be used, expanded and re-purposed in the ongoing search for certainty.