This is an essay I was invited to write in 2017 for the delightful spec fic blog Queersship, which has since ceased to exist, but many people have asked me to re-post the essay, especially now as the series finale is coming out. For a more recent (though less expansive) discussion of similar issues, see my guest post on Nine Bookish Lives, which invited me in 2021 to discuss Terra Ignota and the question of “a future that doesn’t see gender.”
Going Deep into the Gender of Terra Ignota
First I want to thank Queersship for inviting me to write about gender in my Terra Ignota series, since gender stuff is probably the part of the book that took the most time and effort word-by-word. (Well, the Latin and J.E.D.D. Mason’s dialog were literally more effort per word, but there is a lot less Latin in the book than there are pronouns…)
I want to talk separately about two levels of what the book does with gender:
(A) the larger world building, and (B) the line-by-line pronoun use.
On the line-by-line level the series uses both gendered and gender neutral pronouns in unstable and disruptive ways, designed to push readers to learn more about their own attitudes toward gendered language as they grapple with seeing it used so strangely and uncomfortably. On the macro level, the series presents a future society which is neither a gender utopia where all our present issues have been solved, nor an overt gender dystopia like The Handmaid’s Tale, but something both more difficult to face and, in my view, more realistic: a future which has made some progress on gender, but also had some big failures, showing us how our present efforts could go wrong, or stagnate incomplete, if we don’t continue to work hard pushing for positive change.
At the beginning of Too Like the Lightning the main narrator, Mycroft Canner, addresses the reader directly, asking “forgive me my ‘thee’s and ‘thou’s and ‘he’s and ‘she’s, my lack of modern words and modern objectivity.” We soon learn what this means: in the 25th century world of Terra Ignota, people have no assigned sex, practically all clothing and names are gender neutral, English has stopped using gendered pronouns, and normal dialog always uses the singular ‘they.’ But in the narration Mycroft assigns gendered pronouns to people based on his own personal opinions of which gender suits their personalities. Mycroft insists that his history won’t make sense without the “archaic” tool of gender, a claim which invites the reader to judge Mycroft’s decision to do this, and to think about how this use of gender manipulates us and the narrative. So, Mycroft uses ‘he’ and ‘she’ in narration, while most characters use ‘they’ in their dialog. But this is more than Mycroft reviving the gender binary in a genderless world, since Mycroft applies gender in idiosyncratic ways no one would today—just as authors today who use ‘thee’ and ‘thou’ in literature practically never use them as they were actually used in pre-modern English. Mycroft’s understanding of ‘he’ and ‘she’ has nothing to do with biological sex, or anything we can recognize from how our society uses the words today, and learning about how Mycroft uses gender is our first window into the strange gender attitudes of the world he is trying to describe.
World Building: An Age of (Gender) Silence
I want to talk about the larger world building before I go more deeply into Mycroft’s pronoun use. We learn early in Too Like the Lightning that the gender neutral language of this 25th century is not the result of society’s efforts toward inclusiveness finally succeeding, but the result of global trauma and severe censorship. In the twenty-second century a global conflict called the “Church War” devastated much of the Earth, and in the aftermath both religious discourse and gendered language were forbidden, by severe taboos and censorship laws. Using ‘he’ and ‘she’ is not just outdated in this world, it’s completely disallowed, and discussing religion without a state-licensed chaperone is a severe crime.
This element of the world is intentionally polarizing for my readers, creating a future that feels like utopia to some and dystopia to others. A world where family members are forbidden to discuss religion with each other may feel liberating to anyone who’s had nasty interactions with proselytizing parents, but oppressive to anyone who values religious community and heritage. Similarly a world where ‘they’ is the only permissible pronoun may feel liberating to some who see it as an escape from the current binary, but feels oppressive to anyone (whether cisgender, transgender, nonbinary, gender-nonconforming, or something else) who strongly desires to express gender, considering gender an important part of identity and wanting to be acknowledged with the pronoun of their/his/her/zir/its/etc. choice. But in the world of Terra Ignota, even Sniper—a character who actively prefers the ‘it’ pronoun because Sniper wants to dehumanize itself and be treated as a living doll—is denied the right to be ‘it’ if it wants, just as others are denied ‘he’ or ‘she.’ One of my big goals in creating this polarizing world was for readers to discuss their reactions with each other, exploring how one person’s utopia can be another’s dystopia, and exploring the tensions between our different ideals of religious freedom and of gender liberation—tensions we need to understand and address as we work together in the real world to create inclusivity which will work, not just for some people, but for everyone.
Our narrator claims in the text that this forced global silencing of gender and gendered discourse has resulted in a false gender neutrality, that under the surface people in his world still think in terms of binaries, and that inequality continues, just without anyone being willing to admit it. Real gender progress stopped short under the silence, so the society kept unconsciously passing on forms of gendered thought and inequality, not because they’re somehow ineradicable or biologically ingrained, but because the abrupt end of dialog meant no one was working to eradicate them, so they continued to be passed on. In a world that insists gender is gone, no one is doing studies on the pay gap, or discrimination, or gender ratios of politicians, or analyzing fiction for how it presents gender. Since the society declared that the big problems were solved, no one is watching for the effects of gender on the world anymore, so no one perceives those smaller problems which haven’t been solved, or tries to address them.
This is one of several threads in the series which press beyond the question “Does the end justify the means?” to another question: “Does a bad means poison the end?” Is gender equality achieved through censorship so problematic in itself that it might harm efforts toward true equality more than it helps? Is forced silence in the name of progress actually an enemy of deeper progress?
Put another way, in Terra Ignota I wanted to show a world that botched the endgame of feminism and gender liberation. Sometimes you hear people say things like, “Feminism is done, women have equal rights under the law, so we don’t need all this gender discussion anymore.” It’s a strategy people use to try to shut down discourse. But gender progress isn’t done. We’re only beginning, through psych studies and research, to understand how we unconsciously pass on gendered behavior patterns to children. We’ve only just realized how much we’ve been drowning our kids in stories where women have less narrative agency than men, and where ‘boy’ and ‘girl’ are harsh, unquestioned binaries. We’re only just beginning to produce new works that do better. Transgender and nonbinary gender rights and representation are in their infancy. And realistically in fifty years, with many legal battles won, these processes will still be in their infancy. Olympe de Gouges wrote her Declaration of the Rights of Woman and of the Female Citizen in 1791, yet female suffrage didn’t gain momentum until the late 1800s, and we’re still struggling to make an equal space for women in politics even now. But imagine if feminist discourse had shut down around 1960, once most Western nations had adopted women’s suffrage. If we’d stopped the conversation then, declared that to be victory, then no one would now be doing things like watching the pay gap or writing feminist literature, and progress would slow to a crawl, or possibly stop entirely. And the same could happen to other forms of social progress (race, ableism) if their conversations are shut down. So, as a rebuttal to those who say feminism is finished and should stop, and who will in the future say that other movements like the transgender movement are finished and should stop, I wanted to depict a world where these conversations did stop, where silence fell in the 2100s, and we see the bad effects of that stagnation still affecting the 2400s.
Pronouns: ‘Thee’s and ‘Thou’s and ‘He’s and ‘She’s
As for Mycroft’s line-by-line narration, one challenge I posed for myself in these books was writing from the point of view of a narrator so immersed in his world that he is inept and clumsy at critiquing it. I’m a historian, so, from reading historical documents all the time, I’m acutely aware of how incredibly difficult it is to start a conversation about an issue one’s society has silenced. When we read early feminist or socially progressive works, like Olympe de Gouges, or Mary Wollstonecraft, or Voltaire, or even Plato’s Republic (which argues that male and female souls are fundamentally the same; proto-feminism in the fourth century BC!), we admire some of their ideas but often find their actual discussions of the subjects painful to read. Authors so early in the discourse tend to be so saturated with the outdated prejudices of their eras that a lot of those prejudices leak through, even as they seek to battle them. You see people fighting for women’s rights while voicing deeply sexist ideas about the attributes or role of women, or calling for the rights of people of color while using the condescending, infantilizing racist language that saturated the 1700s and 1800s. First-generation members of a movement nearly always express themselves ineptly by the standards of their successors, because, when there has been no critical conversation about a topic, it is very hard for the first critics to get a good perspective on it.
So in framing my tale of the 25th century as a historical document, written by someone in the period, I decided to have that fictional author be limited by how plausibly difficult it would be for someone to start seriously discussing gender again when no one had done so in 350 years. And I chose to model the narration on 18th century narration partly because 18th century critiques of gender are brilliant-yet-inept in precisely the way I wanted to examine. Giving my narrator the sophisticated terminology of the 21st century would have made it too easy for his critique to become comfortable for us. Mycroft Canner, and all the other characters we hear discuss gender in the books, have deeply bizarre, twisted, and by our standards unhealthy ideas about gender. Because realistically that’s the best I think people could do as a first step in a world so wracked by silence, just as Plato’s and Mary Wollstonecraft’s works were the best they could do in their own eras. It’s disorienting reading Mycroft’s discussions of gender, and seeing his strange and uncomfortable attitudes, and the other characters who address gender are generally just as uncomfortable to us. And that discomfort pushes the reader to distrust all the pronouns and all the gendered language, to try to cut through Mycroft’s distorting perspective, much as we have to do when trying to get past the bias in real historical documents. It shows just how difficult it could be to restart these conversations after silence, which I hope will strengthen readers’ commitment to keep on pushing, writing, talking, and critiquing. To make sure silence doesn’t fall.
Thus, my narrator Mycroft, struggling to express himself, resorts to using ‘thee’ and ‘thou’ and also ‘he’ and ‘she’, assigning ‘he’ and ‘she’ based on which gendered archetype he associates with a character’s personality and actions, regardless of appearance. Mycroft’s gender categories are very idiosyncratic, and we learn about him by observing them, much as in Star Wars we learn a lot about Darth Vader by observing how he uses ‘thee’ and ‘thou’. To start with, Mycroft’s own attempt to stick to a gender binary quickly breaks down. For some characters gendered pronouns fit easily, and do indeed help the reader make sense of people’s actions, as when we deal with Heloïse, a nun whose religious vocation is deeply steeped in traditional ideas of gender, and who very consciously embraces an identity as ‘she’. For other characters, gendered pronouns are such a mismatch that even Mycroft resorts to ‘they,’ as with the human computer Eureka Weeksbooth. And for yet other characters Mycroft assigns gendered pronouns but they feel so irrelevant that there would be no change if one reversed them, as with the otherworldly Utopians Aldrin and Voltaire. (I’ve sometimes had readers forget what pronoun Mycroft gives each of them—I’m so proud when people forget!) As the series advances, Mycroft sometimes switches pronouns for a character, or apologizes to the reader for having trouble finding the right gender fit. For some characters, physical descriptions make it clear which sex the character’s body appears to be (Mycroft will mention a beard, or breasts, or genitalia), so the reader knows whether the sex matches the pronoun, while for other characters the reader is given no clue to the character’s appearance or biological sex other than the pronoun assigned by the narrator. All this strangeness aims to make the reader hyperconscious of the pronouns, and of the ways gendered pronouns mislead, clarify, distort, help, and harm.
Some readers have told me that the book’s use of pronouns changed how they felt about the singular they, that they’d disliked it before, thinking of it as a distortion of grammar, but that Too Like the Lightning helped them see for the first time how manipulative binary gender pronouns can be, how ‘they’ can be a valuable and liberating alternative. (This was one of my big goals!) Other readers have told me they were surprised to find themselves obsessing over the ‘real’ genders of the characters whose genders aren’t clear, painstakingly tracking every hint in physical descriptions, and that discovering that they were doing this helped them realize for the first time how much they really do judge characters differently based on gender. (This was another big thing I hoped to help make readers conscious of.) Some readers have said they were particularly fascinated by their reactions to the characters whose physical descriptions clearly don’t match their pronouns, that for some characters they found themselves thinking of the pronoun as the ‘real’ gender while for other characters they thought of the physical description as the ‘real’ gender, and that this made them rethink how they understand the relationship between gender and bodies. (Brace yourselves for books 3 and 4, where things get even trickier!) I’ve been particularly touched when readers have told me that the books helped them gain more respect for the transgender movement and for transgender, nonbinary, and gender-nonconforming people, understanding at last why many people want so badly to be able to choose their pronouns and genders for themselves. (So proud when people have that reaction!)
In contrast, a couple of readers have told me they felt they didn’t get much out of the book’s strange use of pronouns, that it just replayed for them the familiar (and often painful) problems of assigned sex and the current gender binary. Writing intentionally uncomfortable fiction like Too Like the Lightning is high risk. For some people it hits too close to painful areas and just hurts instead of being productive. For others it’s too rudimentary, spending a lot of time demonstrating the manipulative effects of pronouns which many readers are already very conscious of. But other readers are not so conscious of them. Right now F&SF readers, and readers in general, vary enormously in how much we’ve thought about gender, about binary and non-binary gender, about transgender and cisgender, about intersex and agender—some readers live and breathe these issues every day, while others have just dipped a toe into the conversation. With readers in so many different places in that conversation, a book which one group of readers finds stimulating and productive may totally fail for another group. I know some readers have found the first book painful in a bad way, and whatever my intentions that pain is real and I’m sorry I caused it, sorry that, try as I might, I couldn’t always walk the line between the productively painful (1984 and The Handmaid’s Tale are very painful) and the unproductively painful. But I hope this essay will at least help those readers who found it too painful see that I was aiming for something constructive, even if, while I hit the mark for some readers, I missed it for others. And I agree 100% with my (amazing!) fellow Hugo finalist Yoon Ha Lee’s comment that it’s important that we accept works that try hard to address difficult topics, even if they don’t succeed as perfectly as we would like, because we don’t want to scare people off from trying. (And I can’t tell you how proud I am to be part of such an incredibly diverse group of fellow Hugo finalists!)
Writing Mycroft’s inconsistent pronoun use was also a fascinating learning process for myself as an author. First, I worked out carefully what Mycroft’s own ideas about gender were, what characteristics would make him choose ‘he’ or ‘she’ for someone. Then, when I had mostly outlined the series, I went through and read over the outline in detail three times for each of the thirty-four most important characters (more than 100 rereads total), once imagining the character as “he” in the narration, another time as “she,” and a third time trying to think of the character without gender. For some characters I did more than three passes, when I decided to try something even more unusual with gender. My goal was to see how each character’s arc might feel different with a different pronoun. Some characters’ arcs felt much the same regardless of gender, while, for other characters, actions or outcomes felt very different when gendered differently, suddenly falling into a cliché, or defying one. I learned a lot about my own attitudes toward gender by seeing when the pronoun made a big difference for me, and when it didn’t. By making myself live through the four book arc of Terra Ignota 3+ times for every character, I made sure that I was 100% clear on how Mycroft’s choice of pronoun might change the reader’s feelings and expectations about each character, so I could be sensitive to that as I wrote the actual books, and make use of its potential to disrupt expectations. In a few cases where I felt Mycroft would waffle about which pronoun to use, I took the opportunity to have him use the one which would make the character’s arc more striking, or to have him minimize gendered language for that character to create a nearly-genderless arc, as with Eureka, Mushi, Aldrin, and Voltaire. In the end I found this gender-swapping reread process so productive that now I’m doing it with every story I outline, even if I’m not planning to do much with pronouns, since it’s such a great way to discover new narrative possibilities, and to notice when I’ve slipped into a gender cliché.
Once writing was underway, I also spent pass after pass through the manuscript hunting for inconsistencies in my own pronoun use, correcting ‘they’s to ‘she’s, ‘she’s to ‘he’s, ‘they’s to ‘it’s, and ‘he’s to ‘He’s (for the character who capitalizes His pronouns). Some chapters I wrote more than once with different pronouns to see how they would feel each way. Switching so constantly totally broke the pronoun habits in my own head, so that it leaked out into all my other work. While working on these books, I’ve constantly had the editors of my academic articles complaining about how I was switching between ‘they’ and ‘he’ and ‘she’, and once (my favorite) I got the baffled question, “Why are you using ‘she’ for Jean-Jacques Rousseau?” (In Terra Ignota Rousseau is ‘she’ by Mycroft’s rules of gender, but it wasn’t easy explaining that to an academic journal!) And some chapters are narrated by other characters who don’t use gendered pronouns at all, so switching from narrator to narrator also took great care (but gives the reader a much-needed break from the disruptive pronouns). In the end, even with the giant team effort of (I kid you not!) thirty-six beta readers, plus the editor, copy editor, and page proofer all hunting for (and finding!) inconsistent pronouns, a few still slipped through into the printed version, moments where a ‘they’ should be a ‘he,’ or vice versa. The process was exhausting, and imperfect, but more than worth it—I feel that anew every time a reader tells me that it helped them discover new aspects of how pronouns affect our thought, our culture, and themselves. (Yes!)
Between Utopia and Dystopia:
Terra Ignota is neither a dystopia nor a utopia—it’s a future that has taken two steps forward but one step back. It has a lot of things that feel utopian: flying cars, a 150+ year lifespan, a 20 hour workweek, a Moon Base, long-lasting world peace. Maybe 80% of the attributes of this world are the stuff of utopia. But it has a lot of things that feel dystopian: censorship, surveillance, “Reservations” (hello, Huxley), a resurgence of absolute monarchy, and the complete dissolution of our current political world. Gender is only one of many axes on which it presents a disorienting mixture of things we long for and things we dread. It’s not an easy read, not a comfortable read, not a safe read. For many (myself included) it’s a painful read. The more you love the good aspects of this future (and I love them dearly!), the more painful it is seeing the bad ones mixed in with them. I sometimes say Terra Ignota is the opposite of beach reading. And right now it’s especially difficult because, with only two books out, it isn’t finished, and a lot of things (especially what path forward this world will take to address its problems with gender) are absolutely unresolved.
It’s also a harder read, I think, than pure dystopia. When we read 1984, and The Handmaid’s Tale, and V for Vendetta, and The Hunger Games, we know these worlds are terrible. We the readers, the author, and the characters can all cry out together in one voice: “No!” Something like Brave New World is more difficult, because there, amid the things we find abhorrent, we are forced to admit that we would be happier, in a pure pleasure-center-synapses-firing-per-lifetime sense, if we lived in Huxley’s world than in our own. That’s a painful thing to admit. But Huxley’s world strips away so much we value more than happiness that we can still cry out together: “No!” But what if it stripped away even less, and gave us even more? (ADDENDUM: see my 2021 essay on hopepunk for an expansion of this idea.)
By most metrics of how we evaluate civilizations, the civilization in Terra Ignota represents the best era humanity has experienced in Earth’s history. It has no war, no poverty, no hunger, very little crime, very little disease, very little labor, long life, amazing toys and games, spectacular future cities, unprecedented political self-determination, no homophobia, no ecological problems or pollution, less racial tension, genuinely less gender inequality even though some lingers, and kids take field trips to the Moon. But it also has deep, deep flaws—not as deep as Brave New World, but deep. The series keeps coming back to a pair of questions, asked in different ways by several core characters: Would you destroy this world to save a better one? And its opposite: Would you destroy a better world to save this one? These aren’t questions about having two planets or two realities and blowing one up; they’re questions about history, and progress. Will the characters risk destabilizing this flawed-but-best-yet age of human civilization, risking the return of catastrophe and violence, in hopes of someday making an even better world? Or will characters try to prevent this society from changing, to preserve how nearly-wonderful it already is? Destroying the possibility of a better future world to avoid endangering this already very good one? These are questions no utopia or dystopia can ask—only a hybrid of the two.
So that’s Terra Ignota’s gender project in a (rather lengthy) nutshell. I hope everyone will enjoy reading on to later volumes where the gender pronouns are disrupted even more, presenting new challenges and instabilities, and where we get to see this future society come face-to-face with its lingering gender issues, and seek a good path forward. And I hope readers will be patient as the four books come out. Some novel series are episodic, each adventure completing before the next, but some, like this one, really are one project so complex it can’t be told in 140,000 words. It needs 560,000. The society of Terra Ignota will have to face its newly-unsilenced gender issues, and its solution cannot be stasis, nor can it be reversion to the old binary. But, just as real world reform movements are shaped by events—disasters, recessions, crises, wars—I want to show how this one could be shaped by events, and take a different shape depending on those events. And those events need a lot of pages to be told.
Thank you for reading, and I hope you will continue to read and enjoy Terra Ignota, but I hope above all that many of you will go on to write your own new works (fiction and nonfiction) addressing gender, and these ideas, and others. Because the biggest goal is that discourse continue!
(Want to see more recent discussions? See my guest post on Nine Bookish Lives, which invited me in 2021 to discuss Terra Ignota and the question of “a future that doesn’t see gender.”)
Hello, friends! Quick post today to say three things:
I am (barring emergencies) going to Worldcon in DC this December! It looks like my recovery/therapy should be just enough to try it, my first venture out to an event since the onset of the new problems, but the doctors are encouraging! It will be wonderful seeing people again!
I recently did some more guest blogging, as well as an online discussion for CUNY on world building and social science, along with my good friend Jo Walton and others including Henry Farrell, Paul Krugman, and Noah Smith. The links are below! The one I’m most excited by is the Hopepunk essay.
I recently did two AMAs, one in summer and one this week, and I thought I would re-share some of the most interesting questions/answers here for you to enjoy. Below I’m sharing two to start, touching on religion and utopia in Terra Ignota, and I’ll share more in the coming weeks. I hope you enjoy!
My ideal is when readers debate whether or not it’s a utopia, which aspects of it seem utopian and which seem bad or even dystopian. I intentionally made a mixture: it has a twenty hour workweek, a 150 year average lifespan, general prosperity, unprecedented political self-determination, you can live with your friends, there’s been peace for 300 years… and it has censorship, severe religious restrictions, weird silencing of gender, tension over land and rents, various political strife and prejudices, and other flaws. It’s wonderful how often a pair of friends will read the book and it will feel dystopian to one and utopian to the other. The silencing of religion makes some people say, “Yay! My super-religious parents would have to shut up and theocrats would be kept out of politics!” while the same makes others say, “Wait, I couldn’t have a Passover dinner or a religious wedding without state supervision?!” Similarly the silencing of gender makes some readers feel like it would be ideal, making everyone stop gendering each other and use ‘they’ for all, while other readers feel like suppressing gender expression would be terrible and prevent them from feeling like themselves. It’s often the conversations between people for whom the world feels great or very-not-great that get richest, something I intended, to help show how carefully we need to think about social change if we want to make a world that works for everyone. My real goal was to make a world which would feel to us as I think our present would feel to Diderot or Voltaire: some things amazingly much better, especially medicine and lifespan and daily tech; other things weirdly confusing, like (in the Enlightenment case) clothes that would make them think we’re naked all the time, and social class working totally differently; and other things depressingly familiar, so that Voltaire especially, who campaigned against religious intolerance, torture, and anti-vaccination movements (he was a smallpox inoculation proponent and fought with antivaxxers in the 1700s!), would find the continuation of those problems weirdly depressing. That mixture is what I was going for: better in a lot of ways, worse in a few, in others just weird and confusing. Since that is really what the future is likely to be to us.
Q: Why does everyone (in Terra Ignota) assume there’s only one god?
So, short answer: I don’t, and they don’t. This is a space where I think Mycroft’s own frequent discussions of Providence and a singular ‘Peer’ are overshadowing for you the details we learn about everyone else’s beliefs. Remember that Mycroft says in Seven Surrenders that he and Saladin were explicit atheists before his all-important encounter with you know who, and that while Mycroft has unitary ideas about Providence he also lenses them through Greek polytheism. Also recall that in the descriptions of sensayers, especially in Too Like the Lightning, it specifies that they consider atheism a belief system to be studied and discussed in all its pluralness and richness alongside the others, so sensayers are studying atheism among all the systems they study, and it’s just as diverse and complex as the others, with many different variant atheisms, all of which also enter the dialog with a sensayer. Having sensayer sessions isn’t about pushing people toward any particular belief, but about facilitating people having an examined set of ideas, and making sure that everyone encounters many different systems, including several variant atheisms alongside a variety of theisms. It’s also important to note that, apart from the King of Spain and those close to JEDDM, the only other people whose beliefs we specifically hear about are the ones JEDDM encounters in the Saneer-Weeksbooth bash’house on pages 196-7 of Too Like the Lightning, one of whom is exposed as Catholic and the other as a believer in karma; belief in karma is definitely not the same thing as monotheism, and Carlyle tells Bridger on page 118 of Too Like the Lightning, “People have a lot of different ideas of different ways that reincarnation and karma might work,” and again most (though not all) belief systems that include reincarnation and/or karma are not monotheistic. So there’s a lot of complexity, but what we see foregrounded most is the headspace our narrator is in, and the monotheistically-structured systems at Madame’s, precisely because they’re being *inappropriately* brought out in public while the properly-handled ones are silent, i.e. the many characters whose beliefs we don’t know because they’re correctly following the taboo. In other words, we are only seeing explicitly the religion that’s being done “wrong” by the metrics of the culture, and that one is dominated by Madame-influenced quasi-Catholic monotheism, but we see many, many hints of the others in different corners of the text. Meanwhile, if you’re interested in a more complex dive into polytheism & religious pluralness in the Terra Ignota future, then I think you will enjoy Perhaps the Stars.
Good news first: I have a new essay out in Uncanny Magazine, “Expanding Our Empathy Sphere Using F&SF, a History,” where I talk about my term ‘empathy sphere,’ meaning the collection of beings we consider to be persons coequal with ourselves, something which has historically expanded over time, and which is useful in thinking about why, when we read old utopias, like More’s Utopia, or early SF utopias, they often don’t feel utopian to us anymore if they don’t include freedom for groups that are inside our empathy sphere but weren’t inside More’s (like lower classes, women, certain races, clones, A.I.s, etc.). It’s a useful analytic term and one several people have asked me to write about, and I also give a history of how SF has helped expand this sphere over time. I hope you enjoy reading it!
Less good and more personal news next: my health has taken a bad turn, bad enough that I have taken medical leave and had to cancel my fall teaching. My medical team is still running tests (U Chicago has an exceptional hospital), and they don’t think it’s life-threatening, but it’s probably a circulatory system issue, with symptoms including severe dizziness, faintness, stumbling & falling, all of which make it very hard to do anything, including teaching. They’re still running tests, and are generally hopeful that things will improve, but on a scale of months, not weeks or days. I hope to be well enough to teach in spring. As for writing, I’m doing some, since one of the hard things in this situation is to keep my morale up and nothing nothing nothing makes me happier than writing, but it’s still being slowed, alas, though it may pick up a bit as the glut of start-of-leave tasks diminishes.
So I wanted to share some reflections on this.
One is that it is amazing how much of the resistance to taking medical leave came from me, not others. Even when friends, colleagues, disability staff at the university, and family were all encouraging it, even when I confirmed my employer’s policies meant I could do it w/o a bad hit to income etc., even when I was in the doctor’s office and the doctor checked a couple things and the first words out of her mouth were, “Well, you can’t work!”, even when the doctors took it so seriously they wouldn’t let me walk out of the office but insisted I wait for a wheelchair, I still immediately started protesting: “Well, if I teach remotely from lying down… but this course is special… but if I have X accommodation…” etc., arguing back even against such reasonable arguments as, “Your body is failing to deliver oxygen to your brain! You know what you need to do anything?! Oxygen for your brain!” In the end, it took many days, much encouragement, and many repetitions of exhaustion & collapses for me to decide that, yes, everyone urging me to take medical leave did indeed mean I should take medical leave. (Important principle: in teaching, all courses are special/unique; if you make exceptions for that you’ll never stop making exceptions.)
Where did my resistance to taking medical leave come from, when I was in the extraordinarily fortunate position of my employers, doctors, and family all being 100% supportive (a rare and lucky thing)? Partly it came from not wanting to let others down, partly from not wanting to admit to myself that it was serious, but a big part of it also came from narratives, from The Secret Garden, from Great Expectations, from a hundred other narratives, some classic some recent, in which chronic illness/weakness/invalidness is all in one’s head, or where it’s “overcome” by force of will or powering through the pain, so that even in the fortunate case where everyone around me was being supportive and great, those narratives of powering through were unconsciously deep inside me, feeding my resistance to accepting that my doctors and employer aren’t exaggerating when they say, “Don’t work.” This connects to something I discussed in my second-most-recent Uncanny essay, on the Protagonist Problem: that it’s very important to have a variety of narratives and narrative structures, and that it can do real harm if one type of narrative or structure dominates depictions of a topic. Some versions of this have been discussed a lot recently: back pre-Star Trek, when close to 100% of black women depicted on TV were housemaids, it did harm by reinforcing bad stereotypes & expectations; similarly today, when a very high percentage of immigrant characters depicted on TV are shown committing crimes, it feeds bad expectations. In the Protagonist Problem essay I argue that it also does harm when a large majority of our stories show the day being saved by individual special (often chosen one or superpowered) heroes, since it feeds a variety of bad impulses, including the expectation that teamwork can’t save the day, and feelings of powerlessness if we don’t feel like heroes; the argument isn’t that protagonist narratives are bad, it’s that protagonist narratives being the vast majority of narratives is bad, because any homogeneity like that is bad, just as it’s important for us to depict many kinds of people being criminals on TV, not a few kinds overrepresented and others erased.
Thus, for disability, we also have a problem that depictions of disability tend to repeat a few stock narratives, not one but three really, which together drown out others and dominate our unconscious expectations. One form is the disabled/disfigured villain, a holdover from pre-modern ideas about Nature marking evil with visible indicators (and virtue with beauty). Another is a person falling ill and dying, a tragedy, which ends up focusing on the friends and loved ones who help along the way, or who survive. Another is ‘inspiration porn’ (David M. Perry has great discussions of this), which has a few varieties but tends to focus on how heroic an abled person is for helping a disabled person achieve a thing (like The Secret Garden, where she gets him out of the chair) instead of on the disabled person’s achievements/experience, or to present “Look, a disabled person did a thing!” but in a weirdly dehumanizing way, the same way you would write “Look, this monkey can play chess!” All of these make people resistant to accepting the label disabled, since, even though it’s really useful once you have accepted it (I had trouble doing so for a long time), we associate it with being morally bad, being doomed, or being helpless and dehumanized.
The disability narratives most relevant to my recent situation, though, are the stories of ‘overcoming’ disability, where a person is either cured (through their own efforts or others’), or works hard and pushes through, so the disability becomes a problem of the past that has been left behind. This often-repeated narrative (present in fiction and nonfiction) encourages the attitude of seeing disability’s disruptions to life as temporary and surpassable. It means that, when I get a new diagnosis, my first thoughts, even this many years into having chronic illness, are always about how long it’ll be until I overcome it, what I need to do to get past it, the expectation that things will be normal by spring/summer/December/whatever. This often leads me to delay for weeks or months or longer taking steps to, for example, adapt my home to be more comfortable (like getting a lap desk so I can work lying down), or to make other changes that depend on expecting the condition to be here to stay. I think, as a culture, we really hate telling stories about illnesses and disabilities that are here to stay.
I remember a conversation with a friend once about a situation where a medication good at treating their particular condition was taken off the market, and the parents of a kid with the condition contacted my friend to ask how to advocate or find other ways to get more of the medication, and the friend had to keep saying no, that won’t work; no, you can’t get it; no, you really can’t get it; no, your doctor can’t write a special note, until finally they asked directly, “So what do we do now?” to which my friend answered, “Accept a lower quality of life.” That phrase crystallized things for me. I think in many ways no ending is scarier for us in narrative than accept a lower quality of life. It isn’t a one-time tragedy like death; we have good narrative tools to write tragedy, and to transition focus to the characters who live on, commemorate, remember. Accept a lower quality of life in a story means losing, giving up, surrendering, all the things we want our brave and plucky characters to never do, and then having to live with every day being that much worse forever. It’s neither a happy ending nor a tragic ending, it’s a discouraging ending, and we rarely tell those stories.
I vividly remember the first story like that I ever met. It was a James Herriot All Creatures Great and Small story, about a man whose family had been coal miners, who really wanted to farm, and bought a farm, and worked tirelessly to do a good job, and was a really nice person, always kind and earnest (unlike a lot of the characters in the stories), but then his cows got sick and James tried everything he could to cure them and it didn’t work, and then the farmer came to tell him, with a calm demeanor, that he was selling the farm and had always promised his father he’d go back to coal mining if “things didn’t work out” (coal mining, which in the 1920s-30s also meant a much shortened life expectancy). James, realizing how huge this was (accept a lower quality of life) despite so many efforts, said, “I don’t know what to say,” and the farmer answered, “There’s nothing to say, James. Some you win.” I still tear up just thinking of that scene, the cruel unspoken and some you lose applied to a whole long life-still-to-come, every day of which would be worse, and there was no other way. A big part of modern advancement is about avoiding there being no other way (offering insurance, social safety nets, appropriate grants), but it’s also an important type of story to tell sometimes, and one I really needed some examples of. Why? Because those stories, those phrases in my memory (some you win, and accept a lower quality of life) are not where I think I am now (I’m still working hard on treatments and therapy etc.), but I needed to have them in my palette of expectations, of things that could be the case, to help me plan. I needed those at the start of term to get out of the “But surely it’ll get better in a couple weeks if I work hard” mindset and into the better attitude of “The doctors don’t know how long this will last, I’d better plan in case it lasts a long time.”
If the only outcomes in our expectations are (A) powering through and it gets better, or (B) death/villainy/helplessness-forever, none of those archetypes will give us the sensible advice that it’s wise to plan long-term just in case there is a long-term thing that impacts quality of life. Because today a lot of those impacts can be addressed by adapting tech/stuff/habits. I put off buying a lap desk for 2.5 months this summer, struggling to work lying down, since I didn’t want to waste the money if I was about to get better. But having a lap desk and turning out not to need it is much better than needing one and grinding on without. I also put off adapting the area around my bed to optimize for work, and put off getting the new screen which finally today (Oct 7, I started wanting this in July!) got installed so I can have multiple monitors while lying down. I put off realizing that, instead of watching chores pile up expecting to catch up when I got better, the household needed to discuss and make changes to reduce the total load of chores (simpler meals, paper plates, self-watering planters, planning! Also: thank you so much Patreon supporters, you made my new lying-down desk and canes and such possible!!).
The some you win stories are extremely sad and shouldn’t become our dominant narrative, but they need to be in the mix, one color in the color wheel, to help people who do face disability weigh the odds better, and not think, well, in 90% of stories I know the person gets better, so probably I’ll get better, and this [desk/ screen/ cane/ adaptation] is likely to be a waste of money. Because you know what’s a good thing even if the end of one’s real life story is accept a lower quality of life? Accepting a quality of life that’s only 5% lower instead of 20% lower, because you’ve adapted your home/ routine/ desk/ fridge/ breakfast etc. to mitigate as much of the negative impact as you can. So here I am at what is probably the best possible lying-down desk, writing and producing more than nothing, but I sure would’ve produced more over the last few months if I’d done this sooner. And I also would’ve been a lot more willing to say “You’re right, I should take medical leave,” if I had believed my odds of recovering quickly were, say, 50/50, instead of, as narrative tells me, expecting that if I tried hard it was certain that I’d quickly power through (and that if I didn’t recover quickly that heralded either moral weakness, helplessness, or death, three things our minds work very hard to resist). A broader mix of disability narratives whispering in the back of my unconscious mind, telling me there might be many outcomes and I should plan for many outcomes, not just for the best, would have done so much good; that’s why we need variety.
As a coda to this discussion: chatting about it with Jo Walton, she pointed out that both my examples of accept a lower quality of life stories are nonfiction (Herriot’s is fictionalized from real life, the other just real life), and that after she and I first discussed the Herriot story she tried hunting far and wide for examples of that kind of story but basically never found them; what she often found instead was “a Caradhras, a mountain you can’t get over so you go under, never the end.” But recently she found several examples in the work of the extremely obscure and neglected Victorian writer Charlotte M. Yonge; it’s great to find some, but also to have confirmation from a voracious reader of how rare such narratives genuinely are.
Now, my other reflection is about academia rather than disability.
When I finally decided on taking leave I joked to myself, “For academics, ‘vacation’ means when you do the work you really want to do, and ‘medical leave’ is when you actually vacation.” But the reality is that even on medical leave I’ve been finding myself doing a minimum of four hours of academic work a day, sometimes much more. It has been an interesting chance to see both which specific parts of academic work absolutely can’t be cancelled or handed off to others, and just how much time academics are required to give to things which are neither teaching nor research. Letters of recommendation wait for no man, ditto letters for other scholars’ tenure files, and mentoring meetings with Ph.D. students about their urgent deadlines; it’s one thing to set aside one’s own agenda but another to neglect things that other people really depend on. So here I was on full disability leave, with all teaching and research obligations on hold, something my university was quickly able to give, and yet I found myself working intensively from waking until dinnertime and still falling farther and farther behind, even when the only work I did was letters of recommendation and inescapable paperwork. In other words, at least when rec letter season is upon us, the paperwork and mentoring parts of academia are pretty close to a full 9-to-5 job even without teaching or research! And that is for someone tenured at U Chicago, one of the most privileged teaching positions in the world, with a light load at a very supportive university.
As one friend put it, “I’m not a teacher, I’m a full-time e-mail answerer,” another, “I teach for free, it’s the grading and admin they pay me for,” another, “You can either produce research or keep up with email, but you can’t do both.” We need to factor this in as we think about how academia functions and what reforms to push for, and into how we teach Ph.D. students, since things like email skills and time-management skills are absolutely essential when teaching and research need to be balanced basically like hobby activities squeezed into the corners of time we can scrape out around the full-time job of admin. It doesn’t have to be this bad. Possibly the problem was best summarized when I was talking to people about a high-level search committee (i.e. hiring at tenured full professor level instead of junior level) and they said they weren’t going to ask for letters of recommendation until they got to the short list of finalists and would only ask for letters for those few, not everyone, “Because we want to respect the time of the important people writing the letters.” Subtext: we don’t respect the time of the less-high-status people writing the many hundreds more letters needed for junior hires. I genuinely think every academic field would produce another 80+ books per year if we just switched to requesting rec letters only for finalists instead of all applicants, and that’s just one example of a small change. In sum, anyone near academia needs to acknowledge that the real pie chart of academic work is the one depicted below, that we need to plan for that and remember that small changes to self-care or workflow (just studying up on Gmail tags and shortcuts, for example) can make a huge difference to reducing the unreasonable load and avoiding burnout, and above all that we should always remember that phrase, respect the time of the people doing X, when we plan how to organize things (syllabus, meetings, forms, applications, committees, etc.). And I’m sure a lot of this applies far beyond the academic world as well.
Meanwhile, between recommendation letters I can’t get out of writing, plus disability paperwork, doctor’s appointments, and working on getting my home adapted so my quality of life is diminished by a little instead of a lot in my present state, I’m definitely working-rather-than-resting more than 40 hours a week, and that’s a pretty typical illness experience. It’s good to know that going in: accept it, plan for it, carve out time for the inescapable tasks, and think of adapting the home as itself time-consuming (or as something we should ask for help with!). Otherwise it’s very easy for a week, or month, or three months of ‘rest’ to be not at all restful, and the hoped-for ‘recovery’ to remain elusive. I still have three months of leave before me and I’m definitely leveling up at how to make my leave actually be leave (delegating, adapting things, finding others to write letters when possible), but learning how to make leave actually be leave, and rest actually be rest, is a skill one must level up at, and I think if we understand that it’s a skill (and perhaps tell stories about it?) we’ll be better at realizing we need to actively work to learn it when we (or loved ones) need it.
So, for now, I’ll be focusing on rest, and doctor’s appointments, and home adaptation, and things to keep my morale up, and writing (keeps morale up!), and getting ready for the release of Perhaps the Stars (!!!!!!!) but I hope these reflections are helpful, and many thanks to everyone who’s been supportive & helpful throughout. I’ll see you soon when I’m either (A) better or (B) fully adapted to a partly-but-minimally-lower quality of life.
First, I’m excited to announce that you can now pre-order the first segment of the new cast recording audiobook of Terra Ignota that’s being done by Graphic Audio. I’m really excited about this new audio set, which is doing the whole series broken down into volumes, so this first release is the first half of book 1. I’ll write about it at greater length soon, but what I love is how the different voices with their different accents make you so much more aware of the global/international nature of the characters and setting, and the amazing director Alejandro Ruiz worked with me on some really exciting experiments with gender and casting, casting a lot of roles against what one might expect, so that the voices and physical descriptions and pronouns are all mismatched, enhancing the way Mycroft’s strange use of gender in the narration disrupts the reader’s perception of character gender, and inviting the reader to reflect on how perceived gender affects our feelings toward characters. We also got to do some really great representation in the casting, including not only race and nationality, but also a fantastic nonbinary performer doing Sniper and a brilliant trans woman doing Carlyle. I’ll reflect more later on, but I’ve stayed up irresponsibly late more than once, unable to stop listening to the audio files, so if you enjoy the books and enjoy audiobooks I think you’ll love them! (Though the best of all possible Terra Ignota experiences definitely also involves listening to the Derek Jacobi audiobook of the Fagles Iliad right before you read book 4…)
Meanwhile… Why I Care About Barbie’s Career of the Year:
This topic is very far from my usual bailiwick but important in its odd way, expanding on a Twitter thread from 2020. I am not, nor have I ever been, a Barbie collector, but I find the Career of the Year series fascinating as a metric of public attitudes toward feminism. In the broad spectrum of feminist discourse, the fringes and the harsh or stinging voices (on both the progressive left & the conservative right) are often the loudest, making the Mattel Barbie team an informative contrast.
Generally Mattel’s team wants to present Barbie as a feminist trendsetter but in a centrist way, a model of forward-thinking but non-controversial feminism, and it’s fascinating to watch that metric evolve.
Mattel knows that what it includes or excludes in the Barbie line gets attention and has a political impact, and it knows what it’s doing and who it’s offending or pleasing, in terms of both profit-seeking and messaging, when it creates things like its new trans-friendly nonbinary Creatable World doll kits which make gender-mixing easy:
But while producing something like that means Mattel is taking a stand in one sense, it’s notably not in the main Barbie line. The Barbie line itself tends to be a bit more cautious, especially with Barbie herself.
Since the Barbie Career of the Year doll is designed to be the most discussed, and aims to make an impact in its claims about the attributes of an ideal female role-model, it is a fascinating reflection of what a group of decision-makers who are almost all women feel is the right focus for their annual feminist-yet-centrist message about what girls and women should aspire to. The bodily diversity of the Barbie line has been growing steadily, as shown in the image below of the 2019 line with its range of body types, racial characteristics, and disability representation, but the politics of careers is fascinating separately.
The 2020 discussion (slightly tidied w/ 2021 addition at the end):
Barbie’s 2020 Career of the Year is (for the first time) not a single Barbie but a team, a Political Campaign team featuring 4 dolls: Candidate, Campaign Manager, Fundraiser, and Voter, with diverse race and body types. Interesting to compare to past Career of the Year Barbies.
I’ll give the list of past ones first, then some analysis. Barbie had already had many earlier careers, including astronaut, president, businesswoman, and others, but the formal Career of the Year series launched in 2010:
2010 = Computer Engineer
2011 = Architect
2012 = Fashion Designer
2013 = Mars Explorer
2014 = Entrepreneur
2015 = Film Director
2016 = Game Developer
2017 = no Career of the Year doll for 2017 that I can find
2018 = Robotics Engineer
2019 = Judge
2020 = Political Campaign Team (four dolls)
(2021 = Music Producer, not yet out when I wrote this thread)
Barbie has had a lot of careers over time, including earlier iterations of astronaut and president, & her 60th Anniversary Career set has astronaut, firefighter, soccer player, airline pilot, news anchor, and “political candidate.”
The WSJ did a great piece on Barbie’s history as a political figure back in 2016, when Barbie did a version running for president with a female running mate, clearly referencing Hillary’s campaign.
2016 was the 6th presidency-focused Barbie, making her political career thus:
1992 = President
2000 = Presidential Candidate
2004 = Presidential Candidate
2008 = President
2012 = “Barbie for President”
2016 = President & Vice President candidate pair
2019 = Barbie 60th Anniversary Career series repackage of the 2016 presidential candidate doll in a different box w/o her VP running mate & with lighter skin & straighter hair. Fascinating.
2020 = Campaign TEAM and NOT SPECIFICALLY PRESIDENT
Note that 1992, 2000, 2004, 2016 & 2020 were all candidates, while 2008 & 2012 were Presidents, i.e. already victorious rather than running, an interesting choice for the window right after Hillary lost to Obama in the '08 primary.
The 2016 and 2020 political Barbies have variety in skin tone and hair color, and 2020’s has variety of body type as well, in line with Mattel’s recent changes, whereas 1992-2012 are all distinctly the original blonde Barbie moving into a political career.
But no earlier president or presidential candidate Barbie was in the Career of the Year series, which is the most visibly political moment of Mattel’s year, the Barbie choice they expect to get the most discussion and spark the most newspaper coverage etc.
Career of the Year started as something of a disaster in 2010 when the well-meaning Computer Engineer Barbie, winner of a voting contest to pick the first Career of the Year, was launched w/ a badly-thought-through accompanying book which focused on her repeatedly messing up and needing male programmers to fix her machine (even to get rid of a simple virus!) and to turn her concept into a game.
When Architect was next (2011) I remember thinking about the fact that (at the time) when you looked at lists of college majors by expected salary, architect was usually listed as the highest-paid major for women.
In fact the story is really cool: this great article discusses the campaign for architect Barbie, the effort to convey power via her glasses & hardhat (which she never wears in the photos), & the experiments presenting her to girls to make sure the doll's professional skill went unquestioned, unlike her computer engineer predecessor.
2013 Mars Explorer was the 1st mission-specific space Barbie though there had been several astronauts.
She was pinker than average though less sparkly than average, and accompanied by many science facts. Below are three earlier astronaut Barbies, for contrast:
2014's "Entrepreneur" is strangely vague, forgettable, and was much mocked at its release with headlines like "Entrepreneur Barbie will Inspire Girls to Be Vaguely Ambitious." It was very well researched underneath, made in consultation with some major global feminist leaders like Reshma Saujani, the founder of Girls Who Code, but it struggled to get a clear concept across.
The vagueness of Entrepreneur Barbie for me is an exposure of the strained path-to-wealth archetype in our society, since it's so much about networking, pitching, acquiring companies, buying out rivals, moving money around rather than making things; hard to describe on a box. The fact that there's no comprehensible clear thing an entrepreneur Barbie would do or make, other than have money and move money around to make more money, is an example of how hard it is to communicate to kids how power and money really work (and how nonsensical it often is).
2015 Film Director was a clear response to discussion happening at the time about how few female film directors there were in Hollywood. She did come with several types of hair and skin tones (version options on some of these are hard to trace).
2016 Game Developer was a direct effort to redo and recover from the mistakes of the 2010 computer engineer. She (left) has one of the least pink, least feminine outfits ever on a Barbie, a silver (not pink) laptop, and much more technical info on the box. The 2010 doll is on the right.
Here you see game developer Barbie next to her unsuccessful software predecessor:
One Casey Feisler on Flickr did this great compilation comparing the packaging for both, plus the 2018 Robotics Engineer Career of the Year Barbie. Note the new focus on precisely what she does herself, not leaving it to colleagues.
The robotics engineer doll was very similar, still glasses and a laptop, but notable for the black variant being used in publicity images a lot more, almost 50/50 with the white version.
I'm still trying to figure out why there appears to have been no Career of the Year Barbie for 2017. I've looked and looked at this strange gap between the two very similar engineering dolls of 2016 and 2018. I'd love for someone to solve the mystery. It's worth remembering that Mattel had very excitedly made their all-female presidential ticket, the President & Vice President pair, in 2016 and seemed really invested in Hillary's campaign to be the first female president – was it a morale thing that slowed them down for 2017?
Regardless, after 2018’s robotics engineer, Mattel exploded into the super political with 2019’s Judge Barbie, a clear and extremely not-neutral reference to the activities of RBG on the Supreme Court, and Republican-led stuffing of the courts. And as you can see in the image below, the 2019 judge Barbie, like the presidential set from 2016 and the two engineering Barbies, actively spotlighted its increased diversity in hair color and skin tone in its media:
Which gets us to 2020’s Career of the Year team, the first team and, so far as I can tell, the first politically active Barbie that isn’t focused on the presidency specifically, but could be running for congress, senate, local office, anything.
The distribution of race and body type was clearly carefully calculated, with an African American candidate, a medium-skin-toned POC-looking voter (could be Latina, First Nations, many things), and Mattel’s new heavier body type for the blonde in the role of fundraiser. It’s about teamwork, both the idea that a successful campaign requires many people beyond the candidate, & about the importance of many kinds of races, continuing Judge Barbie’s turn toward branches of government beyond the Presidency.
So many Barbie careers are about celebrity (actress, singer, rapper, princess) & the Presidency is a celebrity position (more so under Trump) so the break with celebrity & focus on non-famous staffers & voters & less spot-lit races is a bigger change for Barbie than it may seem. Mattel’s goal is clear, their contribution to the turn-out-the-vote movement, but I think the attention to teamwork and the importance of non-celebrity people, of the people who aren’t the center of attention, has a potential power beyond the political.
Careers of the Year have always been the one in the spotlight: the director, the architect, the designer, the one who steps on Mars, the president, with little discussion of being on the team, or the fact that movies, buildings, Mars missions are teamwork. So after six presidents or presidential candidates (and one VP) and many other Barbies-in-the-spotlight I hope this teamwork focus will help girls feel like they’re powerful even if they aren’t on the stage, in the spotlight, or in charge. A good message.
My 2020 summary thought: Keep it up, Mattel! This year’s team is great, let’s see more Career of the Year teams! Design teams, surgical teams, the Mission Control team, crisis intervention teams, pharmaceutical development teams, publishing teams (author, editor, publisher, publicist)!
My 2021 addendum: Barbie’s 2021 Career of the Year, music producer, is less remarkable than the last two in many ways, another iteration of Barbie with a pink laptop and headphones, which seems to be Mattel’s signature for Barbie-in-tech, though this one also has the music levels slider board:
Yet there are some interesting elements. Her unnatural hair colors are not a first in the career line, and are something Mattel appears to associate with tech as well as with music, but I find her ripped-knee jeans notable, since no earlier career Barbie wore anything quite so casual, except for the "voter" in the team set. Since this doll was certainly in development in 2020, that likely reflects the advance of casual-is-okay-for-work ideas in fashion in the age of work-from-home. That 2021's doll is not a team does make sense in a world where work from home separated us so much, but it will be interesting to see if the solo Barbie continues to be a pattern. It is neat, though, seeing them once again showcase a job which highlights the fact that media is teamwork, i.e. the producer not the rockstar, similar to when Barbie the Film Director in 2015 directed girls' attention to a different type of power in Hollywood from the many movie star Barbies of earlier decades. In a sense it's a job which showcases teamwork even while alone, and thus very apt for 2020/2021, and perhaps a good sign for Mattel continuing to think about teamwork and plural agency even in their solo dolls. And the fact that it got much less media attention than the judge or political team may mean that the forces of capitalism step in to encourage Mattel to try something bolder next year–we must never forget the $$ side of commercial political messaging.
So, what does the Career of the Year sequence show us about Barbie as a mark of centrist feminism? A few things. One is that women-in-tech is definitely a thing, far more in the minds of the organizers than women-in-STEM, since we haven't seen biologist Barbie or epidemiologist Barbie showcased, only several iterations of tech Barbies, including software and hardware. It also shows, through things like entrepreneur Barbie and architect Barbie, that sometimes they look a lot at research, especially about income and which careers are high-paying, and think it's important that Barbie encourage girls to go into high-paid professions, not just exciting ones (beloved-yet-underpaid careers like teacher and nurse have been frequent Barbie careers but not showcase ones). They also sometimes run into challenges in communication, i.e. 'entrepreneur' is a very important concept but very difficult to communicate in a doll via clothing and accessories, as is true of many careers.
Several of the career Barbies–notably game designer and music producer–have been major steps toward more casual clothing, which is a not insignificant message when we think of the target market largely including middle-class suburban mothers (parents buy the toys more often than kids, after all) who are thus expected to consider ripped knees and wild hair a respectable image for girls to aspire to. The increase in tightly-fitted-yet-somewhat-ungendered clothing, which reached its peak with the carefully-planned game designer doll, is also notable. Recalling how much fashion pressure lingers in business, how many employers still expect makeup and highly feminine dress for all women, the dolls' statement that sneakers, jeans, and a shirt and jacket whose only feminine coding lies in the tightness of their fit and a small amount of pink on the shirt count as professional wear is a genuinely significant change. That 2021's doll is a step more feminine in coding even as it is a step more casual is interesting when put in dialog with gender and transgender issues becoming such a hot topic in the past few years.
And, of course, we saw with judge Barbie and the political candidate team Barbie that a lot of people who consider themselves politically fairly neutral/centrist, including Mattel, felt that the wake of Trump's election and the midst of the authoritarian surge of 2016-2020 was an important moment to step forward, become more active, and, for the first time in Barbie's history, take a semi-overt political stance, since celebrating judge Barbie in the midst of so much focus on Ruth Bader Ginsburg is not explicitly pro-Democratic-party, but it's extremely clear which way it leaned. Many organizations that strive for party neutrality, from Mattel to the ACLU to the science journal Nature, felt that Trump's run for a second term was the moment to use that history of neutrality for an important end, since breaking a multi-decade string of never endorsing one party over the other makes the moment when one does speak out that much more powerful. That 2021's doll is far less political, except for being pro women-in-tech, raises the question of whether we should view the renewed projection of party neutrality as a happy return to normal, or as a scary sign that the wave of sudden political engagement sparked in 2016-20 is fading again, and that voter turnout may wane with it.
In sum, since news and social media both tend to magnify radical voices on both sides, things like Mattel's carefully-calculated political stances can be a valuable window on the often-quieter middle, though whether it really is the middle or just an attempt to claim "this should be the middle!" as the real middle moves left and right is another question. And the fact that the fashion-focused "Fashionistas" line and new sets like the glamorous Bond-movie style "Spy Squad" Barbie set persist alongside our career Barbies also shows that the extremely gendered Hollywood femme fantasy side of Barbie is still just as strong in the moderate center of this particular feminine ideal as all the politically-progressive versions are (if not stronger, since the fashion-focused Barbie lines are usually much larger than the career sets). Of the nine dolls in the 2016 splash ad below, one-third are narrative-free fashion-consumers, one-third Hollywood fantasy babes, and one-third career role models, a telling microcosm of the imagery proportions kids are pelted with. Ongoing food for thought.
My good friend and fellow author & history lover Jo Walton (more on her below) and I interview fellow writers, historians, researchers, editors, and other friends, talking about the craft of writing, history, food, gelato, and other nifty topics, with some episodes of just me and Jo having the kinds of intense writing or history discussions we enjoy. You can listen for free on Libsyn, on Apple Podcasts, on Spotify, and on YouTube. Those who support me on Patreon get new episodes early (and new ExUrbe posts early too).
Sample Episode: Speculative Resistance with Malka Older
The episodes in this first season are modeled on the kinds of panel discussions one has at science fiction conventions, and are long (an hour plus), since our interviewees are all so interesting! Episodes of this season will come out monthly, with occasional bonus episodes; those are the ones with just me and Jo.
For those who aren’t familiar, Jo Walton is a voracious reader in a huge number of genres with an encyclopedic knowledge of the history of genre literature, as well as the Hugo and Nebula award-winning author of more than a dozen novels including Among Others, and an F&SF critic, author of What Makes This Book So Great and An Informal History of the Hugos. Jo and I travel a lot together when I go to Europe for research, and we’ve had such wonderful conversations over the years connecting dots between our shared interests in history and the writer’s craft that we wanted to share such discussions for more people to enjoy.
Interviewees in the first season (to give a sense of the range) include Malka Older, political scientist and author of Infomocracy, Jonathan Sneed, a Mars astrogeologist & astrobiologist, Ruthanna Emrys, a city/state planning & politics expert and author of the Innsmouth Legacy series, Mary Anne Mohanraj, a wonderful writer friend and creator of Sri Lankan cookbooks, Max Gladstone, author of The Craft Sequence and a favorite friend to discuss the craft of writing with, David M. Perry, journalist, activist, and Medieval historian, Emily Cambias, game writer & editor/writer for Cricket, the children's magazine company, and another writer friend, Naomi Kritzer, author of Cat Fishing on CatNet.
Second, I’m Teaching an Open-to-All Online History Course This Fall!
I've long wanted to find a way to open up my teaching beyond the university, so through U Chicago's Graham School continuing education program, and taking advantage of the Zoom skills we've all developed this year, I'm teaching an online course this fall on Saturdays, 10 AM to 12:30 PM Central Time, called FFAC10100 Monks to Voltaire: European Intellectual Transformations 1200-1750. It's a version of a course I've taught for undergrads which starts with late Medieval thought and looks at four successive major revolutions in European ideas: scholasticism, then Renaissance "humanism," then the 17th century's "new philosophy" or "scientific revolution," then the Enlightenment, presenting them in continuity and showing how they didn't replace each other (as summaries often make it seem), but rather joined each other, continuing to thrive side-by-side. I'm aiming at a variant on a "flipped" model of a course, in which I will share the lectures as text transcripts people can read, and then the class sessions can be entirely Q&A digging in more intensively. Anyone who's interested can register for it, and you can learn more at the discussion I'm going to have about it with the Graham School staff on August 24th, which you can register for here: Conversations @Graham, August 24 | UChicago Graham
Third, My Introductions to Gene Wolfe’s Book of the New Sun
Tor invited me to write introductions for the new Tor Essentials editions of Gene Wolfe's four-book Book of the New Sun, collected into two volumes, Shadow & Claw and Sword & Citadel. It's hard to express how formative these books were for me, staggeringly brilliant and ambitious SF which showed me how high I could aim, how deep world building can reach, and how complex a narrator can be. I haven't felt nearly so nervous and impostor-syndrome-y about a project in a long time as I did sitting down to write about these books, so seminal both for the history of science fiction and for me, but I'm really happy with the resulting essays, so if you'd like to read or reread (these are books designed for rereading!) some incredible SF with a little bit of my guidance, I can't recommend them enough, especially to anyone who enjoys Terra Ignota.
Speaking of which…
Fourth, a new Terra Ignota audiobook series is coming from Graphic Audio
I'm extremely excited for this project, now up for pre-order. I'm planning to do a blog post about them soon, but while the Recorded Books audiobooks have a single actor, these are a cast recording, with many different performers playing the different roles, and it's amazing how different that is in terms of things it can achieve. At my suggestion we're trying a somewhat radical experiment: the recording begins with a note from Gordian saying the performances have been made in line with Gordian's recommended genderblind casting practices, and then the casting of the parts is largely unrelated to the gender of the performers, so voices of all kinds are playing characters of all kinds, letting performers who never usually get to do a booming-voiced old man or a delicate child exercise those parts of their ranges. This adds an amazing additional layer to the book's complexly-worked gender confusion, layering on top of how Mycroft's use of pronouns often doesn't match physical descriptions of bodies, and now it won't match voices either, further encouraging the listener to question all Mycroft's gendered language and to examine even more how perceived gender affects the way we judge or react to different characters. I'm also especially excited that, against this backdrop of intended ambiguity, the amazing casting director Alejandro Ruiz met my requests to be careful about representation, and found brilliant trans woman Kay Eluvian to play Carlyle Foster, and a nonbinary performer, Taylor Coan, to do Sniper.
Alejandro and I are also both excited about how diverse the cast is in terms of race and nationality, including a performer from Mumbai playing Bryar Kosala, and we're doing some double-casting, giving multiple roles to the same performers to encourage the listener to think about and compare them (Ganymede & Danae for example), creating intertextual links between different characters, modeled on the way the inestimable Jane Howell did it in her direction of the Henry VI sequence for the BBC Shakespeare project, my very favorite work on film. These recordings will be slightly abridged, as Graphic Audio usually does, adding some music and special effects and cutting things like "he said" "she said" or some of the descriptions designed to remind readers of who characters are or where they're from, since hearing a Mumbai accent will by itself convey the same reminder. It's been an absolute thrill working on the productions, and I couldn't be more excited for the new layers they're adding to what the books are already aiming at in creating a truly global-feeling cast of characters, and stimulating questioning and introspection about gender.
Fifth and last, the publication of Perhaps the Stars is finally close!
The fourth and final volume of Terra Ignota comes out October 19th, and it’s really for sure this time, it has a cover, and the final most finalest final page proofs are done, and all the Latin and Greek and other special characters are taken care of, everything! It’s up for pre-order on Bookshop.org and Amazon and Barnes & Noble and at all sorts of local indie bookstores (please support them if you can!). It may not feel like news that a book which has been planned for months to come out in October is actually coming out in October, but it’s hard to articulate how many invisible steps there are on the back end, including a somewhat-COVID-related continent-wide shortage of printing press time which is making book printers everywhere struggle for time spots to actually get the physical book made at the factory, pushing a lot of things back to 2022…. but not this thing! I’ll definitely be blogging more about book 4 in the coming months, but short version, there are only 2 chapters in the whole of book 4 which, from a craftsmanship point of view, weren’t harder than the hardest chapter in any of the earlier three books, and I can’t wait to share it with everyone!
Hello, readers! The past few weeks have been very intense for me with the 2021 run of my Papal Election Simulation, but I wanted to post some links and announcements about a couple of free online talks, two recent and two upcoming.
One is today (May 13th) at 5:30 PM Central time on “The Apocalyptic Renaissance” for the U Chicago Smart Museum of Art’s fabulous new exhibit “Lust, Love, and Loss in Renaissance Europe.” I’ll be presenting some material from my book in progress “Why We Keep Telling the Myth of the Renaissance.” Sign up for free at this link.
The second is a talk on Saturday at 1 PM Central time for the Chicago Women's Alliance (also free and open to anyone) on "Who Has the Power to Change History?" in which I'm going to talk about my teaching and how I use historical reenactments and role-playing to teach better ways of thinking about power and what really controls change (individuals, great forces, both?), and thus to encourage feelings of empowerment and activism. Several former students will be joining me to talk about how the role-playing elements of my teaching changed the way they think about history and power and how they apply that in their activities. I'm really excited to discuss the question with my students there too. Sign up for free at this link.
And two other recent things I did which are now online:
I did a fun interview for History Hacks podcast, about The Inquisition(s) and the history of censorship (drawing on my research).
And I did a video lecture for the Paideia Institute about Recovering a Lost Classic in the Renaissance, with webcam footage of some real 16th century books and manuscript samples from my book history teaching collection. So excited to have a webcam I can do that with; hoping to do more like that this summer!
NOTE: An unfinished draft of this post was accidentally published for a little while on March 2nd-3rd before it was actually ready; here's the finished version:
Hello, wonderful readers! What I have to share today is not a polished essay, but the transcript, slightly cleaned up but mostly as given, of a talk I gave recently at a science fiction convention, Capricon 2021, whose theme this year was “Making the Future We Want”—a great topic for reflection. In the talk I look at our ideas about who has power to shape the future, stringing together short precis of several different articles and such that I’ve been working on lately. Each little precis is less polished and evidence-packed than the long versions (links & citations provided where I can), but I think the combination, though compressed, has a useful flow and brings together some points that I hope will help people reflect on how our narratives about history shape the power (and powerlessness) we believe we have. I hope you enjoy!
As another treat, here is a wonderful video made by my friends at the Paideia Institute which recently invited me to give a talk on the process of recovering a classical text in the Renaissance, with live examples of me showing Renaissance era printed books and manuscripts thanks to the miracle of webcam. I’m hopeful I’ll be able to use the same webcam system to do more rare books demos in future!
Talk transcript:
I was struck by how Capricon 2021 (Capricon is a fabulous F&SF con! you should all go to it!) had a theme this year—"Making the future we want"—which overlaps with some of the history work that I've been doing, so I thought people might enjoy a bit of a serious talk on a very interesting question. Some of you may have read my blog post "On Progress and Historical Change" which gives a history of the concept of progress, and this talk will overlap that a bit. But what I want to talk about here is the question of how we imagine how society and history change, who we imagine has control over that, how much control we imagine we have, and how that has changed over time. And our feelings about how much power we have, or how much power we feel other people have over change in real history, are often very different from what the historical record suggests. So I'll be talking about some of my work as a historian and what it shows about how culture changes, vs the concepts that we usually tend to have about that.
I'm what's called an intellectual historian, which means I focus on what we think is true. This is related to history of ideas, so I study concepts like the concept of progress, the concept of atoms, the concept of rights or equality, the concept of atheism, not just what atheist ideas existed but also what people at different points in history who didn't consider themselves atheists thought an atheist should or would be like in terms of ethics, the personality of the imaginary atheist whom Thomas Aquinas is arguing with in some of his writings, for example—these are examples of things an intellectual historian studies. But intellectual historians also look a lot at worldview. If a material culture historian is working on reconstructing the clothing of another time period, and a food historian is working on the diet and recipes of another time period, and an art historian on the architecture, and a historian of science on the technology or weapons of another time period, the intellectual historian is trying to get at the mindset of that time period. What world do those people live in, from their own perception, from what they believe is true? What is the potential of their world, what do they believe is true about how it works or how it changes, how does it differ from the world we believe we live in? Someone from a culture which believes that disease is caused by astrological influences instead of by germs makes decisions about medicine and health as if living in a different world from the one we believe we live in.
So, for example, we in the present have a very particular expectation that every generation’s experience will in most fundamental ways be different from the experience of the generation before, that there is a constant process of change, progress is one of several names for this, or a name for one element of this process, but we expect, for example, that in two generations while some things will be similar, many things will be very different. For example, very few of us expect that our grandchildren or great-grandchildren will still live in the same house we live in, and use the same teapot we’re using. In contrast with, let’s say, a medieval European figure, who is very likely to have the expectation that their descendants will live in the same house for a number of generations, and who generally has the expectation that change may come if there’s a war, change may come if there’s a great king, change may come if there’s a bad king, change may come if God curses the land, but the change isn’t inevitable, the way we think of it being inevitable in our own period.
So, in different moments in history people have had different ideas about how constant change is, what causes change in the human condition over time, what aspects of the human condition change over time, and which people, if any, have power over the way the human condition changes over time.
I'm going to discuss briefly at first the origin of the concept of anthropogenic progress, and I'll come back to that term in a moment. Then I'm going to zoom in to a very, very small microcosm example within my own field, in my own period of specialization, which is Renaissance Italy, which shows some of the problems generated by the disconnect between the way we imagine the world changing and the way it really does, and then from that microcosm zoom out again to the larger question.
(If you’ve read the longer version of this in On Progress and Historical Change pretty recently, you may want to skip the next couple paragraphs down from here to the italicized note, by the picture of a pretentious Roman orator on the rostra, or you may prefer to keep reading so the content is fresh)
So, as I often say when I'm beginning this discussion, in about 1620 Francis Bacon invented progress. What I mean by that is that Francis Bacon, a British intellectual and statesman, published the Novum Organum, and some other works, in which he argues, pretty much for the first time, not only that it is possible for human beings to change the human condition, but that they can do so intentionally and systematically through science—this is the birth point of the modern scientific method, which is to say collaboration among groups of people sharing knowledge to work as a team to gradually expand human knowledge. And remember, Bacon is the origin of the saying "knowledge is power," by which he means power over Nature, not individual power; he means human beings collectively having more power when we understand more of what he calls "the secret motions of things," or what we might call how diseases work, how physics works, how electricity works, etc. By "knowledge is power" he means our power to command electricity, our power to cure diseases, these things that in 1620 he hoped science might someday achieve. And Bacon (and I'll get to why in a moment, and also what similar concepts existed before in a moment) articulates for the first time the suggestion that if human beings collaborate as a team to observe nature, to do scientific experiments, to double check (we would say peer review) each other's experiments, to publish this knowledge and share it, and to collectively try to expand humanity's scientific understanding, then every generation thereafter will be a little bit more powerful in terms of how many diseases we can cure, how well we can preserve food, how well we can grow food, and thus that every generation's experience will be a little better than the generation before.
Bacon characterized this in Christian terms as an act of charity, that to be a scientist is the ultimate act of charity because it gives the gift of a happier life to every generation and every human that will be born after you—a very interesting root for the scientific method, which many people are used to seeing presented as an enemy of religion, as opposed to mandated by Christianity, which is the way he presents it. And he suggests that by conducting research as a team and sharing it, it can intentionally come to pass that every generation's experience is different.
Now, we would say that every generation's experience has been different since the very beginning of humanity, that progress, while it might accelerate later on as technological advances accelerate, has been constant, and the experience of somebody in 1500 is different from that of somebody in 1400. And indeed, we would argue that human action, discoveries, the development of states, gradual processes such as the centralization of government or the development of agriculture were always causing progress. People were inventing the moldboard plough and preservation techniques, gradually breeding corn to be a slightly better and slightly better crop until it became the strange super crop that it already was before modern scientific genetic meddling. That progress was always there: anthropogenic progress, meaning human-caused progress, progress that is the result of human beings taking action.
But nobody had described this as a phenomenon before Bacon, and Bacon and his peers in fact believe that there isn't anthropogenic progress. What kinds of progress or change do they think there are? One dominant idea in Europe at the time, and indeed for many centuries before, is that the primary changes experienced by humans are part of a plan scripted out by God, are Providence. Sometimes God, or Fate, decides that it is time for an Empire to rise, because God has a plan for it. Now it's time for the Empire to fall, because God has a different plan for it. God decides to send a great king onto the stage of life, or God decides to send a bad king to teach people the moral lessons that tyrants teach. That is the dominant model of what people think causes change in this period, that it is an external decision made by a divinity or plan whose intentions are largely didactic, largely educational, that the purpose of sending a good king or a bad king is to send people moral examples for their personal moral education, to increase the likelihood of human souls going to heaven.
If we move earlier than that, to Antiquity, there are ideas of what I would not quite call progress or anthropogenic progress, but there are ideas of development that the Epicureans discuss, for example: that just as an organism might develop from a juvenile state to a mature state, so similarly planets develop over time and human civilizations develop over time. In Lucretius's De Rerum Natura, our longest surviving classical Epicurean text, he describes the idea that Earth undergoes a process similar to the lifespan of an organism, that, for example, only in the earliest days, when Earth was young and fertile, did new species come into existence, and that only those species that were suited to their environment survived until the present day. It sounds very close to Darwin. Lucretius goes on to say that no new species are created any more because Earth is old and has undergone menopause and no longer has giant placentas growing out of the ground everywhere, which is where animals came from in the first place. So before you give him too much credit for the survival of the correctly adapted idea, which he does have, and which is a very sophisticated idea, there are plenty of others that we chuckle at. But he describes the idea that early on humans lived peacefully in nature, and they ate the fruits that they found and they slept under trees. You can recognize this as one of the roots of pastoral man or of a golden age. But then humans gradually discovered luxuries, gold, and treasures. And when they did that, and there were limited supplies of luxuries, then people needed to defend them, and they developed weapons, and they developed armies and laws and social structures. Then, from that, developed war, and the descent from a golden age to a silver age to an iron age. Lucretius is not the only classical Mediterranean source of this, but he is an example. So there is an idea of development, and even development where people are causing it, but it isn't an idea of constant development, and it is an idea of negative development and not positive.
That idea exists in Antiquity, is much less prevalent in the Middle Ages, when many of those texts are not available, is rediscovered in the early 1400s, and disseminates again. So it's one of several influences on Bacon. But what's innovative in Bacon is his idea that there's a strong intentionality, that the team of humans who are scientists work together intentionally to create the next generation's experience, and indeed that if they don't, the next generation's experience will not be different from this generation's experience, that unless progress is intentionally caused, progress will not occur. This is very different from our sense in which progress is constant and inescapable, and positive but also negative. We're very used to thinking about progress giving us better technologies and better medicine, but also giving us the negative sides. As Freud puts it in his "Civilization and Its Discontents," written in the aftermath of World War 1, "it is indeed miraculous that my daughter who is across the ocean on another continent and I can speak over these new electric telephone wires, that is an amazing fruit of science, but," he says, "if we didn't have science she wouldn't be on the other continent, she would still be here, because we wouldn't have made the ship that carried her." Civilization and its development create problems in addition to creating positive innovations.
That is an idea that we're very familiar with in the Twenty-First and Twentieth Centuries and that really develops largely in the Nineteenth Century as a result of the Eighteenth Century's romantic interest in the idea of the pastoral and the influence of Jean-Jacques Rousseau, but also in the aftermath of the French Revolution. In 1620 Francis Bacon said "If we do science we can have progress," and people said "Great! We can use science to evaluate everything and make more rational medicine, and more rational engineering, and more rational architecture, and more rational farming, and more rational laws, and more rational religion," and then they had an enormous war in which huge numbers of people died, and the scale of war got an order of magnitude bigger than anyone was used to, and it created a lot of anxiety and fear that possibly we don't have as much control over this progress thing as we thought. So it's then in the Nineteenth Century that you get the image of being a cog in the machine, of being trapped within progress, of progress being inescapable.
(If you were skipping down because you’ve read On Progress and Historical Change recently, this is the right spot to start reading closely again)
Within this review of concepts of progress the question then comes: who has power over determining what happens? Is the answer "no one," that progress happens in so many different arenas and so many different directions that pretty much it's just chaos? Is the answer "governments or rulers," that a great king or a great virtuous leader is going to be what determines whether our country prospers? Or do we believe that certain individuals exercise influence?
And here is where I'm going to zoom in to the Renaissance, which is to say, to my own period, where we see ideas of who has power over progress, and especially who has power over mindsets, and the clash between the assumptions we often make about that and the historical evidence that we have.
So, the Italian Renaissance, or the Renaissance in general: we're used to thinking of this as a moment of great change when things accelerate, when the Middle Ages, which we tend to think of as stagnant, and perhaps backwards and without dynamism and without a lot of progress or change—all of these things are false, but they are the assumption most people have—suddenly shifts and we get a lot more. We get faster innovations, and faster development of technologies, and a change in mindset. Now the acceleration part is true, we do get an acceleration, but it's a wedge, it's not a flat line and then a spike.
The core figures that we know from the Renaissance, and the way you often know about the Renaissance when you're not a specialist, are famous geniuses. Leonardo da Vinci is the best known Renaissance figure, followed by Machiavelli and Michelangelo and Raphael, and we think of this as an age of genius, an age when excellent brilliant people created magnificent art and magnificent architecture, and there is a focus on those geniuses as being the core of change, on those geniuses as being fundamentally what makes the progress happen. There are in fact a lot of cultural reasons, and this is not the moment for me to give you my list of what the causes of the Renaissance are, though there is a blog post about that if you'd like to look for it in my discussion of COVID-19 and why we have the Medieval/Renaissance distinction and the problems with it, but the key data point that I want to start with here is when I'm at a dinner party and I meet somebody's spouse and they ask me what I study and I say I study the Renaissance, and they ask what I study in the Renaissance, and I say well, I study Renaissance radical thought, I study censorship and the Inquisition, and I study atheism, and I study heretics and heresy trials and the dissemination of radical thought. It's very common for there to be a pause, and an excited face, and then the question "Oh! So isn't it true that x famous Renaissance genius was an atheist?" It's incredible how often that's the question. And the reason for that is a narrative that many of us are familiar with that the Renaissance is an era of secularization or secular thought, that it's the moment when the shackles of religiosity are being challenged, when Humanism as a centering on humans comes in (that's not actually what Humanism was in the Renaissance, but that's what it is now and that's what people think it was) and that the core of the difference between the Middle Ages and the Renaissance was the breaking free of faith and the beginning of an age of Reason.
That is an expectation largely based on ideas about the Renaissance that were published largely by German and British historians in the Nineteenth Century. This is very, very different from the way the Renaissance presents itself, and also very different from what we find when we look at things. But the important detail here is everyone's excitement to have me confirm at the dinner party that yes, Leonardo was an atheist, or yes, Machiavelli was an atheist, or yes, Michelangelo was an atheist. Why is that? Mainly it's because the dominant narrative about the Renaissance is that there were a bunch of geniuses who had a pseudo-modern mindset, meaning more secular, not necessarily atheistic (some people are simply excited for it to be modern, scientific, etc.), and that those geniuses saw lying before them the possibility of a more secular, more rational age that would be better, it would be modern, it would be free of all the limitations that dragged the Renaissance down. And that these geniuses, seeing that in their future, then worked to make it so. I see this narrative in a lot of places. I see it in academic articles or books that assume that it's true. But if I zoom out further, you'll see it in op eds, when people are writing an op ed about the Black Death and COVID and they've researched this for 48 hours, that'll be what they say, or it'll be in the intro of an economics book.
If you zoom out even farther, you get it in the Cosmos TV series, the new one with Neil deGrasse Tyson, which chose to begin its first episode with this glimpse of the life of Giordano Bruno (analyzed in the article above). It presents Bruno as a kind of martyr for science and describes him as a very modern person: he saw that there could be this other world of reason and he tried to buck against the system and fight against it, and the Inquisitional Thought Police (and the show's use of the phrase "Thought Police" is a very important detail) chased him down and suppressed him because they didn't want these ideas to be spread. But he knew that one day they would, and other people who were like-minded with him worked really hard, and their underground efforts eventually made it so.
And if you zoom out even further again, you get it in pop culture stuff, like the extremely mediocre old David Warner fantasy movie Quest of the Delta Knights. The plot of that is that there has been an ancient scientific and secularizing rationalist secret society founded by Archimedes that has since Antiquity been keeping the light of reason secretly alive through the Middle Ages, and eventually will break through and reveal all the lost documents and science from Atlantis and usher in the modern age, and the hero helps make that happen, to defeat the Middle Ages and make the modern age happen, because this cabal of people who fundamentally think like modern people has existed since Antiquity and preserved the light and now it's going to triumph. It's also the plot of the Assassin's Creed video games, that there is a secret society of rationalist people who fight the pope—literally, by punching him!—and they advance science, and Machiavelli is in charge of it.
And these are hilarious and fun, but they are echoes of the idea that the people who are in charge of intellectual change, the people responsible for getting us from Medieval to Enlightenment, thought like us, that they thought ahead of their time, that they saw the future potential, that they had a plan and they secretly implemented it—in other words that the people who have power to determine what the future is are a few particular geniuses, who in their brilliance see what the future is going to be. That is not true. First of all, none of these people predicted any of what the future is like, and none of them in fact tried to undermine the church—I'm talking about Leonardo, Machiavelli, etc.—and none of them articulated things that read as modern when you really sit down and read them. They are presented as proto-moderns by historians of the Nineteenth Century who want to claim a descent from them, because we respect these names, they are exciting, and so if you can claim "I am carrying on in the tradition of X!" you can make your own regime seem powerful. This is a tool that the rising nationalist movements of the Nineteenth Century, in Germany, in Britain, in France, in Italy, used to compete with each other, trying to claim "Oxford is a truer descendant of the ideas of XYZ than the universities in Italy, look at how we're carrying on the rationalist secret underground messages that were in Machiavelli and were in Leonardo" which are not there.
There are radical messages in these things. Here is the key. Renaissance radicals were really radical! Machiavelli was really radical, Leonardo was really radical. People like Giovanni Pico della Mirandola, who is a little less famous but very frequently pointed at as the Renaissance genius, mastered fourteen languages by the time he was a teenager, he was brilliant, he was rich, everyone loved him, he wrote a giant nine-hundred-thesis synthesis of all world religions, which he proposed to defend in front of the pope. And he has this famous text known by its Nineteenth Century title The Oration on the Dignity of Man. I've got a 682 page book on my shelf by Brian Copenhaver about how The Oration on the Dignity of Man is not an oration and it's not about the dignity of Man—that is a Nineteenth Century reading of it which still colors how we read it today. It was actually a manual on how to turn yourself into an angel, by hybridizing Islam and Zoroastrianism with Kabbalah, channeling those through Plato and making a new version of Christianity which would synthesize all world religions. And by the way, Pico and his friends believed that if they just explained this to the Ottoman Muslims they would immediately understand that Islam and Christianity were exactly the same and then there would be world peace. That was the plan, that everyone would read Plato and then they'd realize that all religions were the same and then there would be world peace. These guys are really radical! Their versions of Christianity have reincarnation and soul projection and did you know that if you study enough of the Chaldean Oracles you can project your soul out of your body and use it to spy on Padua? This is what Latin class and Greek class should be for. Renaissance radicals were incredibly radical. But their radicalism did not resemble our modern mindsets. Their radicalisms were all over the place. The more I study them the more I love them, there are dozens of worldviews in there more alien than any alien Star Trek ever made up, they're gorgeous, but they don't resemble the way we thought in the Twentieth Century, or in the Nineteenth Century, or now. Those people in the Renaissance didn't see a future and then intentionally make it. Those people in the Renaissance had radical ideas, proposed them, debated them, mixed stuff all over, had different weird inconsistent influences, and the cauldron of all of it, of the people you've heard of and the many dozens of people you haven't and the many hundreds of people who read them and debated them and the many thousands of people who read those books in schoolrooms and disseminate those weird ideas, those are what actually make the change happen.
Now I'm going to zoom in to my most micro example. So. The least interesting set of texts that I can think of, and I've been trying to look into this, were editors' prefaces at the beginnings of copies of Epictetus (see my article "Humanist Lives of Classical Philosophers and the Idea of Renaissance Secularization" in Renaissance Quarterly Vol. 70, No. 3 (2017), pp. 935-976). Epictetus's manual is a set of Ancient Roman moral maxims about how to be a virtuous person, and it's short, so people liked to teach it at school, and a lot of it sort of lines up with Christian virtues, so it's very compatible with teaching in a Christian context, so it was a super popular textbook. It was a super popular schoolbook, the equivalent of making kids read Dickens or making kids read a Shakespeare play as part of High School English class. So there are many, many dozens of editions printed pretty much as soon as printing is invented—from the 1490s, and all the way through the 1500s, and the 1600s and the 1700s there are editions of Epictetus, and every one has a preface from the editor explaining why Epictetus is a great text to read, and usually trying to seem more awesome than the previous edition of Epictetus, so that you'll buy that edition for school instead of the other edition, or so that the teacher will recommend that edition instead of the other edition. Publishing, as we all know, is very competitive, and you're always trying to have the blurb that makes it attractive — if someone walks into the bookstore and there are four books with dragons on the cover, something is going to make the difference to which one they pick up, it might be the copy on the back; this is the equivalent of that.
So when you look at the prefaces, you see they are being written by scholars who aren't very important and aren't very famous, whom nobody has heard of or cares about any more, but they have a job, and the printing house has got them to edit this thing because this is their job. Those guys who are doing that, whom you've never heard of, are reading the guys you have heard of. They're reading Pico, or they're reading people who are talking about Pico. They're reading all these dozens of bizarre strange ideas that are going on. Most of them definitely don't agree with it, or if they agree with it, they only agree with one thing, because there are dozens of different ones, but there's the milieu. And they notice the arguments that are big and all over the place, and one of them is this: Renaissance people were very interested in the fact that the sayings of Epictetus were remarkably similar to some of the ethical teachings of Christianity, especially in the letters of St. Paul. In fact in the Middle Ages there was a rumor that Seneca and St. Paul knew each other and wrote letters and Seneca was associated with Epictetus, and Epictetus was supposed to have secretly converted. None of this is true, and in the Renaissance they had pretty much figured out that it wasn't true. But it was very interesting to note that Epictetus's moral maxims were similar to St. Paul's. So if you were writing a preface to Epictetus, you would write a preface that said "He was a pagan, but he was almost as good as St. Paul!" And then ten years later, when there was a new edition and someone wanted to make that edition sound better (I have all of these editions in Latin, and in the article version of this you can read the translations of all of them), the new editor wants to make a stronger claim, so they'll say "Epictetus's moral maxims are barely less good than St. Paul's." And the next one will say "Epictetus's moral maxims are just as good as St. Paul's!" And the next one will say "Epictetus's moral maxims, even though he was a pagan, were even better than St. Paul's, because they are simpler and clearer and more effective at teaching ethics." And by the time you get to the 1700s there's an edition, which you can tell is copied from these earlier editions, they even plagiarize sentences from each other's prefaces, it's direct evolution, that says "Epictetus by the light of Reason alone and without the necessity of scripture or revelation arrived at better ethics than St. Paul." And that is the kind of book that Voltaire owns when he is a kid.
So, who is transmitting this radical idea, this gradually developing idea that Scripture and Christian revelation are not necessary for ethics? Who is transmitting that? Is it the big famous people? No. Because there are thousands upon thousands more copies of these classroom Epictetus volumes than there were of Machiavelli in this period. Machiavelli is banned in most places. You can't even get it without hunting hard, you can only get digests of it, and it's not printed nearly as often. People have it but it's extra work, like how there are people around who use Linux instead of Windows or Mac but it's extra work and you have to work at it and not a lot of people do it. But Epictetus? He's as ubiquitous and default-accessible as Windows. So for every one person who's actually reading Machiavelli in that era, a hundred people at least are reading this Epictetus preface that says you don't need Christian Scripture to arrive at a good system of ethics. Who spread the radical idea? Thousands of people you haven't heard of, dozens of editors who wrote these editions, most of them not intending to make anything radical happen but just intending to sell a copy of a book, in fact most of them genuinely believing that Epictetus was an author who would advance Christianity and make people believe in it more; they, nonetheless, unknowingly and unintentionally transmitted the radical ideas that turned into Deism, and that turned into that secular turn that we associate with the Enlightenment, and that we falsely associate with the Renaissance. So you see it is the Renaissance seeds that lead to it, but it isn't some genius list of special radical people who thought the way we do that made it happen, it's thousands of people who had dozens of different worldviews, some very orthodox, others very radical in ways that don't resemble us, but all of whom discussed and wrote and published and debated these ideas, and those debates influence textbooks written by nobodies who get left out of the kinds of history that focus on big names, but it's those small names that have so much deeper, broader reach than the treatises of the people who are the most famous today, largely because our canon of who's famous was cemented by nineteenth-century people who were looking into the past and cherry-picking people to celebrate whose ideas they thought resembled their own, in order to legitimize themselves (and prop up their belief that they had a right and duty to dominate the world and 'elevate' 'lesser' cultures with their 'right' ideas and 'right' path of progress).
So, what this example and other similar studies show us is that we overestimate how much intentionality individual special people have. So every time I see a cover of a tech magazine with a new tech start-up billionaire that says "Will this man be the first man to live for two hundred years?" or "die on Mars?" or whatever, and in the article it claims "This person has a vision for the future and knows what the future will be like, and it's this, and he's working on building it!" (or occasionally "she", but usually "he") and "the future is in the hands of these geniuses," I see that same idea. That idea, that the future is in the hands of people whose mindsets somehow already match the future, makes us feel powerless, makes us feel like our job is to sit back and wait for those geniuses who can already see the future to bring it about the way we imagine Machiavelli did. I remember when I first realized this: it was watching the first Iron Man movie, where Tony Stark, after inventing pollution-free green energy and saving stuff with the Iron Man suit, gives this press conference statement, "I have successfully privatized world peace," and that is the line that gets pushed, that the solution to our strife, or climate change, is waiting for the genius to appear like a superhero with the special power to reshape the world, while everyone else sits back and gets rescued. Why do we get drawn into the rhetoric that claims this? Because our textbooks tell us that's how we got to where we are now, that the present came about because historical figures who were geniuses and could foresee the wonderful, liberated, rational modern era that was coming then sat down and intentionally tried to make that era come. Well, they didn't. This world is nothing like anything that people in the Renaissance tried to make. They tried to change what their world was like, but the things they were aiming for were not what actually resulted. It was the mixing together of all sorts of different and incompatible ideas, radical and orthodox, mixing in the contributions and debates of enormous numbers of forgotten, un-famous people, that was key to what changed mindsets over time. Sometimes particular key works were extra prominent in that discourse, be it Hobbes or Epictetus, but that doesn't mean that the effect of Hobbes publishing Leviathan was to engineer a future Hobbes envisioned, it was to stimulate thousands of people to have new conversations, many of them hostile, which in turn generated new ideas.
So, who has the power to shape the future? The answer the Renaissance shows us is not genius, not "a few people who have the right vision"; it's lots and lots of people, but a lot of the time the result of their efforts isn't going to be what they meant. So for example, the first Renaissance scholars who tried to get people to read Epictetus, Petrarch and such, their idea was that fractious and war-torn Renaissance Italy (which Shakespeare depicts somewhat accurately in Romeo and Juliet, where rival noble houses are feuding, and their goons are killing each other in the streets, and people are very willing to cause a civil war to advance their family) was a problem, and that the best way to address it was to improve the moral education of these ruling families. The reason that they wanted to build libraries and advance the new reading of the classics and so on was that they thought if their leaders had better ethics they would stop having feuds and civil wars, that if they could get the leaders of their society to read the books that young Cicero, or young Seneca, read when they were growing up, you could then rear a new generation of leaders who would put the good of the country before the good of their family, who would, like the Roman Brutus (not the one who killed Caesar, the earlier one), be willing to execute their own sons if they were committing treason, in contrast to the Renaissance norm, which was to get your goons to assault the jail and liberate your son if he'd been arrested. These Renaissance people wanted people to read Epictetus, and similar works, to make them more Christian, and more ethical. We have reams and reams of this writing where they say it outright: "We think that reading these Roman texts will make you a better Christian." They say it over and over and over. And yet the inadvertent effect of that was disseminating a lot of radical ancient thought, which made a whole lot of different radical stuff happen, which caused changes within Christianity, which contributed to the Reformation as well as deism and atheism, which none of these early Renaissance figures would ever have wanted to happen, but which was the real effect of their project.
Important sideline, because people always ask: lots of Renaissance figures who spread radical ideas but also claimed to be good Christians get pointed to by modern people who want to say "They're secretly an atheist, they're just self-censoring!" But the thing is that tons of these figures publicly endorse ideas that are way more dangerous, way higher on the Inquisition's list. You don't meticulously self-censor that you're secretly an atheist or secularizer and then publish a demon-summoning manual with your name on it, or a book that endorses Martin Luther or espouses your belief in reincarnation. Someone who recklessly expresses ideas much more dangerous than atheism would not bother to be so careful with their atheism that we can't find traces even in their private writings. Atheism isn't as risky in the period as the other stuff which these people are espousing. I've got an article in which I go through every single person we know in the Renaissance who read Lucretius and got in trouble with the Inquisition, and what they got in trouble for, and it was never Lucretius. One of them goes to the stake for sola fide, Luther's belief that faith alone, rather than faith and good works, is how you get to Heaven. No atheist would choose to go to the stake for sola fide, they would say "I don't care" and walk away. Take Giordano Bruno, pointed to in the Cosmos TV series, which claims that he went to the stake for believing in atoms and the existence of alternate worlds: when you actually read the trial record, he doesn't espouse that stuff in the trial, nor do they ask him about it. The only time Lucretius comes up in his entire trial is when Bruno brings him up to call him stupid and disagree with him. They're worried about Bruno's use of Aristotle, and they're worried about Bruno's ideas about the division of the soul and the way the soul connects to the body, and stuff that has nothing to do with atheism, atomism, materialism, or anything else that we consider modern. That's not what gets Bruno in trouble. So we think of it as being atheism, but nobody would bother to self-censor atheism and then avow this other stuff that they all know is more dangerous. Most of them avow incredibly dangerous stuff and get away with it because they're friends with a Duke, or a king, and this is a period in which you can say almost anything if you have the protection of royalty, and that's how we get lots and lots and lots of radical works, because people are under their protection. So the claim that there was a radical underground of secret atheists meticulously hiding their traces so carefully that we can barely detect them even in their own works, while they were running around declaring their support for much more dangerous ideas, and getting in big trouble sometimes for doing so, just doesn't make any sense.
Were there some atheists in the period? Totally! (See fabulous treatments by Matytsin, Hunter/Wootton, Kors, Sheppard, Popkin, and D. C. Allen.) I think I've personally confirmed about three definite ones, but the three I've looked at didn't hide it that much; we know about it because all their friends knew and commented on it, it wasn't that secret. And—the more important part—it wasn't those three who had monopolized the power and influence and created modernity through their genius vision; they were fairly minor people—one was an illiterate fishmonger, an awesome one, but not a person in power in any traditional sense. The key is that these awesome atheist radicals were only one kind of radical amid dozens and dozens of other equally awesome Renaissance radicalisms that don't resemble modernity at all, but it all mixed and churned among the many more orthodox ideas, and percolated into things like Epictetus prefaces, and that's what caused the change: not the few people who most resemble later things, but the mix of everyone.
To briefly quote my polished version of talking about this (from my chapter in the Hardie/Prosperi/Zucca volume) “We do these courageous freethinkers a disservice if we dismiss the diverse and original ideas expressed in their Christian Epicurean and anti-Epicurean works as mere veils over a comfortably proto-modern rationalism. Rather than taking a step forward on a triumphant path leading inevitably toward modernity, Lucretius’s radical Renaissance readers took many steps in many directions, breaking new and fruitful ground… As we seek the agents who forged modernity, we need to stop looking for people who look like heroes, for people who look like villains, and above all for people who look like us. The characteristic ideas and values of modernity were not birthed by people we would have agreed with. They were birthed by a vibrant and diverse range of pre-modern minds alien to our own, advancing plural projects which all moved and shaped each other, plural particles in constant motion all with dynamism and momentum, not passive and inert until struck by a single modernizing genius who contained the swerve.”
So, zooming back to the present, a lot of people have a sense of powerlessness, as if we're supposed to wait for the geniuses who clearly see the future to make it happen, and we don't resemble those geniuses because, as history presents them, the genius figures who shaped modernity always have a clear plan, they never have vague self-doubt, or maybe they have one dramatic turning-point doubt crisis and then come out of it as their mature, perfect genius selves; they're perfect, like the protagonists of novels, and they never do laundry, and they never run out of socks, or worry about paying rent, and if the historical record shows them worrying about money then they're somehow morally compromised and not true intellectuals, which isn't true! I have letters where Machiavelli writes home complaining that his salary hasn't arrived and he's run out of socks, and has holes in his shirt, and he's worried about looking scruffy in front of the pope, and then there's a letter back: "We gave your salary to Michelangelo and we hope he can bring it to you," and then an angry letter: "Michelangelo didn't bring me my salary!" And: "Sorry, Michelangelo had trouble on the road and turned around and had to come back, there were bandits." Real famous people in history had lots of setbacks and problems and laundry like anyone else. They resemble us more than we think. (This is why I teach Machiavelli's letters so much). And such people also had less power over change than we ascribe to them.
Real changes in what a society thinks, in what a culture values, come from thousands of people debating something. They come from that classroom where people are talking about Epictetus. And the modern equivalent of that classroom is this talk, this convention, blogs and social media spaces, even Twitter, anywhere where people are talking about books and events and thoughts. What's going to shape the future? It's people online debating about which actions are ethical or unethical in Game of Thrones. That's exactly like those classroom discussions of Epictetus, which turn into introductions to Epictetus, which turn into the education of Voltaire, which turn into the pen mightier than all swords. Random conversation is where it happens, not one genius: thousands of people exchanging ideas. And it doesn't result in the world those people envision. Renaissance people did try to intentionally re-engineer the world, and they did sort of have a shared plan: they wanted to make a world that was more ethical, that had a lot of moral education of its elites in those values which both Ancient Rome and Christianity shared, and this would result in an era of peace.
It didn’t.
The actual social engineering project that the Renaissance undertook, the world that they wanted to make, is not the world that resulted. The world that resulted was more different, more strange, and more awesome than anything they ever envisioned. Because you know what came out of the Renaissance? Francis Bacon inventing progress and the scientific method and the beginnings of the Enlightenment, which are fruits far more innovative, far more exciting, and far more powerful than anything any Renaissance genius sat down to do. So when you ask yourself "The work I'm doing to try to make a better world, is it helping? Is it going to make the world I envision?" the answer is: it's not going to make the world you envision, but it is helping, and it's going to combine with the efforts of thousands of other people that happen in every conversation, in every convention, every workplace, every school, and every media post where you're debating or disseminating an idea or even sharing a concept; it all contributes. But the world that we end up making is not going to be the one we envision, it's going to be—like Francis Bacon's world—stranger, more different, and more awesome than those who created it could imagine, just as the Enlightenment was relative to the Renaissance.
So I hope what you take away from this is some point of encouragement and hope, and the understanding that we will not make exactly the world we imagine, but the world we’re going to make is going to be an amazing world, and that we are all contributing to making it, not just elite geniuses, but every one of us, every day.
Hello! It’s been a while since I posted since, as usual, many projects press, so it’s rare for me to have the time to write the kinds of polished essays I like sharing here. But I’ve been hoping to share more things, since a lot of the history work I’ve been doing lately has helped me with reflecting on current events, and I want to share that faster than the slow grind of book-length work and academic journals will allow. So I’m going to start posting a few things here that are a little rougher. I hope to still post formal essays a few times a year as before, but I’m going to start also sharing things like transcripts of lectures or talks I’ve given, excerpts from my teaching notes, or assemblages from Twitter threads which took meatier turns. I hope you’ll enjoy them, but I’ll also try to always make clear what kind of content each post is, so you know which are the polished essays you’re used to.
I’m also launching a Patreon, so if you’ve enjoyed my posts, books, music etc. please consider supporting me.
I’ve felt torn about Patreon for a long time since, unlike so many wonderful scholars and authors I know, I have a steady living wage from my university and don’t struggle to get by. But, as my Patreon page explains, what I don’t have enough of is the means to hire help. As someone trying to create a lot (and as a chronic pain sufferer who often has fewer than 7 days in my week) it makes an enormous difference to how much I can do if I can pay for help: pay a music editing service to turn polish vocal tracks into completed albums without spending hours on it myself, to pay my part-time assistant Denise who helps with my calendar and paperwork and fire-hose of email which so easily eat up whole days, to hire a sound editor to finally make it possible to launch a podcast with my good friend Jo Walton talking about books, and craft of writing, and history, and science, and Florence, and gelato, and interviewing awesome friends. Even the little post below was made possible by having help, and wouldn’t exist otherwise. And supporters will get updates on what I’ve been up to, and early access to blog posts and podcast episodes, and snippets of outtakes and works in progress. So if you’d like to help me hire the help I need to turn more ideas into reality, and to have more time to write, please have a look at the Patreon page for details, and thank you very much!
Meanwhile…
Why I Teach Machiavelli Through His Letters
(excerpt from a lecture transcript, so this is how I explain this to students too)
Teaching Machiavelli through his letters is a separate thing from being an historian accessing Machiavelli through his letters. One of the reasons that I love teaching Machiavelli through his letters is that you get a very different view of the person from letters. You get unimportant details. You get the things that the person cared about that week, as opposed to the things that the person wanted to be discussed by many people in the context of that person’s name for a long time. You do get the serious political thought, but you get it mixed with “Where is my salary?” “Hello my friend,” “Here’s the party I was at,” “I have a cold,” all of these very human elements that don’t come to us when we just read a thesis.
Thanks to interdisciplinarity, both at University of Chicago and elsewhere, I move from department to department a lot–I spend some of my time with historians, and some with classicists, political science people, Italian literature or English literature people, and with philosophy people. Each of these disciplines has a different way of approaching text, but many of them approach the text perhaps not with the formal philosophical attitude of “death of the author, we care only about the text,” but all the same with the effective attitude of “we try to learn about this author only through the text,” and only through the formal polished text, the treatise.
When I’m trying to unpack not only Machiavelli but history in general to my students, it’s very easy for the history to seem like a sequence of marble busts on pedestals who handed us great books. It’s much harder to get at the fact that those people are also people who are like us: people who messed up, people who ran out of money, people who had anxieties, people who failed in things that they undertook. People who had friends, people who were nervous without their friends, and lonely. And that isn’t a version of history that we get shown very often. We get shown heroes, we get shown villains, and we get shown geniuses, as if there isn’t a person present as well. Machiavelli is a very valuable example, because we have such a great corpus of letters, but he’s also such a name. If you want to make a shortlist of people who are a marble bust on a pedestal in the way that they’re presented as we talk about the history of thought, Plato, Aristotle, Socrates, Cicero, Machiavelli, are major major figures in that way. So the letters humanize them and make them real.
I feel it’s important not to approach these works as if these people are somehow superhumanly excellent, as if these people are somehow perfect in what they undertake. I’ll often be at a conference where someone will talk about a passage in a work in isolation. I was recently at the Renaissance Society of America conference, and there was an interesting discussion of a passage in which Ficino had a really weird interpretation of this one passage of Lucretius. And there was a very nasty fight between two scholars over the interpretation, in which one of the scholars insisted he’s making this complicated subtle three-part reading of a thing that relates to another thing, diagram diagram diagram. The other person said “I think he translated the passage wrong. Because the passage was really hard. And his copy didn’t have a very clear script. And I think he didn’t read the sentence the way we read the sentence.” And the first person was adamant that it is inappropriate to question whether someone like Ficino might have had trouble reading a piece of Latin, that of course his Latin is immaculately better than our Latin. And his Latin was better than our Latin, because he spent more of his life doing it and I do believe he’s better than most classicists at this — but most classicists really struggle with that line. And when you read the commentaries on it there’s lots of ambiguity even now about what it means, and we have dictionaries, which he did not.
It was very interesting to me to see that battle between thinking of the figure as human, in which the question "Did he mess up?" is a valid question, as opposed to thinking of the person as someone who could never mess up. And a lot of the ways we approach historical figures, whether it's Machiavelli, or Aristotle, or anyone, involve the idea that all of their works are fully intended, that they're somehow in an a-temporal vacuum, that we should look at them all in sequence, that no one ever changed their mind about a thing unless they themselves made changing their mind a big deal. We create this idea of these geniuses where everything they wrote, even from early on, is exactly what they meant, which then all gets incorporated into the material.
I want my students to come away from my courses not thinking about historical figures like that, but remembering that every historical figure had to pay for socks, or had to deal with laundry, or had a servant who dealt with laundry for them and then had to deal with the servant. They all had everyday practical existences, and they all messed up. Machiavelli's letters give you access to somebody who feels like a real human being. Some of the things he's doing are really weird. Some of the things he's doing involve bizarre sexuality. Some of the things he's doing involve uncomfortable politics. Some of the things he's doing involve very astute politics. Some of them involve very terrifying moments like his wife saying: "I'm so glad you're alive, we heard that Cesare Borgia massacred all of his people, I'm so glad you're alive!" And others are very much "We're trying to get my brother a job and no one will give him a job because it was corruptly given to the other person and we have to figure out how to get my brother a job," which is not the sort of thing we imagine such people giving their hours to.
When you read Michelangelo's autobiography there's an interesting point in it where he stops talking about art for a while and starts talking about the lawsuit that went on between him and people associated with Giuliano della Rovere, because he was contracted to build Giuliano della Rovere's tomb, but then for a variety of complicated reasons the tomb did not materialise as it was supposed to have, largely because the plan for the tomb was the most insane, ridiculous, over-the-top, impossible tomb that you could ever possibly conceive of. That was obviously never going to happen. But also there were lots of fights between him and della Rovere over who had to pay for the marble and whether the marble was delivered, and he said the marble was delivered and della Rovere said the marble wasn't delivered and there was a crack in it… and all these lawsuits went back and forth, and also Giuliano della Rovere was starting a giant war and invading Ferrara. At one point Michelangelo ran away from Rome saying "I'm not going to work on this stupid tomb any more" and went to Florence, and then Giuliano della Rovere moved an army over to besiege Florence and started threatening them: "Florence! I will besiege you and burn you down unless you give me back Michelangelo!" We have these great documents where Michelangelo is begging the Signoria "Please don't make me go back to della Rovere! I hate him and he just torments me. I'll build you really good defensive walls! Look at my engineering ideas for how to improve the walls!" and they had to say "No, I'm sorry Michelangelo, we're not going to war with the Battle Pope just for you, go back to Rome, build the stupid thing." And he did go back to Rome, and then della Rovere made him paint the Sistine Chapel ceiling knowing Michelangelo hated painting, basically as punishment for trying to run away. I'm not exaggerating. And that's why there are lots of angry figures in the Sistine Chapel ceiling. But the wonderful, horrible, flirtatious, strange antagonism between Michelangelo and Giuliano della Rovere is magnificent.
And in his autobiography he's talking about this lawsuit that arose because of the della Rovere tomb project, in great detail, and then there's a line that says Michelangelo realized that, while dealing with a bunch of lawsuits and Pope Adrian and such, he'd been so stressed he hadn't picked up a chisel in four years. Because he spent the entire time just dealing with the lawsuit. (Anyone feeling guilty about being overwhelmed by stress this year, you're not alone!) And we have four years' worth of lost Michelangelo production, because he didn't do any art that entire time, because he was just dealing with a stupid lawsuit. And that's not the sort of thing that fits into our usual way of thinking about these great historical figures. We imagine Michelangelo in his studio with a chisel. We do not imagine him in a room with a bunch of lawyers being curmudgeonly and bickering and trapped in contract hell.
Those sorts of things are important, I think, to reintroduce into the way we imagine historical figures: that they have an everyday mundanity we imagine they don't. And I think that's a big part of why, when we compare ourselves to them, we feel as if we can't live up to that greatness. Because we tell edited versions of the lives of great men and great women, in which we edit out the things that feel like us. So of course we feel as if our everyday lives full of mundanity can't rise to those levels, because we're not comparing ourselves to the real people, we're comparing ourselves to the edited version in which we take out the mundanity! So Machiavelli's letters give us that. And they give us a person with problems, and a person with mistakes, and a person with a sense of humor, and a person with sexuality, all of these elements we erase from our marble busts on pedestals. And so that's a big part of why I use the letters while teaching, and when my students read them I want them to put together "Here is a real person who is like us," as well as "Here is the everyday, on-the-ground experience of what it's like to live in this crisis."
We need that when we live in a real crisis ourselves, one which so often makes us feel powerless and weak compared to these impressive people in the past–but they felt that way too.
Short version: I’m posting today to share two files I made for my university. One is a Healthy Work Habits and Self-Care Guide for the Pandemic Crisis, which many people have said they found helpful and refreshingly different from others that are going around (details below). Some bits are academia specific but most of it is applicable broadly, and I’ve left in the links to university services because it’s likely you can find such services . The other is teaching-specific, a guide to Adapting a Syllabus for the Crisis, not focused on remote teaching, but on the fact that everyone on Earth is already on the verge of breaking down, so it’s vital to do all we can to structure courses and assignments to have a little more leeway, and paths to recover when students (and instructors) do break down.
I want these to help as many people as possible, so please download them, share them, excerpt them, adapt them, make a version for your school or business, use them any way you wish; while credit would be nice, my only firm request is that, as you pass them on, you also pass on the wish to pass them on. (If you want .docx files they're here for self-care and here for syllabus design.)
Long version:
Early this spring, as the COVID-19 crisis set in, someone on #DisabilityTwitter asked (I wish I could find the tweet) if others too had found that the self-care skills needed for chronic pain are the same as the ones needed to cope with the pandemic.
I was among many who answered "Yes!", and soon a small thread was describing our experiences, that it felt almost like a superpower, already understanding the slow, invisible toll of constant daily trauma, the exhaustion that sets in, how to self-monitor, how to spot when you can't do it and should switch to a different task (rest is a task), and how to fight back against that self-accusing voice inside that insists you should keep pushing through, and is plain wrong. Worldwide health organizations have recognized this as a World Mental Health Epidemic, as well as a viral epidemic. Chronic exposure to fear and anxiety (which are forms of pain) has a real, measurable traumatic effect on the brain, dealing neurological damage which is worse because it repeats every day, and which is affecting every human on Earth right now. Symptoms include fatigue, difficulty concentrating, difficulty sleeping, short temper, diminished higher cognitive functions like writing, reading, and creativity, second languages and other skills seeming to vanish away. Does that sound familiar? It's very familiar to me, the feeling of waking up one morning to find I just can't do it, words aren't flowing, my eyes keep straying from the screen, the Latin or Italian which made sense yesterday is just a wall. This is not to say that the crisis hasn't been bad for me and others with chronic pain; it's like having the condition twice at once; no, not like, it is having two chronic-traumatic conditions at once, the usual plus 2020 (and with quarantine many aspects of treatment and care are harder, higher risk, or just not possible). But having the skill set there was still invaluable.
So I joined my university’s committee to adapt teaching for COVID in the fall, and made these guides.
A lot of it is stuff that’s always helpful, but polished specifically for the current crisis. Here’s a little bit of the logic behind the things I tried to focus on, while the downloadable files themselves are more the focused methods than the abstract principles:
First, I tried to make it non-prescriptive. A lot of Pandemic productivity advice says "60% of people are more productive when they [wear a suit or whatever] so you must do it!" So the other 40% of people should, what, get bent? So I tried to focus on learning about yourself: try this work tip, then try the opposite, try different things to learn what helps you personally produce at your best.
Second, I talk about self-care as a work task and a duty. Culture pressures us to skip it, to cut rest, play, and sleep first whenever corners need to be cut. We shouldn't. When we cut those, we start producing less (in quality and quantity) in those hours when we do work. It hurts our productivity as much as, if not more than, cutting work tasks, and also makes us miserable. I remember a dreadful article a couple years ago with the thesis If you want to be successful you have to work 60 hours a week like these rich CEOs, but when you looked at the breakdown of what they called "work" they counted in those 60 hours all their commuting time, the gym & shower, power naps, tennis with a colleague, lunch meetings. So, to be clear, if you worked 9-5 with a 30 min commute on each end they were counting that as 9 hours' work for the CEO but only 8 for employees, and if there was a trip to the gym and dinner or drinks with colleagues after work the CEO got to count those too, pushing 9 up toward 11 or 12, but the employee didn't. If you counted only the activities that regular employees get to count as "work" the CEOs were working barely 35 hours max, but the article was calling it 60 to advance this horrible false argument that only workaholics get ahead (a claim so many corporations want us to believe). You know what that article showed helps people get ahead? Having time in the day for rest, and exercise, and breaks, and games, and leisurely lunches, and spending time outside, and counting that self-care as vital to your work. Most jobs won't count that as work (and the offices that do have nap rooms & massage chairs & lego rooms often do that to entice employees to stay ridiculously late and never leave), but we can at least count self-care as work in our minds, and tell our guilt reflexes that this is not where corners should be cut.
(And if you’re a Terra Ignota fan, remember how the Utopian Oath requires you to promise to take that minimum of rest and play that’s necessary for your productivity? And remember that in their society twenty hours a week is the default work week? Utopia’s standards of rest and play are very high, and skipping the self-care that keeps you at your best is oathbreaking just as much as skimping on work. Also, Unusual Frequency has awesome new Utopian travel mugs if you want to reconfirm defeating death and attaining the stars! And Cousin flags if you want to affirm doing so kindly while taking care of yourself and others.)
Third, I talk about the brain like an organ. A body part. Which it is, one we push to its max a lot in daily life. So we should monitor it like one. Tennis elbow affects 40% of serious players, so it’s common sense for any tennis player to learn about tennis elbow and how to watch for it to set in. The latest studies I’ve seen show at least mild depression affecting 33% of undergrads, 41% of Ph.D. students, and in this pandemic it’s affecting a huge swath of the entire human race, so we should all learn about it, and watch for it, and train ourselves on how to mitigate it as much as possible while the symptoms are still mild, just like tennis players learn about tennis elbow. The only reason we don’t is that our culture stigmatizes problems with the brain totally differently from other organs, treats them as a failure of character or failure of will; they aren’t. “Push through the pain” is the wrong advice for that tendon in a tennis player’s elbow, and it’s the wrong advice for brain things too.
Fourth, I tried to stress in both documents that, in this situation, breaking down is normal. Lower productivity is normal. Grief, exhaustion, short tempers, snapping at friends, regretting it, they're all normal. We have to plan for that, expect it, brace for it, recognize that in a team or in a household these months are going to be everybody taking turns having small breakdowns – if we prepare for that we can help each other prevent the *bigger* breakdowns that are the real problem. Voices inside will tell us that the days we wake up in the morning and sit down to work and just… can't… deal, are bad, our fault, our weakness, failure, but all the neuroscience we have says it's not our fault, it's natural, it's what brains do pushed past their limit, and our brains are past their limit. So on the mornings when you sit down to work and just can't deal, and the self-doubt voice inside looms up to say weak! failure! push back. That voice is common too. I hear it. I still hear it after years of chronic pain, and everyone saying that the pain is real, that I should take it easy, and all my friends being supportive, and my family, but something in our culture still makes us blame ourselves inside, weak! failure! So if you hear that voice on mornings when you just can't deal, try to summon up another voice to shout back at it: everyone on Earth is breaking down. Today my job is not this task–today my job is to take care of myself, and protect the work I'll be able to get done tomorrow.
A closing thought: Early in the pandemic the anecdote went viral that Isaac Newton came up with his theory of gravity while he was quarantining in the country from a plague, and many people (not jokingly enough) used it to say we should have high standards for what we produce in a pandemic, or that if we don't set high standards it means we're not geniuses like him. The true fact (historian here, this is my period!) is that Newton did theorize gravity while quarantining, but didn't have library access, and while he was testing the theory he didn't have some of the constants he needed (sizes, masses), so he tried to work from memory, got one wrong, did all the math, and concluded that he was wrong and the gravity + ellipses thing didn't work. He stuck it in a drawer. It was only years later when a friend asked him about Kepler's ellipses that he pulled the old notes back out of the drawer to show the friend, and the friend spotted the error, they redid the math, and then developed the theory of gravity. Together, with full library access, when things were normal after the pandemic. During the pandemic nobody could work properly, including him. So if anyone pushes the claim that we should all be writing brilliant books during this internationally recognized global health epidemic, just tell them Newton too might have developed gravity years earlier if not for his pandemic. And for a better historical model to use for how productive we should be in 2020, remember 1522-3, when Michelangelo was being hounded by lawsuits, and there was a political takeover crisis in his homeland, and he was so stressed he wrote later that he couldn't touch a chisel the whole time, he couldn't concentrate on any kind of art, too stressed and scared. Even Michelangelo, whom everyone agrees to call "genius." Breaking down is normal for everyone, there are no special geniuses immune somehow to the slings and arrows of outrageous 2020. So next time you find a project taking longer than you planned, and your attention straying, and your ability to cope fading away, remember that if you're getting anything accomplished in these months you're already doing better than Michelangelo. And then do some self-care.
“If the Black Death caused the Renaissance, will COVID also create a golden age?”
Versions of this question have been going around as people, trying to understand the present crisis, reach for history’s most famous pandemic. Using history to understand our present is a great impulse, but it means some of the false myths we tell about the Black Death and Renaissance are doing new damage, one of the most problematic in my view being the idea that sitting back and letting COVID kill will somehow by itself naturally make the economy turn around and enter a period of growth and rising wages.
Brilliant Medievalists have been posting Black Death pieces correcting misconceptions and flailing as one does when an error refuted 50 times returns for the 51st (The Middle Ages weren't dark and bad compared to the Renaissance!!!). As a Renaissance historian, I feel it's my job to shoulder the other half of the load by talking about what the Renaissance was like, confirming that our Medievalists are right, that it wasn't a better time to live in than the Middle Ages, and talking about where the error comes from, why we think of the Renaissance as a golden age, and where we got the myth of the bad Middle Ages.
Only half of this is a story about the Renaissance. The other half is later: Victorian Britain, Italy's unification, World Wars I and II, the Cold War, ages in which the myth of the golden Renaissance was appropriated and retold. And yes, looking at the Black Death and Renaissance is helpful for understanding COVID-19's likely impact, but in addition to looking at 1348 we need to look at its long aftermath, at the impact Yersinia pestis had on 1400, and 1500, and 1600, and 1700. So:
This post is for you if you’ve been wondering whether Black Death => Renaissance means COVID => Golden Age, and you want a more robust answer than, “No no no no no!”
This post is for you if you’re tired of screaming The Middle Ages weren’t dark and bad! and want somewhere to link people to, to show them how the myth began.
This post is for you if you want to understand how an age whose relics make it look golden in retrospect can also be a terrible age to live in.
And this post is for you if you want to ask what history can tell us about 2020 and come away with hope. Because comparing 2020 to the Renaissance does give me hope, but it's not the hope of sitting back expecting the gears of history to grind on toward prosperity, and it's not the hope for something like the Renaissance—it's hope for something much, much better, but a thing we have to work for, all of us, and hard.
I started writing this post a few weeks ago but rapidly discovered that a thorough answer will be book-length (the book's now nearly done in fact). What I'm sharing now is just a precis, the parts I think you'll find most useful now. So sometimes I'll make a claim without examples, or move quickly over important things, just linking to a book instead of explaining, because my explanation is approaching 100,000 words. That book will come, and soon, but meanwhile please trust me as I give you just the urgent parts, and I promise more will follow.
Now, to begin, the phrase “golden age” really invokes two different unrelated things:
(1) an era that achieved great things, art, science, innovation, literature, an era whose wondrous achievements later eras marvel at,
(2) a good era to live in, prosperous, thriving, stable, reasonably safe, with chances for growth, social ascent, days when hard work pays off, in short an era which—if you had to be stranded in some other epoch of history—you'd be likely to choose.
The Renaissance fits the first—we line up to see its wonders in museums—but it absolutely positively no-way-no-how fits the second, and that's a big part of where our understandings of Renaissance vs. Medieval go wrong. So, our outline for today:
Renaissance Life was Worse than the Middle Ages (super-compressed version)
Where did the myth come from in the first place? (a Renaissance story)
Why is the myth of a golden Renaissance retold so much? (a post-Renaissance story)
Conclusion: We Should Aim for Something Better than the Renaissance
It’s also important to begin this knowing that I love the Renaissance, I wouldn’t have dedicated my life to studying it if I didn’t, it’s an amazing era. I disagree 100% with people who follow “The Middle Ages weren’t really a Dark Age!” with “The Renaissance sucks, no one should care about it!” The Renaissance was amazing, equally amazing as the Middle Ages, or antiquity, or now. I don’t love the Renaissance for being perfect. I love it because it was terrible yet still achieved so much. I love it because, when I read a letter where a woman talks of a nearby city burning, and armies approaching, and a friend who just died of the plague, and letter also talks about ideas for how to remedy these evils, and Xenophon’s advice for times of war, and how Plato and Seneca differ in their advice on patience, and the marvelous new fresco that’s been finished in the city hall. To find these voices of people who faced all that yet still came through it brimming with ideas and making art, that makes me love the human species all the more. And gives me hope.
In Florence, there are little kiosks near the David where you can buy replicas of it, and alongside the plain ones they have copies dipped in glitter paint, so the details of Michelangelo's design are all obscured with globs of sparkling goo. That's what the golden age myth does to the Renaissance. So when I say the Renaissance was grim and horrible, I'm not saying we shouldn't study it, I just want you to scrape off the glitter paint and see the details underneath: damaged, imperfect, a strange mix of ancient and new, doing its best to compensate for flaws in the material and mistakes made early on when teamwork failed, and violent too—David is, after all, about to kill an enemy, a celebration of a conquest, not a peace. Glitter drowns all that out, and this is why, while the myth of the golden Renaissance does terrible damage to how we understand the Middle Ages, it does just as much damage to how we understand the Renaissance. So let's take a quick peek beneath the glitter, and then, more important, let's talk about where that suffocating glitter comes from in the first place.
Part 1: Renaissance Life was Worse than the Middle Ages (super-compressed version)
The Renaissance was like Voldemort, terrible, but great.
On February 25th 1506, Ercole Bentivoglio, commander of Florence's armies, wrote to Machiavelli. He had just read Machiavelli's Decennale primo, a history in verse of the events of the last decade. Bentivoglio urged Machiavelli to continue and expand the history, not for them, but for future generations, so that:
“knowing our wretched fortune in these times, they should not blame us for being bad defenders of Italic honor, and so they can weep with us over our and their misfortune, knowing from what a happy state we fell within brief time into such disaster. For if they did not see this history, they would not believe what prosperity Italy had before, since it would seem impossible that in so few days our affairs could fall to such great ruin.”
Of these days of precipitous ruin, Burckhardt, founder of modern Renaissance studies, wrote in 1869:
“The first decades of the sixteenth century, the years when the Renaissance attained its fullest bloom, were not favorable to a revival of patriotism; the enjoyment of intellectual and artistic pleasures, the comforts and elegancies of life, and the supreme interests of self-development, destroyed or hampered love of country.” (The Civilization of the Renaissance in Italy, end of Part 1)
Burckhardt seems to be describing a different universe from Bentivoglio, so desperate to prove to posterity that he tried his failing best to defend his homeland's honor. Yet this was the decade that produced Leonardo's Mona Lisa, Michelangelo's David, Raphael's Marriage of the Virgin, Bramante's design for the new St. Peter's Basilica, Josquin des Prez's El grillo (the Cricket), the first chapters of Ariosto's epic Orlando Furioso, and Castiglione's first courtly works at the court of Urbino, soon to be immortalized in the Courtier as the supreme portrait of Renaissance culture. These masterworks do indeed seem to project a world of enjoyment and artistic pleasure in utter disconnect with Bentivoglio's despair. Can this be the same Renaissance?
This double vision is authentic to the sources. If we read treatises, orations, dedicatory prefaces, writings on art or courtly conduct, and especially if we read works written about this period a few decades later—like Vasari's Lives of the Artists, which would be the first to call this age a rinascita—we see what Jacob Burckhardt described, and what popular understandings of the Renaissance focus on: a self-conscious golden age bursting with culture, art, discovery, and vying with the ancients for the title of Europe's most glorious age. Burckhardt's assessment was correct, if we look only at the sources he was looking at. If instead we read the private letters which flew back and forth between Machiavelli and his correspondents we see terror, invasion, plague deaths, a desperate man scrambling to even keep track of the ever-moving threats which hem his fragile homeland in from every side, as friends and family beg for frequent letters, since every patch of silence makes them fear the loved one might be dead.
Machiavelli’s correspondent, Ercole Bentivoglio, typifies the tangled political web which shaped these years. His father had been Sante Bentivoglio, who began as a blacksmith’s son and common laborer but was identified as an illegitimate member of the Bentivoglio family that dominated Bologna (remember Gendry in Game of Thrones?), so Sante was called to rule Bologna for a while when the only other adult Bentivoglio was murdered in an ambush, and young Ercole grew up in a quasi-princely court with all the grandeur we now visit in museums. Ercole’s mother was Ginevra Sforza, an illegitimate niece of Francesco Sforza who had recently conquered Milan, replacing the earlier Visconti dukes who had in turn seized the throne by treachery fifty-five years before. Renaissance politics isn’t turtles all the way down, it’s murders and betrayals all the way down.
Why was life in the Renaissance so bad? This is going to be a tiny compressed version of what in the book will be 100 pages, but for now I’ll focus on why the Renaissance was not a golden age to actually live in, even if it was a golden age in terms of what it left behind.
Let’s look at life expectancy: In Italy, average life expectancies in the solidly Medieval 1200s were 35-40, while by the year 1500 (definitely Renaissance) life expectancy in Italian city states had dropped to 18.
It’s striking how consistently, when I use these numbers live, the shocked and mournful silence is followed by a guy objecting: those numbers are deceptive, you’re including infant mortality—voiced as if this observation should discredit them. Yes, the average of 18 does include infant mortality, but the Medieval average of 35 includes it too, so the drop is just as real. If you want we can exclude those who die before age 12, and we do get a smaller total drop then, average age of death 54 in the 1200s dropping to 45-48 in 1500, so only a 12-16% drop instead of 48%, but the more we zoom the grimmer the Renaissance half proves. Infant mortality (within 12 months) averaged 28% both before and after 1348, so the big drop from Medieval to Renaissance Italy is actually kids who made it past the first year, only to die in years 2-12 from new diseases. We also think of the dangers of childbirth as lowering women’s lifespans, but death from childbirth stayed steady from Medieval to Renaissance at (for Tuscany) 1 death per 40 births, while the increase in war and violence made adult male mortality far higher than female even with the childbirth threat. If we look at the 20% of people who lived longest in Renaissance Italy it’s almost entirely widows and nuns, plus a few diehards like Titian, and poor exiled Cardinal da Costa of Portugal languishing in Rome to the age of 102, with everyone he’d known in the first 2/3rds of his life long gone. Kids died more in the Renaissance, adults died more, men died more, we have the numbers, but I find it telling how often people who hear these numbers try to discredit them, search for a loophole, because these facts rub against our expectations. We didn’t want a wretched golden age. (Demographics are, of course, an average, and different bits of Europe varied, but I’m using the numbers for the big Italian city-states precisely because they’re the bit of Europe we most associate with the golden Renaissance, so if it’s true there, it’s true of the Renaissance you were imagining.)
Why did life expectancy drop? Counter-intuitively the answer is, largely, progress.
War got worse, for one. Over several centuries, innovations in statecraft and policy (which would continue gradually for centuries more) had increased the centralization of power in the hands of kings and governments, especially their ability to gather funds, which meant they could raise larger armies and have larger, bloodier wars. Innovations in metallurgy, chemistry, and engineering also made soldiers deadlier, with more artillery, more lethal weapons, more ability to knock a town’s walls down and kill everyone inside, new daggers designed to leave wounds that would fester, or anti-personnel artillery designed to slice a line of men in half. Thus, while both the Middle Ages and Renaissance had lots of wars, Renaissance wars were larger and deadlier, involving more troops and claiming more lives, military and civilian—this wasn’t a sudden change, it was a gradual one, but it made a difference.
Economic growth also made the life expectancy go down. Europe was becoming more interconnected, trade increasing. This was partly due to innovations in banking (which had started in the 1100s), and partly, yes, the aftermath of the Black Death which caused a lot of economic change—not growth but change—some sectors growing, others shrinking, people moving around, people trying to stop people from moving around, markets shifting. There were also innovations in insurance, for example insuring your cargo ship so if it sinks you don’t go bankrupt like our Merchant of Venice. This meant more multi-region trade. For example, weaving wool into fine-quality non-itchy thread required a lot of oil, without which you could only make coarse, itchy thread. England produced lots of wool but no oil (except walnuts), so, in the Renaissance, entrepreneurs from England, instead of spinning low-profit itchy wool, started exporting their wool to Italy where abundant olive oil made it cheap to produce high-quality cloth and re-export it to England and elsewhere. This let merchants grow rich, prosperity for some, but when people move around more, diseases move more too. Cities were also growing denser, more manufacturing jobs and urban employment drawing people to crowd inside tight city walls, and urban spaces always have higher mortality rates than rural. Malaria, typhoid, dysentery, deadly influenza, measles, the classic pox, these old constants of Medieval life grew fiercer in the Renaissance, with more frequent outbreaks claiming more lives.
The Black Death contributed too—in school they talk as if the plague swept through in 1348 then went away, but the bubonic plague did not go away, it remained endemic, like influenza or chickenpox today, a fact of life. I have never read a full set of Renaissance letters which didn’t mention plague outbreaks and plague deaths, and Renaissance letters from mothers to their traveling sons regularly include, along with advice on etiquette and eating enough fennel, a list of which towns to avoid this season because there’s plague there. Carlo Cipolla (in the fascinating yet tediously titled Before the Industrial Revolution) collected great data for the two centuries after 1348, in which Venice had major plague bursts 7% of years, Florence 14% of years, Paris 9% of years, Barcelona 13% of years, and England (usually London) 22% in the earlier period spiking to 50% in the later 1500s, when England saw plague in 26 out of 50 years between 1543 and 1593. Excluding tiny villages with little traffic, losing a friend or sibling to plague was a universal experience from 1348 clear to the 1720s, when plague finally diminished in Europe, not because of any advance in medicine, but because fourteen generations of exposure gave natural selection time to work, those who survived to reproduce passing on a heightened immune response, a defensive adaptation bought over centuries by millions of deaths.
Today thousands of cases of Y. pestis (the plague bacterium) still occur each year, largely in sub-Saharan Africa and East Asia where it was not endemic so immunities didn’t develop. And if geneticist Mihai Netea is correct that the immune mutation which helps those of European descent resist Y. pestis also causes our greater rate of autoimmune disorders, then the Black Death is still constantly claiming lives through the changes it worked into European DNA over 400 years (and literally causing me pain as I type this, as my own autoimmune condition flares). While the 1348 pandemic was Medieval, most of the Middle Ages did not have the plague—it’s the Renaissance which has the plague every single day as an apocalyptic lived reality.
Economic growth also made non-military violence worse. Feuds (think Montagues and Capulets) were a Medieval constant, but the body count of a feud depends a lot on how wealthy the head families are, since the greater their wealth and the larger their patronage network, the larger the crowd of goons on stage in the opening scene of Romeo & Juliet when partisans of the two factions are biting their thumbs at each other, and the larger the number of unnamed men who also get killed in the background while Romeo fights Tybalt. In Italy especially, new avenues for economic growth (banking and mercenary work) quickly made families grow wealthy enough to raise forces far larger than those of the governments of their little city states, which made states powerless to stop the violence, and vulnerable to frequent, bloody coups. The Bentivoglios of Bologna and Sforza of Milan (whose marriage alliance produced Ercole who wrote that letter to Machiavelli) had risen by force, ruled by force, and were in turn overthrown by force, several times each, in fact, as rulers were killed, then avenged by returning sons or nephews, and cities flip-flopped between rival dynasties every few years:
In the 1400s most cities in Italy saw at least four violent regime changes, some of them as many as ten or twelve, commixed with bloody civil wars and factional massacres, until all Italy's ruling houses were so new that the Knights Hospitaller—who normally required knights to have been noble for four generations to join—let Italians in with only two generations because otherwise there would have been no one. Petrarch talked about this in his poem Italia Mia, which we think was written by 1347 (i.e. before the Black Death); he described Italy's flesh covered with mortal wounds, caused by "cruel wars for light causes, and hearts, hardened and closed/ by proud, fierce Mars," and his poor poem begging Italy's proud, hard-hearted people for "Peace, peace, peace." It sounds just like what Ercole described to Machiavelli, doesn't it? Well, Petrarch's poem is as far from Machiavelli's history as Napoleon's rise is from Yuri Gagarin's space flight, a long time during which the wars grew worse, armies bigger, cities richer, plagues more frequent, a steady escalation of the same things Petrarch feared would wipe out Italy 150 years before.
Important: none of this was new in the Renaissance! These were all gradual developments: banking, trade, centralization, the cultural produce of the Renaissance too (paintings, cathedrals, music, epics), these had all been gradually ramping up for centuries, changing the character of Europe decade by decade. Banking innovations started in the 1100s, insurance innovations in the 1300s, economic shifts before as well as after 1348, political shifts accumulated over centuries; it's all incremental. Thus, when I try to articulate the real difference between Renaissance and Medieval, I find myself thinking of the humorous story "Ever-So-Much-More-So" from Centerburg Tales (1951). A traveling peddler comes to town selling a powder called Ever-So-Much-More-So. If you sprinkle it on something, it enhances all its qualities, good and bad. Sprinkle it on a comfy mattress and you get mattress paradise, but if it had a squeaky spring you'll never sleep again for the noise. Sprinkle it on a radio and you'll get better reception, but agonizing squeals when the signal flares. Sprinkle it on the Middle Ages and you get the Renaissance. All the key qualities were already there, good things as well as bad, poetry, art, currents of trade, thought, finance, law, and statecraft changing year by year, but add some Ever-So-Much-More-So and the intensity increases, birthing an era great and terrible. Many different changes reinforced each other, all in continuity with what came before, just higher magnitude, the fat end of a wedge of cheese, but it's the same cheese on the thin end too. The line we draw—our slice across the cheese—we started drawing because people living in the Renaissance started to draw it, felt it was different, claimed it was different, and their claims reordered the way we think about history.
Some more quick un-fun facets of Renaissance life: while the Medieval Inquisition started in 1184, it didn't ramp up its book burnings, censorship, and executions to a massive scale until the Spanish Inquisition in the 1470s and then the printing press and Martin Luther in the 1500s (Renaissance); similarly witchcraft persecution surged to scales unseen in the Middle Ages after the publication of the Malleus Maleficarum in 1486 (Renaissance); and the variety of ingenious tortures being used in prisons increased, rather than decreasing, over time. Rule of thumb: most of the scary practices we think of as "Medieval" were either equally true of the Renaissance, worse in the Renaissance, or only started in the Renaissance. If you want corrupt popes, they too can be more terrible as they get richer. And pre-sanitation, the more luxury goods traveled, the more people grew wealthy, the wider the variety of food people ate, and with more kinds of food came more kinds of parasites living in your intestines eating you from the inside out, hooray! Even in the Middle Ages we can tell your social class from the variety of parasite eggs in your preserved feces (the more you know!), but in the Renaissance the total could go up, and the frequency and intensity of chronic pain with it (not to mention a wider variety of horrible toxic things doctors would try to feed you as a cure; before sanitation more doctors = bad, not good).
In sum, if you’re a time traveler and you’re being banished, don’t pick the Renaissance.
As for how an age so terrible to live through produced the masterpieces and innovations we still hold in awe, my ultrashort answer is that Renaissance art and culture was also a gradual ramp-up from ever-changing Medieval art and culture, and that the leaps we seem to see in the later period are the desperate measures of a desperate time.
Legitimacy is a key concept here. The secret we all know is that governments, countries, laws, they’re all just a bunch of stuff we made up. They exist only as long as we all keep agreeing they exist, and act accordingly. Far more than Tinkerbell, regimes and governments need us to believe in them or they die. Sometimes this death takes the form of people just ignoring old structures, like in the Hellenic age when a remote Greek colony might hear from the founding city so infrequently that it starts ignoring the empire and just makes its own government. A more common consequence when people stop believing in governments is that some rival will take advantage of that lack of confidence, and rise up to claim power instead, whether through an electoral primary challenge or a bloody civil war.
For this reason, regimes work hard to gain legitimacy, that is, to acquire any and all things that make people agree the regime is real, and has the right to rule. When a usurper murders the old king but marries his widow, sister, or daughter, that's an attempt to secure legitimacy in a world where people are used to government going with blood right. When no local royal-blood bride is available, the usurper might instead marry a princess from a famous distant kingdom, and fill his court with expensive, exotic treasures and other indications that he's connected to foreign powers and money—this is another bid at legitimacy since it implies the new ruler has strong allies and the means to bring prosperity and trade. There are lots of other ways to project legitimacy: getting trusted local elites to work for you, getting religious leaders to bless you, publishing your pedigree (fake or real) with mighty ancestors, cracking down on crime and having showy trials, paying an astrologer to circulate your horoscope with great predictions, mounting a big parade, building an equestrian statue of yourself in the square that everyone walks past, receiving ambassadors in a showy way so everyone sees how much foreign powers honor you, repairing bridges and caring for orphans so people talk about your generosity and virtue, even a modern city funding a zoo and orchestra and art museum is that city projecting legitimacy with the trappings we associate with cultured power. When a regime has lots of sources of legitimacy, it makes people more willing to go along with that regime continuing. Some sources of legitimacy tie into a culture's traditional ideas about what makes power lawful (religion, heredity, virtue, particular values), while other sources of legitimacy, like a collection of exotic animals or a fancy palace, just impress people, and make them feel that life under this regime will probably be good, and that overthrowing it would probably be difficult if it has money to throw away on palaces and elephants.
Thus the radical oversimplification is that, when times get desperate, those in power pour money into art, architecture, grandeur, even science, because such things can provide legitimacy and thus aid stability. Intimidating palaces, grand oratory, epics about the great deeds of a conqueror, expensive tutors so the prince and princess have rare skills like Greek and music, even a chemical treatise whose dedication praises the Duke of Such-and-such, these were all investments in legitimacy, not fruits of peace but symptoms of a desperate time. In an era when a book cost as much as a house (it really did!), and Florence's Laurenziana library cost more, as a share of GDP, than the Moon Landing, you don't get that level of investment unless elites think they're going to get something out of it. Just as today giant corporations fund charities or space tech because they get something out of it, publicity raising their stock prices, so a mighty merchant family might repair a church or build a grand public square and put their coat of arms on it, drawing investment and intimidating rivals.
Culture is a form of political competition—if war is politics by other means, culture is too, but lower risk. This too happened throughout the Middle Ages, but the Renaissance was ever-so-much-more-so in comparison, and whenever you get a combination of (A) increasing wealth and (B) increasing instability, that’s a recipe for (C) increasing art and innovation, not because people are at peace and have the leisure to do art, but because they’re desperate after three consecutive civil wars and hope they can avoid a fourth one if they can shore up the regime with a display of cultural grandeur. The fruits fill our museums and libraries, but they aren’t relics of an age of prosperous peace, they’re relics of a lived experience which was, as I said, terrible but great.
All this I’ll explore further in the book, but if you want more info in the meantime you can get an excellent overview of the period in Guido Ruggiero’s The Renaissance in Italy, and a look at how this fed philosophical innovation and birthed Renaissance humanism in James Hankins’s Virtue Politics. For today, though, our goal isn’t to look deeply at the David, it’s to look at the glitter we just scraped off it, and to understand where that glitter comes from.
Part 2: Where did the Myth Come From in the First Place? (A Renaissance Story)
Whenever I'm with Medievalists and the subject turns to one of the bad things people say about the Middle Ages (dark age, backwards, superstitious, stagnant, oppressive, enemy of progress, all homogenous), I make a point of speaking up and saying, "Yeah, that's my guys' fault. Sorry." It was a joke the first time, and it's still half a joke, but I keep doing it because there's this special smile under the resulting chuckle, this pause, warming, affirming, on the Medievalist's face that says: I've always felt I deserved an apology from the Renaissance! Thank you!
Because the beginning of the problem was the Renaissance’s fault.
Pretty much every culture, when it tells its history, divides it into parts somehow (reigns, eras, dynasties). These labels may not seem like a big deal, but they have a huge effect on how we imagine things. Think of how the discourse about boomers vs. Gen-X vs. millennials affects people's self-identities, who associates with whom, and the kinds of discourse we can have with those terms that we couldn't have with different ones. The lines and labels in our history are powerful. In my Terra Ignota science fiction novels I mention that the people in my 25th century society debate whether World War I ended in 1945 or 1989, and it always blows readers' minds for a few seconds, and then follows the reflection: yeah, I could see WWI and WWII being considered one thing, like the Wars of the Roses. My first exposure to the way this makes your brain go *whfoooo* was as a kid, hearing Eugen Weber provocatively call WWI and WWII "The Second Thirty Years War". Feels weird, right? Weird-powerful.
People living in the European and Mediterranean Middle Ages generally (oversimplification) divided history into two parts, BC and AD, before the birth of Jesus and after. For finer grain, you used reigns of emperors or kings, or special era names from your own region, i.e. before or after a particular event, rise, reign, or fall. There was also a range of traditions subdividing further, such as Augustine’s six ages of the world which divided up biblical eras (Adam to Noah, Noah to Abraham, etc.), though most of those subdivisions are pre-historical, without further subdivision post Christ’s Incarnation. The Middle Ages also had a sense of the Roman Empire as a phase in history, but it was tied in with the BC-to-AD tradition, and with ideas of Providence and a divine Plan. Rome had not only Christianized the Mediterranean and Europe through the conversion of Constantine c. 312 CE, but authors like Dante stressed how the Empire had been the legal authority which executed Christ, God’s tool in enacting the Plan, as vital to humanity’s salvation as the nails or the cross. Additionally, many Medieval interpreters viewed history itself as a didactic tool, designed by God for human moral education (not the discipline of history, the actual events). In this interpretation of history, God determined everything that happens, as the author of a story determines what happens. The events of the past and life were like the edifying pageant plays one saw at festivals: God the Scriptwriter introduces characters in turn—a king, a fool, a villain, a saint—and as we see their fates we learn valuable lessons about fickle Fortune, hypocrisy, the retribution that awaits the wicked, and the rewards beyond the trials and sufferings of the good. The Roman Empire had been sent onto the world’s stage just the same, a tool to teach humanity about power, authority, imperial majesty, law, justice, peace, offering a model of supreme power which people could use to imagine God’s power, and many other details excitedly explored by numerous Medieval interpreters. (Many Renaissance interpreters still view history this way, and the first who really doesn’t do it at all is Machiavelli.)
The two people most directly responsible for inventing the Middle Ages are two men from Tuscany: Petrarch (Francesco Petrarca, 1304-1374), and Leonardo Bruni (1370-1444).
Petrarch was the first person to talk about the era after the Roman empire as a separate, bad period of shadow, misery, darkness, and decay. Petrarch gained his fame with his Italian poetry, and popularized the sonnet (though we have a long time still to wait for Shakespeare), but later in his life he was part of a circle of Italian scholars who loved, loved, loved, loved Cicero, and read his political works intensively as they thought about questions of republicanism and statecraft. Petrarch described himself as having been born in exile. He was born in exile in space quite literally, while his parents were in banishment, and he grew up in Avignon during the period when the papacy was there under French control. But he also considered himself an exile in time, exiled from that community of antiquity which was the true home of his spirit. I already quoted his lament Italia Mia, and his sense of the degeneration of his era was enhanced by the feeling that France's control of the papacy had ravaged and spoiled Rome and Italy. He also lived through the Black Death, and lost almost all his scholar-friends in it. Two surviving friends wrote to him after the main wave had passed to plan a precious reunion—they were attacked by bandits on the way, and one was murdered, while the other escaped but was missing for many months. You can understand why Petrarch, reading of the Pax Romana when the ancient texts claim you could walk in safety from one end of Rome's empire to the other, might see his age as one of ash and shadow. He projected that ash and shadow back on everything since Rome, lumping together for the first time the long sequence we now refer to as the Middle Ages.
Petrarch, importantly, did not claim his era was already a golden age, nor did he use the word Renaissance; he claimed his era needed a transformation, that desperate times called for desperate measures, and that if Italy was to have any hope of healing it must look to its ancient past, to Rome, the Pax Romana, that dream age when there were no bandits on the road or pirates in the sea. The lost arts that nurtured the age of Emperors were languishing in ancient tomes waiting to be restored if only people reached for them. We know the Renaissance as the era that revived a lot of lost Roman technologies, geometry, engineering, large-scale bronze work, and those were important, but what Petrarch really thought would change things were people and intellectual technologies, not science or engineering tools. Petrarch wanted the library that educated Cicero, and Seneca, and Caesar. When we today look at ancient Rome we're often struck most by the wicked Emperors, Caligula, Nero, the anecdotes of decadent corruption, but Petrarch instead saw the republican Brutus, who executed his own sons when they conspired to take over the state—in a world where city after city was falling to monarchal coups, and Lord Montague was accustomed to using his great influence to make the Prince let Romeo get away with murder, the thought of Brutus putting Rome before his family felt like a miracle. (Unhelpfully, Petrarch didn't write a single clear treatise where he spelled this out, but if you want a sample try his letters and invectives, or for the mega-thorough scholarly version see James Hankins's Virtue Politics.)
Important: even using antiquity wasn't new in the Renaissance. Medieval people had been reading Seneca, and Cicero, and Virgil the whole time, and imitating and reusing ancient stuff; they just used the classics differently from how Petrarch did, just as the classics are also used differently in the 17th century, and the 19th century, and today. There were some major innovations in Renaissance engagement with the classics (several stages of innovation, in fact) that differentiate it from Medieval engagement, but those are complexities for another day.
Leonardo Bruni was the next step. He was a child when Petrarch died, and grew up amid the heady excitement of trying to use classical education to create the golden age Petrarch proposed. Bruni studied Latin with a focus (as Petrarch encouraged) on imitating ancient Latin instead of Medieval Latin, whose grammar and vocabulary had evolved (as any language does) over the centuries. Bruni served as Chancellor of Florence, and imitated ancient Roman historians in writing his History of the Florentine People, which for the first time formally divided history into three parts: ancient, middle, and modern, the last being what we now call the Renaissance. He also filled his history with analysis and deep interpretation, which many Renaissance scholars will tell you was the first modern history, the first history of a post-classical time/place, and the first truly analytic history written since antiquity, and then Medievalists will scream at them and pile up examples of Medieval chronicles full of framing and moral analysis, which absolutely are doing sophisticated interpretive work, and vary enormously from each other, but Bruni's is recognizably different. Why? Largely because Bruni actively wanted his history to seem innovative and different, and wrote with that as a goal, in a new kind of Latin, with new structure, setting out to make something everyone would look at and say: Wow, it's like what the Romans did!
With Bruni we had three periods—ancient, medieval, and the new age. That new age wasn’t called rinascita until Vasari’s Lives of the Artists in 1550 (more than a century after Bruni) and renaissance proper was coined by Jules Michelet in 1855, but Bruni’s idea of three periods, and that this new one could be a golden age, caught on quickly because of its potential for… (da da da daaa!) …legitimacy! Back then, as now, claiming that you’re the start of a new golden age is an ideal way to make your (teetering, illegitimate) regime seem exciting, full of momentum, glorious. History-writing modeled on Bruni quickly became all the rage, and you could awe people with a history of how great your city/people/family is, get them excited about a golden age, make yourself seem legitimate. And Bruni’s history writing had another power too.
One set of events Bruni described in his Florentine History was the conquests of Gian Galeazzo Visconti the "Viper of Milan" (1351-1402), a man who lived up in every way to his badass family crest of a serpent swallowing a helpless little dude. After ambushing and supplanting his uncle, the Viper seized Milan (bribing appropriate powers to make him duke), then took Verona, Vicenza, Padua, and tried for all of northern Italy including Bologna and Florence, securing a great victory at the Battle of Casalecchio in 1402. But then (according to Bruni) brilliant Florentine cunning arranged the would-be conqueror's defeat and downfall. When Bruni's history circulated in 1444, the Viper's son Duke Filippo Maria Visconti did a spit take: "What the?! We didn't lose that war! Dad dropped dead of a f*ing fever and the troops had to go home! The Florentines never beat us in a single battle! They can't say they won the war!" They can. They did.
It turns out history isn’t written by the winners; history is written by the people who write histories.
So, what are you going to do about it, son of the Viper of Milan? There's only one thing to do: hire one of these new classically-educated humanist-y types to write a history of your city and your family framed your way, replacing the murdered-his-uncle bribed-the-king totally-illegitimate conquest-by-force narrative with a glorious lineage that constantly kicked Florence's ass!! That's what he did—that's what everybody did, Milan, Venice, France, England, Hungary, Naples; everybody had to have a history, and all the histories claimed there had been a bad middle age, that it was over, and that we were now in the glorious classical-revival-powered new age which had the potential to surpass it thanks to the virtues and glories of [Insert Prince Here]. This is why, up in England, baby King Henry VI's uncle Duke Humphrey of Gloucester tried to hire Leonardo Bruni to come to England and work for him, and write a history that would shore up the tenuous Lancastrian claim to the throne (we're entering the Wars of the Roses here). And this is why, while Bruni stayed in Florence, another major Florentine figure, Poggio Bracciolini, actually was lured by the high pay to go to England and work for Humphrey's rival Cardinal Beaufort. And all these histories pick and choose details to make the current regime/ruler look great and legitimate, at the expense of making the newly-invented middle age look bad.
This is why all Medievalists, deep down inside, know they deserve an apology from the Renaissance.
One attempt at a solution is dropping the term Renaissance, but that doesn't actually solve the problem, since it leaves us with antiquity and a period from then to… what? Is the dividing line the Enlightenment? Industrialization? Colonialism? The Agricultural Revolution? The French Revolution? WWI? No matter how late you push the line, any of these divisions is still accepting Bruni's ancient-middle-modern division, and involves making a claim about what begins the modern. Normal parlance in history now is "early modern," which begins with [insert-scholarly-squabble-here] and ends roughly with the French Revolution, which is generally agreed to kick off "modern" proper. While "early modern" does avoid accepting the claim that the Middle Ages were bad and needed a rebirth, and I use it myself, I also think it's a dreadful term, since (A) it's confusing ("early modern" sounds like the Crystal Palace, not Shakespeare's Globe), and (B) the term makes your selected start date even more of a judgment call about what makes us modern. Because the real problem with the myth of the bad Middle Ages versus golden Renaissance is not what Petrarch and Bruni created within the Renaissance itself—it's what happened later to entangle both terms with an equally problematic third term: modern.
Part 3: Why is the Myth of a Renaissance Golden Age Retold so Much? (A Post-Renaissance Story)
The thing about golden ages—and this is precisely what Petrarch and Bruni tapped into—is that they’re incredibly useful to later regimes and peoples who want to make glorifying claims about themselves. If you present yourself, your movement, your epoch, as similar to a golden age, as the return of a golden age, as the successor to a golden age, those claims are immensely effective in making you seem important, powerful, trustworthy. Legitimate.
In sum, one of the most powerful tools for legitimacy is invoking a past golden age. Under my rule we will be great like X was great! Whether it’s a giant golden age (Rooooome!) or a tiny golden age (the US 1950s!), if you can claim to be bringing it back, you can make a very clear, appealing case for why you should have power. This claim can be made by a king, a duke, a ruling council, a political party, an individual, or a whole movement. It can be made explicitly in rhetoric (I am the new Napoleon!) or implicitly by borrowing the decorative motifs, vocabulary, and trappings of an era. An investment banking service that uses a Roman coin profile as its logo, names its different mutual funds after Roman legions, and has a pediment and columns on its corporate headquarters is trying to project legitimacy from the idea of antiquity as a golden age of power and stability.
When the newborn United States of America decided to make the Washington Monument a giant obelisk, that was another bid at legitimacy, projecting power by invoking the golden ages of ancient Egypt and conquering Rome, combined in the Washington Monument's case with other gestures, like using high-tech, more-expensive-than-gold aluminum for the tip instead of the traditional gold, mixing golden age claims with power claims about wealth and science.
So…
…because the Renaissance had called itself a golden age, by the 17th century it had joined the list of epochs that you can invoke to gain legitimacy, and has been invoked that way many times. This is why 18th and especially 19th and early 20th century governments and elites raced to buy up Italian Renaissance art treasures and display them in their homes and museums. This is why Mussolini, while he mostly invoked imperial Rome, used the Renaissance too, and even made special arrangements to meet Hitler inside the Vasari Corridor in Florence to show off the art treasures of the Uffizi. And this is why the US Library of Congress building is painted all over inside with imitations of Renaissance classicizing frescos and allegorical figures in Renaissance style, even though the quotations they include and values they celebrate are largely not Renaissance.
One consequence of golden ages being so powerful is that powers squabble over them: "I'm the true successor of [XXX]!" "No, I'm the true successor!" You see this in the fascinating modern-day dispute over the name Macedonia, in which Greece and the country now called North Macedonia both want to be seen as the land of Alexander the Great, and have argued over the name tooth and nail, dragging in both the UN and NATO. Since golden ages are mythical constructions (the events are real but the golden age-ness is mythmaking) they're easy to redefine to serve claims of true successor status—all you have to do is claim that the true heart that made the golden age great was X, and the true spirit of X flourishes most in you. Any place (past or present) that calls itself a new Jerusalem, new Rome, or new Athens is doing this, usually accompanied by a narrative about how the original has been ruined by something: "Greece today is stifled by [insert flaw here: conquest, superstition, socialism, lack of socialism, a backwards Church, whatever], but the true spirit of Plato, Socrates and the Examined Life flourishes in [Whateverplace]!"
Ancient Rome is particularly easy to use this way because Rome had several phases (republic, empire, Christian Rome) so if some rival has done a great job declaring itself the New Roman Empire you can follow up by saying the Empire was the corrupt decadent period and the Roman Republic was the true Rome! Simply quote Cicero and talk about wicked emperors and you can appropriate the good Rome and characterize your rivals as the bad Rome. If republic, empire, and Christian Rome are all claimed, you can do something more creative like the 19th century romantic movement which claimed the archaic pastoral Rome of Virgil’s Georgics, replacing pediments and legionary eagles with garlands and shepherds and claiming a mythic golden age no one had been using lately.
The same is true of claiming Renaissance. If you can make a claim about what made the Renaissance a golden age, and claim that you are the true successor of that feature of the Renaissance, then you can claim the Renaissance as a whole. This is made easier by the fact that "the Renaissance" is incredibly vague. When did it start? 1400? 1350? 1500? 1250? 1550? 1348? When did it end? 1600? 1650? 1700? You can find all these dates if you dig through books about "the Renaissance" written in different countries and different fields (art history, literary history). I pointed out that Petrarch's Italia Mia is as far from Ercole Bentivoglio's letter to Machiavelli as Napoleon's rise from Yuri Gagarin's space flight, but even at Machiavelli we're still only half-way through the large, vague period that different people label Renaissance. On my own university campus, if I drop by different departments and ask colleagues when the Renaissance begins, I get 1200 or 1250 from the Italian lit department (some of whom say Machiavelli is already "modern"), but in the English building I might get 1450 or even 1500. I think drawing a line after the Black Death makes sense for Italy at least, or maybe at 1400, but there are plenty of counter-arguments, and there are people on campus who identify as Medievalists but study things later than some of what I work on. I think it's great for Medieval and Renaissance to overlap, since I—looking mainly forward—ask different questions about someone like Petrarch from the questions my Medievalist colleagues ask. The only "wrong" answer to where the line falls, in my opinion, is to believe there is a clear line.
And if we zoom into this long, vague period, when was the “high Renaissance” i.e. the best part, the most characteristic part? If you ask a political scientist it’s usually the very early 1400s, when Bruni and other innovative political thinkers were writing; if you ask an art historian it’s the decades right after 1500 when ¾ of the Ninja Turtles overlapped; if you ask a theater scholar it’s Shakespeare who was born fully 200 years later than Bruni and his peers discussing politics. It all depends on what you think defines the Renaissance, so if you have a different focus then different dates feel like periphery or core.
So, just as when we invoke Rome we can pick republican Rome, imperial Rome, pastoral Rome, Christian Rome, the conquering Rome of Julius Caesar or the peaceful Rome of the Pax Romana, similarly there are a huge range of Renaissances one can invoke: Bruni’s, Raphael’s, Machiavelli’s, Luther’s, Shakespeare’s. But choosing your Renaissance is an especially potent question because of… (drumroll please)… the X-Factor.
Okay, deep breath.
After the Renaissance, in the period vaguely from 1700 to 1850, everyone in Europe agreed the Renaissance had been a golden age of art, music, and literature specifically. Any nation that wanted to be seen as powerful had to have a national gallery showing off Renaissance (mainly Italian) art treasures, and capital buildings with Renaissance neoclassical motifs, while an individual with elite ambitions had to know classicizing Latin, and a bit of Greek, and have opinions about Raphael, Titian, Petrarch, and the polyphonic motets of Lassus. Seriously: in the original Holmes stories (so 1850-1910), Doyle, after having Watson establish Holmes's "Knowledge of literature—nil. Philosophy—nil.", still has Holmes carry a pocket Petrarch and write a monograph on the polyphonic motets of Lassus, because that's what a smart, impressive person did in 1850. This also meant that Renaissance art treasures were protected and preserved more than Medieval ones—if you're valorizing the Renaissance you're usually criticizing the Middle Ages in contrast, so these generations learned to think of Renaissance art as good taste and the periods on both sides (Medieval and baroque) as bad taste, and a lot of great Medieval art was left to gather dust, or rot, or was even actively destroyed, since nothing invokes the Renaissance like sweeping away the "bad" medieval. As a result, the Renaissance became a self-fulfilling source base: go to a museum today and you see much more splendid Renaissance art than Medieval, leading to the natural conclusion that the Renaissance produced more art in general, but the Middle Ages did make splendid art, it's just that later centuries didn't preserve it as carefully, so less survives, and what survives is more likely to be in storage than in the main gallery.
The transition from people being excited about Renaissance art and culture to being excited about the Renaissance as an era came in the mid-1800s, primarily with the work of Swiss historian Jacob Burckhardt and his 1860 The Civilization of the Renaissance in Italy. It's a gorgeous read, unskimmably rich prose, and Burckhardt's work was a major breakthrough moment for the practice of history as a whole, because he showed how you could write a history, not of a country or a person, but of a culture, discussing the practices and ideas of an era, examining art and artists side-by-side with authors, soldiers, and statesmen as examples of people of a period and the way they thought, acted, and lived. The book pioneered cultural history, the practice of trying to study societies and their characteristics, acknowledging the interrelationship of politics with art and culture instead of examining them separately. Cultural history remains a major field, and one where some of the best work on once-neglected topics like women, pop culture, and non-elites has flourished. But…
Burckhardt was also the main figure who popularized the terms "modernity" and "modern." He argued that the Renaissance was the birth of "modern man," and that modern man was defined by a powerful sense of human excellence and human potential. According to Burckhardt, the core of this change—the spirit of the Renaissance which sparked the triumphant path of progress toward modernity—was the rise of individualism. As he says in the beginning of Part II:
In the Middle Ages both sides of human consciousness—that which was turned within as that which was turned without—lay dreaming or half awake beneath a common veil. The veil was woven of faith, illusion, and childish prepossession, through which the world and history were seen clad in strange hues. Man was conscious of himself only as member of a race, people, party, family, or corporation—only through some general category. In Italy this veil first melted into air; an objective treatment and consideration of the state and of all the things of this world became possible. The subjective side at the same time asserted itself with corresponding emphasis; man became a spiritual individual, and recognized himself as such.
The Medievalists reading this are gnashing their teeth, and yes, this moment is core to the persistence of the myth of the bad, backward, stagnant, sleepy middle ages, and equally core to the myth of the Renaissance Man: awake, ambitious, aware of his own power, rational, ripping through the cobwebs of superstition, desirous of remaking the world but also of intentionally fashioning him or herself into something splendid and excellent. A human being who realizes human beings can be their own masterpieces.
In the mid-19th century, when Burckhardt wrote, Europe was very enamored of individualism, of new democratic ideas of government, of nationalism and ideas of individual consciousness and national consciousness, and of the notions of genius, both genius individuals and the geniuses of peoples. Thus, Burckhardt's claim that the Renaissance was born from individualism gave all sorts of 19th century movements the ability to claim the Renaissance golden age as an ancestor. Germany, Britain, the young United States, despite having little to do with Italy, they could all claim to be the true inheritors of Renaissance greatness if they could claim that individualism and the opportunity to be a self-made man prospered more truly among their peoples than in Italy.
But there was more: by claiming that the Renaissance—and all its glittering art and innovation—was caused by individualism, Burckhardt was really advancing a claim about the nature of modernity. Individualism was an X-Factor which had appeared and made a slumbering world begin to move, sparking the step-by-step advance that led humanity from stagnant Medieval mud huts to towers of glass and iron—and by implication it would also define our path forward to an even more glorious future. In other words, the X-Factor that sparked the Renaissance was the defining spirit of modernity. If individualism was responsible, not only for the Renaissance, but for the wonders of modernity, then logically those regimes of Burckhardt's day which most facilitated the expression of individualism could claim to be the heart of human progress and to hold the keys to the future; those nations which did not advance individualism (where socialism prospered, for example, or "collectivism" which was how 19th century Europe characterized most non-Western societies) were still the slumbering Middle Ages, in need of being awakened to their true potential by those nations which did possess the X-Factor of human progress.
I hope you winced a few times in the previous paragraph, recognizing toxic 19th century problems (eurocentrism, orientalism, "White Man's Burden" thinking), as well as basic historical errors (spoiler: you can find plenty of individualism in Medieval texts, and lots of things that are absolutely not individualism in Renaissance ones). But those specifics aren't the big problem. The big problem was how entrancing the idea of an X-Factor was, the notion that there is one true innovative spirit which defines both Renaissance and modern, and advances in a grand and exponential curve from Petrarch through Leonardo and Machiavelli on to [insert modern hero here]. Thus Burckhardt birthed what I call the quest for the Renaissance X-Factor. Because when the first scholars disagreed with Burckhardt, they didn't object to the idea that the Renaissance was caused by a great defining X-Factor, they loved that idea, they simply argued about what exactly the X-Factor was.
Thanks to Burckhardt, the Renaissance came to be defined as the period after Medieval but before Enlightenment when something changed and pushed things toward modernity—the moment that the defining spirit of modernity appeared. From that point on, claiming you were the successor to the Renaissance didn't just mean claiming a golden age like Rome, it let you also claim that modernity itself was somehow especially yours. If you could argue that the reason the Renaissance was great was that it did the thing you do, then you are the heart of modernity and progress, even of the future, while those who don't celebrate that spirit are the enemies of progress. Thus every time someone proposed a new X-Factor, a different explanation for what made Renaissance different from Medieval, that made it possible to make new claims about the nature of modernity, and which nations or movements have it right. This model even lets one claim the future: the X-Factor was born in the Renaissance, grew in the Enlightenment and in modernity, and is the key to unlocking the next glorious age of human history as it unlocked both Renaissance and modern. This lets you advance teleological arguments about the inevitable triumph of [democracy, nationalism, atheism, capitalism, whatever]. It's a version of history that's not only legitimizing but comforting, since it lets you feel you know where history is headed, what will happen, who will win.
To give specific examples, if we're in the middle of the Cold War, and an influential historian publishes a book arguing that the X-Factor that sparked the Renaissance was double-entry bookkeeping, i.e. the rise of banking and the merchant class, America can say: "The Renaissance X-Factor was the birth of capitalism! The fact that it was a golden age proves capitalism will make a golden age too, and the true successor of this golden age is our alliance of modern capitalist regimes!" If, on the other hand, we're in a nationalist wave, say in 1848 or 1920, and someone argues that the X-Factor that sparked the Renaissance was the call for national unity articulated in Petrarch's Italia Mia or Savonarola's sermons (this is Pasquale Villari), and that what ended the Renaissance golden age was when Italy was conquered and divvied up among the Bourbons and Hapsburgs, then the Renaissance can be claimed as a predecessor by the Italian unification movement, the German unification movement, any nationalist movement anywhere can claim that uniting peoples into nations is what drives modernity. If we claim the Renaissance was birthed by the rise of secular thought, that Renaissance geniuses were the first people to break through the bonds of superstition, and that Leonardo and Machiavelli were secret atheists (this is Auguste Comte), then we can claim that secularization and the secular state is the heart of human progress and modernity. And if someone claims the X-Factor was republican proto-democratic thought, the political writings and discourse of civic participation unique to the Italian city republics, Florence, Venice (this is Hans Baron), then we can claim that republican democracy is the key to human progress, that modern democracies are the heart of modernity, and everything else is backwards, outside, Medieval, bad, and needs to be replaced.
To this day, every time someone proposes a new X-Factor for the Renaissance—even if it’s a well-researched and plausible suggestion—it immediately gets appropriated by a wave of people & powers who want to claim they are the torch-bearers of that great light that makes the human spirit modern. And every time someone invokes a Renaissance X-Factor, the corresponding myth of the bad Middle Ages becomes newly useful as a way to smear rivals and enemies. As a result, for 160 years and counting, an endless stream of people, kingdoms, political parties, art movements, tech firms, banks, all sorts of powers have gained legitimacy by retelling the myth of the bad Middle Ages and golden Renaissance, with their preferred X-Factor glittering at its heart.
We scholars do our best to battle this, to introduce a complex and un-modern Renaissance, but the very usefulness of the myth guarantees that it will be repeated much more broadly than our no-fun efforts to correct it. A lot of Renaissance historians today reject the idea of a single X-Factor and try instead to talk about combinations of mixing factors. Many of us also try to argue that the Renaissance was not fundamentally modern, that it was its own distinctly un-modern thing. But it’s a hard sell, because the narrative of a special spirit launching us from Petrarch to the Moon Landing is enchanting, and because a complicated, messy, un-modern Renaissance snatches away the golden Renaissances most people meet first. Nobody in this century has read about the French Invasion of 1494, or even about the Guelphs and Ghibellines, before meeting the genius cults of Leonardo and Michelangelo.
Scraping the glitter off to reveal the imperfect and violent David underneath is an assault on our understandings of our past and present, on what it means to be ourselves, even on our sense of where the future is heading. People find that unsettling. And people who look to Renaissance celebrities as role models and intellectual ancestors don’t like to hear about their rough un-modern sides. So people get hostile, or unsettled, they keep telling the myths, and use cherry picked sources to glob the glitter-paint back on. It’s not always done in bad faith—if from early childhood you’ve always learned the Renaissance was sparkling and golden, and you see a bare patch where the glitter has come off, of course you’ll think that bare patch is the error, that the still-sparkly parts are the real thing. You treat the oddball patch as damage, and keep believing what that documentary or museum label told you years ago when you saw your first Renaissance masterpiece and fell in love. So the myth persists, and for every attempt to correct it we’re up against a dozen tour guide scripts, and TV specials, and corporate statements, and outdated textbooks, and new books (fiction and nonfiction alike) that glob the glitter on. So you can understand why, from time to time, Renaissance and Medieval specialists alike just have to stop and scream like Sisyphus.
Conclusion: We Should Aim for Something Better than the Renaissance
This, in not-very-brief, is why we keep telling the myth of the golden Renaissance, and bad Middle Ages.
Now, let’s look again at our other starting question: “If the Black Death caused the Renaissance will the COVID pandemic cause a golden age?” You see the problems with the question now: the Black Death didn’t cause the Renaissance, not by itself, and the Renaissance was not a golden age, at least not the kind that you would want to live in, or to see your children live in. But I do think that both Black Death and Renaissance are useful for us to look at now, not as a window on what will happen if we sit back and let the gears of history grind, but as a window on how vital action is.
The Black Death first: it didn’t cause the Renaissance, no one thing caused the Renaissance, it was a conjunction of many gradual and complicated changes accumulating over centuries (banking, legal reform, centralization of power, urbanization, technology, trade) which came together to make an age like the Medieval but ever-so-much-more-so. The idea that the Black Death caused a prosperity boom comes from old studies which showed that wages went way up after the Black Death, creating new possibilities for laborers to gain in wealth and rise in status (like the golden 1950s). But those were small studies from a few places (mainly bits of England), and we have newer studies now that show that wages only rose in a few places, that in other places wages didn’t rise, or actually went down, or that they started to rise but elites cracked down with new laws to control labor, creating (among other things) the first workhouses, laws limiting freedom of movement, and other new forms of unfreedom and control. What the Black Death really caused was change. It caused regime changes, instability letting some monarchies or oligarchies rise, or fall. It caused policy and legal changes, some oppressive, some liberating. And it caused economic changes, some regions or markets collapsing, and others growing.
If you really want to know what COVID will do, I think the place to look is not Renaissance Italy, but the Viking settlements in Greenland, which vanished around 1410. Did they all die of the plague? No. We’re pretty sure they never got the plague, they were too isolated. But the Greenland settlements’ economy had long depended on the walrus trade: they hunted walruses and sold the ivory and skins, and ships would come from Norway or Iceland to trade for walrus, bringing goods one couldn’t make in Greenland, like iron, or fine fabric, or wheat. But after 1348 the bottom dropped out of the walrus market, and the trading ships stopped coming. By 1400 no ships had visited Greenland for years except the few that were blown off-course by storm. And meanwhile there were labor shortages and vacant farms on the once-crowded mainland. So we think the Greenland Vikings emigrated, asked those stray ships to take them with them back to Europe, as many as could fit, abandoning one life to start another. That’s what we’ll see with COVID: collapse and growth, busts for one industry, booms for another, sudden wealth collecting in some hands, while elsewhere whole communities collapse, like Flint Michigan, and Viking Greenland, and the many disasters in human history which made survivors abandon homes and villages, and move elsewhere. A lot of families and communities will lose their livelihoods, their homes, their everythings, and face the devastating need to start again. And as that happens, we’ll see different places enact different laws and policies to deal with it, just like after the Black Death. Some places/regimes/policies will increase wealth and freedom, while others will reduce it, and the complicated world will go on being complicated.
That’s why I say we should aim to do better than the Renaissance.
Because we can. We have so much they didn’t. We know so much.
For one thing, we know how pandemics work. We know about germs, viruses, contagion, hand-washing, sanitation, lowering the curve. We can make plans, take action that does something. Forget 1348, even in 1918 we didn’t understand how to treat influenza, how it moved, and hand washing was still controversial. 1918 was a US election year but we didn’t discuss delaying or changing the election, there was nothing we could do to make it safer, we didn’t know about six-feet-apart, or sanitizing voting booths, or have the infrastructure to consider vote-by-mail, all we could do was let men (women still had two more years to wait) vote and die. We’ve come a long way.
This year, 2020, this is the first time in the history of this planet that any species has faced a pandemic knowing what it is, and how to take effective action. We aren't taking perfect action, and we absolutely should be criticizing and condemning the many flaws—some small, some huge—in how it's being dealt with, but there is real, efficacious action we can take. As an historian, not just of the plague of 1348, but of the plagues of 1435, and 1485, and 1494, and 1503, and 1596, and 1630, and 1656, what I see is those many generations who not only had to live through this over and over, but who had no hope that their children would ever be free of it. We know about vaccines, and that we'll make one—it'll take a while, and we'll mess up in various ways along the way, but none of us is afraid our grandchildren will grow up spending one year in ten locked up in their homes like this as COVID-19 spreads; we will solve it. We know we'll solve it, and any other age in history would treasure that confidence like a miracle. Because all Petrarch could say after losing his world in 1348 was that, the next time plague comes back, we should console ourselves by thinking of it as dying with much good company.
We know about mental health now too. We’re talking about the mental health crisis of COVID, the mental health costs of fear, poverty, racial injustice—in 1918 we were still excited by electroshock, and debating the radical new idea that outpatient psych treatment might be a thing, instead of doing only institutionalization. We have the language to talk about the mental cost of crisis, and that language alone opens so many possibilities for helping, acting, aiding that previous eras never had. Without the concept, we couldn’t start to try to treat it—now we can.
And we have more language: social safety net, social welfare, social services, concepts for thinking how state and society can put structures in place to relieve human suffering. We have economics now, not the kind of economics that’s trying to prognosticate the stock market, the basic kind with terms like GDP, and unemployment rate, and wealth gap, and retirement age, and inflation. There were economies in 1348, and even social services, hospitals, orphanages, city grain supplies, but we didn’t have a science for discussing it, vast banks of data comparing how different systems work, or help, or harm. After the Black Death when different places tried different policies for their recovery, they didn’t have comparisons, examples—we do. We won’t be guessing in the dark when each nation decides its recovery plan for this pandemic—we won’t be omniscient, but even partial knowledge makes us powerful. That raises the stakes.
Because, like after 1348, there is about to be big change. There are many options before us, different things that states can do post-COVID, some of which will help with poverty, empower labor, lend a helping hand to those exhausted Greenland Vikings as they start again, and there are other things states can do that will instead widen the gaps, entrench elites, help the rich get richer and see the disempowered locked more inescapably into modern versions of workhouses. Different places will make different choices. Some places will see regime changes, others just policy shifts, but there aren’t vast wheels of history that lock a pandemic into automatically yielding a boom or bust. There is no automatic outcome. Rather, all nations in the world are about to make a set of choices which will have a far larger, deeper impact on the next decades, on lives, rights, options, everything, than the normal choices states make in a normal year. The stakes are higher. Unlike in 1348 we have a lot of knowledge, answers, options, concepts we could try like safety nets, or UBI, or radical free markets, many very different things. Which means that acting now, demanding now, voting, pushing, proposing change, we’re shaping policies that will affect our big historical trajectory more than normal—a great chance to address and finally change systemic inequalities, or to let them entrench. There is no predetermined outcome of pandemic; pandemic is a hallway leading to a room where something big is going to be decided—human beings decide.
I love space exploration. I've written novels about it, and a song that makes everyone cry; I make myself tear up thinking about it all the time, especially civilian spaceflight and the hope that this chapter of history might be advanced by curiosity, teamwork, and human hope, not war or competition. But though I had looked forward to it for so long, the recent SpaceX launch was the first I've watched in a long time without tearing up. Because watching a spaceship launch while looters are smashing shops outside my window (and cops ignoring them in favor of harassing peaceful protestors & giving carte blanche to the gun-wielding vigilante on the corner) feels a lot like Leonardo painting the Mona Lisa while the cities around him were literally burning (and rich merchants' private goons guarding their wealth & allies as faction dictated). This year, this specific year, 2020, with the world shut down by plague, and civil strife, and fire in the streets, and teetering distrust in governments, this is the first time our present has reminded me of the Renaissance. But we aren't the Renaissance—we have social science, and efficacious medicine, and the Enlightenment under our belts, when we learned we can analyze our laws and institutions, and step by step replace them with better ones. We aim for better.
At the Renaissance Society of America Conference some years ago, two scholar friends got into a debate about whether Machiavelli’s world was fundamentally pre-modern, different from our own, or whether fundamentally it faced the same problems we do. Responding to the claim that the Renaissance was far more violent than our present, the advocate of Renaissance-as-modern quoted the statistic that modern Chicago had as many murders every year as Renaissance Florence. The rebuttal that surged in my mind was that the population of Florence was less than 100K, so Chicago’s millions have far fewer murders per capita, but the other speaker had a far better answer. We’re working to change that murder rate. We study it, understand it, plan interventions, act. We believe it’s a problem we can solve, should solve, that citizen and state should act, and if the state will not the state should change. We have policy studies, plans, alternatives.
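To spell out the per-capita arithmetic behind my mental rebuttal (a minimal sketch with one assumed figure: the anecdote quoted no exact populations, so the roughly 2.7 million I use for modern Chicago is a round estimate of the city's current population, not a number from the debate): since the quoted statistic has the two cities matching in absolute murders per year, the ratio of their murder rates is simply the ratio of their populations.

$$\frac{\text{Florence rate}}{\text{Chicago rate}} = \frac{M/P_{\text{Florence}}}{M/P_{\text{Chicago}}} = \frac{P_{\text{Chicago}}}{P_{\text{Florence}}} \approx \frac{2{,}700{,}000}{100{,}000} = 27$$

Even at the generous upper bound of 100,000 for Florence's population, the same yearly body count works out to a per-capita murder rate at least twenty-seven times higher in Renaissance Florence, which is all my rebuttal amounted to; the other speaker's answer was still the better one.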
Petrarch wanted to end the cruel wars for light causes that were wounding Italy, but had no plan beyond sending his poem out into the world, and urging elites to have their kids read Cicero. Machiavelli also wanted to end the cruel wars for light causes, and seeing that reading Cicero had failed he proposed a new way of evaluating history, collecting examples of what worked and didn't in the past, basing our statecraft and actions on them so the next time we try things we'll choose more wisely. It was the birth of social science. It took us a long time to get good at it, to turn the observations in The Prince into big databases and systematic studies, just as it took a long time for medicine to get from the four humors to our confidence that we can make a vaccine, but we can make one. We can make good social policy. Will we do it perfectly? No. Many bad policies will be advanced, just as vaccines and treatments will be distributed unfairly and slowed down by bigotry and selfishness. But we can do it, we have tools, as real in our hands and libraries as the knowledge of vaccines is real—tools Machiavelli and Petrarch would have given anything to have. We can aim for better than another Renaissance.
"Warm, generous, and inviting," Inventing the Renaissance provides a witty and irreverent journey through the fantasies historians have constructed about the supposed Dark Ages and golden Renaissance, and exposes the terrible yet often tender reality beneath.