Friday, October 6, 2017
Though “epic” may be the technically correct term (in that it is Homeric in its scope), the word does not do justice to Gore Vidal’s spectacular and monumental novel Creation. Set primarily in ancient Persia during the reigns of Darius, Xerxes, and Artaxerxes, Creation combines the scale and vision of Homer’s Odyssey with the intimacy of Pearl S. Buck’s The Good Earth. The narrative is presented as the autobiography of Cyrus Spitama, half-Persian, half-Greek, grandson of the prophet Zoroaster and bosom friend of the Crown Prince and future Great King, Xerxes. It spans his long life, from his birth in a Zoroastrian cult, through his entry into the Persian court, his ambassadorships to India, China, and Greece, and finally his death at seventy-five.
Classical literature often has a distinctly foreign feel, but Vidal makes the ancient world seem real and near as few authors have done. He writes not as a modern man looking at this 2500-year-old culture through a modern lens, but as a Persian noble for whom this setting was modern. Cyrus is unburdened by the baggage of Western cultural norms and Vidal’s knowledge of future history, giving the reader the distinct and rare sensation of exploring the ancient world as a native, rather than a tourist.
During his travels, Cyrus meets figures such as the Buddha, Confucius, Lao Tsu, and Socrates (he doesn’t find the latter particularly impressive); he visits the hanging gardens of Babylon, where he watches the teenaged Xerxes seduce an adolescent temple priestess; he is married to a 12-year-old Indian princess; and he is kidnapped and sold into slavery in China. His journey spans nearly the entirety of the known world, and he reflects on each adventure with a mix of dry humor, the detachment that comes with the passing of decades, and the nostalgia of old age.
Creation is no light undertaking (my paperback copy is nearly 600 pages of seemingly microscopic font), yet as I neared its end, I found myself dreading the inevitable farewell to Cyrus and his devoted stenographer Democritus. Like the end of a long and full vacation in a foreign country, I wished I had time to see just one more sight, meet just one more character. Barring that, I plan to dive deeper into Vidal’s body of work; if his other novels are half as engaging as Creation, it will be time well spent.
Wednesday, August 23, 2017
Steven Johnson’s How We Got to Now: Six Innovations That Made the Modern World tells the stories behind technological developments that impacted the day-to-day lives of ordinary people in ways so unforeseeable that, in hindsight, they’re almost shocking. The flash bulb, for example—originally invented to enable photography inside lightless Egyptian tombs—finally gave journalist Jacob Riis the tool he needed to bring the horrifying conditions of New York City’s tenement slums into the public eye in a way that words and line drawings could never do, resulting in sweeping legislative reforms and, according to Johnson, “ignit[ing] a new tradition of muckraking that would ultimately improve the working conditions of factory floors too… chang[ing] the map of urban centers around the world.”
In a similar vein, Johnson shows, the invention of the printing press and the spread of literacy in the western world brought to light (somewhat literally) a surprising new problem: the prevalence of near-sightedness. This led to a sudden demand for corrective lenses, which in turn prompted a surge in glass- and lens-making technology, which then rapidly gave rise to both telescopes and microscopes. During the Renaissance and the Industrial Revolution, these developments had a profound effect on scientific discovery, leading to such momentous events as the discovery of germs and the widespread acceptance of a heliocentric solar system. Gutenberg would never have seen it coming.
Johnson calls these processes the “hummingbird effect,” noting that when examining the evolution of pollen, almost no one would predict that it would eventually result in the development of the hummingbird’s wing, which rotates differently from any other bird’s, allowing it to hover in mid-air while it feeds on flower nectar.
The stories that Johnson tells in his six densely-packed chapters are fascinating, well-sourced (the bibliography is stacked with both primary and secondary sources), and relevant; any modern reader can easily draw connections between the innovations Johnson explores and their own life. Fans of historical nonfiction or good science writing will find the book both interesting and informative.
I offer two minor critiques here—one of style, and one of substance.
Regarding style, though Johnson’s tone is both erudite and readable, he waxes rather pedantic from time to time. Rather than allowing the reader to speculate on their own about the implications of a particular point, Johnson prefers to spell it out for them, occasionally drifting into aphorism. “Ideas trickle out of science, into the flow of commerce, where they drift into the less predictable eddies of art and philosophy,” he writes. This sentence is vague enough that it could be cut and pasted onto practically any page of the book; it happens to appear in the chapter “Light.” He also tends to repeat his main idea more often than necessary—the theme of the book is innovations that led to far-reaching and unpredictable consequences, and he is determined that you won’t forget it. Also in the chapter “Light,” he writes, “Here again we see the strange leaps of the hummingbird’s wing at play in social history, new inventions leading to consequences their creators never dreamed of.” “Light,” incidentally, is the final chapter; it would be reasonable for Johnson to assume that the reader has gotten his point by now.
Regarding substance, Johnson acknowledges in his introduction that he is focusing exclusively on innovations from North America and Europe, because, he says, “certain critical experiences—the rise of the scientific method, industrialization—happened in Europe first, and have now spread across the world.” But this brand of western exceptionalism is disingenuous. Johnson chose to examine technologies that were first developed in Europe; to ignore technologies developed in other cultures and pretend that Europe’s innovations alone were somehow more far-reaching or significant is myopic and inaccurate. The flash bulb illuminated New York’s slums and led to worldwide social reform, but the gunpowder which made the first flash bulb possible was invented in China nearly a thousand years earlier, and one could just as easily trace those social reforms back to it, rather than stopping at the flash bulb.
Likewise, BBC writer Tim Harford points out that the revolution brought on by Gutenberg’s press would have been impossible without another Chinese invention: paper. And one could trace these innovations even further back to the development of written language itself, which was probably invented independently by more than one culture, including ancient Mesopotamia around 3500 B.C.E. Johnson chooses to focus solely on western inventions—that’s his prerogative as a writer. But to imply that western culture is somehow special or ahead of the curve when it comes to technological innovations that shaped modern life is misleading.
That said, Johnson endows his narratives with detail, robustness, and occasionally humor, making How We Got to Now an engaging, informative, and fun read.
Sunday, August 13, 2017
This is Daniel Brühl. If you are an American movie-goer, you might know him from his role as the villain Zemo in Captain America: Civil War (2016), or as the charming but dangerous Nazi officer in The Zookeeper’s Wife (2017), or as the charming but dangerous Nazi soldier in Inglourious Basterds (2009).
On the other hand, if you are a German movie-goer, you probably know Brühl as the sweet and devoted son in Good Bye Lenin! (2003), or as the ambitious journalist in the absurdist comedy/drama Me and Kaminski (2015).
Brühl was born in Spain and grew up in Germany, and though he speaks six languages fluently, he speaks them all with a German accent. Heroes with German accents are about as common in American film as unicorns are on the streets of Kansas City—that is to say, they don’t exist. Brühl might be handsome, funny, likeable, and talented, but he’s a native speaker of German, and though his English is flawless, his accent is unmistakable. As a result, he is and probably always will be the villain in American culture. (This, of course, despite the fact that Germany is currently one of the U.S.’s strongest allies.)
In a similar vein, Rob Lowe’s character in Thank You For Smoking (2005) comments that in modern American movies, the only people who smoke are “the usual RAVs... Russians, Arabs, and villains.” This comment is a bit redundant, however, since Russians and Arabs nearly always are villains in these films. Lowe’s character should have said “RAGs”—Russians, Arabs, and Germans—knowing the audience would automatically associate these groups with movie villains anyway.
The reasons that these groups are still consistently depicted as evil in American film are myriad and too complex to delve into here. Instead, I want to discuss the potential effects of these depictions.
Many, perhaps most, modern movie fans would agree that it is important for both children and adults to see women, people of color, and other marginalized groups in leadership and hero roles. Wonder Woman (2017) has been acclaimed partly for this reason. Seeing women exclusively in secondary or submissive roles; seeing Black men exclusively in predatory or other stereotypical roles; seeing women of color not represented on the screen at all—these patterns reinforce implicit biases already instilled in us by our culture.
The researchers at Project Implicit at Harvard University write that “Implicit preferences for majority groups (e.g., White people) are likely common because of strong negative associations with Black people in American society. There is a long history of racial discrimination in the United States, and Black people are often portrayed negatively in culture and mass media.”
If negative portrayals of Black people in the media contribute to implicit biases, then it stands to reason that consistently negative portrayals of people with specific non-English accents would have a similar effect. For example, if a large portion of Americans hear German, Russian, and Arabic accents only or primarily in the movie theater or in fictional TV series, and those accents belong almost exclusively to villains, an implicit association is likely to be created and/or reinforced.
Of course, German and Russian people do not typically experience serious discrimination or marginalization within American society (though refugees and other immigrants of Arabic and Middle Eastern descent most certainly do), so why is this a problem?
Here’s one reason: News outlets and other media typically use words such as “Russia” and “Germany” as metonyms; for example, “Russia invaded Crimea.” In this headline, “Russia” means the Russian military, or might even refer to Putin’s order to invade; it does not imply that the entire country of Russia moved wholesale into the Crimean peninsula.
Yet news outlets on all sides—left-leaning, right-leaning, and centrist—frequently conflate the will of a group of citizens with the actions of its government, overlooking the obvious fact that not all citizens (often, not even a majority) agree with those actions. In both mainstream and non-mainstream media, “Russia” should not be equated with “the Russian people” or even “the majority of the Russian people,” and yet it consistently is.
The current rhetoric regarding North Korea provides an even more poignant example. Americans are certainly justified in their fears of a nuclear attack, yet commentators across the board ignore the point that a war between the two countries would have a far more devastating effect on the citizens of North Korea, who by and large are pawns and victims of their leader’s sociopathic decision making. When news outlets discuss “North Korea,” they mean Kim Jong Un and his government; they largely ignore the millions of powerless citizens who are subject to his whims.
Hollywood encourages American audiences to view other nations as monoliths—all Germans are Nazis; all Middle Eastern people are terrorists; all French people are hypersexualized; all Brits are humorless and polite. These stereotypes may not have the same kinds of immediate, at-home effects as stereotypes about American people of color and women, but they are two sides of the same coin. They are tired, outdated tropes that discourage critical thinking and promote prejudice, discrimination, nationalism, and myopia. If it is healthy to see American women and people of color in hero and leadership roles, then we should demand the same of actors with commonly stereotyped nationalities and foreign accents, including the talented Mr. Brühl.
Friday, May 12, 2017
After my last review, in which I recommended that you opt for anything written by Gregory Maguire rather than Daniel Levine’s Hyde, I feel a bit hypocritical, because I found After Alice to be surprisingly mediocre. In his previous works, such as the Wicked series and Confessions of an Ugly Stepsister (his retelling of the Cinderella tale), Maguire uses well-known stories as scaffolds, building around them rich and fantastical worlds with new characters, details, and perspectives that both fit within and enhance the original narratives. After Alice, on the other hand, is less an expansion of Lewis Carroll’s Wonderland and Looking Glass and more a rehashing of them. In it, a second child, Ada Boyce, follows Alice down the rabbit hole and traces her path through Wonderland, meeting the same characters and, in some cases, having nearly the same conversations. Maguire is successful in imitating Carroll’s absurdist style of dialogue, but there’s very little that’s original in this portion of the story.
In parallel to Ada’s adventures, we follow the story of Alice’s older sister, Lydia, as she navigates the separate worlds in and around her household. While her father entertains a meeting of intellectuals, including Charles Darwin, Lydia is mostly banished to the kitchen so as not to disturb the guests. There she quarrels with the servants and attempts to avoid Ada’s tiresome governess (who is distraught at having lost track of Ada), while searching for excuses to speak to one of her father’s guests, a handsome young American. Lydia’s story is rather more interesting than Ada’s, primarily because it is more original, and fans of Maguire’s own dense yet poetic style will enjoy this part of the tale.
Generally speaking, After Alice is a short and easy read, more homage than reimagining. Serious Maguire or Carroll fans will likely be disappointed, but those looking for an undemanding fantasy that can be finished in a summer weekend will find something in it to enjoy.
Thursday, April 27, 2017
In her article “The Push to Ban Arabic Sermons in Europe’s Mosques,” published in The Atlantic on April 12, 2017, Sigal Samuel writes, “In several Western European countries, some politicians want to force imams to deliver sermons only in the official language: In Germany, imams should preach in German; in Italy, in Italian; in Britain, in English; in France, in French.
“To justify this requirement, two rationales are cited. Some say it will function as a counterterrorism strategy. Others say it will promote the social integration of Muslims. A few appeal to both lines of reasoning.”
We could discuss all the ways in which this is obviously Islamophobic and racist—as Samuel points out, no one is proposing that Catholic priests stop praying in Latin or that Jewish rabbis cease using Hebrew. Or we could discuss how, according to terrorism expert Scott Atran, “As a counterterrorism strategy, it’s likely to be worthless,” since “considerably less than 1 percent of ‘susceptible’ populations ... ever come close to joining violent extremist movements.”
But I want to talk instead about how language legislation of any kind, whether it’s the legal privileging of one language or dialect over another, or the systematic attempt to outlaw or eradicate a language completely, is morally untenable and antithetical to any portrait of a free society.
Linguists recognize that an individual’s native language and dialect are as integral a part of that person’s identity as their race or their gender. During the boarding school era of Native American colonization, when Native children were forcibly removed from their families and sent to white boarding schools, they were often physically punished for speaking the language of their tribes. Along with forcing Native children to cut their hair and wear western clothing, robbing whole generations of their language was seen as an integral step in killing Native culture, and in many places it was successful.
Further, it’s no accident that even as Black people make major strides in education, government, science, and other parts of mainstream American society, hallmarks of Black identity such as natural Black hairstyles and Black English are still often considered improper or unprofessional in workplaces and schools. The adoption of white standards of beauty and language remains a prerequisite for Black advancement.
Attacking a culture by attacking its language is not a new practice, and talks of banning Arabic in European mosques are simply a novel way of doing it, as morally reprehensible as beating Native children for conversing with their peers in Lakota or Diné. But in addition to this, prohibiting Muslim people from worshipping in their native language is a legislative attempt to impede their ability to practice their religion at all.
Studies show—and multilingual people will attest—that emotions feel different in the speaker’s native language. Writing in Frontiers in Psychology, Catherine L. Caldwell-Harris says, “Bilingual speakers frequently report that swearing, praying, lying, and saying I love you feel differently when using a native rather than a foreign language.” In a secondary language, professing love or praying forgiveness can feel akin to communicating through an interpreter; it can erect an emotional filter, a barrier between the speaker and the intended recipient.
Regardless of your attitude toward Islam in particular or theism in general, if you agree that the free exercise of religion is a right worth defending, then you must acknowledge that the imposition of language bans in mosques (or any religious gathering place) is a serious infringement on that right. Even if you do not accept the fundamental premise of prayer—that a personal god is listening and, perhaps, responding—if you support the religious individual’s right to practice prayer and cultivate their relationship (real or imagined) with that god, then forcing them to do so in a secondary language necessarily deprives them of that right.
If you are non-religious, imagine a comparable scenario: that you are forced to interact with your spouse or your children exclusively in a language other than your native one. If English is your first language, imagine never hearing “I love you,” but only “Te quiero” or “Je t’aime” or “Ich liebe dich.” If you ever learned to swear in a second language, you’ll recognize that for a native English speaker, “Fick dich” or “Baise toi” simply does not carry the same weight as a sincere, well-aimed “Fuck you.” Although you understand their meanings, the words are physically processed in a different part of the brain and do not elicit the same emotional response.
Language legislation in any form—whether aimed at religion or some other aspect of human life—is a violation of an individual’s fundamental right to their identity. It has been imposed on marginalized cultures for centuries and, in some cases, has achieved its aim of annihilating those cultures. If we recognize the value of diversity, and our goal is not total homogeneity in appearance, thought, and speech, then we must speak out against these legally sanctioned attempts to eradicate linguistic practices that differ from the dominant culture.
Sunday, April 16, 2017
Hyde is a retelling of Robert Louis Stevenson’s classic The Strange Case of Dr. Jekyll and Mr. Hyde from the perspective of Edward Hyde. Except “retelling” isn’t the right word; “rewriting” would be more accurate. Levine keeps only the barest skeleton of plot from Stevenson’s original, dismissing Jekyll’s own account of events as “lies” and reinventing major characters wholesale, including Hyde himself. In the process, he turns Stevenson’s spare but acute examination of the nature of morality—which left most details to the reader’s imagination—into a lurid saga of child abuse, bizarre forms of self-denial, and dissociative identity disorder.
Even if I didn’t hold Stevenson’s work close to my heart, it would be difficult to judge Levine’s new version without comparison, particularly when his Hyde attacks the original directly as “abstruse and misleading nonsense.” That said, Hyde is, on its own, an entertaining read; it keeps the pages turning. And Levine deserves credit for his skillful blend of 19th and 21st century styles, which feels simultaneously modern and classic.
On balance, however, Hyde seems to be in search of an audience that doesn’t exist. Readers unfamiliar with The Strange Case will miss the significance of many of Hyde’s twists and much of its commentary, while fans of Stevenson’s novella may suspect that Levine is trying, without success, to improve upon it. If you enjoy modern takes on classic stories, look instead for any of Gregory Maguire’s brilliant and imaginative work, and leave Hyde to his brooding on the shelf.