
Using Words to Win Political Battles

The term "muckraker" was taken from the fictional character in John Bunyan's Pilgrim's Progress, a man who was consigned to rake muck endlessly, never lifting his eyes from his drudgery.People in the United States had long been displeased with the unsafe conditions, political corruption and social injustice of the industrial age, but it was not until the late 19th century that the proliferation of cheap newspapers and magazines galvanized widespread opposition. Writers directed their criticisms against the trusts (oil, beef and tobacco), prison conditions, exploitation of natural resources, the tax system, the insurance industry, pension practices and food processing, among others.Theodore Roosevelt, however, became angry when he read a bitter indictment of the political corruption of the day. The president, clearly one of the most fervent reformers, believed that some of the writers were going too far, and cited the muckraker image in a speech on April 14, 1906, criticizing the excesses of investigative journalism.

In "Pilgrim's Progress" the Man with the Muck Rake is set forth as the example of him whose vision is fixed on carnal instead of spiritual things. Yet he also typifies the man who in this life consistently refuses to see aught that is lofty, and fixes his eyes with solemn intentness only on that which is vile and debasing.Now, it is very necessary that we should not flinch from seeing what is vile and debasing. There is filth on the floor, and it must be scraped up with the muck rake; and there are times and places where this service is the most needed of all the services that can be performed. But the man who never does anything else, who never thinks or speaks or writes, save of his feats with the muck rake, speedily becomes, not a help but one of the most potent forces for evil.

The writers, many of whom had been Roosevelt's ardent supporters, harshly criticized him for apparently deserting their cause. Originally used in a pejorative sense, the term muckraker soon developed a positive connotation in the public mind. Leading writers of this genre included:

  • Lincoln Steffens, an investigator of corruption in state and municipal governments, published The Shame of the Cities in 1904
  • Edwin Markham published an exposé of Child Labor in Children in Bondage (1914)
  • Jacob Riis depicted the misery of New York City slums in How the Other Half Lives (1890), an early advocacy of urban renewal
  • Ida Tarbell wrote a series of magazine articles detailing the business practices of Standard Oil, which appeared in McClure's and later were published in book form as The History of the Standard Oil Company (1904)
  • David Graham Phillips' Cosmopolitan article, "The Treason of the Senate," a bitter indictment of political corruption, provoked President Roosevelt's wrath, but created momentum that would culminate in the adoption of the 17th Amendment
  • Henry Demarest Lloyd's Wealth against Commonwealth (1894) chronicled the rise of John D. Rockefeller and Standard Oil
  • Ray Stannard Baker examined the sad state of race relations in America in Following the Color Line (1908)
  • Brand Whitlock expressed his opposition to capital punishment in the novel The Turn of the Balance (1907), while serving as the reform mayor of Toledo, Ohio
  • Samuel Hopkins Adams won fame from his muckraking exposés of the patent medicine industry
  • Upton Sinclair's The Jungle (1906) was largely responsible for federal legislation regulating food and drug practices; he was later a failed Socialist political candidate, a founder of the Southern California branch of the American Civil Liberties Union, a prolific fiction writer and Pulitzer Prize winner.

Public interest in the writings of the muckrakers began to wane around 1910; however, the momentum they created would continue to influence legislation for many more years.


Napoleon’s Return From Exile, Rallying an Army With His Words Alone

The ranks opened suddenly, and a figure stepped into view.

He was taller than many of his enemies described him. Taller and leaner, the angles of his face clearly defined. His eyes were colder than depicted in the paintings and the propaganda, and they sparkled with a strange ferocity as he surveyed the lines of armed men before him.

The 5th Infantry Regiment had leveled their weapons, the barrels of their guns held steady as the small army advanced towards them.

Napoleon Bonaparte had returned.

The old Emperor had moved quickly, but word of his approach moved quicker still. It was said that he and his men were yet to fire a single shot in their defense – his words alone were enough to win the people to his cause.

He promised free elections, political reform, a new era of peace and empowerment for the citizens of France. It was a stirring message, uplifting and powerful – wherever he went, his forces swelled.

By the time he reached Grenoble, however, the royalist authorities were well aware of his progress. Holding a line across the road, their rifles aimed squarely at Napoleon’s oncoming troops, the 5th Infantry Regiment were ready and waiting.

Less than ten months earlier, France’s greatest general had been sent into exile.

The Coalition had marched on Paris and, after a mounting series of defeats and setbacks, the capital was taken. Following the Battle of Montmartre, Napoleon surrendered to his enemies and abdicated his throne.

Napoleon leaves Elba.

He was promptly exiled to the island of Elba, there to live out the rest of his days in seclusion while the powers of Europe rebuilt their nations. Of course, it was not to be.

From his new home, Napoleon had watched as tensions escalated across the continent. The Congress of Vienna, where heads of state from throughout Europe gathered to redefine the borders, was always going to be a difficult situation. However, against a backdrop of increasing civil unrest in France, fuelled by the actions of the new royalist regime, it looked as if peace might be short-lived.


Returning to their country for the first time in years, the old French nobility mistreated everyone from the veterans of Napoleon’s wars to the lower classes in general. On top of this, the people of France had to watch their once great empire being rapidly portioned off and reduced by the Coalition.

All this was fuel for the fire Napoleon was now about to light.

Vive l’Empereur!

So it was that, on the 26th of February 1815, the exiled Emperor left the island where his enemies had hoped he would end his days. In fact, some members of the French nobility were even pushing to have him assassinated, or at least moved further away, as they astutely feared he might take advantage of the growing unrest.

Of course, even as such plans were formulated, they were already too late.

During a brief window of opportunity, with both British and Spanish ships temporarily absent, Napoleon and 1000 loyal men left Elba and sailed away undetected. By the time word reached Paris of the exiled Emperor’s escape, he was back on French soil.

With tensions between the royalist nobility and the oppressed lower classes nearing breaking point, there could have been no better time for the old Emperor’s return.

Napoleon’s farewell to his Imperial Guard, 20 April 1814.

The people of France welcomed back their leader with open arms; men flocked to his cause. His army grew rapidly and, until Grenoble, no one had stood in his way.

Now, however, royalist troops barred the way. The 5th Infantry Regiment had taken their positions as the enemy approached, and as the vanguard of Napoleon’s forces came to a halt, a tense silence fell.

As the sun set, lighting up the western horizon, Napoleon strode out into the open.

He was unarmed, yet he showed no fear as he surveyed the line of gleaming rifles before him. For a moment he stood quite still, his face inscrutable. Then, without taking his eyes away from the royalist regiment, he seized the front of his coat and ripped it open.

“If there is any man among you who would kill his emperor,” Napoleon declared, “here I stand!”

The 5th Infantry Regiment joined Napoleon on the spot.

Some accounts differ as to exactly what happened next, but most agree on the fundamentals of the event itself. After a moment of silence, voices within the ranks of the 5th Regiment began shouting, “Vive l’Empereur!”

As the cry spread, it was taken up by more and more of the royalist soldiers. Before long they had lowered their weapons and, en masse, the entire regiment joined Napoleon’s army.

The following day, the 7th Infantry Regiment joined the cause, followed by an ever-increasing number of soldiers. Marshal Ney, a high-ranking royalist commander, promised the King that he would bring Napoleon to Paris bound inside an iron cage. With 6000 men at his back, Ney then proceeded to march against the Imperial army – only to swear his allegiance to Napoleon upon their meeting.

By the time the army reached Paris, they were able to enter the capital city unopposed. The royalists had fled before the Emperor’s advance and, once again, Napoleon Bonaparte had reclaimed his throne.

The Battle of Waterloo, and the end of the 100 Days.

In the end, of course, his reign would only last for a brief period. Remembered in history as Napoleon’s 100 Days, his fleeting return to power would end in the aftermath of the Battle of Waterloo. That crushing defeat for Napoleon and his troops saw the end of the war and the final abdication of the Emperor himself.

However, regardless of that outcome, Napoleon Bonaparte’s escape from exile remains a fascinating moment in his remarkable life. The subsequent march through France, gathering support and rallying troops with nothing but his words and charisma, defines perfectly one of Europe’s greatest military leaders.


More from Opinion

"In a 98 percent black school, where athletics were placed at a higher priority than academics, the guidance counselors didn’t quite know what to do with the fat kid who hadn’t played a sport in his life. My parents didn’t have the money for college, and I just kind of slipped through the cracks. All of the above is how I found myself dialing the number to the Army recruiting office in hopes of doing something and perhaps getting to go to college after my service."

There are millions of Black kids out there right now struggling just like I once was. They’re failed by crumbling public schools, taught little to nothing by bad teachers and overlooked by overloaded counselors. Their educational and financial futures can be saved by one thing: school choice.

As President Trump has said: “We’re fighting for school choice, which really is the civil rights [issue] of all time in this country. Frankly, school choice is the civil rights statement of the year, of the decade, and probably beyond, because all children have to have access to quality education.”


Of course, embracing school choice and presenting that message to African American parents who don’t want their children in failing schools will do little to dissuade the endless rotation of Black liberal pundits who have made entire careers out of calling Trump “racist.” Despite them, there are signs that the message resonates among Black voters, Black women in particular.

In Florida’s hotly contested 2018 race for governor, Ron DeSantis edged out Andrew Gillum by fewer than 33,000 votes. DeSantis was an advocate for charter schools, while Gillum was very much opposed to them. According to the Wall Street Journal:

"Of the roughly 650,000 black women who voted in Florida, 18 percent chose Mr. DeSantis, according to CNN’s exit poll of 3,108 voters. This exceeded their support for GOP U.S. Senate candidate Rick Scott (9 percent), Mr. DeSantis’s performance among black men (8 percent) and the GOP’s national average among black women (7 percent).

“While 18 percent of the black female vote in Florida is equal to less than 2 percent of the total electorate, in an election decided by fewer than (32,463) votes, these 100,000 black women proved decisive.”

The school choice message is one that resonates. It is one that gets results. While there is much on the President’s plate when it comes to shaping an agenda for 2020 and beyond, focusing on school choice for Black kids as he crafts a message to African American voters will put him on the right side of history in what is, in his words, one of the great civil rights battles of our time.

Black kids are on the front lines. And if President Trump can focus on this message to their parents, he will win in 2020. And our kids will win in the future.


Vocabulary: Political Words

If you can’t tell a lame duck from a rubber chicken, here’s a guide to help you understand the language of politics.

Every clique has its own language — an insider's jargon that people outside the group don't always understand. Filmmakers talk about "panning" and "fading." Retailers talk about "floor sales" and "back orders." Politicians have a language of their own too, and it often appears in media reports about politics.

What exactly do politicians mean when they talk about a "lame duck" or a "rubber chicken"? What is "red tape" and who is the "Silent Majority"? This glossary is designed to demystify some of these terms and explain their origins. The definitions that follow, with background drawn from Safire's New Political Dictionary, should help you understand political talk a little better the next time you hear it on the evening news or read about it online.

Big Government: A negative term, used mainly by conservatives to describe government programs in areas where they believe government shouldn't be involved, especially those that spend money on social problems

Bipartisan: A cooperative effort by two political parties

Bleeding Heart: A term describing people whose hearts "bleed" with sympathy for the downtrodden; used to criticize liberals who favor government spending for social programs

Bully Pulpit: The Presidency, when used by the President to inspire or moralize. Whenever the President seeks to rouse the American people, he is said to be speaking from the bully pulpit. When the term first came into use, "bully" was slang for "first rate" or "admirable."

Campaign: (noun) An organized effort to win an election; (verb) to strive for elected office

Caucus: An informal meeting of local party members to discuss candidates and choose delegates to the party's convention

Checks and Balances: The system of dividing power among the three branches of government (executive, legislative, and judicial) to prevent any one from having too much power. Each branch has some authority to check the power of the others, thereby maintaining a balance among the three.

Coattails: The power of a popular candidate to gather support for other candidates in his or her party. Winning candidates are said to have coattails when they drag candidates for lower office along with them to victory.

Convention: A national meeting of a political party, where delegates formally elect a party's nominee

Dark Horse: A long-shot candidate

Delegate: A representative to a party's national convention chosen by local voters to vote for a particular candidate. Each state is assigned a certain number of delegates based on its population.

Demagogue: A leader whose impassioned rhetoric appeals to greed, fear, and hatred, and who often spreads lies. Former U.S. Sen. Joseph McCarthy (see McCarthyism) is often cited as a classic demagogue.

Fence Mending: What politicians do when they visit their electoral districts to explain an unpopular action. The term originated in 1879, when Ohio Senator John Sherman made a trip home that most people considered a political visit. Sherman insisted, however, that he was home "only to repair my fences."

Filibuster: An attempt by a Senator or group of Senators to obstruct the passage of a bill, favored by the majority, by talking continuously. Because there is no rule in the Senate over how long a member can speak, a Senator can prevent a bill from coming up for a vote by talking endlessly. Senator Strom Thurmond of South Carolina set the record in 1957 by speaking for more than 24 hours without stopping.

Fishing Expedition: An investigation with no defined purpose, often by one party seeking damaging information about another. Such inquiries are likened to fishing because they pull up whatever they happen to catch.

Front Burner: Where an issue is placed when it must be dealt with immediately

Gerrymander: The reorganization of voting districts by the party in power to ensure more votes for their candidates. The term originated in 1811, when Governor Elbridge Gerry of Massachusetts signed a bill that changed districts to favor the Democrats. The shape of one new district supposedly resembled a salamander, provoking a Boston newspaper editor to say, "Salamander? Call it a Gerrymander!"

GOP: Grand Old Party, nickname of the Republican Party

Grass Roots: Political activity that originates locally, or arises from ground level

Ideology: An integrated system of ideas about politics, values, and culture. Those who espouse an ideology are sometimes criticized as rigid and narrow-minded.

Incumbent: A current officeholder

Inside the Beltway: The area inside the Capital Beltway, a highway that encircles Washington, D.C. An issue described as "inside the Beltway" is believed to be of concern only to the people who work in and with the federal government and of little interest to the nation at large.

Lame Duck: An officeholder whose term has expired or cannot be continued, who thus has lessened power

Left-wing: Liberal. The labeling system originated from the seating pattern of the French National Assembly, which put liberals on the left, moderates in the middle, and conservatives on the right.

Lobby: A group seeking to influence an elected official, or the act of doing so. The term originated in the seventeenth century, when people waiting to speak with legislators at the English House of Commons waited in a large atrium outside the legislators' hall, called the lobby.

Machine Politics: Politics controlled by a tightly-run organization that stresses discipline and rewards its supporters. Machines are usually found in large cities and are frequently accused of corruption.

McCarthyism: The practice of smearing people with baseless accusations. Refers to the tactics of Senator Joseph McCarthy, who in the 1950s destroyed the careers of many prominent Americans by branding them Communists.

Muckraker: A journalist who seeks out the scandalous activities of public officials. Derived from the Man with the Muck Rake, a character in John Bunyan's The Pilgrim's Progress, who could never look up, only down.

Nomination: When a political party chooses its official candidate for a particular office

Nominee: The candidate chosen by a political party to run for a particular office

Photo-Op: Short for "photo opportunity," an event staged specifically for news cameras to help a politician appear in magazines and newspapers, on television, or online

Platform: The positions that a party adopts, and stands on, at the beginning of an election campaign

Political Party: An organization that seeks to achieve political power by electing its members to public office

Political Suicide: A vote or action that is likely to be so unpopular with voters as to cause a politician's probable loss in the next election

Poll: A survey used to gauge public opinion concerning issues or to forecast an election

Pork Barrel: Wasteful and unnecessary projects that politicians secure for their local districts, usually to gain favor with local voters. The term dates from the days when salted pork was occasionally handed out to slaves from large barrels. An observer once wrote that the mad rush of politicians to get their district's share of treasury funds looked like slaves rushing to the pork barrel.

Primary: A state election in which party members vote for a candidate from within their party. The vote determines how many of that state's delegates each candidate gets.

Pundit: A political analyst, commentator, or columnist who usually works for a newspaper or magazine, or in broadcasting. Derived from a Hindi word meaning "learned one."

Reactionary: A militant conservative; the opposite of "radical," which means ultraliberal

Red Tape: Government paperwork and procedures that are slow and difficult. Stems from an eighteenth-century British practice of binding official papers with a reddish twine.

Rubber Chicken Circuit: The endless series of public dinners and luncheons politicians must attend to raise funds and make speeches. The food often includes chicken, which is cooked hours earlier and then reheated, giving it a rubbery texture.

Silent Majority: The mass of Americans whose opinions are not loud and public, but who together have enormous power. Popularized by President Richard Nixon, who claimed that Vietnam War protesters comprised a minority, while a "silent majority" supported the war.

Slate: Candidates for various offices running as a team, or a group of delegates running on behalf of one candidate

Smoke-Filled Room: The sort of place where behind-the-scenes political wheeling and dealing, often devious, occurs. Refers to the penchant of many political operatives for smoking cigars.

Spin: A politician's attempt to shape the way the public looks at an issue or event, much the way a tennis player uses spin to direct the ball. Political advisers who spin are known as "spin doctors."

Stump: To campaign in person on a local level

Swing Vote: The undecided, usually independent, portion of the electorate that can "swing" the outcome of an election one way or the other

Trial Balloon: An idea a politician suggests in order to observe the reaction. If public reaction is favorable, the politician takes credit for it; if not, the idea dies quickly.

Whip: The party member who makes sure that all other members are present for crucial votes and that they vote in accordance with the party line. The term originated in British fox hunting, where the "whipper-in" was responsible for keeping the hounds from straying.

Whistle-Stopping: The practice of making speeches in many towns in a short time, often during a single day. When politicians traveled by train, small towns were called whistle-stops. Politicians would use the stop to deliver a quick campaign speech, often from the back of the train, before heading to the next stop.

Witch Hunt: A vindictive, often irrational, investigation that preys on public fears. Refers to witch hunts in 17th-century Salem, Massachusetts, where many innocent people accused of witchcraft were put to death.


The Greatest Conservative President in American History

University of Colorado at Colorado Springs Assistant Professor of Political Science Joseph Postell makes the case for Calvin Coolidge as the greatest conservative president ever, in the February issue of Townhall Magazine.

Presidents’ Day offers us an opportunity to learn from the examples of great presidents who offer guidance for today’s challenges. For conservatives, the most important president to re-examine is Calvin Coolidge. Coolidge was the most effective, most eloquent, and most conservative American president since the Civil War. And his ideas have a great deal of meaning for us today.

Rather than provide a general overview of his political philosophy, this article will focus on two of Coolidge’s ideas that are profoundly relevant to today’s political issues: Coolidge’s defense of representative government and strong political parties, and his emphasis on equality and natural rights, including his actions on civil rights. In both of these areas Coolidge’s words offer great wisdom for considering the way forward for conservatives today.

AN UNPARALLELED RECORD OF LIMITING GOVERNMENT
Upon inspection, Coolidge’s policy record would make any conservative giddy if implemented today. True, Coolidge was not a free-trader, and he favored a national rather than a state-and-local solution to the problems of child labor. Still, on the whole, Coolidge’s record on federal taxing and spending dwarfs the accomplishments of great conservative presidents like Ronald Reagan.

America emerged from World War I with massive debt and a severe depression. Unemployment was 11.7 percent in 1921, and the debt had increased from $1.5 billion in 1916 to $24 billion in 1919. Warren Harding (Coolidge’s predecessor) and Coolidge went to work and quickly turned things around. From 1921 to 1924, annual federal spending was reduced by a remarkable 43 percent, from $5.1 billion to $2.9 billion (that’s not a misprint: a 43 percent reduction in federal spending!).

Tax measures passed in 1921, 1924, and 1926 reduced the top marginal income tax rate from 73 percent to 24 percent (Again, not a misprint). Such reductions in income tax rates are unimaginable today.

Thanks to his fiscal conservatism, Coolidge was able to reduce tax rates and still reduce the national debt by almost a third, from $24 billion to $16.9 billion. Much of this work was accomplished by the Bureau of the Budget, which imposed extensive cost savings in government. The Bureau’s director would check employees’ desks for excessive use of stationery, paper clips, and other supplies, and one official report proposed that government employees be given “only one pencil at a time and not receive a new one until the unused stub was returned.” Quite a far cry from the management of federal agencies today.

Given these accomplishments, it is surprising that Coolidge has been neglected by historians. Compared to many of the more highly-ranked presidents in historians’ rankings, Coolidge’s administration was far more peaceful and consequential. Coolidge’s policy achievements on taxes and federal spending were predicated upon, not a substitute for, his core political principles. In other words, to learn from Coolidge, we have to revise our understanding of what politics is about. We cannot define conservatism by the policies we think are best in this particular moment. Rather, we must define the policies we think are best as those which flow from the principles of conservatism. Policies follow from principles, not vice versa. To understand what Coolidge has to teach us, we must understand the principles that served as the foundation for his actions.

STRONG POLITICAL PARTIES AND REPRESENTATIVE GOVERNMENT
The most significant feature of American politics today is the chasm that divides our government officials and the resulting failure of our government to achieve meaningful progress on the looming fiscal issues that threaten our generation and future generations. The conventional wisdom is to blame the politicians. This is the easy way out, since it involves blaming someone other than ourselves.

The fact is that we, not our politicians, are to blame. Our elected representatives, now more than ever, are held accountable by a system of hyper-democracy where a single misstep can doom a career. During the 1950s, political histories have shown, politicians were giants on Capitol Hill. They wielded immense power. This power was often abused, but politicians were also able to exercise discretion, to work with opponents to make compromises, and had fewer incentives to sling mud. Today, our representatives are little more than mirrors of the most rabid and best organized constituencies powerful enough to hold them hostage. Political contests are now about power and mobilization, not cooperation and compromise.

Coolidge and his Republican counterparts of the early 20th century (particularly President Taft) would be unsurprised at our present fate. They watched as America was transformed from a representative system with strong political parties to a system of direct democracy. They noted the ills that came with direct democracy, and argued that our political system would be fatally wounded by the demise of strong parties. They were right. Today, as we think about the best way to get things done, we need to consider the wisdom of a system that has strong political parties.

PROGRESSIVE POPULARITY CONTESTS
While a student at Amherst College, Coolidge fell under the tutelage of Anson D. Morse, a history professor who instilled in him an abiding affection for the virtues of political parties. Parties, Morse believed, played an essential role in any well-functioning democracy. Without them a representative democracy would descend into a mass of disorganized opinions and candidate-centered elections that would serve as little more than personality contests. Voters would decide on likability and sound bites marketed as widely as possible, rather than attachment to parties as collective organizations that represented clear principles.

Following Morse’s teaching, Coolidge consistently defended representative government and the role of parties in our system. During his career Progressives worked to destroy the power of political parties, which had come to dominate American government during the late 19th century. In place of representative government through party leadership, Progressives pushed for direct democracy and politicians who built personal organizations rather than worked through party channels. They pushed for reforms such as the referendum, recall, and direct primaries, all of which undermined the traditional understanding of a representative who would “refine and enlarge the public views,” as James Madison explained in “Federalist No. 10.”

The Progressives succeeded in implementing their agenda, and the rise of direct democracy created candidates and office-holders who were independent of their parties. While they remained affiliated with a party, they were not beholden to party leaders for support, and so they acted independently of the party’s leadership. Instead, they followed their own political ideas, and more importantly, they became more beholden to new sources of support: organized interest groups and mobilized constituencies back home. The result of this was to prevent deliberation and compromise: when a representative’s job is merely to follow the will of those constituencies which are most essential to re-election and most effectively mobilized, a representative cannot make concessions to political opposition in order to get (in Ronald Reagan’s phrase) “half a loaf” rather than nothing at all. In short, the effect of direct democracy as initiated by the Progressives was to produce individualized candidates rather than party cohesion, mobilization rather than compromise, and the use of force rather than argument to win political battles.

Coolidge understood this tendency, and he denounced it vehemently. As he wrote in 1920, “We have had too much legislating by clamor, by tumult, by pressure. Representative government ceases when outside influence of any kind is substituted for the judgment of the representative.” While he was quick to point out that “[t]his does not mean that the opinion of constituents is to be ignored,” he was adamant that representatives needed to have the freedom to work through their parties, to deliberate, and even to make compromises rather than constantly be held to the fire by angry constituents. As Coolidge argued, “The binding obligation of obedience against personal desire is denied in many quarters. If these doctrines prevail all organized government, all liberty, all security are at an end. Force alone will prevail.”

Coolidge’s indictment of direct democracy as opposed to representative government through party organizations is one of the most significant lessons we can apply to our situation today. Much of the paralysis, the snark, and the incivility in Washington today can be traced directly to the decline of political parties in American politics. Without party leadership, there can be no deliberation, no conference, no compromise, and no true representation. The Progressives redefined the rules of the game, and today both sides are forced to play by the new code. But as Coolidge warned, though we might win temporary victories through the use of direct democracy, the long-term health of our Constitution will be in jeopardy.

NATURAL RIGHTS AND EQUALITY
Arguably Coolidge’s most important contribution to American conservatism is found in his statements on equality and natural rights. Of all our presidents since Abraham Lincoln, Coolidge was the staunchest defender of the natural rights enshrined in our Declaration of Independence. In his two greatest speeches, titled “The Inspiration of the Declaration” and “The Price of Freedom,” Coolidge laid out the basic logic of the Declaration of Independence, a logic which served as the core of his own political philosophy.

As he explained in “The Price of Freedom,” both religious teachings and the teachings of political philosophy (such as those of John Locke) gave the Founders the idea of “the divine origin of mankind.”

“From this conception,” Coolidge continued, “there resulted the recognition that freedom was a birthright. It was the natural and inalienable condition of beings who were created ‘a little lower than the angels.’ With it went the principle of equality, not an equality of possessions, not an equality of degree, but an equality in the attributes of humanity, an equality of kind. Each is possessed of the divine power to know the truth.”

In this concise and eloquent statement, Coolidge set forth his own political philosophy and the philosophy of American conservatism. Why is there a basic equality among all human beings? Because we are all created “a little lower than the angels.” Not one of us is so great, so wise, so powerful, that we have a right to rule other people. But we also have something that sets us apart as human: “Each is possessed of the divine power to know the truth.”

So all human beings are fundamentally equal, because they are less than angels, but have certain attributes that animals do not have. This is the basic equality of human nature to which Coolidge refers. This means that we have a fundamental equality not of possessions, but an equality of natural rights. We have an equality of rights as the starting point for our pursuit of happiness, not an equality of outcomes.

THE PARTY OF CIVIL RIGHTS
Directly related to Coolidge’s defense of equality and natural rights was his admirable record on race. It surprises many today to hear that the Republican Party was the party of civil rights prior to the New Deal, and that the GOP commanded substantial majorities from the African-American community during the Coolidge years.

The reason is uncomplicated. The Republicans believed strongly in the principles of equal rights and individual liberty. Their counterparts either clung to neo-Confederate principles denying the soundness of the Declaration of Independence, or subscribed to a Progressive philosophy which denied the existence of natural rights altogether. It was Woodrow Wilson who approved explicitly of the introduction of segregation in the Treasury Department and the Post Office during his administration. When Democratic majorities in Congress passed laws banning interracial marriage in Washington, D.C., Wilson signed the legislation.

By contrast, Republicans such as Coolidge and Harding openly advocated the passage of anti-lynching legislation at the national level, arguing that, if anything, the 14th Amendment’s “Equal Protection Clause” allowed the national government to intervene when local officials deliberately failed to grant basic protections to an entire race of people. Harding even went to Birmingham, Alabama, in 1921 to deliver a speech condemning lynching and calling for racial harmony. Both Coolidge and Harding pressed for the creation of commissions to help bridge the divide between the races, although Congress failed to go along on either the anti-lynching legislation or the commission proposals.

Coolidge also adamantly opposed the segregation of the civil service that Wilson had inaugurated, personally intervening in several instances where departments (such as the Interior Department) attempted to segregate workers. Sadly, the rise of “classified” or career bureaucrats reduced Coolidge’s influence in many agencies, and he was unable to eradicate segregation entirely.

Coolidge’s actions on civil rights flowed from his conviction that America was a land based on equality of rights and individual liberty. As he said at the dedication of a statue of Swedish immigrant John Ericsson, “As we do not recognize any inferior races, so we do not recognize any superior races. We stand on an equality of rights and of opportunity, each deriving honor from his own worth and accomplishments.” Thus, Coolidge refused to define individuals by their membership in a particular race or class. This was a message of hope for the African-American community, that their government would protect their equal opportunity to pursue their own happiness rather than treat them as members of a segregated group.

Coolidge also practiced what he preached. During a conversation with his Secret Service agent Edmund Starling, Starling referred to the White House butler, Arthur Brooks, as a “fine, colored gentleman.” Coolidge immediately replied: “Brooks is not a colored gentleman. He is a gentleman.”

Coolidge’s emphasis on the natural rights philosophy of our Declaration of Independence is a crucial source of inspiration for conservatives seeking the alternative to our present approach to race relations. Just as important, it is central to any attempt by conservatives to re-engage minorities. The message of hope in the quest for true equality lies within our founding principles and documents, not in the inner recesses of centralized administrative agencies.

THE CONSTITUTIONAL CONSERVATIVE
In short, Coolidge was not a great president because he solved a crisis by taking leadership of the entire American political system. Coolidge and other constitutional conservatives like him did not believe it was a good thing to consolidate all political power in one person. A system of self-government, they argued, was incompatible with such an expansive conception of leadership. To crown a president the leader of the government would be equivalent to ending collective self-government of the people and by the people.

Part of Coolidge’s greatness, therefore, is that he was a restrained president, conforming to checks and balances and the rule of law, without being aloof from the people. Though he would likely have won a second full term in office in 1928, he declined to run. His reasoning was simple: “We draw our presidents from the people. It is a wholesome thing for them to return to the people. I came from them. I wish to be one of them again.” He did not envision a president as someone who stood above the people, godlike, towering over them. He came from the people, and returned quietly to engage in self-government alongside them.

Coolidge, our only president born on the Fourth of July, represents the spirit of America and therefore of American conservatism. No president in the 20th century more eloquently defended the ideas in the Declaration of Independence and the Constitution. No president advanced policies that fit with those ideas more effectively. Coolidge’s words, and his example, offer fitting lessons on how we should approach the challenges of today.

Joseph Postell is Assistant Professor of Political Science at the University of Colorado at Colorado Springs. He is the co-editor of Toward an American Conservatism: Constitutional Conservatism during the Progressive Era, which explores the political ideas of conservatives such as William Howard Taft and Calvin Coolidge.


The history of 'rigged' US elections: from Bush v Gore to Trump v Clinton

Donald Trump may have shocked the American political establishment with his refusal to say whether he will accept the results of next month’s presidential election, but he is far from the only candidate for high office in the United States who has cast serious doubt on the integrity of the system and the campaign tactics of his opponents.

Over the past 16 years – ever since the epic, 36-day presidential showdown in Florida in 2000 that was resolved not by a full recount of the votes, but by a supreme court split along partisan lines – accusations of vote-rigging and out-and-out theft have become increasingly common among partisans on both sides, and the electoral process has become ever more politicized, rancorous and fraught with mistrust.

“I will tell you at the time,” Trump said at last Wednesday’s debate when asked if he would accept the election result on 8 November. “I’ll keep you in suspense.” The Republican candidate has repeatedly claimed, without evidence, that the election is “rigged” against him. “Of course there is large-scale voter fraud happening on and before election day,” he tweeted last week. All available evidence shows that in-person voter fraud is exceedingly rare.

Opinion polls suggest that Trump’s charges of a “rigged election” have struck a nerve: 41% of voters believe him when he says the election could be stolen, according to one survey. More than two-thirds of all Republicans believe that if Hillary Clinton is declared the winner, it will be because of illegal voting or vote-rigging, according to another.

Those attitudes are almost certainly the result of Republicans beating the drum for more than a decade about elections being skewed by the illegal participation of dead people, illegal immigrants and even the occasional household pet. To this day, many in the GOP are convinced Barack Obama was elected only because community organizing groups such as Acorn – now defunct – registered extraordinary numbers of ineligible or nonexistent voters in the inner cities, and because busloads of Mexicans came over the border to vote using someone else’s name.

Eight years before Trump ever publicly uttered the words “rigged election”, Obama’s first Republican opponent, John McCain, said in a presidential debate that Acorn was “on the verge of perpetrating one of the greatest frauds in voter history in this country, maybe destroying the fabric of democracy”. No credible evidence ever emerged of a single fraudulently cast ballot arising from Acorn’s activities.

While Democratic candidates have rarely resorted to such inflammatory language, their rank-and-file supporters certainly have, suggesting the problem crosses party lines. The 2004 election, in which George W Bush was re-elected despite the mounting unpopularity of the Iraq war, saw an explosion of unfounded conspiracy theories that Republicans were in cahoots with the manufacturers of electronic voting machines and would never lose an election again. (The theory fell apart as soon as Democrats retook control of the House of Representatives two years later.)

This year, a hard core of Bernie Sanders supporters remains convinced that the senator from Vermont was cheated out of the Democratic presidential nomination by the underhanded maneuvering of the Clinton campaign and the Democratic National Committee – despite the fact that Clinton won 3m more primary votes.

American history is hardly lacking in examples of real voter manipulation and electoral skullduggery, especially in the segregation-era deep south. To this day, the US electoral system is widely viewed as an anomaly in the western world because of persistent problems with the reliability of its voting machinery, frequent bureaucratic incompetence, the lack of uniform standards from state to state or even county to county, the systematic exclusion of more than 6 million felons and ex-prisoners, and the tendency of election officials to adopt rules that benefit their party over democracy itself.

Until 2000, though, these issues were not widely aired in public. Then the battle over Florida ripped a veil off a dysfunctional system and offered an opportunity not just for meaningful electoral reform – a slow and frustrating process – but also for new forms of political warfare unseen since the darkest days of the segregation era in which the electoral process itself became fair game, particularly for the Republicans.

Attorney Nicole Pollard, Judge Charles Burton and attorney John Bolton review questionable ballots for the 2000 presidential election in Florida.

It began, perhaps, when the hand recount of punch card ballots requested by Al Gore and the Democrats – something both parties had routinely pressed for in previous contested elections – was recast by many leading Republicans in their talking points as a form of “slow-motion grand larceny”.

Then, in Missouri, Republican Senator Kit Bond took one look at African American voters in overcrowded precincts in St Louis casting ballots beyond the official poll closing time – something that has since become standard practice in many states – and denounced what he called a “major criminal enterprise”.

Soon, a narrative took hold that Democrats were habitual vote-stealers – something that was indubitably true in the days of Boss Tweed in 1860s New York but now took the form of a racist dog-whistle because the voters under most suspicion were black or Latino. Within a few years, politicians such as Sarah Palin were openly distinguishing “real Americans” – meaning white Republicans – from the rest, and states under Republican control were passing voter ID laws to crack down on a problem – voter impersonation fraud – that experts have repeatedly found to be rare to non-existent.

As the federal courts have now begun to find, the effect of these laws has in fact been to discriminate against groups of voters – the transient, the elderly, students and the poor – who are much more likely to support Democrats.

Given the level of mistrust, rank-and-file members of both parties have increasingly come to define democracy by the elections their side wins, and any other outcome as prima facie evidence of theft and corruption.

Royal Masset, a former political director of the Texas Republican party, once described how he would receive dozens of calls from disappointed candidates after election day complaining about some unsubstantiated outrage, usually involving illegal immigrants or lightning-rod political figures such as Jesse Jackson. “Human beings do not accept defeat easily,” he observed in 2007.

If Trump is different, it is only because he began complaining about vote-rigging months before election day and because he threatens to depart from the tradition that says you fight as hard and as dirty as you want, but only until the final results come in.

The sheer volume of his complaints, however, may be doing his prospects little good: a fascinating American National Election Study survey conducted in 2012 shows that people are less likely to vote when their faith in the integrity of the system has been shaken, and much more likely to vote if they think the ballots are counted fairly. In other words, Trump’s rhetoric may just be depressing his own turnout and making defeat all the more likely.


The American Revolution

Talk of independence in the American colonies had been brewing for some time. Yet it was not until near the end of the French and Indian War that the fire truly took hold.

Officially, the American Revolution was fought from 1775 through 1783. It began as a rebellion against the English crown. The formal break came on July 4, 1776, with the adoption of the Declaration of Independence. The war ended with the Treaty of Paris in 1783, after years of fighting throughout the colonies.


How to Win an Election

By Drew Westen, Ph.D. Published April 29, 2020; last reviewed on May 26, 2020

Do you care about the unemployed, Medicaid recipients, and DREAMers? Whether you do or not, you’re likely going to be hearing a lot about them in the coming months, as America heads into a heated presidential election.

And if you do care, you should wipe every one of those phrases out of your vocabulary.

Why? Because it is hard enough to run against a tough opponent who does not share your values. You don’t also want to run against the human brain. It is a formidable adversary. And every one of those words and phrases is working for the neural opposition.

For more than two decades, I have been studying political decision-making and have made some counterintuitive discoveries about the mind of the voter. My first research, inspired by the impeachment of President Bill Clinton, showed that neither knowledge of the Constitution nor knowledge of what he had done predicted people’s “reasoned beliefs” about whether he committed high crimes and misdemeanors. But three things did—their feelings for their political party (partisanship), their feelings about the man himself, and to a much lesser extent, their feelings about feminism.

Later neuroimaging studies confirmed no signs of intelligent life when people were purportedly reasoning about unflattering information regarding a preferred political candidate. Neural circuits typically active in reasoning tasks never turned on. What did light up were circuits that traffic in negative emotions and Houdini-like efforts to escape—until they came up with a satisfying rationalization that eliminated the problem.

Then something totally unexpected happened. Their brains actually gave them a little emotional pat on the back for their efforts. There was a flurry of dopamine activity in reward circuits. I came to understand that it is hard to change the partisan brain because we get rewarded for lying to ourselves.

I have found that it is sometimes difficult for people to swallow the truth about reason. But it’s important for anyone who cares about the outcome of elections. It’s not that rational thinking is irrelevant when we pull that voting lever, but that we think for a reason, and the reasons are always emotional in nature. The only things we reason about are the things that we care about. Our feelings are our guide to action. Reason provides a map of where we want to go, but first we have to want to go there.

In politics as in the rest of life, we think because we feel.

Politics, then, is less a marketplace of ideas than a marketplace of emotions. To be successful, a candidate needs to reach voters in ways that penetrate the heart at least as much as the head. That makes political messaging critical—and perhaps about to determine the course of American history.

The Political Brain

In 2007, as a research and clinical psychologist who had watched one Democratic presidential candidate after another go down in flames, I researched and wrote a book titled The Political Brain. It dissected how candidates might talk with voters if they started with an understanding of the way our minds actually work.

As was readily apparent from their campaigns over decades, Democrats and Republicans have had two very different implicit visions of the mind of the voter. Republicans talked about their values, such as faith, family, and limited government. Their think tanks are feel tanks and fuel tanks, generating and testing what the brilliant wordsmith on the right, Frank Luntz, called “words that work.”

Democrats, in contrast, talked about their policy prescriptions, bewitched by the dictum that “a campaign is a debate on the issues.” Their think tanks brought in fellows to work out policies based on the best available science. Perhaps blinded by their indifference to emotion, they left to chance the selling of those policies to the public.

Armed with a vision of the mind in which good ideas, even when described to people in terms they might not understand or find emotionally compelling, would somehow sell themselves, Democrats consistently lost elections. At the time I wrote the book, only one Democrat, Bill Clinton, had been elected and re-elected to the presidency since FDR six decades earlier.

Survey data across decades of elections show that success or failure at the ballot box tends to reflect, first and foremost, voters’ feelings toward the parties, the candidates, and the economy, in that order. Then come feelings toward candidates’ specific attributes, such as competence or empathy. Feelings on any given issue come in a distant fifth in predicting election outcomes. Voters’ beliefs about the issues barely register. And except for political junkies, most voters are neither interested in detailed policy prescriptions nor competent to assess them.

What voters want to know are the answers to two questions: Does this person, and does this party, share my values? And do they understand and care about people like me? Those turn out to be pretty rational questions. No one can predict a black swan or coronavirus pandemic, but you’re likely to feel comfortable with the decisions of leaders who share your values and care about people like you.

The 2016 and 2020 elections continue a familiar pattern. No doubt sexism (and some help from Vladimir Putin) contributed to Hillary Clinton’s defeat and to the poor showing of Senator Elizabeth Warren in the Democratic primaries. But Warren’s fate was as predictable as Hillary’s, thanks to her Hillary-like message, “I have a plan for that!”

Funny, I don’t remember Martin Luther King’s “I Have a Plan” speech. But there are reasons rooted in human psychology and evolution that we all remember his dream.

Three Principles of Effective Messaging

In the nearly 15 years since writing The Political Brain, I’ve had the opportunity to interact with about 100,000 voters as a political message consultant, juggling academic work with developing and testing messages for political and other organizations, whether in focus groups, telephone surveys, or online dial-testing, in which voters move their cursor along a bar, rating their response as they listen to messages or ads. (Full disclosure: Although I work for organizations on the Left, this article is about the role of psychology in the science and practice of politics and is not written from a partisan perspective. If anything, my findings shine a harsh light on the Democrats.)

Online dial-test technology gives voters the opportunity to provide a second-by-second rating of how compelling they find what they are hearing or watching, which gives me the opportunity to see how voters respond to every word or phrase and how different groups of voters respond to the same message.

Studying voters’ responses over the last several years has allowed me to distill three basic principles central to effective political messaging, all rooted in the way our minds and brains work.

Principle #1:

Know what networks you’re activating. Our brains are vast networks of neurons, which combine in millions of ways into circuits that not only maintain our lives but create all of our thoughts, feelings, and actions. Of most significance to persuasive messaging are networks of associations, sets of thoughts, feelings, images, memories, and values that become interconnected over time. These networks are primarily unconscious, always whirring in the background, directing our thoughts, feelings, and behavior.

Nothing could be more important in political communication than knowing which neural wires we are inadvertently tripping, which networks we want to activate or connect, and which ones we want to deactivate.

Consider this phrase: the unemployed. What’s wrong with that? Just about everything, because it trips some inadvertent wires.

For starters, it takes real people with pain-lined faces and turns them into nameless, faceless abstractions. If you want people to feel something for the unemployed, you need to do just the opposite. It also turns a group of people that likely includes someone you know and care about yourself—and among whose ranks you have likely been at some point—into a them. And then there’s the just-world hypothesis—our tendency to rationalize away bad things that happen to good people. Speaking of the unemployed cries out for just-world sentiments such as, “I wonder what they did to lose their jobs?” or “Perhaps they are just lazy.”

All of this happens unconsciously and in the flicker of an instant, so that by the time you’ve gotten half a sentence out, you’ve already taken two steps backward, and none forward. The alternative is simple and humanizing, rather than abstract and dehumanizing: People who’ve lost their jobs. You can literally feel the difference from the unemployed. And to inoculate against the just-world hypothesis, try People who’ve lost their job through no fault of their own.

Abstractions activate a thin strip of cortex—the dorsolateral prefrontal cortex—that plays a central role in neural circuits involved in reasoning and conscious thought. But those aren’t the circuits that move levers in the voting booth.

If you care about policies that help people who are unemployed, then you want to evoke empathy for those people. The brain has circuits that evolved specifically to create empathy toward others. These neural circuits, the ventromedial and orbital areas of the prefrontal cortex, are also just a stone’s throw away from circuits involved in emotional processing. And our language captures something important about emotions: They move us.

How about Medicaid recipients? What image comes to mind when you think of a recipient? If you're like most white people, it is outstretched hands looking for a handout—a phrase that evolved from the gesture. Recipients are also passive. The term does not connote people actively seeking work or trying to help themselves. That is why referring to people on Medicaid is equally destructive: It suggests they are on the dole—back to outstretched hands.

Making matters worse, although the majority of people who rely on Medicaid for their health care are white, the phrase tends to activate unconscious or implicit prejudice, as most white people who hear Medicaid recipient picture poor people of color, with all the conscious and unconscious bias that entails.

The alternative, once again, is to turn people into people: People who rely on Medicaid for their health care. They are not recipients. They are not on anything.

And there is a way to turn Medicaid recipient into something powerful. No anti-Medicaid message I have ever tested can block the 20-point boost in favorability delivered by a message that begins with these words: Chances are, if you have a parent or grandparent in a nursing home, Medicaid is paying for their care. People in nursing homes are often disabled—cognitively, physically, or both—but we love and care about them, either because they are our beloved relatives or someone else’s, and they deserve dignity in later life or care for their illness. And the hero of the story is Medicaid, which is actively and benevolently paying for their care.

There’s still room to up the ante and make the statement even stronger—10 favorability points stronger. Just precede it with one clause: Whether you’re white, black, or brown, chances are, if you have a parent or grandparent in a nursing home.… Why does that improve the message? First, it deactivates the implicit stereotype among white people of Medicaid recipients and the negative emotion activated by that stereotype. Second, the them becomes us. The phrasing is inclusive in a way that doesn’t feel like pandering or indulging in identity politics. It is about all of us, and it doesn’t matter what color we are.

Which brings us to DREAMers. Like the unemployed and Medicaid recipients, it's an appeal to empathy without the appeal. And it adds an element of unintelligibility for the average voter. Why are they called DREAMers? What are they dreaming about? What's the story with the capital DREAM? (It's an acronym for a piece of legislation with a name no one knows.) And perhaps most important, why do the children of undocumented immigrants get to have the American Dream when, for the first time in history, three-quarters of Americans do not believe that their own children will be better off than their parents—the core of the American Dream?

So who are DREAMers? Try this: children who have never pledged allegiance to any flag other than ours.

Principle #2:

Speak to voters' values and emotions. Our emotions and values are not arbitrary. We have them for a reason. Positive emotions draw us toward things, people, and ideas we believe are good for ourselves and the people we care about; negative emotions lead us to avoid or fight them. In politics, messages that tap into hope, satisfaction, pride, and enthusiasm, on the one hand, and fear, anxiety, anger, and disgust, on the other, move people: first to vote at all, then to vote for one party or candidate over another.

What all human beings find compelling is rooted in the structure and evolution of our brains, and it is expressed in our values as well as in our emotions. Family is a value that matters to people across the political spectrum. Natural selection, at its core, is about survival, reproduction, and alliances with people who help us survive, reproduce, and care for our kin.

For years, the political left ceded family values to the right, which tested that term 40 years ago, saw a winner, and spent hundreds of millions of dollars branding it as theirs. For more than a generation, that effort essentially tied the hands of Democrats in speaking about perhaps the most important value to our species and the source of the most powerful of emotions.

Principle #3:

Tell a coherent, memorable story. Our brains are wired to understand, to be drawn to and into, and to remember and pass along to others information presented in a particular form: narrative. As a species, we survived for about 200,000 years before the emergence of literacy, which required some mechanism for transmitting knowledge and values across generations. All known human societies have myths and legends in the form of stories.

Issues are not narratives. Nor are 10-point plans. Narratives have protagonists and antagonists. At least in the West, narratives tend to have a particular story structure, or grammar, recognizable even to preschool children. It includes, among other elements, an initial situation, a problem, a battle to be fought or hill to be climbed, and a resolution. Stories also tend to have a moral. The particular values embedded in that moral are central to the difference between the right and the left.

Perhaps the most important lesson I have learned from testing hundreds of thousands of narratives on the most important issues of our time, from contraception and abortion to climate change and economics, is that virtually every successful political narrative has a structure derivative of this grammar. Except for attack ads, which diverge only partly from this structure, effective messages begin with a statement of values that transcends political divides (to establish a connection between speaker and listener), then raise concerns in vivid ways that activate emotions, particularly moral emotions, such as fairness or indignation. Finally, after briefly describing a solution, but skipping details, they end with a sense of hope.

A Family Doctor for Every Family

In 2020, high-quality, affordable healthcare for all Americans is one of the top priorities for voters, as it has been for three decades. Illness doesn't come in red or blue. Lack of insurance harms people on both sides of the aisle.

When Barack Obama first ran for president 12 years ago, several organizations that had been working on the issue hired me to develop pro-reform messages and test them against potential attacks. Public support for expanding healthcare coverage proved so widespread that the issue was bulletproof—but only with effective messaging. If any of the messages began with, I believe in universal healthcare, the percentage who supported reform roughly equaled the percentage opposing it, given a tough narrative from the right about “socialized medicine” or “a government bureaucrat between you and your doctor.” But if the same message began with, I believe in a doctor for every family, support exceeded opposition 70 percent to 30 percent.

Universal healthcare is abstract, cold, and sterile—just like the name of the bill that succeeded in at least partly expanding coverage, the Affordable Care Act (ACA). The name does not capture many of the most important values that move voters on healthcare: the choice to retain the close personal connection they may already have with their doctor and the ability to choose a plan they believe is best for their family.

Universal healthcare also unconsciously activates neural networks that evoke both prejudice and legitimate concerns about quality, as white voters picture images of clinics with long lines, packed with people of color getting the kind of inferior care many people of color currently receive. Had the bill instead been named A Family Doctor for Every Family—parallel to George W. Bush’s signature legislation on education, No Child Left Behind—it would have activated entirely different neural networks, connoting a personal connection with their doctor, high-quality care they have chosen, and coverage for everyone.

Within the first hundred days of the Obama administration, the House of Representatives passed a healthcare bill that captured what the American public wanted. The Democrats said it was about making sure everyone has good, affordable care. It took the Senate over a year to pass a watered-down version that dropped key provisions—a Medicare-like option to compete with private insurance and the power of the government to negotiate drug prices. By the time the bill passed, the Republicans had captured the narrative, turning a once-popular reform bill into “socialized medicine.”

Since then, Republicans have pushed to curtail the law and refused to amend it in ways that could improve care for millions more people. The ACA covers 20 million people who didn't have coverage before, including all children, and guarantees coverage for people with pre-existing conditions. But it's still possible to get stuck with a $10,000 deductible, and medical expenses remain the leading cause of bankruptcy.

Republican opposition to the Affordable Care Act, deficient as the law is, cost the party control of the House in the 2018 midterm election. The Democrats enter the 2020 race with the challenge of selling voters a broken program and the promise of fixing it.

To start with, what do they even call it? Most voters do not know what the Affordable Care Act is. Many speak negatively about Obamacare, not realizing they use and like it. Research shows that the two worst things to call either extending the ACA or taking a different path to healthcare are the names Democrats are currently using: Obamacare and Medicare for All. It was Republicans who coined the name Obamacare in an attempt to kill it. Any program named after a president will have the lasting enmity of the voters of the other party.

Medicare for All, although assuring that everyone would be covered, scares tens of millions of voters who fear it would create lowest-common-denominator care. A defining feature of U.S. culture is individualism; for all does not resonate with the political center.

Given that roughly 40 percent of voters support or oppose any given policy from the start because of their partisan affiliation, the associative networks of swing voters are instructive.

Voters want a solution to healthcare issues that addresses their values, interests, and concerns. They want high-quality, affordable care, with freedom to choose among plans, freedom from worries about problems like pre-existing conditions, and coverage for everyone. At the same time, they do not want to lose their current plan or doctor, they worry about any program that will create long lines and poor-quality care, and they worry about the cost of the program. Many swing voters are also leery of “socialized medicine.”

The evidence points to fixing the program begun by Obama. How to talk about it? Incorporating the principles of messaging, a winning campaign might sound like this:

It’s time to finish the job we started on healthcare, not tear it down. Democrats built a high, rock-solid floor on what insurance companies had to cover in every plan on the market, including pre-existing conditions, procedures your doctor orders, and life-saving preventive medicine, like breast cancer screening. Since the healthcare industry couldn’t lower the floor, they sent premiums, deductibles, and copays through the roof. So now it’s time to build the ceiling. That means limiting the amount insurance companies can raise fees every year. It means letting the most powerful union in the world that is supposed to represent the interests of working people—the United States government—negotiate drug prices with the pharmaceutical industry, or we’ll buy our drugs where they’re less expensive. And it means giving insurance companies some healthy competition by letting people of any age choose Medicare if they prefer it. Let’s finish the job so no Americans will ever again have to choose between taking their child to the doctor and putting food on the table for their family.

That would leave only the question of what to call it. Democrats might do well with “A Family Doctor for Every Family.” That’s a clinic Americans would be happy to visit.

What People Think

Networks of associations are always active. Because so much is at stake in political messages, it's important for candidates to capitalize on the associative power of words and phrases. The diagram that accompanied this article in print, not reproduced here, offered a glimpse into the minds of people when they hear the words healthcare reform.


The Top Ten Battles of All Time

By Michael Lee Lanning
Lt. Col. (Ret.) U.S. Army

Battles win wars, topple thrones, and redraw borders. Every age of human history has experienced battles that have been instrumental in molding the future. Battles influence the spread of culture, civilization, and religious dogma. They introduce weapons, tactics, and leaders who dominate future conflicts. Some battles have even been influential not for their direct results, but for the impact of their propaganda on public opinion.

The following list is not a ranking of decisive engagements, but rather a ranking of battles according to their influence on history. Each narrative details location, participants, and leaders of the battle, and also provides commentary on who won, who lost, and why. Narratives also evaluate each battle's influence on the outcome of its war and the impact on the victors and losers.

Battle # 10 Vienna
Austria-Ottoman Wars, 1529

The Ottoman Turks' unsuccessful siege of Vienna in 1529 marked the beginning of the long decline of their empire. It also stopped the advance of Islam into central and western Europe, and ensured that the Christian rather than the Muslim religion and culture would dominate the region.

In 1520, Suleiman II had become the tenth sultan of the Ottoman Empire, which reached from the Persian frontier to North Africa and included much of the Balkans. Suleiman had inherited the largest, best-trained army in the world, containing superior elements of infantry, cavalry, engineering, and artillery. At the heart of his army were the elite Janissaries, slave soldiers taken captive as children from Christian families and raised as Muslims. From his capital of Constantinople, the Turkish sultan immediately began making plans to expand his empire even farther.

Suleiman had also inherited a strong navy, which he used with his army to besiege the island fortress of Rhodes, his first conquest. Granting safe passage to the defenders in exchange for their surrender, the Sultan took control of Rhodes and much of the Mediterranean in 1522. This victory demonstrated that Suleiman would honor peace agreements. In following battles where enemies did not surrender peacefully, however, he displayed his displeasure by razing cities, massacring the adult males, and selling the women and children into slavery.

By 1528, Suleiman had neutralized Hungary and placed his own puppet on its throne. All that now stood between the Turks and Western Europe was Austria and its Spanish and French allies. Taking advantage of discord among his enemies, Suleiman made a secret alliance with King Francis I of France. Pope Clement VII in Rome, while not allying directly with the Muslim Sultan, withdrew religious and political support from the Austrians.

As a result, by the spring of 1529, King Charles and his Austrians stood alone to repel the Ottoman invaders. On April 10, Suleiman and his army of more than 120,000, accompanied by as many as 200,000 support personnel and camp followers, departed Constantinople for the Austrian capital of Vienna. Along the way, the huge army captured towns and raided the countryside for supplies and slaves.

All the while, Vienna, under the able military leadership of Count Niklas von Salm-Reifferscheidt and Wilhelm von Rogendorf, prepared for the pending battle. Their task appeared impossible. The city's walls, only five to six feet thick, were designed to repel medieval attackers rather than the advanced cast-cannon artillery of the Turks. The entire Austrian garrison numbered only about 20,000 soldiers supported by 72 cannons. The only reinforcements who arrived in the city were a detachment of 700 musket-armed infantrymen from Spain.

Despite its disadvantages, Vienna had several natural factors supporting its defense. The Danube blocked any approach from the north, and the smaller Wiener Bach waterway ran along its eastern side, leaving only the south and west to be defended. The Vienna generals took full advantage of the weeks before the arrival of the Turks. They razed dwellings and other buildings outside the south and west walls to open fields of fire for their cannons and muskets. They dug trenches and placed other obstacles on avenues of approach. They brought in supplies for a long siege within the walls and evacuated many of the city's women and children, not only to reduce the need for food and supplies but also to spare them the consequences if the Turks were victorious.

One other factor greatly aided Vienna: the summer of 1529 was one of the wettest in history. The constant rains delayed the Ottoman advance and made conditions difficult for the marching army. By the time they finally reached Vienna in September, winter was approaching, and the defenders were as prepared as possible.

Upon his arrival, Suleiman asked for the city's surrender. When the Austrians refused, he began an artillery barrage against the walls with his 300 cannons and ordered his miners to dig under the walls and lay explosives to breach the defenses. The Austrians came out from behind their walls to attack the engineers and artillerymen and dig counter-trenches. Several times over the next three weeks, the invaders' artillery and mines achieved small breaches in the wall, but the Viennese soldiers quickly filled the gaps and repelled any entry into the city.

By October 12, the cold winds of winter were sweeping the city. Suleiman ordered another attack with his Janissaries in the lead. Two underground mines near the city's southern gate opened the way briefly for the mercenaries, but the staunch Viennese defenders filled the opening and killed more than 1200. Two days later, Suleiman ordered one last attack, but the Viennese held firm once again.

For the first time, Suleiman had failed. Scores of his never-before-defeated Janissaries lay dead outside the walls. The Turkish army had no choice but to burn their huge camp and withdraw back toward Constantinople, but before they departed they massacred the thousands of captives they had taken on the way to Vienna. Along their long route home, many more Turks died at the hands of raiding parties that struck their flanks.

The loss at Vienna did not greatly decrease the power of the Ottoman Empire. It did, however, stop the Muslim advance into Europe. Suleiman and his army experienced many successes after Vienna, but these victories were in the east against the Persians rather than in the west against the Europeans. The Ottoman Empire survived for centuries, but its high-water mark lay somewhere along the Vienna city wall.

Following the battle for Vienna, the countries of the west no longer viewed the Turks and the Janissaries as invincible. Now that the Austrians had kept the great menace from the east and assured the continuation of the region's culture and Christianity, the European countries could return to fighting among themselves along Catholic and Protestant lines.

If Vienna had fallen to Suleiman, his army would have continued its offensive the following spring into the German provinces. There is a strong possibility that Suleiman's Empire might have eventually reached all the way to the North Sea, the alliance with France notwithstanding. Instead, after Vienna, the Ottomans did not venture farther into Europe, and the Empire's power and influence began its slow but steady decline.

Battle # 9 Waterloo
Napoleonic Wars, 1815

The Allied victory over Napoleon Bonaparte at the Battle of Waterloo in 1815 brought an end to French domination of Europe and began a period of peace on the continent that lasted for nearly half a century. Waterloo forced Napoleon into exile, ended an era of French greatness that the country has never regained, etched its name on the list of history's best-known battles, and added a phrase to the vernacular: "Waterloo" has come to mean decisive and complete defeat.

When the French Revolution erupted in 1789, twenty-year-old Napoleon left his junior officer position in the King's artillery to support the rebellion. He remained in the military after the revolution and rapidly advanced in rank to become a brigadier general six years later. Napoleon was instrumental in suppressing a Royalist uprising in 1795, for which his reward was command of the French army in Italy.

Over the next four years, Napoleon achieved victory after victory as his and France's influence spread across Europe and into North Africa. In late 1799, he returned to Paris, where he joined an uprising against the ruling Directory. After a successful coup, Napoleon became First Consul and the country's de facto leader on November 8. Napoleon backed up these aggrandizing moves with military might and political savvy. He established the Napoleonic Code, which assured the individual rights of citizens, and instituted a rigid conscription system to build an even larger army. In 1800, Napoleon's army invaded Austria and negotiated a peace that expanded France's border to the Rhine River. The agreement brought a brief period of peace, but Napoleon's aggressive foreign policy and his army's offensive posturing led to war between France and Britain in 1803.

Napoleon declared himself Emperor of France in 1804 and for the next eight years achieved a succession of victories, each of which created an enemy. Downplaying the loss of much of his navy at the Battle of Trafalgar in 1805, Napoleon claimed that control of Europe lay on the land, not the sea. In 1812, he invaded Russia and defeated its army only to lose the campaign to the harsh winter. He lost more of his army in the extended campaign on the Spanish peninsula.

In the spring of 1813, Britain, Russia, Prussia, and Sweden allied against France while Napoleon rallied the survivors of his veteran army and added new recruits to meet the enemy coalition. Although he continued to lead his army brilliantly, the stronger coalition defeated him at Leipzig in October 1813, forcing Napoleon to withdraw to southern France. Finally, at the urging of his subordinates, Napoleon abdicated on April 1, 1814, and accepted banishment to the island of Elba near Corsica.

Napoleon did not remain in exile for long. Less than a year later, he escaped Elba and sailed to France, where for the next hundred days he struck terror across Europe and once again threatened to dominate the continent. King Louis XVIII, whom the coalition had returned to his throne, dispatched the French army to arrest the former emperor, but the soldiers instead rallied to Napoleon's side. Louis fled the country, and Napoleon again claimed the French crown on March 20. Veterans as well as new recruits swelled Napoleon's army to more than 250,000.

News of Napoleon's return reached the coalition leaders while they were meeting in Vienna. On March 17, Britain, Prussia, Austria, and Russia agreed to each provide 150,000 soldiers to assemble in Belgium for an invasion of France to begin on July 1. Other nations promised smaller support units.

Napoleon learned of the coalition plan and marched north to destroy their army before it could organize. He sent part of his army, commanded by Emmanuel de Grouchy, to attack the Prussians under Gebhard von Bluecher in order to prevent their joining the Anglo-Dutch force near Brussels. Napoleon led the rest of the army against the British and Dutch.

The French army won several minor battles as they advanced into Belgium. Although the coalition commander, the Duke of Wellington, had little time to prepare, he began assembling his army twelve miles south of Brussels, just outside the village of Waterloo. There he arrayed his defenses on high ground at Mount St. Jean to meet the northward-marching French.

By the morning of June 18, Napoleon had arrived at Mount St. Jean and deployed his army on high ground only 1300 yards from the enemy defenses. Napoleon's army of 70,000, including 15,000 cavalrymen and 246 artillery pieces, faced Wellington's allied force of about 65,000, including 12,000 cavalry and 156 guns, in a three-mile line. Both commanders sent word to their other armies to rejoin the main force.

A hard rain drenched the battlefield, causing Napoleon to delay his attack as late as possible on June 18 so that the boggy ground could dry and not impair his cavalry and artillery. After ordering a sustained artillery bombardment, Napoleon ordered a diversionary attack against the allied right flank in the west in hopes of getting Wellington to commit his reserve. The British defenders on the west flank, including the Scots and Coldstream Guards, remained on the reverse slope of the ridge during the artillery bombardment and then came forward when the French advanced.

The attack against the Allied right flank failed to force Wellington to commit his reserve, but Napoleon pressed on with his main assault against the enemy center. As the attack progressed, Napoleon spotted the rising dust of Bluecher's approaching army, which had eluded Grouchy and was closing on the battlefield. Napoleon, disdainful of British fighting ability and overly confident of his own leadership and the abilities of his men, continued the attack in the belief that he could defeat Wellington before the Prussians joined the fight or that Grouchy would arrive in time to support the assault.

For three hours, the French and the British fought, often with bayonets. The French finally secured a commanding position at the center at La Haye Sainte, but the Allied lines held. Late in the afternoon, Bluecher arrived and seized the village of Plancenoit in Napoleon's rear, which forced the French to fall back. After a brutal battle decided by bayonets, the French forced the Prussians to withdraw. Napoleon then turned back against Wellington.

Napoleon ordered his most experienced battalions forward from their reserve position for another assault against the Allied center. The attack almost breached the Allied defenses before Wellington committed his own reserves. When the survivors of Napoleon's best battalions began to withdraw from the fight, other units joined the retreat. The Prussians, who had regrouped, attacked the French flank, sending the remainder running in disorder to the south. Napoleon's last few reserve battalions escorted him to the rear, where he attempted, without success, to regroup his scattered army. Although defeated, the French refused to give up. When the Allies asked a French Old Guard officer to surrender, he replied, "The Guard dies, it never surrenders."

More than 26,000 French were killed or wounded and another 9,000 captured at Waterloo. Allied casualties totaled 22,000. At the end of the one-day fight, more than 45,000 men lay dead or wounded within the three-square-mile battlefield. Thousands more on both sides were killed or wounded in the campaign that led to Waterloo.

Napoleon agreed once again to abdicate on June 22, and two weeks later, the Allies returned Louis to power. Napoleon and his hundred days were over. This time, the British took no chances: they imprisoned Napoleon on remote St. Helena Island in the south Atlantic, where he died in 1821.

Even if Napoleon had somehow won the battle, he had too few friends and too many enemies to continue. He and his country were doomed before his return from Elba.

France never recovered its greatness after Waterloo. It returned territory and resumed its pre-Napoleon borders. With Napoleon banished, Britain, Russia, Prussia, and Austria maintained a balance of power that brought European peace for more than four decades--an unusually long period in a region where war was much more common than peace.

While a period of peace in itself is enough to distinguish Waterloo as an influential battle, it and Napoleon had a much more important effect on world events. While the Allies fought to restore the king of France to his throne, their leaders and individual soldiers saw and appreciated the accomplishments of a country that respected individual rights and liberties. After Waterloo, as the common people demanded a say in their way of life and government, constitutional monarchies took the place of absolute rule. Although there was postwar economic depression in some areas, the lot of the common French citizen improved in the postwar years.

Through the passage of time, the name Waterloo has become synonymous with total defeat. Napoleon and France did indeed meet their Waterloo in southern Belgium in 1815, but while the battle brought an end to one age, it introduced another. Although the French lost, the spirit of their revolution and of individual rights spread across Europe. No kingdom or country would again be the same.

Battle # 8 Huai-Hai
Chinese Civil War, 1948

The Battle of Huai-Hai was the final major fight between the armies of the Chinese Communist Party (CCP) and the Nationalist Party, or Kuomintang (KMT), in their long struggle for control of the world's most populous country. At the end of the battle, more than half a million KMT soldiers were dead, captured, or converted to the other side, placing China in the hands of the Communists, who continue to govern today.

Struggles for the control of China and its provinces date back to the beginnings of recorded history. While some dynasties endured for many years and others for only short periods of time, the Chinese had fought among themselves and against foreign invaders throughout history only to find themselves divided once again at the start of the twentieth century. Political ideologies centered in Peking and Canton. Divisions in the country widened when the Japanese invaded in 1914. During World War I, the Chinese faced threats from within, from the Japanese, and from the newly formed Soviet Union.

When World War I finally ended, the Chinese continued their internal struggles with local dictators fighting to control small regions. In 1923, the country's two major parties, the CCP under Mao Zedong and the KMT controlled by Chiang Kai-shek, joined in an alliance to govern the country. The two sides had little in common, and in less than five years, the shaky alliance had come apart when their leaders' views on support from the Soviet Union clashed. Mao encouraged Soviet support while Chiang opposed it.

By 1927, the two parties were directly competing for control of China and its people. Mao focused on the rural areas while Chiang looked to the urban and industrial areas for his power. From 1927 to 1937, the two sides engaged in a civil war in which Chiang gained the upper hand through a series of successful offensives. Chiang almost destroyed the CCP army in 1934, but Mao and 100,000 men escaped before he could do so. For the next year, the Communists retreated from the Nationalists across 6,000 miles of China to Yenan, a retreat that became known as the Long March. Only 20,000 survived.

In 1937, Chiang and Mao once again put their differences aside to unite against another invasion by Japan. Mao and his army fought in the rural northern provinces, primarily employing guerrilla warfare. Mao also used this opportunity to solidify his support from the local peasants while stockpiling weapons provided by the Allies and captured from the Japanese. His army actually gained strength during the fighting. Meanwhile Chiang faced stronger Japanese opposition in the south, which weakened his army.

Despite efforts by the United States to mediate an agreement, the Communists and Nationalists resumed their armed conflict soon after the conclusion of World War II. In contrast to their weaker position prior to the war, the Communists now were stronger than the Nationalists. On October 10, 1947, Mao called for the overthrow of the Nationalist administration.

Mao, a student of Washington, Napoleon, and Sun Tzu, began to push his army south into the Nationalist zone. Whereas the Nationalists often looted the cities they occupied and punished their residents, the Communists took little retribution, especially against towns that did not resist. Now the Communists steadily achieved victories over the Nationalists. During the summer of 1948, the Communists experienced a series of victories that pushed the major portion of the Nationalist army into a cross-shaped area extending from Nanking north to Tsinan and from Kaifeng east through Soochow to the sea.

Mao decided that it was time to achieve a total victory. On October 11, 1948, he issued orders for a methodical campaign to surround, separate, and destroy the half-million-man Nationalist army between the Huai River and the Lung Hai Railway--the locations that gave the resulting battle its name. Mao divided his battle plan into three phases, all of which his army accomplished more smoothly and efficiently than anticipated.

The Communists divided the Nationalist-held territory into three areas. Then beginning in November, they attacked each in turn. Early in the campaign, many Nationalists, seeing no hope for their own survival, much less a Nationalist victory, defected to the Communists. Chiang, who also was encountering internal divisions within his party, attempted to reinforce each battle area, but poor leadership by the Nationalist generals, combined with Communist guerrilla activities, made his efforts ineffective. Chiang even had air superiority during the entire battle but was unable to coordinate ground and air actions to secure any advantage.

Over a period of two months, the Communists destroyed each of the three Nationalist forces. Support for Chiang from inside and outside China dwindled with each successive Communist victory. The United States, which had been a primary supporter, providing arms and supplies to the Nationalists, suspended all aid on December 20, 1948. U.S. Secretary of State George C. Marshall stated, "The present regime has lost the confidence of the people, reflected in the refusal of soldiers to fight and the refusal of the people to cooperate in economic reforms."

Within weeks of the U.S. announcement, the Communists overran the last Nationalist position and ended the Battle of Huai-Hai. Of the six highest-ranking Nationalist generals in the battle, two were killed in the fighting and two captured. The remaining two were among the few who escaped. By January 10, 1949, the half-million members of the Nationalist army had disappeared.

Within weeks, Tientsin and Peking fell to the Communists. On January 20, Chiang resigned his leadership of the Nationalists. The remaining Nationalist army and government continued to retreat until they finally withdrew to the island of Formosa. On Formosa, renamed Taiwan, Chiang regained power and developed the island into an Asian economic power. Mainland China, however, remained under the control of Mao and his Communists, who are still in power today.

The Communist takeover of China achieved by the Battle of Huai-Hai greatly influenced not only that country but the entire world. Over the next two decades, Mao focused almost exclusively on wielding complete control over his country. He ruthlessly put down any opposition and either executed or starved to death more than 20 million of his countrymen in order to bring to China the "joys" and "advantages" of Communism. Fortunately for the rest of the world, Mao remained focused on his own country. He disagreed with the Soviets on political and philosophical aspects of Communism, and the two nations viewed each other as possible opponents rather than allies.

China's internal struggles and its conflicts with its neighbors have restricted its active world influence. Even though it remains today the largest and strongest Communist nation and the only potential major Communist threat to the West, China remains a passive player, more interested in internal and neighboring disputes than in international matters.

Had the Nationalists been victorious at Huai-Hai, China would have played a different role in subsequent world events. There would have been no Communist China to support North Korea's invasion of the South, or North Vietnam's efforts to take over South Vietnam. Had Chiang, with his outward views and Western ties, been the victor, China might have taken a much more assertive role in world events. Instead, the Battle of Huai-Hai would keep China locked in its internal world rather than opening it to the external.

Battle # 7 Atomic Bombing of Japan
World War II, 1945

The United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki in August 1945 to hasten the end of World War II in the Pacific. Although it would be the first, and to date the only, actual use of such weapons of "mass destruction," the mushroom clouds have hung over every military and political policy since.

Less than five months after the sneak attack by the Japanese against Pearl Harbor, the Americans launched a small carrier-based bomber raid against Tokyo. While the attack was good for American morale, it accomplished little other than to demonstrate to the Japanese that their shores were not invulnerable. Later in the war, U.S. bombers were able to attack the Japanese home islands from bases in China, but it was not until late 1944 that the United States could mount a sustained bombing campaign.

Because of the distance to Japan, American bombers could not reach targets and safely return to friendly bases in the Pacific until the island-hopping campaign had captured the Northern Mariana Islands. From bases in the Marianas, long-range B-29 Superfortresses began high-altitude bombing runs on November 24, 1944. On March 9, 1945, an armada of 234 B-29s descended to less than 7,000 feet and dropped 1,667 tons of incendiaries on Tokyo. By the time the firestorm finally abated, a sixteen-square-mile corridor that had contained a quarter million homes was in ashes, and more than 80,000 Japanese, mostly civilians, lay dead. Only the Allied firebombing of Dresden, Germany, the previous month, which killed 135,000, exceeded the destruction of the Tokyo raid.

Both Tokyo and Dresden were primarily civilian rather than military targets. Prior to World War II, international law regarded the bombing of civilians as illegal and barbaric. After several years of warfare, however, neither the Allies nor the Axis distinguished between military and civilian air targets. Interestingly, while a pilot could drop tons of explosives and firebombs on civilian cities, an infantryman often faced a court-martial for even minor mistreatment of noncombatants.

Despite the air raids and their shrinking territory outside their home islands, the Japanese fought on. Their warrior code did not allow for surrender, and soldiers and civilians alike often chose suicide rather than giving up. By July 1945, the Americans were launching more than 1200 bombing sorties a week against Japan. The bombing had killed more than a quarter million and left more than nine million homeless. Still, the Japanese gave no indication of surrender as the Americans prepared to invade the home islands.

While the air attacks and plans for a land invasion continued in the Pacific, a top-secret project back in the United States was coming to fruition. On July 16, 1945, the Manhattan Engineer District successfully carried out history's first atomic explosion. When President Harry Truman learned of the successful experiment, he remarked in his diary, "It seems to be the most terrible thing ever discovered, but it can be made the most useful."

Truman realized that the "most terrible thing" could shorten the war and prevent as many as a million Allied casualties, as well as untold Japanese deaths, by preventing a ground invasion of Japan. On July 27, the United States issued an ultimatum: surrender or the U.S. would drop a "super weapon." Japan refused.

In the early morning hours of August 6, 1945, a B-29 named the Enola Gay, piloted by Lieutenant Colonel Paul Tibbets, lifted off from Tinian Island in the Marianas. Aboard was a single atomic bomb weighing 8,000 pounds and containing the destructive power of 12.5 kilotons of TNT. Tibbets headed his plane toward Hiroshima, selected as the primary target because of its military bases and industrial areas. It also had not yet been bombed to any extent, so it would provide an excellent evaluation of the bomb's destructive power.

At 8:15 A.M., the Enola Gay dropped the device called "Little Boy." A short time later, Tibbets noted, "A bright light filled the plane. We turned back to look at Hiroshima. The city was hidden by that awful cloud ... boiling up, mushrooming." The immediate impact of Little Boy killed at least 70,000 Hiroshima residents. Some estimates claim three times that number, but exact figures are impossible to calculate because the blast destroyed all of the city's records.

Truman again demanded that Japan surrender. After three days and no response, a B-29 took off from Tinian with an even larger atomic bomb aboard. When the crew found their primary target of Kokura obscured by clouds, they turned toward their secondary target, Nagasaki. At 11:02 A.M. on August 9, 1945, they dropped the atomic device known as "Fat Man," which destroyed most of the city and killed more than 60,000 of its inhabitants.

Conventional bombing raids were also conducted against other Japanese cities on August 9, and five days later, 800 B-29s raided across the country. On August 15 (Tokyo time), the Japanese finally accepted unconditional surrender. World War II was over.

Much debate has occurred since the atomic bombings. While some evidence indicates that the Japanese were considering surrender, far more information indicates otherwise. Apparently the Japanese were planning to train civilians to use rifles and spears to join the military in resisting a land invasion. Protesters of the atomic bombings ignore the conventional incendiaries dropped on Tokyo and Dresden, which claimed more casualties. Some historians even note that the losses at Hiroshima and Nagasaki were far fewer than the anticipated Japanese casualties from an invasion and continued conventional bombing.

Whatever the debate, there can be no doubt that the dropping of the atomic bombs on Japan shortened the war. The strikes against Hiroshima and Nagasaki are the only air battles that directly affected the outcome of a conflict. Air warfare, both before and since, has merely supplemented ground fighting. As confirmed by the recent Allied bombing of Iraq in Desert Storm and in Bosnia, air attacks can harass and make life miserable for civilian populations, but battles and wars continue to be decided by ground forces.

In addition to hastening the end of the war with Japan, the development and use of the atomic bomb provided the United States with unmatched military superiority--at least for a brief time, until the Soviet Union exploded its own atomic device. The two superpowers then began competitive advancements in nuclear weaponry that brought the world to the edge of destruction. Only tentative treaties and the threat of mutual total destruction kept nuclear arms harnessed, producing the Cold War period in which the U.S. and the USSR worked out their differences through conventional means.

Battle # 6 Cajamarca
Spanish Conquest of Peru, 1532

Francisco Pizarro conquered the largest amount of territory ever taken in a single battle when he defeated the Incan Empire at Cajamarca in 1532. Pizarro's victory opened the way for Spain to claim most of South America and its tremendous riches, as well as imprint the continent with its language, culture, and religion.

Christopher Columbus's voyages to the New World offered a preview of the vast wealth and resources to be found in the Americas, and Hernan Cortes's victory over the Aztecs had proven that great riches were there for the taking. It is not surprising that other Spanish explorers flocked to the area--some to advance the cause of their country, most to gain their own personal fortunes.

Francisco Pizarro was one of the latter. The illegitimate son of a professional soldier, Pizarro joined the Spanish army as a teenager and then sailed for Hispaniola, from where he participated in Vasco de Balboa's expedition that crossed Panama and "discovered" the Pacific Ocean in 1513. Along the way, he heard stories of the great wealth belonging to native tribes to the south.

After learning of Cortes's success in Mexico, Pizarro received permission to lead expeditions down the Pacific Coast of what is now Colombia, first in 1524-25 and then again in 1526-28. The second expedition experienced such hardships that his men wanted to return home. According to legend, Pizarro drew a line in the sand with his sword and invited anyone who desired "wealth and glory" to step across and continue with him in his quest.

Thirteen men crossed the line and endured a difficult journey into what is now Peru, where they made contact with the Incas. After peaceful negotiations with the Incan leaders, the Spaniards returned to Panama and sailed to Spain with a small amount of gold and even a few llamas. Emperor Charles V was so impressed that he promoted Pizarro to captain general, appointed him the governor of all lands six hundred miles south of Panama, and financed an expedition to return to the land of the Incas.

Pizarro set sail for South America in January 1531 with 265 soldiers and 65 horses. Most of the soldiers carried spears or swords. At least three had primitive muskets called arquebuses, and twenty more carried crossbows. Among the members of the expedition were four of Pizarro's brothers and all of the original thirteen adventurers who had crossed their commander's sword line to pursue "wealth and glory."

Between wealth and glory stood an army of 30,000 Incas representing a century-old empire that extended 2,700 miles from modern Ecuador to Santiago, Chile. The Incas had assembled their empire by expanding outward from their home territory in the Cuzco Valley. They had forced defeated tribes to assimilate Incan traditions, speak their language, and provide soldiers for their army. By the time the Spaniards arrived, the Incas had built more than 10,000 miles of roads, complete with suspension bridges, to develop trade throughout the empire. They also had become master stonemasons, with finely crafted temples and homes.

About the time Pizarro landed on the Pacific Coast, the Incan leader, considered a deity, died, leaving his sons to fight over leadership. One of these sons, Atahualpa, killed most of his siblings and assumed the throne shortly before he learned that the white men had returned to his Incan lands.

Pizarro and his "army" reached the southern edge of the Andes in present day Peru in June 1532. Undaunted by the report that the Incan army numbered 30,000, Pizarro pushed inland and crossed the mountains, no small feat itself. Upon arrival at the village of Cajamarca on a plateau on the eastern slope of the Andes, the Spanish officer invited the Incan king to a meeting. Atahualpa, believing himself a deity and unimpressed with the Spanish force, arrived with a defensive force of only three or four thousand.

Despite the odds, Pizarro decided to act rather than talk. With his arquebuses and cavalry in the lead, he attacked on November 16, 1532. Surprised by the assault and awed by the firearms and horses, the Incan army disintegrated, leaving Atahualpa a prisoner. The only Spanish casualty was Pizarro, who sustained a slight wound while personally capturing the Incan leader.

Pizarro demanded a ransom of gold from the Incas for their king, the amount of which legend says would fill a room to as high as a man could reach--more than 2,500 cubic feet. Another two rooms were to be filled with silver. Pizarro and his men had their wealth assured but not their safety, as they remained an extremely small group of men surrounded by a huge army. To enhance his odds, the Spanish leader pitted Inca against Inca until most of the viable leaders had killed each other. Pizarro then marched into the former Incan capital at Cuzco and placed his handpicked king on the throne. Atahualpa, no longer needed, was sentenced to be burned at the stake as a heathen, but was strangled instead after he professed to accept Spanish Christianity.

Pizarro returned to the coast and established the port city of Lima, where additional Spanish soldiers and civilian leaders arrived to govern and exploit the region's riches. Some minor Incan uprisings occurred in 1536, but native warriors were no match for the Spaniards. Pizarro lived in splendor until he was assassinated in 1541 by a follower who believed he was not receiving his fair share of the booty.

In a single battle, with only himself wounded, Pizarro conquered more than half of South America and its population of more than six million people. The jungle reclaimed the Inca palaces and roads as their wealth departed in Spanish ships. The Incan culture and religion ceased to exist. For the next three centuries, Spain ruled most of the north and Pacific coast of South America. Its language, culture, and religion still dominate there today.

Battle # 5 Antietam
American Civil War, 1862

The Battle of Antietam, the bloodiest day in American history, stopped the first Confederate invasion of the North. It also ensured that European countries would not recognize the Confederacy or provide them with much-needed war supplies. While the later battles at Gettysburg and Vicksburg would seal the fate of the rebel states, the defeat of the rebellion began along Antietam Creek near Sharpsburg, Maryland, on September 17, 1862.

From the day the American colonies gained their independence at the Battle of Yorktown in 1781, a conflict between the American North and South seemed inevitable. Divided by geographical and political differences, and split over slavery and states' rights issues, the North and South had experienced mounting tensions during the first half of the nineteenth century. Finally, the election of Republican Abraham Lincoln in 1860 provided the spark that formally divided the country. Although Lincoln had made no campaign promises to outlaw slavery, many in the South viewed him as an abolitionist who would end the institution on which much of the region's agriculture and industry depended. In December 1860, South Carolina, acting on what it considered a "state's right" under the U.S. Constitution, seceded from the Union. Three months later, six other southern states joined South Carolina to form the Confederate States of America.

Few believed that the action would lead to war. Southerners claimed it was their right to form their own country while Northerners thought that a blockade of the Confederacy, supported by diplomacy, would peacefully return the rebel states to the fold. However, chances for a peaceful settlement ended with the Confederate bombardment of Fort Sumter, South Carolina, on April 12-14, 1861. Four more states joined the Confederacy a few days later.

Both sides quickly mobilized, and aggressive Confederate commanders achieved success against the more reluctant and cautious Union leaders. While warfare on land favored the Confederates, they lacked a navy, which allowed the U.S. Navy to blockade the Southern coast. The blockade prevented the South from exporting its primary cash crop of cotton, as well as importing much-needed arms, ammunition, and other military supplies that the meager Southern industrial complex could not provide.

In May 1862, General Robert E. Lee took command of what he renamed the Army of Northern Virginia. Lee soon became one of the most beloved commanders in history. Yet, while his men adored him, his critics noted his inability to control his subordinate leaders.

Despite his shortcomings, Lee outmaneuvered and out-generaled his opponents in his initial battles. He turned back the Union march on Richmond and then moved north to win the Second Battle of Bull Run near Manassas, Virginia, on August 30, 1862. Both Lee and Confederate President Jefferson Davis realized, however, that the South could not win a prolonged war against the more populous and industrialized North. To endure and succeed, the South would need war supplies and naval support from Britain, France, and possibly even Russia. While these countries were sympathetic to the Southern cause, they were not going to risk bad relations or even war with the United States unless they were convinced the rebellion would succeed.

Following their victory at the Second Battle of Bull Run, Lee and Davis devised a plan that would meet their immediate needs for supplies as well as their long-range goal of European recognition. They would take the war into the North. On September 6, the Army of Northern Virginia crossed into Maryland with the intention of raiding and gathering supplies in southern Pennsylvania.

Union General George B. McClellan paralleled Lee, keeping his army between the invading rebels and Washington, D.C., where Lincoln feared they would attack. On September 9, 1862, Lee issued Order Number 191, calling for half of his force to move to Harrisburg, Pennsylvania, to control the region's rail center, while the other half marched to Harpers Ferry to capture the town's gun factory and to secure lines back to the South. Four days later, a Union soldier discovered a copy of the order in a field, wrapped around three cigars. He kept the cigars, but Lee's order was shortly in McClellan's hands.

Even though McClellan now possessed the complete Confederate battle plan and his forces outnumbered the rebels 76,000 to 40,000, he remained cautious because his own intelligence officers incorrectly warned that the Confederates' force was far larger. On September 14, McClellan began to close on Lee's army only to be slowed by small forces in passes in South Mountain. The brief delay allowed Lee to form his army along a low ridge near Antietam Creek just east of Sharpsburg, Maryland.

McClellan finally attacked on the morning of September 17, but his characteristic hesitation and poor communications caused the battle to be composed of three separate fights rather than one united effort. The battle began with a murderous artillery barrage, followed by an infantry assault on the Confederate left. Attacks and counterattacks marked the next two hours, with neither side able to maintain an advantage. Meanwhile, at midmorning, Union troops assaulted the rebel center that stood protected in a sunken road. By the time the rebels withdrew four hours later, the depleted, exhausted Union force was unable to pursue past what was now known as the "Bloody Lane."

In the afternoon, still another Union force attacked the rebel right flank to secure a crossing of Antietam Creek. Even though the waterway was fordable along much of its banks, most of the fight was concentrated over a narrow bridge. After much bloodshed, the Union troops pushed the Confederates back and were about to cut off Lee's route back south when rebel reinforcements arrived from Harpers Ferry. Even so, the third battlefront, like the other two, lapsed into a stalemate.

On the morning of September 18, Lee and his army withdrew back to Virginia. Since he was not forced to retreat, Lee claimed victory. McClellan, overly cautious as usual, chose not to pursue, although it is possible that if he had done so he could have defeated Lee and brought the war to a quick conclusion.

Between the two armies lay more than 23,000 dead or wounded Americans wearing either blue or gray. A single day of combat produced more casualties than any other in American history--more dead and wounded than the U.S. incurred in its Revolution, the War of 1812, the Mexican War, and the Spanish-American War combined. Casualties at Antietam even outnumbered those of the Longest Day, the first day of the Normandy Invasion, by nine to one.

The influence of Antietam reached far beyond the death and wounds. For the first time, Lee and the rebel army failed to accomplish their objective, and this provided a much-needed morale boost for the Union. More importantly, when France and England learned of the battle's outcome, they decided that recognition of the Confederate States would not be advantageous.

The battle also changed the objectives of the United States. Prior to Antietam, Lincoln and the North had fought primarily to preserve the Union. Lincoln had waited for the opportunity to bring slavery to the forefront. Five days after Antietam, he signed the Emancipation Proclamation. Although the Proclamation did not free slaves in Union states and, of course, had no power to do so in areas controlled by the rebels, it did advance the freeing of slaves as an objective of the war.

Prior to the battle and the Proclamation, European nations, although opposed to slavery, still had sympathies for the Southern cause. Now, with slavery an open issue and the Confederacy's ability to win in question, the South would have to stand totally alone.

While it took two-and-a-half more years of fighting and the battles of Gettysburg and Vicksburg to finally end the war, the Confederate States were doomed from the time they withdrew southward from Antietam Creek. An improving Union army, combined with a solid refusal of outside support for the Confederacy, spelled the beginning of the end.

Antietam ranks as one of history's most influential battles because if the South had been victorious outside Sharpsburg, it is very possible that France, England, and possibly even Russia would have recognized the new country. Their navies would have broken the Union blockade to reach the cotton needed for their mills and to deliver highly profitable war materials. France, which already had troops in Mexico, might even have provided ground forces to support the South. Lincoln most likely would not have issued his Emancipation Proclamation and might have been forced to make peace with the rebels, leaving the country divided. Although future events, such as the two World Wars, would likely have made the former enemies into allies, it is doubtful that, in their state of division, either the United States or Confederate States would have been able to attain the level of world influence or to develop into the political, trade, and military power that the unified United States would become.

Battle # 4 Leipzig
Napoleonic Wars, 1813

The allied victory over Napoleon at Leipzig in 1813 marked the first significant cooperation among European nations against a common foe. As the largest armed clash in history up to that time, Leipzig led to the fall of Paris and the abdication of Napoleon.

After the Russian army and winter had handed Napoleon a nasty defeat in 1812, Europeans felt confident that peace would prevail after more than a decade of warfare. They were wrong. As soon as Napoleon returned to France from icy Russia, he set about rebuilding his army, conscripting teens and young men. He strengthened these ranks of inexperienced youths with veterans brought back from the Spanish front.

While Napoleon had been weakened by Russia, he believed that the other European countries were too distrustful of each other to ally against him. In early 1813, he decided to advance into the German provinces to resume his offensive. Just as he had done before, he planned to defeat each army he encountered and assimilate the survivors into his own force.

European leaders were correct to fear that Napoleon could accomplish his objectives, but they remained reluctant to enter into alliances with neighbors who were former, and possibly future, enemies. Klemens von Metternich, the foreign minister of Austria, saw that neither his nor any other European country could stand alone against the French. Even though he had previously negotiated an alliance with Napoleon, he now began to assemble a coalition of nations against the French emperor.

Metternich's diplomacy, combined with the massing of the French army on the German border, finally convinced Prussia, Russia, Sweden, Great Britain, and several smaller countries to ally with Austria in March 1813. Napoleon disregarded the alliance and crossed into Germany with the intention of defeating each opposing army before the "allies" could actually unite against him.

Napoleon won several of the initial fights, even defeating the Prussians at Lutzen on May 2. He soon realized, however, that his new army was not the experienced one he had lost in Russia. More importantly, he had not been able to replace much of his cavalry lost in the Russian winter, limiting his reconnaissance and intelligence-gathering capabilities.

When Napoleon learned that armies were marching against him toward Dresden from the north, south, and east, he negotiated a truce that began on June 4. Metternich met with Napoleon in an attempt to reach a peace settlement, but despite generous terms that would have allowed France to retain its pre-war borders and Napoleon to remain in power, the emperor refused to accept the agreement.

During the negotiations, both sides continued to add reinforcements. On August 16, the truce ended and combat resumed. For two months, the Allies harassed the French but avoided a pitched battle while they solidified their plans for a major attack. Napoleon's army, forced to live off the land and to rapidly march and countermarch against the multiple armies around them, steadily became more exhausted.

In September, the Allies began a general offensive in which the French won several small battles. Yet the Allies forced them back to Leipzig in October. Napoleon had 175,000 men to defend the town, but the Allies massed 350,000 soldiers and 1,500 artillery pieces outside his lines.

On the morning of October 16, 1813, Napoleon left part of his army in the north to resist an attack by the Prussians while he attempted to break through the Russian and Austrian lines in the south. The battle raged all day as the front swept back and forth, but by nightfall both sides occupied the same positions as when the battle began.

Little action took place on October 17 because both sides rested. The battle on October 18 closely resembled that of two days earlier. Nine hours of furious combat accomplished little except to convince Napoleon that he could not continue a battle of attrition against the larger Allied force. The odds against him increased when the Swedish army arrived to join the Allies and a unit of Saxons deserted the French to join the other side.

Napoleon attempted to establish another truce, but the Allies refused. During the night, the French began to withdraw westward by crossing the Elster River. A single stone bridge, which provided the only crossing, soon created a bottleneck. Napoleon deployed 30,000 soldiers to act as a rear guard to protect the crossing, but they were stranded when the bridge was destroyed. A few swam to safety, but most, including three senior officers, were killed or captured.

Once again, Napoleon limped back toward Paris. Behind him he left 60,000 dead, wounded, or captured French soldiers. The Allies had lost a similar number, but they could find replacements far more quickly and easily than Napoleon. Other countries, including the Netherlands and Bavaria--which Napoleon had added to his confederation by conquest--now abandoned him and joined the Allies. On December 21, the Allies invaded France and, following their victory at Paris on March 30, 1814, forced Napoleon into exile on Elba.

Napoleon soon returned, but after only one hundred days he suffered his final defeat by the Allies at Waterloo on June 18, 1815. Metternich continued his unification efforts and signed most of the Allies to the Concert of Europe, which provided a balance of power and a peace that lasted until the Crimean War in 1854. Most of the alliance survived another three decades until the ambitions of Germany brought an end to European peace.

The Battle of Leipzig was important because it brought Napoleon a defeat from which he could not recover. More important, however, was the cooperation of armies against him. This alliance is so significant that Leipzig is frequently called the Battle of the Nations. For these reasons, Leipzig ranks as one of history's most influential battles.

Leipzig also eclipses Waterloo in its influence. While the latter was certainly more decisive, a victory by Napoleon at Leipzig would likely have broken the alliance and placed the French in a position to once again defeat each of the other nations' armies. A French victory at Leipzig would have meant no defeat of Napoleon at Paris, no abdication and exile to Elba, and no return to Waterloo.

Battle # 3 Stalingrad
World War II, 1942-43

Stalingrad was the last great offensive by the German Nazis on the Eastern Front. Their defeat in the city on the Volga River marked the beginning of a long series of battles that would lead the Russians to Berlin and Hitler's Third Reich to defeat. The Battle of Stalingrad resulted in the death or capture of more than a quarter million German soldiers, and denied the rich Caucasus oil fields to the Nazis.

Despite the German army's failure to capture the cities of Moscow and Leningrad in its blitzkrieg offensive in the fall and winter of 1941, Hitler remained determined to conquer Russia in order to destroy Communism and gain access to natural resources for the Third Reich. With his army stalled outside the cities to the north, Hitler directed an offensive against Stalingrad to capture the city's industrial assets and to cut communications between the Volga and Don Rivers. Along with the attack against Stalingrad, German columns were to sweep into the Caucasus to capture the oil fields that would fuel future Nazi conquests.

In the spring of 1942, German Army Group A headed into the Caucasus while Group B marched toward Stalingrad. Initially both were successful, but the German army, depleted by the battles of the previous year, was too weak to sustain two simultaneous offensives. The Germans might have easily captured Stalingrad had Hitler not continued to redirect units to the Caucasus. By the time he concentrated the offensive against Stalingrad, the Soviets had reinforced the area. Stalin directed the defenders of the city that bore his name, "Not a step backward." Hitler accepted the challenge and directed additional forces against the city.

On August 23, 1942, more than a thousand German airplanes began dropping incendiary and explosive bombs. More than 40,000 of the 600,000 Stalingrad civilians died in the fiery attack. The survivors picked up arms and joined the soldiers in defense of their city. The next day, the Sixth German Army, commanded by General Friedrich Paulus, pressed into the edge of the town and assumed victory when they found it mostly in ruins. They were wrong. Soldiers and civilians rose from the rubble to fight back with small arms and even hand-to-hand combat as they contested every foot of the destroyed town.

Elements of the Soviet Sixty-second Army joined the fight. Clashes over the city's Mamaev Mound resulted in the hill changing hands eight times as the battle line advanced and retreated. Near the center of the city, the Stalingrad Central Railway station changed hands fifteen times in bitter, close infantry combat. German artillery and air power continued to pound the city, but the Russians maintained such close contact with their opponents that much of the ordnance exploded harmlessly to their rear.

By September 22, the Germans occupied the center of Stalingrad, but the beleaguered Russian soldiers and civilians refused to surrender. They provided Soviet General Georgi Zhukov time to reinforce the city's flanks with additional soldiers, tanks, and artillery pieces. On November 19, the Russians launched a counter-offensive against the north and south flanks of the Germans.

The two attacks focused on lines held by Romanian, Italian, and Hungarian forces who were allied with the Germans, rather than the better trained and disciplined Nazi troops. On November 23, the two pincers linked up west of Stalingrad, trapping more than 300,000 German soldiers in a pocket thirty-five miles wide and twenty miles long.

General Paulus requested permission from Hitler to withdraw prior to the encirclement, but he was told to fight on. Reich Marshal Hermann Goering promised Hitler that he could supply the surrounded Paulus with 500 tons of food and ammunition per day. Goering and his Luftwaffe failed to deliver even 150 tons a day while the Russians destroyed more than 500 transport aircraft during the supply effort. A relief column led by General Erich von Manstein, one of Hitler's finest officers, attempted to reach the surrounded army but failed.

The Russians continued to reduce the German perimeter. By Christmas, the Germans were low on ammunition, nearly out of food, and freezing in the winter cold. On January 8, 1943, the Russians captured the last airfield inside the German lines and demanded the surrender of the entire army. Hitler radioed Paulus, "Surrender is forbidden. Sixth Army will hold their position to the last man and last round." He also promoted Paulus to field marshal and reminded him that no German of that rank had ever surrendered on the battlefield.

The Germans did not hold out to the last round or the last man. By January 31, their numbers had plummeted to 90,000, many of whom were wounded. All were hungry and cold. Units began to give up, and within two days all resistance ceased. Field Marshal Paulus surrendered himself, 23 generals, 90,000 men, 60,000 vehicles, 1,500 tanks, and 6,000 artillery pieces.

Of the 90,000 Germans captured at Stalingrad, only about 5,000 survived the harsh conditions of the Soviet prisoner-of-war camps. Those who were not worked to death died of starvation and disease. Paulus, however, was not harshly treated by his captors but remained under house arrest in Moscow for eleven years. He was allowed in 1953 to return to Dresden in East Germany, where he died in 1957.

The siege of Stalingrad provided sufficient time for the German Army Group A to withdraw from the Caucasus. The loss of Army Group B in the rubble of Stalingrad and the toll experienced by Army Group A before its withdrawal, however, weakened the German army on the Eastern Front to the point where it could never again mount a major offensive. More than two years would pass before the Red Army occupied Berlin, but Stalingrad opened the way to the future victories that led to Hitler's Bunker and the defeat of Nazi Germany.

Victory at Stalingrad did not come easily or cheaply for the Russians. Nearly half a million soldiers and civilians died in defense of the city. Almost all of its homes, factories, and other buildings were destroyed. But the Russians had won, and that victory united the Russian people, giving them the confidence and strength that drove them on to Berlin.

Stalingrad proved to the Russians and their allies that they could both stop and defeat the great German army. The battle was the turning point of World War II. Victory at Stalingrad for the Germans would have led to victory in the Caucasus Mountains. With the oil and other resources from that area, the German army would have been able to turn more of its power to the Western Front. If the German armies in the east had survived to face the British, the Americans, and their allies in the west, the war certainly would not have concluded as quickly, and even the eventual Allied victory might have been in doubt.

While Stalingrad was the turning point of World War II, and the valor of its defenders will never be in doubt, the Soviet brand of Communism in whose name the battle was fought has not survived. Stalingrad did not even survive to see the demise of the Soviet Union. In the purge of all references to Stalin after his death, the city was renamed Volgograd. Yet, the brave defenders of Stalingrad, who fought for themselves and their city, deserve recognition as fighting one of history's most decisive and influential battles.

Battle # 2 Hastings
Norman Conquest of England, 1066

The Norman victory at the Battle of Hastings in 1066 was the last successful invasion of England--and the first and only since the Roman conquest a thousand years earlier. Its aftermath established a new feudal order that ensured that England would adopt the political and social traditions of continental Europe, rather than those of Scandinavia. The single battle also gained the country's crown for the Norman leader William.

Prior to the Battle of Hastings, the Vikings ruled Scandinavia, Northern Europe, and much of the British Isles. Areas they did not directly control were still vulnerable to their constant raids. Earlier Viking victories in France had led to intermarriage and the creation of a people who called themselves the Normans. Other Vikings conquered the British Isles and established their own kingdoms. Royal bloodlines ran through the leaders of all of the monarchies, but this did not prevent them from fighting each other.

Claims to crowns and territories reached a state of crisis with the death in 1066 of Edward the Confessor, the King of England, who had left no heir. Three men claimed the throne: Harold Godwin, brother-in-law of Edward; William, the Duke of Normandy and a distant relative of Edward's; and King Harald Hardrada of Norway, supported by Harold Godwin's estranged brother Tostig.

Both Harald and William assembled armies to sail to England to secure their claims. Godwin decided that William presented more of a threat and moved his English army to the southern coast across from Normandy. Weather, however, delayed William, and King Harald's ten thousand Vikings arrived first. On September 20, the Vikings soundly defeated the local forces around the city of York and seriously weakened the English army in the region.

Hearing of the battle, Godwin turned his army north and covered the two hundred miles to York in only six days. At Stamford Bridge, he surprised the Vikings and soundly defeated them. The retreating Viking survivors filled only twenty-four of the three hundred ships that had brought them to England.

Godwin had inflicted the most decisive defeat on the Vikings in more than two centuries, but there was no time to celebrate. A few days later, he learned that the Normans had landed at Pevensey Bay in Sussex and were marching inland. Godwin hurried back south with his army and on October 1 he arrived in London, where he recruited additional soldiers. On October 13, Godwin moved to Sussex to take defensive positions along the Norman line of march on Senlac Ridge, eight miles northwest of the village of Hastings. He did not have long to prepare because William approached the next day.

Godwin possessed both advantages and disadvantages. He had the advantage of the defense, and his army of 7,000 was about the same size as that of the Normans. Only about 2,000 of his men, however, were professionals. These housecarls, as they were known, wore conical helmets and chain-mail vests and carried five-foot axes in addition to metal shields. The remaining Saxons were poorly trained militiamen known as fyrds, who were basically draftees levied from the shires. Many of the fyrds, and most of the housecarls, were exhausted from their march as well as from the fierce battle with the Vikings.

William's army contained about 2,000 cavalrymen and 5,000 infantrymen, armed with swords, bows, or crossbows. Despite the lack of numerical superiority and an enemy defense that would only allow for a frontal assault, William attacked.

The Normans advanced behind a rain of arrows from their archers, but the Saxon shields turned aside most of the missiles. Several direct attacks by the infantry fared no better. William then personally led a cavalry charge but was turned back by marshy ground and the Saxon defenses. Defeat, or at best stalemate, appeared to be the outcome of the battle for the invaders. The Normans were further demoralized when a story swept the ranks that William had been killed.

When the Norman leader heard the rumor, he removed his visor and rode to the head of his army. His soldiers, seeing that he was alive, rallied and renewed the assault. William also ordered his archers to fire at a high angle rather than in a direct line in order to reach behind the Saxon shields. The battle remained in doubt until William's cavalry turned and wildly fled from the battlefield. Whether the cavalry retreated from fright or as a ruse, the result was the same: the Saxons left their defenses to pursue, only to be struck by the Norman infantry. At about the same time, an arrow hit Godwin in the eye, and he was killed by the advancing infantry. The leaderless Saxons began to flee.

William, soon to be known as the Conqueror, pursued the retreating Saxons and seized Dover. With little resistance, he entered London on December 25, 1066, and received the crown of England as King William I. Over the next five years, William brutally put down several rebellions and replaced the Anglo-Saxon aristocracy with his own Norman followers. Norman nobles built castles from which to rule and defend the countryside. Norman law, customs, traditions, and citizens intermingled with the Saxons to form the future of England as a nation.

Later the adage would declare, "There'll always be an England." The fact remains that the England that eventually came to exist began on the Hastings battlefield, and 1066 became a schoolbook standard marking the expansion of English culture, colonization, and influence around the world.

Battle # 1 Yorktown
American Revolution, 1781

The Battle of Yorktown was the climax of the American Revolution and directly led to the independence of the United States of America. While others may have been larger and more dramatic, no battle in history has been more influential. From the days following their victory at Yorktown, Americans have steadily gained power and influence up to their present role as the world's most prosperous nation and the only military superpower.

The idea that a group of poorly armed, loosely organized colonists would have the audacity to challenge the massive, experienced army and navy of their rulers seemed impossible when the revolution's first shots rang out at Lexington and Concord in 1775. The rebels' chances of success seemed even more remote when the American colonies formally declared their independence from Great Britain on July 4, 1776.

Despite the huge imbalance of power, the Americans understood that time was on their side. As long as George Washington and his army remained in the field, the newly declared republic survived. Washington did not have to defeat the British; he simply had to avoid having the British defeat him. The longer the war lasted, the greater the odds that the British would become involved in wars that threatened their own islands and that the British public would tire of the war and its costs.

During the first year of the war, Washington had lost a series of battles around New York but had withdrawn the bulk of his army to fight another day. Many British commanders had unintentionally aided the American effort with their military ineptness and their belief that the rebels would diplomatically end their revolt.

Participants on both sides, as well as observers around the world, had begun to take the possibility of American independence seriously only with the American victory at Saratoga in October 1777. The poorly executed plan by the British to divide New England from the southern colonies by occupying New York's Hudson River Valley had resulted not only in the surrender of nearly six thousand British soldiers but also in the recognition of the United States as an independent nation by France. The victory at Saratoga and the entrance of the French into the war also drew Spain and the Netherlands into the fight against England.

By 1778, neither the British nor the Americans could gain the upper hand, as the war in the northern colonies had come to a stalemate. The British continued to occupy New York and Boston, but they were too weak to crush the rebel army. Washington similarly lacked the strength to attack the British fortresses.

In late 1778, British commander General Henry Clinton used his superior sea mobility to transfer much of his army under Lord Charles Cornwallis to the southern colonies, where they occupied Savannah and then Charleston the following year. Clinton's plan was for Cornwallis to neutralize the southern colonies, which would cut off supplies to Washington and isolate his army.

Washington countered by dispatching Nathanael Greene, one of his ablest generals, to command the American troops in the South. From 1779 to 1781, Greene and other American commanders fought a guerrilla-like campaign of hit-and-run maneuvers that depleted and exhausted the British. In the spring of 1781, Cornwallis marched into North Carolina and then into Yorktown on the Virginia peninsula flanked by the York and James Rivers. Although his army outnumbered the Americans two to one, Cornwallis fortified the small town and waited for additional men and supplies to arrive by ship.

Meanwhile, more than seven thousand French infantrymen, commanded by Jean Baptiste de Rochambeau, joined Washington's army outside New York, and a French fleet led by Admiral Paul de Grasse waited in the Caribbean, preparing to sail northward. Washington wanted de Grasse to blockade New York while the combined American-French armies attacked Clinton's New York force.

Rochambeau and de Grasse proposed instead that they attack Cornwallis. On August 21, 1781, Washington left a few units around New York and joined Rochambeau to march the two hundred miles to Yorktown in only fifteen days. Clinton, convinced that New York was still the rebels' primary target, did nothing.

While the infantry was on its march, the French navy drove away the British ships in the area at the Battle of Chesapeake Capes on September 5. De Grasse then blockaded the entrance to Chesapeake Bay and landed three thousand men to join the growing army around Yorktown.

By the end of September, Washington had united his army from the north with the rebel Southerners. He now had more than 8,000 Americans along with the 7,000 French soldiers to encircle the 6,000 British defenders. On October 9, 1781, the Americans and French began pounding the British with fifty-two cannons while they dug trenches toward the primary enemy defensive redoubts.

The Franco-American infantry captured the redoubts on October 14 and moved their artillery forward so they could fire directly into Yorktown. Two days later, a British counterattack failed. On October 17, Cornwallis asked for a cease-fire, and on the 19th he agreed to unconditional surrender. Only about one hundred and fifty of his soldiers had been killed and another three hundred wounded, but he knew that further action was futile. American and French losses numbered seventy-two killed and fewer than two hundred wounded.

Cornwallis, claiming illness, sent his deputy Charles O'Hara to surrender in his place. While the British band played "The World Turned Upside Down," O'Hara approached the allies and attempted to surrender his sword to his European peer rather than the rebel colonist. Rochambeau recognized the gesture and deferred to Washington. The American commander turned to his own deputy, Benjamin Lincoln, who accepted O'Hara's sword and the British surrender.

Several small skirmishes occurred after Yorktown, but for all practical purposes, the revolutionary war was over. The upheaval and embarrassment over the defeat at Yorktown brought down the British government, and the new officials authorized a treaty on September 3, 1783, that acknowledged the independence of the United States.

Yorktown directly influenced not only the United States but also France. The French support of the United States and their own war against Britain wrecked France's economy. More importantly, the idea of liberty from a tyrant, demonstrated by the Americans, motivated the French to begin their own revolution in 1789 that eventually led to the age of Napoleon and far greater wars.

The fledgling United States had to fight the British again in 1812 to guarantee its independence, but the vast area and resources of North America soon enlarged and enriched the new nation. By the end of the nineteenth century, the United States had become a world power; by the end of the twentieth, it was the strongest and most influential nation in the world.

Before Yorktown, the United States was a collection of rebels struggling for independence. After Yorktown, it began a process of growth and evolution that would eventually lead to its present status as the longest-surviving democracy and most powerful country in history. The American Revolution, beginning at Lexington and Concord and drawing strength from Saratoga, culminated at Yorktown in the most influential battle in history.

Copyright 2005 Michael Lee Lanning All Rights Reserved

Michael Lee Lanning retired from the United States Army after more than twenty years of service. He is a decorated veteran of the Vietnam War, where he served as an infantry platoon leader and company commander. The 'Top Ten Battles' article presented here is from his latest book: "The Battle 100: The Stories Behind History's Most Influential Battles," illustrated by Bob Rosenburgh. Lanning has written fourteen books on military history, including "The Military 100: A Ranking of the Most Influential Military Leaders of All Time."



5. Don't Get Personal

In office politics, you'll get angry with people. It happens. There will be times when you feel the urge to give someone a piece of your mind and teach him a lesson. Don't.

People tend to remember moments when they were humiliated or insulted. Even if you win this argument and get to feel really good about it for now, you'll pay the price later when you need help from this person. What goes around comes around, especially in the workplace.

To win in the office, you'll want to build a network of allies you can tap into. The last thing you want during a crisis or an opportunity is to have someone undermine you because they harbor ill intentions toward you – all because you enjoyed a brief emotional outburst at their expense.

Another reason to hold back your temper is career advancement. Increasingly, organizations use 360-degree reviews in promotion decisions. Even if you are a star performer, your boss will have to fight an uphill political battle if other managers or peers see you as someone who is difficult to work with. The last thing you'll want is to make it harder for your boss to champion you for a promotion.


Political correctness: how the right invented a phantom enemy

Three weeks ago, around a quarter of the American population elected a demagogue with no prior experience in public service to the presidency. In the eyes of many of his supporters, this lack of preparation was not a liability, but a strength. Donald Trump had run as a candidate whose primary qualification was that he was not “a politician”. Depicting yourself as a “maverick” or an “outsider” crusading against a corrupt Washington establishment is the oldest trick in American politics – but Trump took things further. He broke countless unspoken rules regarding what public figures can or cannot do and say.

Every demagogue needs an enemy. Trump’s was the ruling elite, and his charge was that they were not only failing to solve the greatest problems facing Americans, they were trying to stop anyone from even talking about those problems. “The special interests, the arrogant media, and the political insiders, don’t want me to talk about the crime that is happening in our country,” Trump said in one late September speech. “They want me to just go along with the same failed policies that have caused so much needless suffering.”

Trump claimed that Barack Obama and Hillary Clinton were willing to let ordinary Americans suffer because their first priority was political correctness. “They have put political correctness above common sense, above your safety, and above all else,” Trump declared after a Muslim gunman killed 49 people at a gay nightclub in Orlando. “I refuse to be politically correct.” What liberals might have seen as language changing to reflect an increasingly diverse society – in which citizens attempt to avoid giving needless offence to one another – Trump saw as a conspiracy.

Throughout an erratic campaign, Trump consistently blasted political correctness, blaming it for an extraordinary range of ills and using the phrase to deflect any and every criticism. During the first debate of the Republican primaries, Fox News host Megyn Kelly asked Trump how he would answer the charge that he was “part of the war on women”.

“You’ve called women you don’t like ‘fat pigs,’ ‘dogs,’ ‘slobs,’ and ‘disgusting animals’,” Kelly pointed out. “You once told a contestant on Celebrity Apprentice it would be a pretty picture to see her on her knees …”

“I think the big problem this country has is being politically correct,” Trump answered, to audience applause. “I’ve been challenged by so many people, I don’t frankly have time for total political correctness. And to be honest with you, this country doesn’t have time either.”

Trump used the same defence when critics raised questions about his statements on immigration. In June 2015, after Trump referred to Mexicans as “rapists”, NBC, the network that aired his reality show The Apprentice, announced that it was ending its relationship with him. Trump’s team retorted that, “NBC is weak, and like everybody else is trying to be politically correct.”

In June 2016, after saying that the US district judge Gonzalo Curiel of San Diego was unfit to preside over the lawsuit against Trump University because he was Mexican American and therefore likely to be biased against him, Trump told CBS News that this was “common sense”. He continued: “We have to stop being so politically correct in this country.” During the second presidential debate, Trump answered a question about his proposed “ban on Muslims” by stating: “We could be very politically correct, but whether we like it or not, there is a problem.”

Every time Trump said something “outrageous”, commentators suggested he had finally crossed a line and that his campaign was now doomed. But time and again, Trump supporters made it clear that they liked him because he wasn’t afraid to say what he thought. Fans praised the way Trump talked much more often than they mentioned his policy proposals. He tells it like it is, they said. He speaks his mind. He is not politically correct.

Trump and his followers never defined “political correctness”, or specified who was enforcing it. They did not have to. The phrase conjured powerful forces determined to suppress inconvenient truths by policing language.

There is an obvious contradiction involved in complaining at length, to an audience of hundreds of millions of people, that you are being silenced. But this idea – that there is a set of powerful, unnamed actors, who are trying to control everything you do, right down to the words you use – is trending globally right now. Britain’s rightwing tabloids issue frequent denunciations of “political correctness gone mad” and rail against the smug hypocrisy of the “metropolitan elite”. In Germany, conservative journalists and politicians are making similar complaints: after the assaults on women in Cologne last New Year’s Eve, for instance, the chief of police Rainer Wendt said that leftists pressuring officers to be politisch korrekt had prevented them from doing their jobs. In France, Marine Le Pen of the Front National has condemned more traditional conservatives as “paralysed by their fear of confronting political correctness”.

Trump’s incessant repetition of the phrase has led many writers since the election to argue that the secret to his victory was a backlash against excessive “political correctness”. Some have argued that Hillary Clinton failed because she was too invested in that close relative of political correctness, “identity politics”. But upon closer examination, “political correctness” becomes an impossibly slippery concept. The term is what Ancient Greek rhetoricians would have called an “exonym”: a term for another group, which signals that the speaker does not belong to it. Nobody ever describes themselves as “politically correct”. The phrase is only ever an accusation.

If you say that something is technically correct, you are suggesting that it is wrong – the adverb before “correct” implies a “but”. However, to say that a statement is politically correct hints at something more insidious. Namely, that the speaker is acting in bad faith. He or she has ulterior motives, and is hiding the truth in order to advance an agenda or to signal moral superiority. To say that someone is being “politically correct” discredits them twice. First, they are wrong. Second, and more damningly, they know it.

If you go looking for the origins of the phrase, it becomes clear that there is no neat history of political correctness. There have only been campaigns against something called “political correctness”. For 25 years, invoking this vague and ever-shifting enemy has been a favourite tactic of the right. Opposition to political correctness has proved itself a highly effective form of crypto-politics. It transforms the political landscape by acting as if it is not political at all. Trump is the deftest practitioner of this strategy yet.

Most Americans had never heard the phrase “politically correct” before 1990, when a wave of stories began to appear in newspapers and magazines. One of the first and most influential was published in October 1990 by the New York Times reporter Richard Bernstein, who warned – under the headline “The Rising Hegemony of the Politically Correct” – that the country’s universities were threatened by “a growing intolerance, a closing of debate, a pressure to conform”.

Bernstein had recently returned from Berkeley, where he had been reporting on student activism. He wrote that there was an “unofficial ideology of the university”, according to which “a cluster of opinions about race, ecology, feminism, culture and foreign policy defines a kind of ‘correct’ attitude toward the problems of the world”. For instance, “Biodegradable garbage bags get the PC seal of approval. Exxon does not.”

Bernstein’s alarming dispatch in America’s paper of record set off a chain reaction, as one mainstream publication after another rushed to denounce this new trend. The following month, the Wall Street Journal columnist Dorothy Rabinowitz decried the “brave new world of ideological zealotry” at American universities. In December, the cover of Newsweek – with a circulation of more than 3 million – featured the headline “THOUGHT POLICE” and yet another ominous warning: “There’s a ‘politically correct’ way to talk about race, sex and ideas. Is this the New Enlightenment – or the New McCarthyism?” A similar story graced the cover of New York magazine in January 1991 – inside, the magazine proclaimed that “The New Fascists” were taking over universities. In April, Time magazine reported on “a new intolerance” that was on the rise across campuses nationwide.

If you search ProQuest, a digital database of US magazines and newspapers, you find that the phrase “politically correct” rarely appeared before 1990. That year, it turned up more than 700 times. In 1991, there are more than 2,500 instances. In 1992, it appeared more than 2,800 times. Like Indiana Jones movies, these pieces called up enemies from a melange of old wars: they compared the “thought police” spreading terror on university campuses to fascists, Stalinists, McCarthyites, “Hitler Youth”, Christian fundamentalists, Maoists and Marxists.

Many of these articles recycled the same stories of campus controversies from a handful of elite universities, often exaggerated or stripped of context. The New York magazine cover story opened with an account of a Harvard history professor, Stephan Thernstrom, being attacked by overzealous students who felt he had been racially insensitive: “Whenever he walked through the campus that spring, down Harvard’s brick paths, under the arched gates, past the fluttering elms, he found it hard not to imagine the pointing fingers, the whispers. Racist. There goes the racist. It was hellish, this persecution.”

In an interview that appeared soon afterwards in The Nation, Thernstrom said the harassment described in the New York article had never happened. There had been one editorial in the Harvard Crimson student newspaper criticising his decision to read extensively from the diaries of plantation owners in his lectures. But the description of his harried state was pure “artistic licence”. No matter: the image of college students conducting witch hunts stuck. When Richard Bernstein published a book based on his New York Times reporting on political correctness, he called it Dictatorship of Virtue: Multiculturalism and the Battle for America’s Future – a title alluding to the Jacobins of the French Revolution. In the book he compared American college campuses to France during the Reign of Terror, during which tens of thousands of people were executed within months.

None of the stories that introduced the menace of political correctness could pinpoint where or when it had begun. Nor were they very precise when they explained the origins of the phrase itself. Journalists frequently mentioned the Soviets – Bernstein observed that the phrase “smacks of Stalinist orthodoxy” – but there is no exact equivalent in Russian. (The closest would be “ideinost”, which translates as “ideological correctness”. But that word has nothing to do with disadvantaged people or minorities.) The intellectual historian LD Burnett has found scattered examples of doctrines or people being described as “politically correct” in American communist publications from the 1930s – usually, she says, in a tone of mockery.

The phrase came into more widespread use in American leftist circles in the 1960s and 1970s – most likely as an ironic borrowing from Mao, who delivered a famous speech in 1957 that was translated into English with the title “On the Correct Handling of Contradictions Among the People”.

Ruth Perry, a literature professor at MIT who was active in the feminist and civil rights movements, says that many radicals were reading the Little Red Book in the late 1960s and 1970s, and surmises that her friends may have picked up the adjective “correct” there. But they didn’t use it in the way Mao did. “Politically correct” became a kind of in-joke among American leftists – something you called a fellow leftist when you thought he or she was being self-righteous. “The term was always used ironically,” Perry says, “always calling attention to possible dogmatism.”

In 1970, the African-American author and activist Toni Cade Bambara used the phrase in an essay about strains on gender relations within her community. No matter how “politically correct” her male friends thought they were being, she wrote, many of them were failing to recognise the plight of black women.

Until the late 1980s, “political correctness” was used exclusively within the left, and almost always ironically as a critique of excessive orthodoxy. In fact, some of the first people to organise against “political correctness” were a group of feminists who called themselves the Lesbian Sex Mafia. In 1982, they held a “Speakout on Politically Incorrect Sex” at a theatre in New York’s East Village – a rally against fellow feminists who had condemned pornography and BDSM. Over 400 women attended, many of them wearing leather and collars, brandishing nipple clamps and dildos. The writer and activist Mirtha Quintanales summed up the mood when she told the audience, “We need to have dialogues about S&M issues, not about what is ‘politically correct, politically incorrect’.”

By the end of the 1980s, Jeff Chang, the journalist and hip-hop critic, who has written extensively on race and social justice, recalls that the activists he knew then in the Bay Area used the phrase “in a jokey way – a way for one sectarian to dismiss another sectarian’s line”.

But soon enough, the term was rebranded by the right, who turned its meaning inside out. All of a sudden, instead of being a phrase that leftists used to check dogmatic tendencies within their movement, “political correctness” became a talking point for neoconservatives. They said that PC constituted a leftwing political programme that was seizing control of American universities and cultural institutions – and they were determined to stop it.

The right had been waging a campaign against liberal academics for more than a decade. Starting in the mid-1970s, a handful of conservative donors had funded the creation of dozens of new thinktanks and “training institutes” offering programmes in everything from “leadership” to broadcast journalism to direct-mail fundraising. They had endowed fellowships for conservative graduate students, postdoctoral positions and professorships at prestigious universities. Their stated goal was to challenge what they saw as the dominance of liberalism and attack left-leaning tendencies within the academy.

Starting in the late 1980s, this well-funded conservative movement entered the mainstream with a series of improbable bestsellers that took aim at American higher education. The first, by the University of Chicago philosophy professor Allan Bloom, came out in 1987. For hundreds of pages, The Closing of the American Mind argued that colleges were embracing a shallow “cultural relativism” and abandoning long-established disciplines and standards in an attempt to appear liberal and to pander to their students. It sold more than 500,000 copies and inspired numerous imitations.

In April 1990, Roger Kimball, an editor at the conservative journal The New Criterion, published Tenured Radicals: How Politics Has Corrupted Our Higher Education. Like Bloom, Kimball argued that an “assault on the canon” was taking place and that a “politics of victimhood” had paralysed universities. As evidence, he cited the existence of departments such as African American studies and women’s studies. He scornfully quoted the titles of papers he had heard at academic conferences, such as “Jane Austen and the Masturbating Girl” or “The Lesbian Phallus: Does Heterosexuality Exist?”

In June 1991, the young Dinesh D’Souza followed Bloom and Kimball with Illiberal Education: The Politics of Race and Sex on Campus. Whereas Bloom had bemoaned the rise of relativism and Kimball had attacked what he called “liberal fascism”, and what he considered frivolous lines of scholarly inquiry, D’Souza argued that admissions policies that took race into consideration were producing a “new segregation on campus” and “an attack on academic standards”. The Atlantic printed a 12,000-word excerpt as its June cover story. To coincide with the release, Forbes ran another article by D’Souza with the title: “Visigoths in Tweed.”

These books did not emphasise the phrase “political correctness”, and only D’Souza used the phrase directly. But all three came to be regularly cited in the flood of anti-PC articles that appeared in venues such as the New York Times and Newsweek. When they did, the authors were cited as neutral authorities. Countless articles uncritically repeated their arguments.

In some respects, these books and articles were responding to genuine changes taking place within academia. It is true that scholars had become increasingly sceptical about whether it was possible to talk about timeless, universal truths that lay beyond language and representation. European theorists who became influential in US humanities departments during the 1970s and 1980s argued that individual experience was shaped by systems of which the individual might not be aware – and particularly by language. Michel Foucault, for instance, argued that all knowledge expressed historically specific forms of power. Jacques Derrida, a frequent target of conservative critics, practised what he called “deconstruction”, rereading the classics of philosophy in order to show that even the most seemingly innocent and straightforward categories were riven with internal contradictions. The value of ideals such as “humanity” or “liberty” could not be taken for granted.

It was also true that many universities were creating new “studies departments”, which interrogated the experiences, and emphasised the cultural contributions of groups that had previously been excluded from the academy and from the canon: queer people, people of colour and women. This was not so strange. These departments reflected new social realities. The demographics of college students were changing, because the demographics of the United States were changing. By 1990, only two-thirds of Americans under 18 were white. In California, the freshman classes at many public universities were “majority minority”, or more than 50% non-white. Changes to undergraduate curriculums reflected changes in the student population.

The responses that the conservative bestsellers offered to the changes they described were disproportionate and often misleading. For instance, Bloom complained at length about the “militancy” of African American students at Cornell University, where he had taught in the 1960s. He never mentioned what students demanding the creation of African American studies were responding to: the biggest protest at Cornell took place in 1969 after a cross burning on campus, an open KKK threat. (An arsonist burned down the Africana Studies Center, founded in response to these protests, in 1970.)

More than any particular obfuscation or omission, the most misleading aspect of these books was the way they claimed that only their adversaries were “political”. Bloom, Kimball, and D’Souza claimed that they wanted to “preserve the humanistic tradition”, as if their academic foes were vandalising a canon that had been enshrined since time immemorial. But canons and curriculums have always been in flux; even in white Anglo-America there has never been any one stable tradition. Moby Dick was dismissed as Herman Melville’s worst book until the mid-1920s. Many universities had only begun offering literature courses in “living” languages a decade or so before that.

In truth, these crusaders against political correctness were every bit as political as their opponents. As Jane Mayer documents in her book, Dark Money: the Hidden History of the Billionaires Behind the Rise of the Radical Right, Bloom and D’Souza were funded by networks of conservative donors – particularly the Koch, Olin and Scaife families – who had spent the 1980s building programmes that they hoped would create a new “counter-intelligentsia”. (The New Criterion, where Kimball worked, was also funded by the Olin and Scaife Foundations.) In his 1978 book A Time for Truth, William Simon, the president of the Olin Foundation, had called on conservatives to fund intellectuals who shared their views: “They must be given grants, grants, and more grants in exchange for books, books, and more books.”

These skirmishes over syllabuses were part of a broader political programme – and they became instrumental to forging a new alliance for conservative politics in America, between white working-class voters and small business owners, and politicians with corporate agendas that held very little benefit for those people.

By making fun of professors who spoke in language that most people considered incomprehensible (“The Lesbian Phallus”), wealthy Ivy League graduates could pose as anti-elite. By mocking courses on writers such as Alice Walker and Toni Morrison, they made a racial appeal to white people who felt as if they were losing their country. As the 1990s wore on, because multiculturalism was associated with globalisation – the force that was taking away so many jobs traditionally held by white working-class people – attacking it allowed conservatives to displace responsibility for the hardship that many of their constituents were facing. It was not the slashing of social services, lowered taxes, union busting or outsourcing that was the cause of their problems. It was those foreign “others”.

PC was a useful invention for the Republican right because it helped the movement to drive a wedge between working-class people and the Democrats who claimed to speak for them. “Political correctness” became a term used to drum into the public imagination the idea that there was a deep divide between the “ordinary people” and the “liberal elite”, who sought to control the speech and thoughts of regular folk. Opposition to political correctness also became a way to rebrand racism in ways that were politically acceptable in the post-civil-rights era.

Soon, Republican politicians were echoing on the national stage the message that had been product-tested in the academy. In May 1991, President George HW Bush gave a commencement speech at the University of Michigan. In it, he identified political correctness as a major danger to America. “Ironically, on the 200th anniversary of our Bill of Rights, we find free speech under assault throughout the United States,” Bush said. “The notion of political correctness has ignited controversy across the land,” but, he warned, “In their own Orwellian way, crusades that demand correct behaviour crush diversity in the name of diversity.”


After 2001, debates about political correctness faded from public view, replaced by arguments about Islam and terrorism. But in the final years of the Obama presidency, political correctness made a comeback. Or rather, anti-political-correctness did.

As Black Lives Matter and movements against sexual violence gained strength, a spate of thinkpieces attacked the participants in these movements, criticising and trivialising them by saying that they were obsessed with policing speech. Once again, the conversation initially focused on universities, but the buzzwords were new. Rather than “difference” and “multiculturalism”, Americans in 2012 and 2013 started hearing about “trigger warnings”, “safe spaces”, “microaggressions”, “privilege” and “cultural appropriation”.

This time, students received more scorn than professors. If the first round of anti-political-correctness evoked the spectres of totalitarian regimes, the more recent revival has appealed to the commonplace that millennials are spoiled narcissists, who want to prevent anyone expressing opinions that they happen to find offensive.

In January 2015, the writer Jonathan Chait published one of the first new, high-profile anti-PC thinkpieces in New York magazine. “Not a Very PC Thing to Say” followed the blueprint provided by the anti-PC thinkpieces that the New York Times, Newsweek, and indeed New York magazine had published in the early 1990s. Like the New York article from 1991, it began with an anecdote set on campus that supposedly demonstrated that political correctness had run amok, and then extrapolated from this incident to a broad generalisation. In 1991, John Taylor wrote: “The new fundamentalism has concocted a rationale for dismissing all dissent.” In 2015, Jonathan Chait claimed that there were once again “angry mobs out to crush opposing ideas”.

Chait warned that the dangers of PC had become greater than ever before. Political correctness was no longer confined to universities – now, he argued, it had taken over social media and thus “attained an influence over mainstream journalism and commentary beyond that of the old”. (As evidence of the “hegemonic” influence enjoyed by unnamed actors on the left, Chait cited two female journalists saying that they had been criticised by leftists on Twitter.)

Chait’s article launched a spate of replies about campus and social media “cry bullies”. On the cover of its September 2015 issue, the Atlantic published an article by Jonathan Haidt and Greg Lukianoff. The title, “The Coddling of the American Mind”, nodded to the godfather of anti-PC, Allan Bloom. (Lukianoff is the head of the Foundation for Individual Rights in Education, another organisation funded by the Olin and Scaife families.) “In the name of emotional wellbeing, college students are increasingly demanding protection from words and ideas they don’t like,” the article announced. It was shared over 500,000 times.

These pieces committed many of the same fallacies as their predecessors from the 1990s. They cherry-picked anecdotes and caricatured the subjects of their criticism. They complained that other people were creating and enforcing speech codes, while at the same time attempting to enforce their own speech codes. Their writers designated themselves the arbiters of which conversations or political demands deserved to be taken seriously and which did not. They contradicted themselves in the same way: their authors continually complained, in highly visible publications, that they were being silenced.

The climate of digital journalism and social media sharing enabled the anti-political-correctness (and anti-anti-political correctness) stories to spread even further and faster than they had in the 1990s. Anti-PC and anti-anti-PC stories come cheap: because they concern identity, they are something that any writer can have a take on, based on his or her experiences, whether or not he or she has the time or resources to report. They are also perfect clickbait. They inspire outrage, or outrage at the outrage of others.

Meanwhile, a strange convergence was taking place. While Chait and his fellow liberals decried political correctness, Donald Trump and his followers were doing the same thing. Chait said that leftists were “perverting liberalism” and appointed himself the defender of a liberal centre; Trump said that the liberal media had the system “rigged”.

The anti-PC liberals were so focused on leftists on Twitter that for months they gravely underestimated the seriousness of the real threat to liberal discourse. It was not coming from women, people of colour, or queer people organising for their civil rights, on campus or elsewhere. It was coming from @realdonaldtrump, neo-Nazis, and far-right websites such as Breitbart.

The original critics of PC were academics or shadow-academics, Ivy League graduates who went around in bow ties quoting Plato and Matthew Arnold. It is hard to imagine Trump quoting Plato or Matthew Arnold, much less carping about the titles of conference papers by literature academics. During his campaign, the network of donors who funded decades of anti-PC activity – the Kochs, the Olins, the Scaifes – shunned Trump, citing concerns about the populist promises he was making. Trump came from a different milieu: not Yale or the University of Chicago, but reality television. And he was picking different fights, targeting the media and political establishment, rather than academia.

As a candidate, Trump inaugurated a new phase of anti-political-correctness. What was remarkable was just how many different ways Trump deployed this tactic to his advantage, both exploiting the tried-and-tested methods of the early 1990s and adding his own innovations.

First, by talking incessantly about political correctness, Trump established the myth that he had dishonest and powerful enemies who wanted to prevent him from taking on the difficult challenges facing the nation. By claiming that he was being silenced, he created a drama in which he could play the hero. The notion that Trump was both persecuted and heroic was crucial to his emotional appeal. It allowed people who were struggling economically or angry about the way society was changing to see themselves in him, battling against a rigged system that made them feel powerless and devalued. At the same time, Trump’s swagger promised that they were strong and entitled to glory. They were great and would be great again.

Second, Trump did not simply criticise the idea of political correctness – he actually said and did the kind of outrageous things that PC culture supposedly prohibited. The first wave of conservative critics of political correctness claimed they were defending the status quo, but Trump’s mission was to destroy it. In 1991, when George HW Bush warned that political correctness was a threat to free speech, he did not choose to exercise his free speech rights by publicly mocking a man with a disability or characterising Mexican immigrants as rapists. Trump did. Having elevated the powers of PC to mythic status, the draft-dodging billionaire, son of a slumlord, taunted the parents of a fallen soldier and claimed that his cruelty and malice were, in fact, courage.

This willingness to be more outrageous than any previous candidate ensured non-stop media coverage, which in turn helped Trump attract supporters who agreed with what he was saying. We should not underestimate how many Trump supporters held views that were sexist, racist, xenophobic and Islamophobic, and were thrilled to feel that he had given them permission to say so. It’s an old trick: the powerful encourage the less powerful to vent their rage against those who might have been their allies, and to delude themselves into thinking that they have been liberated. It costs the powerful nothing; it pays frightful dividends.

Trump drew upon a classic element of anti-political-correctness by implying that while his opponents were operating according to a political agenda, he simply wanted to do what was sensible. He made numerous controversial policy proposals: deporting millions of undocumented immigrants, banning Muslims from entering the US, introducing stop-and-frisk policies that have been ruled unconstitutional. But by responding to critics with the accusation that they were simply being politically correct, Trump attempted to place these proposals beyond the realm of politics altogether. Something political is something that reasonable people might disagree about. By using the adjective as a put-down, Trump pretended that he was acting on truths so obvious that they lay beyond dispute. “That’s just common sense.”

The most alarming part of this approach is what it implies about Trump’s attitude to politics more broadly. His contempt for political correctness looks a lot like contempt for politics itself. He does not talk about diplomacy; he talks about “deals”. Debate and disagreement are central to politics, yet Trump has made clear that he has no time for these distractions. To play the anti-political-correctness card in response to a legitimate question about policy is to shut down discussion in much the same way that opponents of political correctness have long accused liberals and leftists of doing. It is a way of sidestepping debate by declaring that the topic is so trivial or so contrary to common sense that it is pointless to discuss it. The impulse is authoritarian. And by presenting himself as the champion of common sense, Trump gives himself permission to bypass politics altogether.

Now that he is president-elect, it is unclear whether Trump meant many of the things he said during his campaign. But, so far, he is fulfilling his pledge to fight political correctness. Last week, he told the New York Times that he was trying to build an administration filled with the “best people”, though “Not necessarily people that will be the most politically correct people, because that hasn’t been working.”

Trump has also continued to cry PC in response to criticism. When an interviewer from Politico asked a Trump transition team member why Trump was appointing so many lobbyists and political insiders, despite having pledged to “drain the swamp” of them, the source said that “one of the most refreshing parts of … the whole Trump style is that he does not care about political correctness.” Apparently it would have been politically correct to hold him to his campaign promises.

As Trump prepares to enter the White House, many pundits have concluded that “political correctness” fuelled the populist backlash sweeping Europe and the US. The leaders of that backlash may say so. But the truth is the opposite: those leaders understood the power that anti-political-correctness has to rally a class of voters, largely white, who are disaffected with the status quo and resentful of shifting cultural and social norms. They were not reacting to the tyranny of political correctness, nor were they returning America to a previous phase of its history. They were not taking anything back. They were wielding anti-political-correctness as a weapon, using it to forge a new political landscape and a frightening future.

The opponents of political correctness always said they were crusaders against authoritarianism. In fact, anti-PC has paved the way for the populist authoritarianism now spreading everywhere. Trump is anti-political-correctness gone mad.

Main illustration: Nathalie Lees

