Whiskey Rebellion: A Rebellion against Taxes?

The history of the Whiskey Rebellion is shrouded in myth. Many scholars consider it a victory for the young U.S. government. But was it? Or was it really a win for the anti-tax patriots?

What caused the Whiskey Rebellion?

The Whiskey Rebellion was the second major internal uprising in U.S. history (preceded only by Shays’ Rebellion). It was a response to an excise tax created by Alexander Hamilton, who served as Secretary of the Treasury under George Washington.

The U.S. government racked up $79 million in debt during the Articles of Confederation period: the federal government owed $54 million, and the individual states owed the remaining $25 million. Alexander Hamilton saw this as an opportunity to centralize power in the federal government. He proposed that it assume and consolidate the states' debts, and to help pay it all back, he would levy a tax on domestic spirits. This was seen as a relatively safe luxury tax, and Hamilton also had support from those who viewed alcohol as a sinful indulgence. Thus, the Whiskey Act was passed into law in 1791.

What happened during the Whiskey Rebellion?

The Whiskey Tax was extremely unpopular, especially on the frontier (back then, the frontier consisted of Kentucky as well as parts of Pennsylvania, Maryland, Virginia, North Carolina, South Carolina, and Georgia). Many people in these areas just refused to pay the tax. But in western Pennsylvania, protestors fought back.

In July 1794, more than 500 people attacked the home of tax inspector General John Neville. George Washington responded by sending a massive militia, some 13,000 strong, to quell the rebellion. By the time the militia arrived, the rebels had dispersed. Some 20 people were arrested; two were convicted of treason, but Washington later pardoned them both.

Guerrilla Explorer’s Analysis

Many scholars consider this a victory for the federal government. In his book, Character: Profiles in Presidential Courage, Chris Wallace provides a fairly typical pro-state treatment:

By acting decisively to quell the threat, Washington had proven that the federal government would stand behind the law. Many continued to fear that the government would destroy their dearly purchased freedoms. But as President Washington noted in his farewell address, a strong government, not a weak one, was the “main pillar…of your tranquility at home; your peace abroad; of your safety; of your prosperity; of that very Liberty which you so highly prize.”

However, the true story of the Whiskey Rebellion lies elsewhere, namely on the frontier. The U.S. government was never able to collect the Whiskey Tax there; in fact, it hardly tried. The Whiskey Rebellion was, by and large, a non-violent tax protest: people simply refused to pay. Eventually, Hamilton and his fellow Federalists lost power, and all federal excise taxes were repealed.

Here’s more on the Whiskey Rebellion from Murray Rothbard at LewRockwell.com:

The Whiskey Rebellion has long been known to historians, but recent studies have shown that its true nature and importance have been distorted by friend and foe alike. The Official View of the Whiskey Rebellion is that four counties of western Pennsylvania refused to pay an excise tax on whiskey that had been levied by proposal of the Secretary of Treasury Alexander Hamilton in the Spring of 1791, as part of his excise tax proposal for federal assumption of the public debts of the several states.

Western Pennsylvanians failed to pay the tax, this view says, until protests, demonstrations, and some roughing up of tax collectors in western Pennsylvania caused President Washington to call up a 13,000-man army in the summer and fall of 1794 to suppress the insurrection. A localized but dramatic challenge to federal tax-levying authority had been met and defeated. The forces of federal law and order were safe.

This Official View turns out to be dead wrong…

(See the rest at LewRockwell.com)

Did the U.S. Government kill Big Bands?

In 1935, Benny Goodman launched the Big Band era with a famous performance at the Palomar Ballroom in Los Angeles. By 1946, the Big Band era was dead. Despite its enormous popularity, it was replaced by the far less dance-friendly (and far less popular) bebop era. What happened to the Big Band era?

The U.S. government holds a substantial part of the blame. In 1944, it imposed the so-called "Cabaret Tax," partly to raise funds for World War II. Essentially, the tax placed a 30% levy on receipts at all establishments that "contained dance floors, served alcohol and other refreshments, and/or provided musical entertainment." The tax, like so many others, was supposed to be temporary. Instead it outlived the war, and dance halls closed across the nation. Thanks to the extra cost of doing business, few venues could afford to hire big bands, and many big bands were forced to break apart. Musicians formed smaller groups and started playing non-danceable music. Thus, the era of bebop began. Here's more on the government's war on big bands by Eric Felten at The Wall Street Journal (paywall protected):

These are strange days, when we are told both that tax incentives can transform technologies yet higher taxes will not drag down the economy. So which is it? Do taxes change behavior or not? Of course they do, but often in ways that policy hands never anticipate, let alone intend. Consider, for example, how federal taxes hobbled Swing music and gave birth to bebop.

With millions of young men coming home from World War II—eager to trade their combat boots for dancing shoes—the postwar years should have been a boom time for the big bands that had been so wildly popular since the 1930s. Yet by 1946 many of the top orchestras—including those of Benny Goodman, Harry James and Tommy Dorsey—had disbanded. Some big names found ways to get going again, but the journeyman bands weren’t so lucky. By 1949, the hotel dine-and-dance-room trade was a third of what it had been three years earlier. The Swing Era was over.

Dramatic shifts in popular culture are usually assumed to result from naturally occurring forces such as changing tastes (did people get sick of hearing “In the Mood”?) or demographics (were all those new parents of the postwar baby boom at home with junior instead of out on a dance floor?). But the big bands didn’t just stumble and fall behind the times. They were pushed…

(See the rest at The Wall Street Journal)
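The mechanism Felten describes is straightforward tax arithmetic. Here's a minimal sketch, with purely hypothetical numbers (none of these figures come from the article), of how a 30% levy on gross receipts can turn a dance hall's thin margin into a loss:

```python
def net_profit(receipts, band_cost, other_costs, tax_rate):
    """Profit after an excise levied on gross receipts (not on profit)."""
    return receipts - receipts * tax_rate - band_cost - other_costs

# Hypothetical monthly figures for a dance hall that books a big band.
receipts, band_cost, other_costs = 10_000, 3_000, 5_000

print(net_profit(receipts, band_cost, other_costs, 0.00))  # 2000.0 (profitable, untaxed)
print(net_profit(receipts, band_cost, other_costs, 0.30))  # -1000.0 (a loss under the 30% tax)
```

Because the tax falls on receipts rather than profit, it can exceed the venue's entire margin, which is why so many establishments dropped the dance floor (and the band) rather than pay it.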

QWERTY vs. Dvorak: The Fable of the Keys?

A few weeks ago, someone told me the QWERTY keyboard (named for its first six keys) was a mistake. There was another design that had proven more efficient, easier to use, and less likely to cause injuries like carpal tunnel. It’s called the Dvorak Simplified Keyboard and was created by Dr. August Dvorak and his brother-in-law Dr. William Dealey in 1936. Unfortunately, this miraculous invention never took off because people are resistant to change…or so the story goes.

It turns out the most favorable research for the Dvorak keyboard was conducted by none other than Dr. Dvorak himself. Later studies showed there wasn't much to gain – if anything at all – from switching over from QWERTY. Advocates claim Dvorak has the edge in terms of ergonomic design, but this isn't clear. If a benefit exists, it appears to be a small one. Here's more on QWERTY vs. Dvorak from The Independent Institute:

At a conference attended the other day by your reporter, a distinguished academic economist (who had better remain nameless) cited the “QWERTY” layout of the standard typewriter keyboard as a clear example of how markets “can make mistakes”. It may have been the millionth such reference. Many a textbook cites this case as proof of a certain kind of market failure — that associated with the adoption and locking-in of a bad standard. For years, if you cited an example of a “pure public good” (another kind of market failure), it had to be a lighthouse. If you needed a case of “positive externalities” (yet another), you would very likely go for beekeeping. In its field, QWERTY has achieved the same iconic eminence.

Which is only apt, because the tale of QWERTY is a myth — just like those other two cases. More than 25 years ago, Ronald Coase, a Nobel laureate, showed that when lighthouses were first built in Britain they were provided by private enterprise; tolls were collected when ships reached port. So lighthouses are not pure public goods. At about the same time Steven Cheung examined beekeeping and apple-growing in the state of Washington. He found that apple-growers paid beekeepers for their bees’ pollinating endeavours; those services were not, in fact, an unpriced “externality”…

(Read the rest at The Independent Institute)

Why did the Poker Bubble Burst?

In late 2003, the American poker industry exploded. New players flooded the game. Tournaments flourished. Poker games became a fixture on television. By 2008, the bubble had burst. People left the game in droves. Tournaments got smaller. Television programs ended up on the chopping block. So, why did this happen? What caused the poker industry to boom and bust? Curiously enough, the answer lies in money, or at least the Federal Reserve’s control over it. Here’s more from Peter C. Earle at the Ludwig von Mises Institute:

Nearly a decade ago, poker exploded in popularity. Between television programming, media coverage, and pop culture references to it—in particular, the Texas Hold ‘Em variant—the game became virtually unavoidable. The American Gaming Association estimates that nearly 1 in 5 Americans played poker in 2004, up 50 percent from 2003; also, that nearly 20 percent of those new players had begun to play within the previous two years.

The creation myth associated with the poker boom credits the improbable victory of a prophetically-named Tennessee accountant, Chris Moneymaker, in the 2003 World Series of Poker (WSOP). Other accounts source James McManus’s 2003 book Positively Fifth Street and the 1998 poker film Rounders. Still other, more mystical explanations refer to the game’s sudden “cultural resonance.”

But fads and surges of popularity come and go; these explanations hardly account for why, in a short amount of time, tens of millions of people suddenly flooded into a familiar—indeed, 150 year old—American card game, frenetically expending tens of billions of dollars on it. Nor do they explain why between 2007 and 2008 poker television programs were suddenly cancelled, tournaments saw a drop in participation, and many poker-related businesses scaled back or failed.

Austrian business cycle theory (ABCT) can, however, explain the origins and outcome of the poker bubble as well as its simultaneity with the housing boom, which, as will be demonstrated, are by no means coincidental…

(Read the rest at the Ludwig von Mises Institute)

Calvin Coolidge: Did he save the U.S. Economy?

Amity Shlaes is out with her latest book, Coolidge, a new take on the controversial presidency of Calvin Coolidge. Conservatives love Silent Cal, giving him full credit for the Roaring Twenties. Liberals hate him, believing his free market policies caused the Great Depression. Shlaes falls firmly in the former camp. Personally, I think he’s partly responsible for both. That’s because the Roaring Twenties and the Great Depression were caused by the same thing…rapid expansion of the money supply.

From mid-1921 to mid-1929, the money supply grew 61.8%, with much of that growth coming during Coolidge's terms in office. All that new money fueled a boom as businesses raced to invest it (a.k.a. the Roaring Twenties). Eventually, too much money chased too few worthy investments. Poor businesses failed, the economy crashed, and the money supply contracted. The Federal Reserve, just like today, mistakenly tried to reinflate the bubble in 1932, only to find itself unable to do so.
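The 61.8% figure is cumulative. As a quick sanity check (the eight-year span comes from the "mid-1921 to mid-1929" dates above), the implied annualized growth rate works out to roughly 6.2% per year:

```python
# Back out the annualized rate implied by 61.8% cumulative growth over 8 years.
total_growth = 0.618
years = 8

annual_rate = (1 + total_growth) ** (1 / years) - 1
print(f"{annual_rate:.1%}")  # 6.2%
```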

Still, Coolidge wasn’t a bad president. He cut the national debt and reduced tax rates. He avoided wars in Latin America. For more on Calvin Coolidge, see this interview with Amity Shlaes conducted by Ed Driscoll at PJ Media:

MR. DRISCOLL:  The Forgotten Man helped to place FDR into context by focusing on many personal histories of the 1930s, beyond the palace intrigue of Capitol Hill. These days, whatever collective history we have of the 1920s seems to come from The Great Gatsby, The Untouchables, and TV shows like HBO's Boardwalk Empire.  How badly do people today misremember the decade of the 1920s?

MS. SHLAES:  We really misremember it and then you want to ask why.  So Forgotten Man was about the misremembering of the 1930s.  Coolidge is about the misremembering of the 1920s.

So the cliches you describe, and they’re fun and amusing, are that it was all a lie or about guns and alcohol, something illegal and contraband, corruption resulting from prohibition.  Or it was all a lie; Gatsby wasn’t real wealth.  He was an illusion.  He was a shimmer in a champagne glass.  Right?

So when you go back and look at the ’20s — this is the era of Coolidge, you see a lot of real growth.  Things we would envy, we wish we could have, such as employment was often below five percent.  When you wanted a job you got one.  Wages rose in real terms.  Not a lot but consistently.  You can go back and look at that, even for unskilled workers.  Well, what else — interest rates were pretty low.  The budget was in surplus.  We didn’t have a deficit.  The federal debt was huge from World War I.  We were bringing it down reliably…

(See the rest at PJ Media)

War on the Federal Reserve?

The Federal Reserve is no good. Its money monopoly has wreaked havoc for 100 years. So, I welcome currency competition from Virginia, although I'd prefer it came from the free market. That said, the Federal Reserve will continue to dominate as long as legal tender laws remain in full effect. Here's more on the war on the Federal Reserve from Fox News:

Virginia is one step closer to breaking ties with the country’s monetary system.

A proposal to study whether the state should adopt its own currency is gaining traction in the state legislature from a number of lawmakers as well as conservative economists. The state House voted 65-32 earlier this week to approve the measure, and it will now go to the Senate.

While it’s unlikely that Virginia will be printing its own money any time soon, the move sheds light on the growing distrust surrounding the nation’s central bank. Four other states are considering similar proposals. In 2011, Utah passed a law that recognizes gold and silver coins issued by the federal government as tender and requires a study on adopting other forms of legal currency.

Virginia Republican Del. Robert Marshall told FoxNews.com Tuesday that his bill calls for creation of a 10-member commission that would determine the “need, means and schedule for establishing a metallic-based monetary unit.” Essentially, he wants to spend $20,000 on a study that could call for the state to return to a gold standard…

(See the rest at Fox News)

The End of the U.S. Postal Service?

And so Saturday mail comes to an end. Is anyone really that surprised? No competition = No reason to innovate or improve service. Where’s Lysander Spooner when you need him?

Here’s more on the U.S. Postal Service ending Saturday deliveries at Fox News:

The U.S. Postal Service plans to announce Wednesday that it will end Saturday mail delivery, in one of the most significant steps taken to date to cut costs at the struggling agency.

A source familiar with the decision confirmed the plan to Fox News.

Under the proposal, the Postal Service will continue to deliver packages six days a week. The plan, which is aimed at saving about $2 billion, would start to take effect in August.

(See the rest at Fox News)

Happy Birthday Income Tax (Now, go away already!)

It’s been one hundred years since the modern income tax was created, via the 16th Amendment to the U.S. Constitution. Back then, income tax rates ran from 1% on annual incomes over $3,000 to 7% on annual incomes over $500,000 (that’s $11.6 million in today’s dollars!). Current tax rates run from 10% to 39.6%. Meanwhile, the income tax code has gone from a hefty 400 pages to a whopping 44,000 pages. My, how times have changed. Here’s more from Delaware Online:

Pop Quiz: What book has more than 7 million words in multiple chapters, attempts to influence our behavior toward good ends, is complex and often contradictory, and requires interpretation by learned studiers of its texts to distill its basic principles for the masses of us for who this tome is supposed to provide benefit? It’s not the King James version of the Bible. It’s the current United States Tax Code.

The giveaway: While the U.S. Tax Code has more than 7 million words, The Bible is a relatively slim pamphlet at only 774,746 words. It wasn’t always this way. In 1913, the year the personal income tax we now labor under was instituted, the number of pages contained in the entire Tax Code stood at 400 (most of those dealing with tariffs). The Bible actually was longer at 1,291 pages.

As of 2010, the United States Tax Code stands at a whopping 71,684 pages (according to CCH Standard Federal Tax Reporter, though in fairness, that includes repealed or modified portions of earlier versions of the tax code. The current, live portion runs a mere 44,000 pages.) The original 1913 Tax Form 1040 blissfully topped out at a rate of 7 percent – the “fair share” due of the uber rich in the eyes of then President Woodrow Wilson who obviously never had been a community organizer at any point in his career…

(See the rest at Delaware Online)
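The "$11.6 million in today's dollars" conversion above is just a price-level multiplier applied to the 1913 threshold. Here's a sketch, where the ~23.2x multiplier is an assumption back-solved from the article's own numbers ($500,000 → $11.6 million), not taken from an official CPI series:

```python
# Approximate 1913 -> present conversion. The multiplier is an assumption
# inferred from the article's figures, not official CPI data.
CPI_MULTIPLIER = 23.2

def to_todays_dollars(amount_1913):
    """Scale a 1913 dollar amount to present-day dollars."""
    return amount_1913 * CPI_MULTIPLIER

print(round(to_todays_dollars(500_000)))  # 11600000 (the 7% top-bracket threshold)
print(round(to_todays_dollars(3_000)))    # 69600 (the 1% bottom-bracket threshold)
```

Note what this implies: in 1913 the income tax only kicked in at all around $69,600 in today's money, and the top rate was reserved for true multimillionaires.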

Student Loans: Crisis…or Conspiracy?

Over the past few months, reports of a “student loan crisis” have erupted throughout the United States. But is this really a crisis? Or a student loan conspiracy of epic proportions?

The Student Loan Conspiracy?

We first visited the student loan issue back in October 2011. To put it simply, the high cost of college and a difficult job market have “created a generation of heavily indebted students with few means to pay back their loans.”

Now, we have some new information to kick around. According to the Federal Reserve Bank of New York, the average student borrower holds $23,300 in student loan debt. Breaking it down, about 43% of all borrowers have balances of less than $10,000; the rest owe more than that. Amazingly enough, 27% of eligible payers “have past due balances.”
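Those two statistics, an average of $23,300 alongside 43% of borrowers owing under $10,000, imply a heavily skewed distribution: a minority of very large balances drags the mean well above what the typical borrower owes. A toy illustration with invented balances (chosen so the mean matches the reported average; these are not real data):

```python
import statistics

# Ten hypothetical borrowers: most owe relatively little, a few owe a lot.
balances = [4_000, 6_000, 8_000, 9_000, 12_000, 15_000, 25_000, 40_000, 50_000, 64_000]

print(sum(balances) / len(balances))  # 23300.0 -- the mean, matching the reported average
print(statistics.median(balances))    # 13500.0 -- the typical borrower owes far less
```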

There are two pieces to this conspiracy. First, why is college so expensive? And second, why do so many people spend so much money pursuing college degrees? We talked a lot about the first question in October. So, we wanted to focus more on the second one this time around.

Why is College so Popular?

America’s intense pursuit of college degrees is a curious phenomenon. Not only are degrees ultra-expensive, but students appear to get poor value for their money. According to Richard Arum and Josipa Roksa’s book Academically Adrift: Limited Learning on College Campuses, 36% of U.S. college students show “no significant gains in learning” after four years of college. Even worse, there seems to be a mismatch between the skills acquired in college and the skills required for navigating the real world.

So again, we must ask, how did we get into this situation? Why are high school graduates spending money they don’t have in order to obtain college degrees that do shockingly little to prepare them for the real world?

The answer, in our view, is Griggs v. Duke Power. During the 1950s, Duke Power barred black employees from working in all departments except the low-paying Labor department. In 1955, the company began requiring high school diplomas for certain positions.

After the passage of the Civil Rights Act of 1964, Duke Power ended its race-based hiring policies. Instead, it instituted IQ tests. At the time, black people were less likely to hold high school diplomas. They also performed worse on the IQ tests. Thus, they were selected for Duke Power positions at a far lower rate than white candidates.

I won’t go into the particulars here. But eventually, a man named Willie Griggs filed a class action lawsuit against Duke Power Company. The case made its way through the legal system. In 1971, the Supreme Court ruled against Duke Power. In doing so, it prohibited the use of general IQ tests when screening applicants, regardless of whether there was an actual intent to discriminate. In order to pass muster, IQ tests were required to be a “reasonable measure of job performance.”

“…in 1971 the U.S. Supreme Court issued a ruling (Griggs v. Duke Power) saying that if companies use aptitude testing to screen potential employees, they must be prepared to show that their tests are precisely calibrated to the needs of the job. Otherwise, they will be guilty of employment discrimination if their tests screen out minority workers who might have been able to do the work. Rather than face discrimination suits by the federal government, most employers started using a less precise but legally safe method of screening applicants—college degrees.” ~ George C. Leef, Why on Earth Do We Have a Student Loan Crisis?

Griggs v. Duke Power had a far-reaching impact. It largely ended the practice of general aptitude testing. But companies still needed a way to screen job applicants, so they turned to college degrees, “even for jobs that could easily be learned by anyone with a decent high school education.” As a result, college enrollments (and student loans) exploded.

“In 1940, just 10% of high school graduates went to college. By 1970, that number was at 40%. And by the 1990s, it had risen to 70%. That’s because a college degree has become little more than a ‘signaling game.’ By attending college, students “signal” to potential employers that they’re smart, hard-working, and easily trained. The ability to send that signal to employers, which was once accomplished via aptitude tests, is the sole reason that most students attend college in the first place.” ~ David Meyer, The Student Loan Conspiracy?

Guerrilla Explorer’s Analysis

General aptitude tests aren’t perfect. In fact, they’re heavily flawed. In Griggs v. Duke Power, it was discovered that employees who had passed the aptitude tests and held high school diplomas performed their jobs at the same level as those who had failed the tests and didn’t hold diplomas.

So, why don’t employers just create new aptitude tests that are “reasonably related” to individual jobs? The biggest reason is the threat of lawsuits. Even a well-crafted aptitude test could backfire in this respect. It’s far easier to just use college degrees as a screening mechanism and avoid the lawsuit risk altogether.

And that’s unfortunate. Aptitude tests hold significant advantages over college degrees: they’re cheap, quick, and can be tailored to fit individual jobs. College degrees are ultra-expensive, ultra-time-consuming, and ultra-unfocused. So unfocused, in fact, that the 1971 ruling should have invalidated college-degree screening as well.

“Recall that the problem in Griggs was that the specified requirements for job applicants were not clearly and directly related to the actual demands of the work. If challenged, could employers who have set the college degree as a requirement show that it has anything at all to do with the ‘business necessity’ of the employer or are ‘job-related’? That is very doubtful. Employers have grown to rely upon a new credential that is imperfect and probably rules out many qualified candidates. If the EEOC and the courts were to scrutinize the college degree requirement, they might well conclude that it has a ‘disparate impact.'” ~ Bryan O’Keefe and Richard Vedder, Griggs v. Duke Power: Implications for College Credentialing

The Student Loan Conspiracy isn’t a deliberate one. But the unintended consequences of Griggs v. Duke Power have been highly destructive all the same. Many people waste years of their lives and accumulate thousands of dollars in student loan debt just to be eligible for basic jobs.

Companies will always need a way to screen potential employees. We here at Guerrilla Explorer don’t favor aptitude tests or anything else for that matter. We just think companies should be allowed to screen in whatever fashion they choose, free from the fear of discrimination lawsuits. Absent that risk, we suspect many employers would switch to specifically designed aptitude tests. Perhaps then, the Student Loan Crisis would finally come to an end.

The Threat of Happiness Research?

Two days ago, the Organisation for Economic Co-Operation and Development (OECD) relaunched its “Better Life Index.” According to it, women are generally happier than men and Australia is the happiest country (assuming all categories are equally-weighted). But does happiness research pose a threat to society?

What is Happiness Research?

Happiness research has exploded over the last few decades. The idea is to quantitatively measure happiness as well as what makes people happy. This allows for all sorts of comparisons between groups as well as nations.

Happiness research tends to treat individuals with a broad brush. But happiness is entirely subjective. Different people have different preferences. Some people value money derived from work more than leisure time and vice versa. The Better Life Index attempts to deal with this fact. It allows users to personalize their indexes based on how much they value eleven separate categories: community, education, environment, civic engagement, health, housing, income, jobs, life satisfaction, safety, and work-life balance. So, this would appear to be a marked improvement.
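Mechanically, a personalized index like this is just a weighted average of category scores. Here's a minimal sketch using three of the eleven categories, with invented scores and weights (the OECD's actual normalization and underlying data are more involved):

```python
def personalized_index(scores, weights):
    """Weighted average of 0-10 category scores; weights need not sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# Invented example: a user who weights health twice as heavily as the rest.
scores  = {"income": 6.0, "health": 8.0, "work-life balance": 4.0}
weights = {"income": 1.0, "health": 2.0, "work-life balance": 1.0}

print(personalized_index(scores, weights))  # 6.5
```

Note that the weights are exactly the stated preferences the rest of this section questions: change them, and the same underlying scores produce a different "happiness" ranking.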

Cardinal Utility – The Fatal Flaw of Happiness Research?

Unfortunately, the Better Life Index doesn’t fix the underlying problem. Happiness research depends on something known as cardinal utility, the idea that personal preferences can be accurately measured by a third party. But cardinal utility is an outdated view. No one believes in it anymore…no one, that is, except happiness researchers.

Take the Better Life Index. It asks people to self-report how much they value various categories. However, a stated preference doesn’t necessarily equal a demonstrated preference. What’s the difference? A stated preference is saying you’d take a pay cut to have more leisure time. A demonstrated preference is actually following through on it.

“The concept of demonstrated preference is simply this: that actual choice reveals, or demonstrates, a man’s preferences; that is, that his preferences are deducible from what he has chosen in action. Thus, if a man chooses to spend an hour at a concert rather than a movie, we deduce that the former was preferred, or ranked higher on his value scale.” ~ Murray Rothbard, Toward a Reconstruction of Utility and Welfare Economics

Actions speak louder – much louder – than words. There are several ways stated preferences can distort happiness research. First, people alter their preferences all the time; even if a person creates an “accurate” Better Life Index, he might change his mind when it comes time to make an actual choice. Second, a person might think he prefers one thing but, when presented with an actual choice, do something else entirely.

“In vacuo, a few consumers are questioned at length on which abstract bundle of commodities they would prefer to another abstract bundle, and so on. Not only does this suffer from the constancy error, no assurance can be attached to the mere questioning of people when they are not confronted with the choices in actual practice. Not only will a person’s valuation differ when talking about them from when he is actually choosing, but there is also no guarantee that he is telling the truth.” ~ Murray Rothbard, Toward a Reconstruction of Utility and Welfare Economics

Guerrilla Explorer’s Analysis

So, maybe happiness research isn’t all that accurate. So what? It’s just for fun, right? Well, maybe not. It might seem innocuous, but there’s a dark side to it. Politicians and bureaucrats around the world are taking a page out of Jeremy Bentham’s book and claiming public policy can be used to engineer societal happiness.

“The most commonly cited statistic in happiness economics is the rule that somewhere between $40,000 and $110,000, a higher salary doesn’t buy much more joy or satisfaction. Many people draw the bright white line at $70,000. This provides a strong utilitarian impulse to raise taxes on the rich, who apparently can’t buy much happiness with their extra millions, and to funnel the money to the poor to bring them closer to $70,000. … But one reason why incomes differ is that some people care more about making money than others.” ~ Derek Thompson, The New Economics of Happiness

We barely understand happiness. And we certainly can’t measure it with any type of accuracy. So, the idea that politicians and bureaucrats can engineer it is an illusion. But that won’t stop them from trying. In the end, the research that is supposed to be improving lives might just end up ruining them.

“Apologists for Marxism have made myriad excuses for their ideology’s failure to provide the same standard of living and liberty as was enjoyed in capitalist nations. Until recently, few have been so brazen as to claim that lowering living standards and curtailing freedom were the intended consequences, let alone that people would be happier with less of either. … Limiting choice, reducing wealth and lowering aspirations are now openly advocated as desirable ends in themselves.” ~ Christopher Snowdon, The Spirit Level Delusion: Fact-Checking the Left’s New Theory of Everything

For Further Reading: The Trojan Horse of Happiness Research by Thomas J. DiLorenzo