Category Archives: Socialism

A Lesson for Socialists – WSJ

I am not sure why this is a difficult concept.

=====

Years ago, when an editor asked me if Boeing would be around to pay off a 100-year bond it had recently offered, I flippantly replied that 100 years was only two product cycles for the company.

I underestimated the duration of its products. The Boeing 747 first flew in 1969 and a freighter version will continue to be built near Seattle at least through 2022. The Boeing 737, which first flew in 1967, faces an order backlog that extends through 2027. An all-new replacement for the commuter workhorse is unlikely to appear until the 2030s.

Which makes all the more anomalous Airbus’s decision to end production of its impressive and giant A380, which has been flying only since 2005.

Socialism is currently in vogue. If the word means anything in today’s context, it means projects of unusual government ambition, built on our globally shared capitalist technological and commercial base. The A380 was exactly such a project. Underwritten by massive European government subsidies, the plane was an engineering sensation. Passengers loved the roomy jet. Yet now it’s kaput. What went wrong? Or to phrase the question more usefully, what technological and commercial realities would its sponsors have had to overrule to assure its success?

The list is not a short one. They would have had to overrule the desire of passengers to fly direct, bypassing the crowded hub airports (like London’s Heathrow) for which the A380 was built.

They would have had to overrule the preference of business travelers for frequent departures. With 535 seats to fill, the superjumbo was hopelessly matched against operators offering more convenient schedules by using smaller planes.

Most of all, they would have had to overrule the public’s appetite for lower fares. On a per-seat basis, a new generation of super-efficient twin-engine planes such as the Boeing 787 proved cheaper to operate even though the four-engine A380 could accommodate twice as many customers.

In the end, enough socialism could be mobilized to get the plane built, but not enough to make it commercially viable. Europe’s governments would have needed to extend their dominion beyond their own taxpayers who financed it. They would have needed to dictate to the world’s airlines and travelers and even the aerospace industry’s global supplier base, which proved unwilling to develop a new fuel-efficient engine for a plane with a doubtful future.

This should guide us in our thinking about what kind of “socialism” is possible today. Governments can tax their own people until they rebel at the ballot box, refuse to pay, or emigrate. They have no power, in our world, to dictate what kinds of goods and services and technologies (green or otherwise) the global marketplace will accept.

Airbus’s A380 debacle shows how hard it is for state planners to outguess markets.

When the end came, it came because the A380’s last dedicated customer, the government-backed Emirates Airline of Dubai, gave up on the superjumbo. Planes in pristine condition were lingering unsold on the used-plane market. A 10-year-old jet was recently retired by Singapore Airlines. Now it’s being broken up for scrap, proving once again socialism’s knack for making grown men cry.

Boeing’s management was vilified at the time for declining to compete with Airbus to replace its own fabulously successful 747 jumbo jet. But Boeing treated its business like a business. Its forecasts showed the market was likely to evolve in ways unfavorable to another very large passenger plane.

French and German politicians ignored such considerations. They were more interested in making a showy statement about Europe’s technological prowess. Boeing chafed for decades at the subsidies they poured into Airbus. Airbus, for its part, was not above portraying the money U.S. taxpayers spent defending the free world as a backdoor handout to Boeing through its defense business. This debate is likely now to get an ugly second wind if U.S. negotiators insist that Airbus pay back the estimated $20 billion in “launch aid” the A380 failed to recoup (the answer will certainly be no).

The parallel to California’s bullet train hardly needs to be drawn. Gov. Gavin Newsom seems already to be walking back his apparent cancellation of the grossly over-budget project. He may hope that Green New Deal dollars from Washington will become available after 2020 to replace the funds California isn’t willing to provide.

But California voters have already gotten the right message: Billions were poured into the project so former Gov. Jerry Brown wouldn’t have to admit a mistake.

The same consideration for years deterred Airbus from blowing the whistle on the A380, but let’s end on a positive note. Today the socialist miscalculations of our infallible leaders are measured mainly in dollars. This represents a great leap forward over the socialist failures that characterized the last century.

By Holman W. Jenkins, Jr. (WSJ Business World)


A Short History of American Medical Insurance – Imprimis

One of the best “short” analyses of how the USA got to where we are relative to health insurance.
Imprimis is a publication of Hillsdale College.
==========
Imprimis – September 2018
By John Steele Gordon

The following is adapted from a talk delivered on board the Crystal Symphony on July 25, 2018, during a Hillsdale College educational cruise to Hawaii.

Perhaps the most astonishing thing about modern medicine is just how very modern it is. More than 90 percent of the medicine being practiced today did not exist in 1950. Two centuries ago medicine was still an art, not a science at all. As recently as the 1920s, long after the birth of modern medicine, there was usually little the medical profession could do, once disease set in, other than alleviate some of the symptoms and let nature take its course. It was the patient’s immune system that cured him—or that didn’t.

It was only around 1930 that the power of the doctor to cure and ameliorate disease began to increase substantially, and that power has continued to grow nearly exponentially ever since. This new power to extend life, interacting with the deepest instinctual impulse of all living things—to stay alive—has had consequences that our society is only beginning to comprehend and address. Since ancient times, for example, doctors have fought death with all the power at their disposal and for as long as life remained. Today, the power to heal has become so mighty that we increasingly have the technical means to extend indefinitely the shadow, while often not the substance, of life. When doctors should cease their efforts and allow death to have its inevitable victory is an issue that will not soon be settled, but it cannot be much longer evaded.

Then there is the question of how to pay for modern medicine, the costs of which are rising faster than any other major national expenditure. In 1930, Americans spent $2.8 billion on health care—$23 per person and 3.5 percent of the Gross Domestic Product. In 2015 we spent about $3 trillion—$9,536 per person and 15 percent of GDP. Adjusted for inflation, this means that per capita medical costs in the United States have risen by a factor of 30 in 85 years.
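
That factor-of-30 figure can be roughly checked from the per-person numbers once general inflation is stripped out. Below is a minimal back-of-the-envelope sketch; the CPI-U averages used to deflate the dollars are my own approximate assumptions, not figures from the article.

```python
# Back-of-the-envelope check of the "factor of 30" claim.
# The CPI-U averages below are approximate assumptions, not from the article.
per_capita_1930 = 23       # dollars per person, from the article
per_capita_2015 = 9_536    # dollars per person, from the article

cpi_1930 = 16.7            # assumed approximate CPI-U annual average, 1930
cpi_2015 = 237.0           # assumed approximate CPI-U annual average, 2015

nominal_ratio = per_capita_2015 / per_capita_1930   # roughly 415x in nominal dollars
inflation = cpi_2015 / cpi_1930                     # general prices rose roughly 14x
real_ratio = nominal_ratio / inflation              # about 29x, i.e. a factor of roughly 30

print(f"nominal: {nominal_ratio:.0f}x, inflation: {inflation:.1f}x, real: {real_ratio:.0f}x")
```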

Consider the 1980s, when medical expenses in the U.S. increased 117 percent. Forty-three percent of the rise was due to general inflation. Ten percent can be attributed to the American population growing both larger and older (as it still is). Twenty-three percent went to pay for technology, treatments, and pharmaceuticals that had not been available when the decade began—a measure of how fast medicine has been advancing. But that still leaves 24 percent of the increase unaccounted for, and that 24 percent is due solely to an inflation peculiar to the American medical system itself.
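
For readers who want the decomposition spelled out, the shares quoted above can be converted into percentage points of the decade’s 117 percent rise. A small sketch, using only the figures given in the paragraph; the conversion to points is my own arithmetic, not the article’s.

```python
# Decomposition of the 1980s rise in U.S. medical spending, using the article's shares.
total_rise = 117.0  # percent increase in medical expenses over the decade

shares = {
    "general inflation": 43,
    "larger and older population": 10,
    "new technology, treatments, and drugs": 23,
}
shares["inflation peculiar to the medical system"] = 100 - sum(shares.values())  # 24

for cause, pct_of_rise in shares.items():
    points = total_rise * pct_of_rise / 100
    print(f"{cause}: {pct_of_rise}% of the rise (about {points:.0f} points of the 117%)")
```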

Whenever one segment of an economy exhibits, year after year, inflation above the general rate, and when there is no constraint on supply, then either a cartel is in operation or there is a lack of price transparency—or both, as is the case with American medical care.

So it is clear that there is something terribly wrong with how health care is financed in our country. And a consensus on how to fix the problem—how to provide Americans the best medicine money can buy for the least amount of money that will buy it—has proved elusive. But the history of American medical care, considered in the light of some simple but ineluctable economic laws, can help point the way. For it turns out that the engines of medical inflation were deeply, and innocently, inserted into the health care system just as the medical revolution began.

***

It was the Greeks—the inventors of the systematic use of reason that 2,000 years later gave rise to modern science—who first recognized that disease is caused by natural, not supernatural, forces. They reduced medicine to a set of principles, usually ascribed to Hippocrates but actually a collective work. In the second century, the Greek physician Galen, a follower of the Hippocratic School, wrote extensively on anatomy and medical treatment. Many of these texts survived and became almost canonical in their influence during the Middle Ages. So it is fair to say that after classical times, the art of medicine largely stagnated. Except for a few drugs—such as quinine and digitalis—and an improved knowledge of gross anatomy, the physicians practicing in the U.S. at the turn of the nineteenth century had hardly more at their disposal than the Greeks had in ancient times.

In 1850 the U.S. had 40,755 people calling themselves physicians, more per capita than the country would have in 1970. Few of this legion had formal medical education, and many were unabashed charlatans. This is not to say that medical progress was standing still. The stethoscope was invented in 1816. The world’s first dental school opened in Baltimore in 1839. The discovery of anesthesia in the 1840s was immensely important—although while it made extended operations possible, overwhelming postoperative infections killed many patients, so most surgery remained a last-ditch effort. Another major advance was the spread of clean water supplies in urban areas, greatly reducing epidemics of waterborne diseases, such as typhoid and cholera, which had ravaged cities for centuries.

Then finally, beginning in the 1850s and 1860s, it was discovered that many diseases were caused by specific microorganisms, as was the infection of wounds, surgical and other. The germ theory of disease, the most powerful idea in the history of medicine, was born, and medicine as a science was born with it. Still, while there was a solid scientific theory underpinning medicine, most of its advances in the late nineteenth and early twentieth centuries were preventive rather than curative. Louis Pasteur and others, using their new knowledge of microorganisms, could begin developing vaccines. Rabies fell in 1885, and several diseases that were once the scourge of childhood, such as whooping cough and diphtheria, followed around the turn of the century. Vitamin deficiency diseases, such as pellagra, began to decline a decade later. When the pasteurization of milk began to be widely mandated around that time, the death rate among young children plunged. In 1891, the death rate for American children in the first year of life was 125.1 per 1,000. By 1925 it had been reduced to 15.8 per 1,000, and the life expectancy of Americans as a whole began a dramatic rise.
Hospital “Insurance”

One of the most fundamental changes caused by the germ theory of disease, one not foreseen at all, was the spread of hospitals for treating the sick. Hospitals have an ancient history, but for most of that history they were intended for the very poor, especially those who were mentally ill or blind or who suffered from contagious diseases such as leprosy. Anyone who could afford better was treated at home or in nursing facilities operated by a private physician. Worse, until rigorous antiseptic and later aseptic procedures were adopted, hospitals were a prime factor in spreading, not curing, disease. Thus, until the late nineteenth century, hospitals were little more than a place for the poor and the desperate to die. In 1873, there were only 149 hospitals in the entire U.S. A century later there were over 7,000, and they had become the cutting edge of both clinical medicine and medical research.

But hospitals had a financial problem from the very beginning of scientific medicine. By their nature they are extremely labor intensive and expensive to operate. Moreover, their costs are relatively fixed and not dependent on the number of patients being served. To help solve this problem, someone in the late 1920s had a bright idea: hospital insurance. The first hospital plan was introduced in Dallas, Texas, in 1929. The subscribers, some 1,500 schoolteachers, paid six dollars a year in premiums, and Baylor University Hospital agreed to provide up to 21 days of hospital care to any subscriber who needed it.

While this protected schoolteachers from unexpected hospital costs in exchange for a modest fee, the driving purpose behind the idea was to improve the cash flow of the hospital. Thus the scheme had an immediate appeal to other medical institutions, and it quickly spread. Before long, groups of hospitals were banding together to offer plans that were honored at all participating institutions, giving subscribers a choice of which hospital to use. This became the model for Blue Cross, which first operated in Sacramento, California, in 1932.

Although called insurance, these hospital plans were unlike any other insurance policies. Previously, insurance had always been used to protect only against large, unforeseeable losses, and came with a deductible. But the first hospital plans didn’t work that way. Instead of protecting against catastrophe, they paid all costs up to a certain limit. The reason, of course, is that they were instituted not by insurance companies, but by hospitals, and were primarily designed to generate steady demand for hospital services and guarantee a regular cash flow.

In the early days of hospital insurance, this fundamental defect was hardly noticeable. Twenty-one days was a very long hospital stay, even in 1929, and with the relatively primitive medical technology then available, the daily cost of hospital care per patient was roughly the same whether the patient had a baby, a bad back, or a brain tumor. Today, on the other hand, this “front-end” type of hospital insurance simply would not cover what most of us need insurance against: the serious, long-term, expensive-to-cure illness. In the 1950s, major medical insurance, which does protect against catastrophe rather than misfortune, began to provide that sort of coverage. Unfortunately it did not replace the old plans in most cases, but instead supplemented them.

The original hospital insurance also contained the seeds of two other major economic dislocations, unnoticed in the beginning, that have come to loom large. The first dislocation is that while people purchased hospital plans to be protected against unpredictable medical expenses, the plans only paid off if the medical expenses were incurred in a hospital. As a result, cases that could be treated on an outpatient basis instead became much more likely to be treated in the hospital—the most expensive form of medical care.

The second dislocation was that hospital insurance did not provide indemnity coverage, which is when the insurance company pays for a loss and the customer decides how best to deal with it. Rather than indemnification, the insurance company provided service benefits. In other words, it paid the bill for services covered by the policy, whatever the bill was. As a result, there was little incentive for the consumer of medical services to shop around. With someone else paying, patients quickly became relatively indifferent to the cost of medical care.

These dislocations perfectly suited the hospitals, which wanted to maximize the amount of services they provided and thereby maximize their cash flow. If patients are indifferent to the costs of medical services they buy, they are much more likely to buy more of them and the cost of each service is likely to go up. There is no price competition to keep prices in check.

Predictably, the medical profession began to lobby in favor of retaining this system. In the mid-1930s, as Blue Cross plans spread rapidly around the country, state insurance departments moved to regulate them and force them to adhere to the same standards as regular insurance plans. Had hospital insurance come to be regulated like other insurance, those offering it would have begun acting more like insurance companies, and the economic history of modern American medicine might have taken a very different turn. But that didn’t happen, largely because doctors and hospitals, by and for whom the plans had been devised in the first place, moved to prevent it from happening. The American Hospital Association and the American Medical Association worked hard to exempt Blue Cross from most insurance regulation, offering in exchange to enroll anyone who applied and to operate on a nonprofit basis.

The Internal Revenue Service, meanwhile, ruled that these companies were charitable organizations and thus exempt from federal taxes. Freed from taxes and from the regulatory requirement to maintain large reserve funds, Blue Cross and Blue Shield (a plan that paid physicians’ fees on the same basis as Blue Cross paid hospital costs) came to dominate the market in health care insurance, holding about half of the policies outstanding by 1940. In order to compete, private insurance companies were forced to model their policies along Blue Cross and Blue Shield lines. Thus hospitals came to be paid almost always on a cost-plus basis, receiving the cost of the services provided plus a percentage to cover the costs of invested capital. Any incentive for hospitals to be efficient and reduce costs vanished.

In recent years, hospital use has been falling steadily as the population has gotten ever more healthy and surgical procedures have become far less traumatic. The result is a steady increase in empty beds. There were over 7,000 hospitals in the U.S. in 1975, compared to about 5,500 today. But that reduction has not been nearly enough. Because of the cost-plus way hospitals are paid, they don’t compete for patients by means of price, which would force them to retrench and specialize. Instead they compete for doctor referrals, and doctors want lots of empty beds to ensure immediate admission and lots of fancy equipment, even if the hospital just down the block has exactly the same equipment. The inevitable result, of course, is that hospital costs on a per-patient per-day basis have skyrocketed.

Doctors, meanwhile, were paid for their services according to “reasonable and customary” charges. In other words, doctors could bill whatever they wanted to as long as others were charging roughly the same. The incentive to tack a few dollars on to the fee became strong. The incentive to take a few dollars off, in order to increase market share, ceased to exist. As more and more Americans came to be covered by health insurance, doctors were no longer even able to compete with one another.
Modern Developments

During World War II, another feature of the American health care system with large financial implications for the future developed: employer-paid health insurance. With twelve million working-age men in the armed forces and the economy in overdrive, the American labor market was tight in the extreme. But wartime wage and price controls prevented companies from competing for available talent by means of increased wages and salaries. They had to compete with fringe benefits instead, and free health insurance was tailor-made for this purpose.

The IRS ruled that the cost of employee health care insurance was a tax-deductible business expense, and in 1948 the National Labor Relations Board ruled that health benefits were subject to collective bargaining. Companies had no choice but to negotiate with unions about them, and unions fought hard to get them.

The problem was that company-paid health insurance further increased the distance between the consumer of medical care and the purchaser of medical care. When individuals have to pay for their own health insurance, they at least have an incentive to buy the most cost-effective plan available, given their particular circumstances. But beginning in the 1940s, a rapidly increasing number of Americans had no rational choice but to take whatever health care plan their employers chose to provide.

There is another aspect of employer-paid health insurance, unimagined when the system first began, that has had pernicious economic consequences in recent years. Insurers base the rates they charge, naturally enough, on the total claims they expect to incur. Auto insurers determine this by looking at what percentage of a community’s population had auto accidents in recent years and how much repairs cost in that community. This is known as community rating. They also look at the individual driver’s record, the so-called experience rating. Most insurance policies are based on a combination of community and experience ratings. And for most forms of insurance, the size of the community that is rated is quite large, eliminating the statistical anomalies that skew small samples. For example, a person isn’t penalized because he happens to live on a block with a lot of lousy drivers. But employer-paid health insurance is an exception. It can be based on the data for each company’s employees, allowing insurance companies to cherry-pick businesses with healthy employees, driving up the cost of insurance for everyone else. The effects of this practice are clear: 65 percent of workers without health insurance work for companies with 25 or fewer employees.
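
To make the small-sample point concrete, here is a hypothetical simulation (my own illustration, not from the article): a large community pool produces a stable expected claim cost per member, while 25-person employer groups swing between looking claim-free, and therefore cheap to cherry-pick, and looking catastrophically expensive. The cost and illness-probability figures are invented solely for illustration.

```python
# Hypothetical illustration of community rating vs. small-group rating.
# All numbers are invented assumptions, chosen only to show the variance effect.
import random

random.seed(0)
SERIOUS_ILLNESS_COST = 50_000   # assumed claim cost when someone gets seriously ill
PROB_SERIOUS_ILLNESS = 0.02     # assumed yearly probability per person

def average_claim_per_member(group_size: int) -> float:
    """Simulate one group's realized claims and return the per-member average."""
    claims = sum(
        SERIOUS_ILLNESS_COST
        for _ in range(group_size)
        if random.random() < PROB_SERIOUS_ILLNESS
    )
    return claims / group_size

# A large community pool lands near the true expected cost (about $1,000 per member)...
print(round(average_claim_per_member(100_000)))

# ...while 25-person firms scatter widely: many show zero claims (and get cherry-picked),
# and the unlucky ones face per-member costs several times the community rate.
print([round(average_claim_per_member(25)) for _ in range(8)])
```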

By 1960, as the medical revolution was quickly gaining speed, the economically flawed private health care financing system was fully in place. Then two other events added to the gathering debacle.

In 1965, government entered the medical market with Medicare for the elderly and Medicaid for the poor. Both doctors and hospitals had fought tooth and nail to prevent what they called “socialized medicine” from gaining a foothold in the U.S. As a result of their strident opposition, when the two programs were finally enacted, they were structured much like Blue Cross and Blue Shield, only with government picking up much of the tab. And when Medicare and Medicaid proved a bonanza for health care providers, their vehement opposition quickly faded away. The two new systems greatly increased the number of people who could afford advanced medical care, and the incomes of medical professionals soared, roughly doubling in the 1960s.

But perhaps the most important consequence of these new programs was the power over hospitals they gave to state governments. State governments became the largest single source of funds for virtually every major hospital in the country, giving them the power to influence—or even dictate—the policy decisions made by these hospitals. As a result, these decisions were increasingly made for political, rather than medical or economic, reasons. To take one example, closing surplus hospitals or converting them to specialized treatment centers became much more difficult. Those adversely affected—the local neighborhood and hospital workers unions—would naturally mobilize to prevent it. Society as a whole, which stood to gain, would not.

Finally, there was the litigation explosion of the last 50 years. For every medical malpractice suit filed in the U.S. in 1969, 300 were filed in 1990. While reforms at the state level (notably in Texas) have reduced the number, lawsuits have sharply driven up the cost of malpractice insurance—a cost passed directly on to patients and their insurance companies. Neurosurgeons, even with excellent records, can pay as much as $300,000 a year for coverage. Doctors in less lawsuit-prone specialties are also paying much higher premiums and are forced to order unnecessary tests and perform unnecessary procedures to avoid being second-guessed in court.

***

Given this short history, it followed as the night follows day that medical costs began to rise over and above inflation, population growth, and the cost of medical advances. The results for the country as a whole are plain to see. In 1930 we spent 3.5 percent of American GDP on health care; in 1950, 4.5 percent; in 1970, 7.3 percent; in 1990, 12.2 percent. Today we spend 15 percent. American medical care over this period has saved the lives of millions who could not have been saved before—life expectancy today is 78.6 years. It has relieved the pain and suffering of tens of millions more. But it has also become a monster that is devouring the American economy.

Is there a way out?

One possible answer, certainly, is a national health care service, such as that pioneered in Great Britain after World War II. But our federal government already runs three single-payer systems—Medicare, the Veterans Health Administration, and the Indian Health Service—each of which is in a shambles, noted for fraud, waste, and corruption. Why would we want to turn over all of American medicine to those who have proved themselves incompetent to run large parts of it?

A far better and cheaper alternative would be to reform the economics of the present system.

The most important thing to do, by far, is to require medical service providers to make public their inclusive prices for all procedures. Most hospitals keep their prices hidden in order to charge more when they can, such as with the uninsured. But some facilities do post their prices. The Surgery Center of Oklahoma, for instance, does so on its website. A knee replacement there will cost you $15,499, a mastectomy $6,505, a rotator cuff repair $8,260.

Once prices are known and can be compared, competition—capitalism’s secret weapon—will immediately drive prices towards the low end, draining hundreds of billions of dollars in excess charges out of the system. Posting prices will also force hospitals to become more efficient and innovative, in order to stay competitive.

Any politician who pontificates about reforming health care without talking about making prices public is carrying water for one or more of the powerful lobbyists that have stymied real reform, such as the American Hospital Association, the American Medical Association, and the health workers unions.

Second, we should reform how malpractice is handled. We should get rid of the so-called American rule, where both sides pay their own legal expenses regardless of outcome, and adopt the English rule—employed in the rest of the common-law world—where the loser pays the expenses of both sides.

Third, we need to ensure that the consumers of medical care—you and me—care about the cost of medical care. Getting patients to shop for lower-cost services is vital.

A generous health insurance policy more or less covers everything from a sniffle to a heart transplant. It shouldn’t. An insurance policy that covers routine care isn’t even an insurance policy, properly speaking—it is a very expensive pre-payment plan that jacks up premiums. Just as oil changes are not covered by automobile insurance, annual flu shots and scraped knees should not be covered by medical insurance. One way to achieve this would be for employers to provide major medical insurance plus a health savings account to take care of routine health care. If the money in the account is not spent on health care, it would be rolled over into the employee’s 401(k) account at the end of the year, giving him an incentive to shop wisely for routine medical care.

Finally, we need to get the practitioners of modern medicine to recognize an age-old reality: there is no cure for old age itself. Maybe someday we’ll be able to 3-D print a new body and have the data in our brain downloaded to it. But for the time being, when the body begins to break down systemically, we should let nature take its course.

There are enormous forces arrayed against these economically sensible reforms. Defenders of the status quo are the most potent lobbyists in Washington and the state capitals. This is not to mention the leftist proponents of single payer, who favor whatever will increase the power and scope of government. So it won’t be an easy fight. But at least we have one thing on our side—Stein’s law, named after the famous economist Herbert Stein: “If something cannot go on forever, it will stop.”

https://imprimis.hillsdale.edu/short-history-american-medical-insurance/


Is the Government Really Helping the “Poor”?

Quite interesting. The picture often looks quite different when you put all the variables on the table or into the algorithm.
======

By Phil Gramm and John F. Early

“The War on Poverty is not a struggle simply to support people,” declared President Lyndon B. Johnson in 1964. “It is an effort to allow them to develop and use their capacities.” During the 20 years before the War on Poverty was funded, the portion of the nation living in poverty had dropped to 14.7% from 32.1%. Since 1966, the first year with a significant increase in antipoverty spending, the poverty rate reported by the Census Bureau has been virtually unchanged.

Last year a United Nations investigator using census data found “shocking” evidence that 40 million Americans live in “squalor and deprivation,” in a country where “tax cuts will fuel a global race to the bottom.” He continued: “The criminal justice system is effectively a system for keeping the poor in poverty,” and reported that “the demonizing of taxation means that legislatures effectively refuse to levy taxes.”

If that doesn’t sound like the country you live in, that’s because it isn’t. The Census Bureau counts as poor all people in families with incomes lower than the established income thresholds for their respective family size and composition. The thresholds, first set in 1963, are based on a multiple of the cost of a budget for adequately nutritious food, adjusted for inflation. While the Census Bureau reports that in 2016 some 12.7% of Americans lived in poverty, it is impossible to reconcile this poverty rate, which has remained virtually unchanged over the last 50 years, with the fact that total inflation-adjusted government-transfer payments to low-income families have risen steadily. Transfers targeted to low-income families increased in real dollars from an average of $3,070 per person in 1965 to $34,093 in 2016.

Even these numbers significantly understate transfer payments to low-income families since they exclude Medicare and Social Security, which provide large subsidies to low-income retirees. Compared with what they pay in Social Security taxes, the lowest quintile of earners can receive as much as 10 times the lifetime benefits received by the highest quintile of earners and three times as much as the middle quintile.

The measured poverty rate has remained virtually unchanged only because the Census Bureau doesn’t count most of the transfer payments created since the declaration of the War on Poverty. The bureau measures poverty using what it calls “money income,” which includes earned income and some transfer payments such as Social Security and unemployment insurance. But it excludes food stamps, Medicaid, the portion of Medicare going to low-income families, Children’s Health Insurance, the refundable portion of the earned-income tax credit, at least 87 other means-tested federal payments to individuals, and most means-tested state payments.

If government counted these missing $1.5 trillion in annual transfer payments, the poverty rate would be less than 3%. The 3% poverty rate determined by counting more of the government transfers to low-income families is virtually identical to the number economists Bruce Meyer and James Sullivan found in a 2016 study, which measured actual consumption by poor families. The number also reconciles the current disparity between the low income levels used by the Census Bureau to define poverty and studies such as the Department of Energy Residential Consumption Survey, which find consistently rising spending among poor families on cars, home electronics, cable, household appliances, smartphones and living space.

The 3% poverty rate would fall even further if it accounted for transfers within families, some $500 billion of private charitable giving, and the multibillion-dollar informal economy, where income is unreported.

Transfer payments essentially have eliminated poverty in America. Transfers now constitute 84.2% of the disposable income of the poorest quintile of American households and 57.8% of the disposable income of lower-middle-income households. These payments also make up 27.5% of America’s total disposable income.

The stated goal of the War on Poverty is not just to raise living standards, but also to make America’s poor more self-sufficient and to bring them into the mainstream of the economy. In that effort the war has been an abject failure, increasing dependency and largely severing the bottom fifth of earners from the rewards and responsibilities of work.

In 1965, before funds were appropriated for War on Poverty programs, all five income quintiles had more families in which at least one person worked than families in which the head of household was of prime working age. So broadly based was the work ethic that the lowest income quintile had only 5.4% more families with working-age heads and no one working than did the middle quintile. The lower-middle quintile actually had proportionately fewer families where no one worked than did the middle quintile.

The expanding availability of antipoverty transfers has devastated the work effort of poor and lower-middle income families. By 1975 the lowest-earning fifth of families had 24.8% more families with a prime-working-age head and no one working than did their middle-income peers. By 2015 this differential had risen to 37.1%. And by that same year, even families in the lower-middle income quintile headed by working-age persons were almost 6% more likely to have no one working than a similar family in the middle-income quintile.

Even these numbers understate the decline in work among low-income Americans that has accompanied the War on Poverty. Compared with the low-income quintile, the lower-middle quintile today has three times as many families with two or more workers, and the middle quintile has five times as many. The trend illustrates how the War on Poverty produced an unprecedented decline in work effort among those who received benefits.

The massive reduction in material poverty that government transfers have allowed has come at a considerable and underappreciated cost. The War on Poverty has increased dependency and failed in its primary effort to bring poor people into the mainstream of America’s economy and communal life. Government programs replaced deprivation with idleness, stifling human flourishing. It happened just as President Franklin Roosevelt said it would: “The lessons of history,” he said in 1935, “show conclusively that continued dependency upon relief induces a spiritual and moral disintegration fundamentally destructive to the national fiber.”

Mr. Gramm is a former Chairman of the Senate Banking Committee. Mr. Early served twice as assistant commissioner at the Bureau of Labor Statistics and is president of Vital Few LLC. Bob Ekelund and Mike Solon contributed to this article.


Marx’s Apologists Should Be Red in the Face

WSJ – 5/3/2018 Paul Kengor
OK, one of my daughter’s favorite professors…
=====
May 5 marks the bicentennial of Karl Marx, who set the stage with his philosophy for the greatest ideological massacres in history. Or did he?

He did, but deniers still remain. “Only a fool could hold Marx responsible for the Gulag,” writes Francis Wheen in “Karl Marx: A Life” (1999). Stalin, Mao and Kim Il Sung, Mr. Wheen insists, created “bastard creeds,” “wrenched out of context” from Marx’s writings.

Marx has been accused of ambiguity in his writings. That critique is often justified, but not always. In “The Communist Manifesto,” he and Friedrich Engels were quite clear that “the theory of the Communists may be summed up in the single sentence: abolition of private property.”

“You are horrified at our intending to do away with private property,” they wrote. “But in your existing society, private property is already done away with for nine-tenths of the population.” And this: “In one word, you reproach us with intending to do away with your property. Precisely so; that is just what we intend.”

Marx and Engels acknowledged that their views stood undeniably contrary to the “social and political order of things.” Communism seeks to “abolish the present state of things” and represents “the most radical rupture in traditional relations.” Toward that end, the manifesto offers a 10-point program, including “abolition of property in land,” “a heavy progressive or graduated income tax,” “abolition of all right of inheritance,” “centralization of credit in the hands of the state, by means of a national bank with state capital and an exclusive monopoly,” “centralization of the means of communication and transport in the hands of the state” and the “gradual abolition of all the distinction between town and country by a more equitable distribution of the population over the country.”

In a preface to their 10 points, Marx and Engels acknowledged their coercive nature: “Of course, in the beginning, this cannot be effected except by means of despotic inroads.” In the close of the Manifesto, Marx said, “The Communists . . . openly declare that their ends can be attained only by the forcible overthrow of all existing social conditions.”

They were right about that. Human beings would not give up fundamental liberties without resistance. Seizing property would require a terrible fight, including the use of guns and gulags. Lenin, Trotsky, Stalin and a long line of revolutionaries and dictators candidly admitted that force and violence would be necessary.

We’re told the philosophy was never the problem—that Stalin was an aberration, as were, presumably, Lenin, Trotsky, Ceausescu, Mao, Pol Pot, Ho Chi Minh, the Kims and the Castros, not to mention the countless thousands of liquidators in the NKVD, the GRU, the KGB, the Red Guard, the Stasi, the Securitate, the Khmer Rouge, and on and on.

Couldn’t any of them read? Yes, they could read. They read Marx. The rest is history—ugly, deadly history.

Mr. Kengor is professor of political science at Grove City College. His books include “A Pope and a President: John Paul II, Ronald Reagan and the Extraordinary Untold Story of the 20th Century” and “The Politically Incorrect Guide to Communism.”
