
Search Results

  • Author: Andrew Bernstein
  • Publication Date: 12-2012
  • Content Type: Journal Article
  • Journal: The Objective Standard
  • Institution: The Objective Standard
  • Abstract: On the morning of September 11, 2001, Mohammed Atta and his minions flew stolen planes into the World Trade Center and the Pentagon, destroying the former and murdering thousands of innocent civilians. What motivated this atrocity? What filled the murderers with such all-consuming hatred that they were willing to surrender their own lives in order to kill thousands of innocent human beings? The clear answer is that these were religious zealots engaged in holy war with their primordial enemy—the embodiment of the modern secular West: the United States of America. In their evil way, the Islamists provide mankind with some clarity. They remind us of what real religion is and looks like—not the Christianity or Judaism of the modern West, watered down and diluted by the secular principles of the Renaissance and the Enlightenment; but real faith-based, reason-rejecting, sin-bashing, kill-the-infidels religion. The atrocities of 9/11 and other similar terrorist acts by Islamists do not clash with their creed. On the contrary, they are consistent with the essence of religion—not merely of Islam—but religion more broadly, religion as such. This is an all-important lesson that humanity must learn: Religion is hazardous to your health. Unfortunately, conventional views of religion hold just the opposite. Many people believe that religion is the necessary basis of morality—that without belief in God, there can be no ethics, no right or wrong. A character in Dostoyevsky's The Brothers Karamazov famously expressed this view: “In a world without God, all things become permissible.” In the 21st century, many people still believe this. But the converse is true. A rational, fact-based, life-promoting morality is impossible on religious premises. Indeed, religion clashes with every rational principle and factual requirement of a proper, life-advancing ethics.
A proper ethics, one capable of promoting flourishing human life on earth, requires the utter repudiation of religion—of all of its premises, tenets, implications, and consequences. To begin understanding the clash between religion and human life, consider the Dark Ages, the interminable centuries following the fall of Rome in the 5th century AD. The barbarian tribes that overran Rome eventually converted to Christianity, which, in the form of the Catholic Church, became the dominant philosophic and cultural force of medieval Europe. Unlike the essentially secular classical world, or the post-Renaissance modern world, the medieval world zealously embraced religion as the fundamental source of truth and moral guidance. What were the results in human life?
  • Topic: Health, Islam
  • Political Geography: United States, America, Europe
  • Author: Paul Hsieh
  • Publication Date: 03-2011
  • Content Type: Journal Article
  • Journal: The Objective Standard
  • Institution: The Objective Standard
  • Abstract: If someone in America needs medical care but cannot afford it, should he rely on charity or should others be forced to pay for it? President Obama and his political allies say that Americans should be forced to pay for it. Forcing some Americans to pay medical bills for other Americans, says Obama, is a “moral imperative”1 and “the right thing to do.”2 Throughout the health-care debate of 2010–11, Obama repeatedly referred to government-run health care as “a core ethical and moral obligation,” arguing that, “No one should die because they cannot afford health care, and no one should go broke because they get sick.”3 In speeches, he repeatedly cited the story of Natoma Canfield, an Ohio cancer patient without health insurance, as a justification for his health-care legislation.4 Many of Obama's supporters on the political left made similar moral claims. Vanderbilt University professor Bruce Barry wrote in the New York Times that, “Health insurance in a civilized society is a collective moral obligation.”5 T. R. Reid, former foreign correspondent for the Washington Post, called universal health care a “moral imperative.”6 Ezra Klein, another writer for the Washington Post, agreed that it is an “ethical obligation.”7 But all such claims are wrong—morally wrong. There is no “right” to health care. Rights are not entitlements to goods or services produced by others; rather, they are prerogatives to freedom of action, such as the right to free speech, the right to contract, or the right to use one's property. Any attempt to enforce a so-called “right” to health care necessarily violates the actual rights of those who are forced to provide or pay for that care. If a patient needs a $50,000 operation but cannot afford it, he has the right to ask his friends, family, neighbors, or strangers for monetary assistance—and they have the right to offer it (or not). 
But the patient has no right to take people's money without their permission; to do so would be to violate their rights. His hardship, genuine as it may be, does not justify theft. Nor would the immoral nature of the act be changed by his taking $100 each from five hundred neighbors; that would merely spread the crime to a larger number of victims. Nor would the essence of the act change by his using the government as his agent to commit such theft on an even wider scale. The only moral way for this patient to receive the assistance he needs is for others to offer it voluntarily. Morally, he must rely on charity. Fortunately for him, there is no shortage of people willing to offer charity, nor is there a shortage of reasons why one might self-interestedly wish to do so. . . .
  • Topic: Health
  • Political Geography: America, Washington
  • Author: Ari Armstrong
  • Publication Date: 12-2011
  • Content Type: Journal Article
  • Journal: The Objective Standard
  • Institution: The Objective Standard
  • Abstract: New Brunswick: Transaction Publishers, 2012. 180 pp. $34.95 (hardcover). Reviewed by Ari Armstrong How often does an author defend the right of citizens to own guns and the right of homosexuals to marry—in the same book chapter? In his new book Capitalist Solutions, Andrew Bernstein applies the principle of individual rights not only to “social” issues such as gun rights and gay marriage but also to economic matters such as health care and education and to the threat of Islamic totalitarianism. Bernstein augments his philosophical discussions with a wide range of facts from history, economics, and science. The release of Capitalist Solutions could not have been timed more perfectly: It coincides with the rise of the “Occupy Wall Street” movement that focuses on “corporate greed” and the alleged evils of income inequality. Whereas many “Occupiers” call for more government involvement in various areas of the economy—including welfare support and subsidies for mortgages and student loans—Bernstein argues forcefully that government interference in the market caused today's economic problems and that capitalism is the solution. The introductory essay reviews Ayn Rand's basic philosophical theories, with an emphasis on her ethics of egoism and her politics of individual rights. Bernstein harkens back to this philosophical foundation throughout his book, applying it to the issues of the day. . . .
  • Topic: Economics, Education, Health
  • Political Geography: America
  • Author: Sarah Gelberg
  • Publication Date: 04-2010
  • Content Type: Journal Article
  • Journal: The Objective Standard
  • Institution: The Objective Standard
  • Abstract: When my two-year-old cat, Lily, began vomiting and refused her food and water, I took her to my veterinarian who, after a battery of X-rays and other tests, found nothing conclusive. The vet offered a preliminary diagnosis of gastritis, an inflammation of the stomach lining, and sent us home with medication to treat the condition. When twenty-four hours of the treatment yielded no improvement, we returned to the vet, who admitted Lily for observation overnight. The next evening, the vet phoned to say: "Lily is still vomiting and refusing food and water, so we ran a second set of X-rays and a comparison of the two sets revealed that her intestines are bunching as if something's lodged inside. There's an emergency veterinary clinic twenty miles away that has an ultrasound machine, which will enable us to see what's inside. Please come pick up Lily and drive her there; we'll notify them that you're on your way." The ultrasound revealed a large quantity of thread tangled in Lily's digestive tract. Unbeknownst to me, she had extracted a bobbin of thread from my sewing kit and swallowed the contents. The condition required surgery, which the vet at the emergency clinic performed that night, removing the thread (which was lodged in Lily's stomach, small intestine, and large intestine) without complications. Lily remained in intensive care for two days before the vet sent her home with a scar on her stomach, some antibiotics, and a list of instructions for postoperative care. She recovered fully and was back to mischief in short order. As this story indicates, the state of animal health care in America, in terms of the quality of the diagnostics and treatments available, is in many ways on par with that of human health care. And the fact that advancements in veterinary medicine have progressed in close parallel with those in human medicine should come as little surprise: Animals are important to us. 
They provide us with, among other things, food, labor, and companionship. To ensure that our animals are respectively tasty, reliable, healthy, and happy, we need the services of well-trained veterinarians equipped with the latest technologies. That demand is nicely satisfied. Most veterinarians in private practice specialize in either large-animal or small-animal medicine, a division that roughly corresponds to the distinction between livestock, such as cows and sheep, and companion animals, such as dogs and cats. Small-animal veterinary medicine is, in important respects, remarkably similar to human medicine. The skills required in small-animal medicine are, by and large, the same as those required in human medicine,1 and today's veterinary schools are every bit as rigorous as their counterparts in human medicine. After earning their undergraduate degrees, veterinary students must complete four years of medical training and then pass national and state licensure exams. Those who choose to become specialists must also complete an internship and residency and pass an examination for their chosen specialty.2 The technologies used by veterinarians and those used by medical doctors are similar as well. Vets use many of the same drugs as medical doctors, albeit in different concentrations, doses, and formulations;3 and their facilities are equipped with essentially the same kind of medical equipment to treat essentially the same kinds of medical problems. In fact, a great deal of the medical equipment used in veterinary medicine, including surgical instruments, common devices such as stethoscopes, and CT scan machines, is either identical to that used in human medicine or downsized to accommodate the smaller size of most pets.4 In the United States, advancements in human medicine—whether in training, medications, or facilities—are generally mirrored in small-animal veterinary medicine.
Fortunately for our pets, however, veterinary medicine has not paralleled human medicine in two important respects: accessibility and affordability.
  • Topic: Health
  • Political Geography: America
  • Author: Paul Hsieh
  • Publication Date: 07-2010
  • Content Type: Journal Article
  • Journal: The Objective Standard
  • Institution: The Objective Standard
  • Abstract: On March 23, 2010, President Barack Obama signed into law the Patient Protection and Affordable Care Act (known colloquially as "ObamaCare"), declaring that the law would enshrine "the core principle that everybody should have some basic security when it comes to their health care."1 But, for reasons I have elaborated in previous articles in TOS, far from establishing security regarding Americans' health care, this new law will make quality health care harder to come by and more expensive for everyone. Unfortunately, until our politicians rediscover the principle of individual rights, choose to uphold it, and reverse this monstrosity of a law, we Americans are stuck with it and will have to cope the best we can.
  • Topic: Government, Health
  • Political Geography: United States, America
  • Author: Stella Daily Zawistowski
  • Publication Date: 10-2010
  • Content Type: Journal Article
  • Journal: The Objective Standard
  • Institution: The Objective Standard
  • Abstract: In their desire for less expensive, higher quality, more accessible health care, Americans have accepted a false alternative: fully regulated, socialized medicine, as advocated by Democrats, or semi-regulated, semi-socialized medicine, as advocated by most Republicans. But if Americans want better health care, they must come to recognize that government intervention, great and small, is precisely to blame for America's health care ills. And they must begin to advocate a third alternative: a steady and uncompromising transition toward a rights-respecting, fully free market in health care. In order to see why this is so, let us first consider the unfree, rights-violating nature of American health care today. Under our current semi-socialized health care system (which both Democrats and Republicans created), the government violates the rights of everyone who provides, purchases, insures, or needs health care. It violates the rights of doctors by forcibly subverting their medical judgment to the whims of government bureaucrats or to the heavily regulated insurance companies; it violates the rights of citizens in general by forcing them to buy insurance with a mandated set of benefits; it violates the rights of insurers by prohibiting them from selling plans of their design to customers of their choice at prices they deem economically appropriate; it violates the rights of pharmaceutical companies by forcing them to conduct trials that, in their professional judgment, are unnecessary; and it violates the rights of suffering and dying patients who wish to take trial medications but are forbidden to by law. These instances merely indicate the numerous ways in which the government violates the rights of health care participants, but they are enough to draw the conclusion that Americans are substantially unfree to act in accordance with their own judgment—a fact that alone is sufficient reason to condemn our current system as immoral. 
But, as we shall see, the immoral nature of the current system is also precisely what makes it impractical. The system is in shambles because of these rights violations, a fact borne out by an examination of the three aspects of health care of most concern to Americans: its cost, its quality, and its accessibility.
  • Topic: Health
  • Political Geography: America
  • Author: Paul Hsieh
  • Publication Date: 09-2008
  • Content Type: Journal Article
  • Journal: The Objective Standard
  • Institution: The Objective Standard
  • Abstract: Identifies the theory behind the Massachusetts mandatory health insurance program, exposes the program as a fiasco, explains why the theory had to fail in practice, and sheds light on the only genuine, rights-respecting means to affordable, accessible health care for Americans.
  • Topic: Government, Health
  • Political Geography: America
  • Author: Stella Daily
  • Publication Date: 09-2008
  • Content Type: Journal Article
  • Journal: The Objective Standard
  • Institution: The Objective Standard
  • Abstract: This article is dedicated to Anna Tomalis, a young girl who died of liver cancer on August 15, 2008. Anna's parents desperately sought experimental treatment that might have saved her life, but were delayed for months by FDA bureaucracy. Anna finally received approval to obtain treatment through a clinical trial in July, but died after receiving just one round of treatment. She was thirteen years old. Abigail Burroughs was not the typical cancer patient: She was just nineteen years old when she was diagnosed with squamous cell cancer that had spread to her neck and lungs. Her prognosis was poor, but a then-experimental drug, Erbitux, offered the hope of saving her life. Abigail was denied that hope by the Food and Drug Administration. Because the drug was considered experimental, she could receive it only as part of a clinical trial—and Abigail was ineligible to participate in any trials at the time. Despite the best efforts of her family, friends, and doctor, Abigail was unable to receive the treatment that might have saved her life. At twenty-one years old, Abigail died of her disease. Abigail's father, Frank Burroughs, thought other patients with life-threatening illnesses should not be denied the ability to try any treatment that might give them a chance. In his daughter's name, he formed the Abigail Alliance for Better Access to Developmental Drugs, which sued the FDA in 2003. The group argued that the FDA's restrictions on access to experimental treatments constitute a violation of the right to self-defense as well as of the Fifth Amendment right not to be deprived of life, liberty, or property without due process of law. In August 2007, the Appeals Court of the District of Columbia struck a blow against the Abigail Alliance, and against individual rights, when it ruled that patients, even the terminally ill, do not have the right to receive treatment that has not been approved by the FDA.
Erbitux has since been approved by the FDA to treat cancer of the head and neck—too late, of course, for Abigail Burroughs. How has America come to a point where the government denies dying patients the right to try to save their own lives? To answer that question, let us begin with a brief history of the Food and Drug Administration. Prior to the 20th century, the government did not regulate pharmaceutical products in the United States. Although Congress had considered federal regulations on food and drug safety as early as 1879, it had refrained from passing any legislation in this regard. However, with the muckraking journalism of the early 1900s, and especially with the publication of Upton Sinclair's novel The Jungle, which portrayed unsavory practices in the meatpacking industry, the American public clamored for laws to ensure the safe production of food and drugs. This public outcry pushed Congress to pass federal legislation in 1906. As the resulting Food and Drugs Act applied to drugs specifically, products were required to be sold only at certain levels of purity, strength, and quality; and ingredients considered dangerous (such as morphine or alcohol) had to be listed on the product's label. Violators would be subject to seizure of goods, fines, or imprisonment. Thus, in order to enforce the Act, the Food and Drug Administration was born. In its early years, the agency focused primarily on food rather than on pharmaceuticals, but in 1937 it increased its focus on drugs after a new formulation of sulfanilamide, a drug that had previously been successfully used to treat certain bacterial infections, proved to be deadly. The drug's manufacturer, S. E. Massengill Company, had dissolved an effective drug in a toxic solvent. More than one hundred people, babies and children among them, died as a result of taking Massengill's product, known as Elixir Sulfanilamide.
Under the 1906 Food and Drugs Act, the FDA was not authorized to prosecute Massengill for selling an unsafe drug, and the agency had the power to recall Elixir Sulfanilamide only via a technicality. Because "elixir" was defined as a drug dissolved in alcohol, and because Massengill's formulation used the nonalcoholic solvent ethylene glycol, the product was technically mislabeled, bringing it under FDA jurisdiction and enabling the agency to recall the product. The public and legislators wanted more: They wanted the FDA not only to recall mislabeled products, but to prevent the sale of unsafe drugs in the first place. Thus, popular demand gave rise to the Food, Drug, and Cosmetics Act of 1938, which greatly expanded the FDA's authority. The most important change brought about by this Act was a shift in the burden of proof. Rather than prosecuting a drugmaker after the fact for having fraudulently marketed a product, the FDA would now require proof of safety before a drug could be marketed at all. (Note that this required manufacturers to prove a negative—i.e., that a given drug would not harm consumers.) After World War II, pharmaceutical companies came under still more scrutiny. Then, as now, complaints about the cost of drugs reached Congress, and in 1961 Senator Estes Kefauver led the charge in an investigation not only of drug pricing, but of the relationship between the drug industry and the FDA. Kefauver sought to pass legislation that would increase the agency's authority over drug production, distribution, and advertising. Whereas previously proof of safety alone was required to gain FDA approval, the proposed law would require drug manufacturers also to prove the efficacy of their products.
Kefauver's bill might have languished in congressional debate but for the emergence at that time of data showing that thalidomide, which was then sold as a sleep aid and antinausea medication for pregnant women, caused severe birth defects in the children of women who took it. Thalidomide had not yet been approved for use in the United States at that time due to concerns of an FDA reviewer over a different side effect noted in the drug's application for approval. The drug was widely used in other countries, however, and the babies of many women who used it were born with grotesquely deformed limbs. As their harrowing images flooded the media, Americans realized they had narrowly escaped inflicting these deformities on their own children. The resulting public outcry led to Kefauver's bill being made law in 1962. This law served as the cornerstone for the wide powers that the FDA acquired thereafter, from requiring companies to include warnings in drug advertisements to dictating the way companies must investigate their own experimental compounds. Thus, although the scope and power of the FDA were modest at the agency's inception, its scope widened and its power increased markedly in the decades that followed. Now, a century later, the agency's purview includes foods and drugs for humans and animals, cosmetics, medical devices (including everything from breast implants to powered wheelchairs), blood and tissues, vaccines, and any products deemed to be radiation emitters (including cell phones and lasers). And the agency's power is nothing short of enormous. . . .
  • Topic: Health
  • Political Geography: America
  • Author: Eric Daniels
  • Publication Date: 09-2008
  • Content Type: Journal Article
  • Journal: The Objective Standard
  • Institution: The Objective Standard
  • Abstract: One of the distinguishing features of American life is the large degree of freedom we have in making choices about our lives. When choosing our diets, we have the freedom to choose everything from subsisting exclusively on junk food to consuming meticulously planned portions of fat, protein, and carbohydrate. When choosing how to conduct ourselves financially, we have the freedom to choose everything from a highly leveraged lifestyle of debt to a modest save-for-a-rainy-day approach. In every area of life, from health care to education to personal relationships, we are free to make countless decisions that affect our long-term happiness and prosperity—or lack thereof. According to Richard Thaler and Cass Sunstein, professors at the University of Chicago and authors of Nudge: Improving Decisions About Health, Wealth, and Happiness, this freedom and range of options is problematic. The problem, they say, is that most people, when given the opportunity, make bad choices; although Americans naturally want to do what is best for themselves, human fallibility often prevents them from knowing just what that is. "Most of us are busy, our lives are complicated, and we can't spend all our time thinking and analyzing everything" (p. 22). Average Americans, say Thaler and Sunstein, tend to favor the status quo, fall victim to temptation, use mental shortcuts, lack self-control, and follow the herd; as a result, they eat too much junk food, save too little, make bad investments, and buy faddish but useless products. Many Americans, according to the authors, are more like Homer Simpson (impulsive and easily fooled) than homo economicus (cool, calculating, and rational). "One of our major goals in this book," they note, "is to see how the world might be made easier, or safer, for the Homers among us" (p. 22). The particular areas where these Homers need the most help are those in which choices "have delayed effects . . .
[are] difficult, infrequent, and offer poor feedback, and those for which the relation between choice and experience is ambiguous" (pp. 77–78). The central theme of Nudge is the idea that government and the private sector can improve people's choices by manipulating the "choice architecture" they face. As Thaler and Sunstein explain, people's choices are often shaped by the way in which alternatives are presented. If a doctor explains to his patient that a proposed medical procedure results in success in 90 percent of cases, that patient will often make a different decision from the one he would have made if the doctor had told him that one in ten patients dies from the procedure. Free markets, the authors argue, too often cater to and exploit people's tendencies to make less than rational choices. Faced with choices about extended warranties or health care plans or investing in one's education, only the most exceptional and rational people will make the "correct" choices. Most people, the authors argue, cannot avoid the common foibles of bad thinking; thus we ought to adopt a better way of framing and structuring choices so that people will be more likely to make better decisions and thereby do better for themselves. Hence the title: By presenting information in a specific way, "choice architects" can "nudge" the chooser in the "right" direction, even while maintaining his "freedom of choice."
  • Topic: Health
  • Political Geography: America, Chicago