THE ETERNAL SICK DAY (PREDATING MANGIONE)
I wrote this piece a few years back, when I was frustrated by the lack of sovereignty I saw in people. It includes a brief history of how the health insurance industry came to dominate the U.S.
Children born in the United States learn an interesting lesson very early in life. If you are “sick,” you stay home, eat ice cream, and watch TV. Very soon after our first experience with being sick or with witnessing a sibling stay home sick, we come to understand that one can merely state that one does not feel well, and, if this statement is convincing enough, we “get to” miss school or other engagements. Whether we are playing pretend or legitimately ill, being sick is how you get to enjoy the lazy pleasures of staying home.
This knowledge forms the basis of an iron-clad social contract that extends into adulthood. And this (the business of being sick as an out clause for any engagement we wish to avoid) forms a responsibility loophole that very few individuals find themselves above taking advantage of at least once in a while. If you don’t want to do something (including your job), being “sick” is your get-out-of-jail-free card. Therefore, in contemporary society, minor illness has major purchase: it gets you a day off for free. Provided you don’t sick-out beyond what is considered reasonable by your cohort—and provided you are not an on-the-clock worker who does not have the luxury of getting sick without an attendant financial penalty—claiming sickness will get you out of many things you don’t want to do without losing pay, credibility, or social standing. Which makes all claims of sickness both morally unquestionable and deeply suspect.
This sneakiness around sick days dovetails with an overall regard for the medical as the ultimate authority. We can get away with saying, “I can’t go; I’m sick,” because our relationship with the medical is so universally submissive that we will do literally anything it tells us to do. The medical has taken up residence in our collective psyches to a degree that we do not feel capable of viewing our own health through a process of individual self-assessment. We might know what we want to watch on TV or eat for dinner, we might proclaim whom we find sexually attractive and where we want to live, but we cannot discern on our own whether we are sick or well. This ignorance is fortified by the medical in many ways. We are told constantly about “silent killer” diseases and imperceptible chemical imbalances, and we are castigated if we do not get regular checkups. And let’s not even start on those reckless members of society who do not have health insurance—a term that is now used interchangeably with health care. We are beseeched by the medical community to “get our numbers checked,” with these quantitative analyses being viewed as the holy and righteous truth of contemporary health. Measurable, quantifiable data—the type of information science stakes its ironclad reputation for objective reliability on—has become the truth of our health. We can feel great, but beware of our “not great” numbers, which signal death looming unseen at the door. Or we can feel terrible, and be met with a shoulder shrug if our tests and numbers come back “within normal range.”
Science and the medical world are intimately connected, but they are not synonymous—they are simply now viewed as such. Science informs the medical, but both the medical industry and the realm of science are coupled to a larger global system of industrial capitalist consumerism. To assume purity of intent in either science or medicine is to cast a blind eye to the structures that fund and influence them. It is akin to believing that the church and clergy have a direct line to God, and one must obey their every word to receive God’s love and protection.
The reality of the medical industry can be more transactional than vocational, which means that patients expect something in return for their copayments. In contemporary times, when a patient goes to a doctor with a problem, a solution is nearly always presented (even when the problem resides outside the scope of the medical). Philosopher and priest Ivan Illich wrote his 1974 book Medical Nemesis as a riposte against what he viewed as an increasingly medicalized world. In reference to the tendency for medical personnel to assume solutions in nearly all instances, Illich wrote, “To ascribe the pathology to some Tom, Dick, or Harry is the first task of the physician acting as member of a consulting profession. Trained to ‘do something’ and express his concern, he feels active, useful, and effective when he can diagnose disease. Though, theoretically, at the first encounter the physician does not presume that his patient is affected by a disease, through a form of fail-safe principle he usually acts as if imputing a disease to the patient were better than disregarding one. The medical-decision rule pushes him to seek safety by diagnosing illness rather than health.”
The term medicalization was conceived by sociologists in the ‘70s to describe the reassignment of previously quotidian human circumstances, behaviors, and problems as medical conditions. Under the collective ideological mindset of medicalization, many formerly unmedicalized human attributes and experiences—shyness, thin lips, pregnancy, and boredom, to name just a few—now demand a medical resolution in the form of a pharmaceutical prescription or a surgical intervention. Illich commented on this phenomenon by writing “…a society [that] transform[s] people into patients…inevitably loses some of its autonomy to its healers.” This now visibly manifest psychosocial hierarchy has been building itself, brick by brick, for decades. Today, we live in an omni-medicalized world, with medical professionals wielding unprecedented authority.
There has been an extended public response to the phenomenon of Covid that dovetails with this overwhelming belief in medical guidance as the ultimate human authority. Were the church equally empowered by the general public in this time period, we’d likely see a growing number of public sermons, group prayer circles, and requests for divine salvation. Instead, all protective and restorative power has been granted to the medical, and medical advice about social distancing, mask-wearing, and vaccination takes on an almost liturgical meaning (one that might arguably transcend measurable benefits, were these phenomena carefully measured in the first place). The medical world’s mutually beneficial affiliation with science has led to its ongoing presentation as not just the leading expert on how to prevent the transmission and contraction of Covid, but also the only reasonable and intelligent voice one might listen to. Yet, prior to this pandemic, scant research was done on the one thing our medical officials eventually seemed the most solidly confident about recommending: the use of face masks. Apparently, conclusive results had never been firmly established on the particular efficacy of face masks in response to severe respiratory illnesses spread from human to human by viral contact (though masks had been used for decades by citizens, and became especially popular after the 2003 SARS outbreak). In the abstract of a paper published in September 2020 in Nature, the authors stated: “Although mask wearing is intended, in part, to protect others from exhaled, virus-containing particles, few studies have examined particle emission by mask-wearers into the surrounding air.”
The level of authoritative supremacy the medical industry has obliquely claimed—think, for instance, of the transactional value of a diagnosis, a vaccination card, or even a simple doctor’s note—has allowed this industry overwhelming jurisdiction over the lives of common people, particularly in the time of Covid. This phenomenon, labeled “diagnostic imperialism” by Illich, gives “medical bureaucrats” the power to “…subdivide people into those who may drive a car, those who may stay away from work, those who must be locked up, those who may become soldiers, those who may cross borders, cook, or practice prostitution, those who may not run for the vice-presidency of the United States, those who are dead, those who are competent to commit a crime, and those who are liable to commit one.” In 1972, sociologist Irving Zola published a paper titled Medicine as an Institution of Social Control. In this paper, Zola—one of the first theorists endeavoring to describe medicalization, and also the one credited with coining the term—illustrated a society where the medical was becoming “…a major institution of social control, nudging aside, if not incorporating, the more traditional institutions of religion and law. It is becoming the new repository of truth, the place where absolute and often final judgments are made by supposedly morally neutral and objective experts. And these judgments are made, not in the name of virtue or legitimacy, but in the name of health.” Meaning: these judgments appear to have the power to keep humans upright and alive.
Because those in the medical field have been granted and have also assumed this authoritarian role, the average American has largely abandoned the view that their mental and physical health is their own responsibility to assess, uphold, and maintain. Instead, many view maladies as events that just happen to them, something for a doctor to figure out and fix—in the same manner that the mechanically challenged bring their car to the shop when it’s making that funny noise again. The average Western human feels helpless before the vicissitudes of disease, and can do nothing but call the doctor, get a test, pick up a prescription, or schedule a procedure. Indisputable medical advancements—antibiotics being one major lifesaving milestone, vaccines being another—help bolster the medical’s lifesaving reputation and allow this industry to make broader and deeper authoritative claims.
So where is the line one might parse between medical assistance and medical supremacy? It is important to note that Zola, whom the New York Times valorized as “a Sociologist Who Aided the Disabled,” was not some magically self-healing guru who viewed himself as exempt from medical need. He was a survivor of childhood polio and a later car wreck that left him with permanent injuries. But he was also an individual interested in transcending the boundaries of ability and disability, somewhat in the manner that Scottish psychiatrist R.D. Laing rejected the medical model of mental illness by complicating the binary of sanity and madness. In Zola’s 1989 paper “Toward the Necessary Universalizing of a Disability Policy,” he wrote that “…the issues facing someone with a disability are not essentially medical. They are not purely the result of some physical or mental impairment but rather of the fit of such impairments with the social, attitudinal, architectural, medical, economic, and political environment.”
We currently have a very hard time understanding our own suffering—never mind the suffering of those around us—in anything but medical terms. Many people who are not elderly or infirm take prescription drugs (including and especially psychological pharmaceuticals) for years and years, and do not have any solid plans to wean themselves off these medications. The same is true for cholesterol and blood pressure medications, both of which might become unnecessary were the “patient” to make a few permanent lifestyle changes. For many people, the idea of getting better is a fairytale. They are complacent with and complicit in being eternally medicated, and thus in maintaining a system of medical and pharmaceutical dependence. Many people believe that “healthy” is not something they are or could ever be. They view health not as the general set point of many if not most minds and bodies, but as something external that must be consumed.
It could be said that the medically dependent are disabled in a way that can’t be immediately understood or diagnostically determined. In the foreword to his 1982 memoir titled Missing Pieces, Zola writes, “…I already have the stigmata of the disabled—the braces, the limp, the cane—though I have spent much of my life denying their existence.” For Zola, his limp and cane were a visual cue that he was “invalid.” He found this personally undesirable. But such immediately recognizable stigmata of illness might in fact become desirable to those whose maladies go unseen and misunderstood, for people who need help and who cannot get it from anyone other than a doctor. It’s now a legally defined aspect of American social values that our ill must be cared for to some degree, whether they can afford this care or not (a 1986 federal U.S. law requires hospital emergency rooms to screen and stabilize patients regardless of their ability to pay). And, for many, perhaps for most, the only time they can be assured that another human being is going to show genuine concern for their well-being is when they enter the doctor’s office or an emergency room. Stigmata are colloquially the marks of either disgrace or martyrdom, but they also signify open wounds that need immediate attention.
Where we as a society often talk about the negative impact of industrialization when it comes to environmental pollution and factory farming, we rarely talk about the fact that many other human practices, including what was once termed “doctoring,” have also been industrialized in the factory model—particularly in the realm of pharmaceuticals. Illich claimed that “Effective health care depends on self-care; this fact is currently heralded as if it were a discovery.” Which is interesting, given that the term “self-care,” when employed in contemporary dialogue, has morphed in meaning. It is now used to describe a consumerist, luxury-based approach to personal grooming, emphasizing the creation of a self-care “ritual” inclusive of a dermatologist’s private skincare line and a plastic surgeon’s signature lip injections.
While self-care as a term has been appropriated by the beauty and wellness industries to promote the consumption of goods and services that are meant to make consumers feel pampered, the term, as Illich uses it, was originally coined by nursing theorist Dorothea Orem. In grand nursing theory, self-care is used to describe the responsibility an individual has to the management and maintenance of their own health. It is essentially self-doctoring that takes individual responsibility along with body sovereignty as a given. It also puts forth the idea that a patient should not become overly dependent on the medical system. Instead, patients must practice self-care by looking after themselves up until a “self-care deficit” occurs—and this is the point at which a nurse should step in. This belief structure—that the health of an individual is primarily their own responsibility, and that nurses should provide assistance only when it is absolutely necessary—has been endemic to nursing since its inception, with Florence Nightingale stating that the “…role of a nurse is to put her patient in the best position to be able to self-heal.”
Yet, where once the medical was a place of last resort, it is now typically the first line of defense. Thus, it has become an obligatory rite of passage for young people to receive their first identity-defining diagnosis—ADHD, depression, anxiety—and to be set forth on a life’s journey of prescriptions and check-ups. While technology’s manifest destiny has resulted in an abundance of innovative quasi-cures, we cannot deny that (given the significant amount of medical and pharmaceutical treatment the average American consumes) we are not a people who are overwhelmingly “healthy” on our own—we are often simply medicalized and medicated into living longer lives of mediocre health. The result of medicalization is the diagnosis of ever-increasing existential threats; illness is the marauder who knocks upon the door at midnight. Medicalization “heroically” fights this enemy through endless courses of treatment designed to conquer and control looming health catastrophes, rarely considering that reliance on this form of care might be the problem in the first place.
Our health care officials are guardians at the gates of doom, a militarized hierarchy to whom we have given almost complete authority. Without them, we’d be dead—or so the story goes. But we will all die someday, and there is a stark difference between quantity of life and quality of life. We seem intent on viewing death as the ultimate tragedy. Perhaps life and death have more in common than one might think. When the average Western individual remains in a stalemate with their own pharmaceutical-infested mind and body, one wonders if they are truly living. Or just waiting to die.
There is something under the surface here, related to body sovereignty, physical ownership, and the fact that 9-to-5 jobs often pay a person not for their products, performance, or ideas, but for the seconds stolen from the duration of their lifespan. In a sense, your job owns your body. At least sometimes. And health insurance—which is almost ubiquitously tied to employment in the United States—ensures that a company’s investment is well tended to. As Illich posits, “The higher the salary the company pays, the higher the rank of an apparatchik, the more will be spent to keep the valuable cog well oiled. Maintenance costs for highly capitalized manpower are the new measure of status for those on the upper rungs. People keep up with the Joneses by emulating their ‘check-ups’...” But health insurance is a relatively new invention, and its alliance with employment began as a perk rather than as a necessity or a human right. As reported by Alex Blumberg on NPR, “By the late 1920s, hospitals noticed most of their beds were going empty every night. They wanted to get people who weren't deathly ill to start coming in. An official at Baylor University Hospital in Dallas noticed that Americans, on average, were spending more on cosmetics than on medical care. ‘We spend a dollar or so at a time for cosmetics and do not notice the high cost,’ he said. ‘The ribbon-counter clerk can pay 50 cents, 75 cents or $1 a month, yet it would take about 20 years to set aside [money for] a large hospital bill.’”
So, in the classic manner of all capitalist enterprises, the entrepreneurial Baylor hospital, “…started looking for a way to get regular folks in Dallas to pay for health care the same way they paid for lipstick — a tiny bit each month. Hospital officials started small, offering a deal to a group of public school teachers in Dallas. They offered a plan for the teachers to pay 50 cents each month in exchange for Baylor picking up the tab on hospital visits,” writes Blumberg. And, as with many great American business success stories, the hospital was able to turn catastrophe to triumph: “When the Great Depression hit, almost every hospital in the country saw its patient load disappear. The Baylor idea became hugely popular. It eventually got a name: Blue Cross.”
Blue Cross proved to be the blueprint for the employer-based health insurance system, but, writes Blumberg, “The modern system of getting benefits through a job required another catalyst: World War II… The government rationed goods even as factories ramped up production and needed to attract workers. Factory owners needed a way to lure employees [so] owners turned to fringe benefits, offering more and more generous health plans. The next big step in the evolution of health care was also an accident. In 1943, the Internal Revenue Service ruled that employer-based health care should be tax free. A second law, in 1954, made the tax advantages even more attractive. By the 1960s, 70 percent of the population is covered by some kind of private, voluntary health insurance plan. By [then] Americans started to see that system — in which people with good jobs get health care through work and almost everyone else looks to government — as if it were the natural order of things.”
The paternalistic “naturalness” of employer-provided health care, which always takes the form of health insurance, has created a society that wants to be kept healthy via extrinsic forces like “a great health care package” (complete with unlimited sick days to get us out of doing a job we've likely taken to acquire health care in the first place). We are not a society that wants the ultimate responsibility of keeping itself healthy. The Oremian concept of “self-care” has been replaced with a dependency-based mindset that refuses to take responsibility for its own physical, mental, and emotional health—primarily because it has been brainwashed into thinking it can’t. As a consumerist society, we have decided that life itself must be consumed, rather than created. We have outsourced our health, and created a system wherein we must buy it back via the time we spend on the clock at work.
In America, we find pills as a general category not just easy to swallow, but infinitely palatable. And there is something about the medicalized mindset, which views individuals first and foremost as a collected set of health problems—and life as an unending diagnostic test—that seems to have no issue with the specter of sickliness as a coat of arms. It is those possessed with this mindset who might strive for themselves and others to wear their masks of illness forever, if it comes with the promise of safety. In his 1993 Reith Lectures, literary critic and public intellectual Edward Said stated that “The [role of the] intellectual ... cannot be played without a sense of being someone whose place it is publicly to raise embarrassing questions, to confront orthodoxy and dogma (rather than to produce them), to be someone who cannot easily be co-opted by governments or corporations, and whose raison d'être is to represent all those people and issues that are routinely forgotten or swept under the rug.”
As a people, we must provide space for embarrassing questions, especially in relation to our own bodies and minds. We must enter a liminal zone of unknowing, and begin to look with new eyes at the structures we regard as unquestionable—particularly those that insert themselves into our existence under the guise of universal safety and protection. More than anything, we must begin to enact daily rituals of self-care for our own thoughts and ideas, lest we transfer the sovereignty of our minds to our fractured and increasingly enfeebled governing systems.