Tuesday, December 17, 2013

Discussing the Nature and Future of Antibiotics - Part 1

Antibiotics are chemical substances that either inhibit the growth of bacteria or kill them outright. Because of their points of action, antibiotics are generally restricted to bacteria and have no significant effect against viruses or other pathogenic agents. The word antibiotic derives from the Greek anti (against) and bios (life). Despite the existence of the immune system, before the development of antibiotic treatment protocols and their widespread use many illnesses like pneumonia, tuberculosis and typhoid carried very high fatality rates. Now, setting aside bacterial resistance, fatality rates from all manner of infection are much lower.

Most of the early work on antibiotics occurred between 1928 and 1942, beginning with Sir Alexander Fleming, who in 1928 accidentally discovered the antibacterial properties of the mold Penicillium, and Gerhard Domagk, who reported the first class of antibacterial agents, the sulfonamides, in 1935. In 1939 Rene Dubos discovered the first naturally derived antibiotic, tyrothricin, but after testing it was deemed too toxic for human administration. The active substance produced by the mold, which Fleming had named penicillin, was isolated and purified by Ernst Chain and Howard Florey by 1942. By the early 1940s both sulfonamides and penicillin had entered clinical use. Penicillin was first chemically synthesized in 1957 by John C. Sheehan, which generated the basic information required to begin producing synthetic penicillin-derived antibiotics to combat resistance.

Antibiotics have two modes of function: prevention of bacterial growth (bacteriostatic) and bacterial death (bactericidal). In patients with normal immune systems either mode is typically sufficient for recovery, as the patient's immune system should be able to neutralize the bacteria after the antibiotic is applied. However, in individuals with weaker immune systems, the elderly, very young children or the otherwise immunocompromised, bacteriostatic antibiotics may not be enough to fully treat the illness. The bactericidal behavior of penicillin is largely why it was favored over sulfonamides before bacterial resistance started to emerge.

Antibiotics influence bacterial growth through a wide range of functional pathways: 1) destroying the cell wall, either by producing pores in it or by blocking its synthesis; 2) neutralizing aspects of protein synthesis, including blocking ribosome assembly or the binding of ribosomes to their appropriate targets; 3) neutralizing DNA synthesis and other nucleic acid metabolism functions.1-4 When identifying a new antibiotic agent, testing is important to determine how the antibiotic functions, how long it works and how large a dose is required to produce an effective treatment response. Dosage is important because a large enough dose of antibiotic administered against a non-resistant target will kill it, but will also generate significantly detrimental side effects.

One of the major targets for antibiotics currently utilized in treatment is the bacterial ribosome, largely because of its significant structural differences relative to its mammalian cousin. Bacterial ribosomes are 70S, with 30S and 50S subunits composed of three types of rRNA (5S, 16S and 23S), versus the 80S mammalian ribosome.2,5 This difference makes antibiotics that target ribosomes attractive candidates, as the probability of interaction with mammalian (friendly) ribosomes leading to unpleasant side effects is low. Antibiotics that target DNA synthesis can also be attractive candidates because they can act on molecules that are not utilized in mammalian cells, like bacterial DNA gyrase (topoisomerase II),6 which packs and unpacks supercoiled bacterial DNA, or bacterial DNA-dependent RNA polymerase.7 Other antibiotics neutralize bacteria through indirect means, targeting unique secondary products required for DNA synthesis, such as tetrahydrofolic acid metabolism. Tetrahydrofolic acid is essential for the synthesis of purines, pyrimidines and some amino acids. Anti-metabolites interfere with tetrahydrofolic acid synthesis and, therefore, inhibit DNA synthesis.

There are two classifications for antibiotics: narrow-spectrum antibiotics, which only work on a small number of specific bacteria, and broad-spectrum antibiotics, which work against a large number of bacteria. Not surprisingly, because of its target breadth the use of broad-spectrum antibiotics carries a higher probability of developing antibiotic-resistant bacteria. Therefore it is standard operating procedure to assign broad-spectrum antibiotics only when the pathogen is unidentified or unresponsive to any narrow-spectrum antibiotic. For instance, Gram-negative bacteria are typically treated with broad-spectrum antibiotics because structural differences in their cell wall and internal reproduction machinery render narrow-spectrum antibiotics designed for Gram-positive bacteria ineffective.

Antibiotics are also separated into three classifications derived from their route of application: surface, oral or intravenous. Surface applications are placed on the skin, in the eyes or on the mucous membrane in the nose, and their effects are typically limited to the local area where the antibiotic is applied. Oral applications are pills, tablets and gel-caps that are swallowed, break down in the small intestine and are then absorbed into the bloodstream. Intravenous application is the most powerful because there is little residence or lag time between application and absorption into the bloodstream. However, intravenous application typically only occurs at a hospital and is largely reserved for critical or uniquely specific conditions.

Of the three classifications, oral application is one of the principal elements responsible for the development of bacterial resistance because, despite warnings, a number of patients continue to prematurely cease oral treatments once illness-related symptoms disappear. Basically, patients take some of the assigned dosage, start to feel better and fail to finish the remainder due to a belief that it is unnecessary. Unfortunately not completing the dosage increases the probability of surviving bacteria becoming resistant.

Although antibiotics are normally used to treat illness, they are also prescribed in certain situations to reduce the potential for infection. The most common scenario for this preventative strategy is antibiotic treatment before major surgery to reduce the probability of operative and post-operative infection. Combination therapy is also popular, where multiple drugs are administered together and one mechanism aids the effectiveness of the other in a synergistic relationship. An example of such a relationship is penicillin weakening or destroying cell walls, allowing aminoglycoside entry into the cell. Typically synergistic activity is not an effective treatment strategy outside of circumventing non-pathway resistance mutations because normally a single antibiotic and the innate immune system can neutralize an illness. Also, while effective, a failed combination therapy increases the probability of resistance developing against multiple drugs instead of just one.

Unfortunately the second most common pathway to bacterial resistance has developed from the widespread use of antibiotics in animal and milk production. In short, farmers and corporate entities apply large amounts of antibiotics to healthy dairy cows and other livestock in an attempt to reduce their probability of contracting illness and to increase their weight. The weight increases come from eliminating the current microbiota (gut bacteria population) through antibiotic treatment and re-establishing a weight-gain-favoring microbiota through diet. However, the haphazard application of antibiotics in such a fashion increases the probability that resistant bacteria emerge, and because resistance is just a matter of probability, frivolous application strategies like the one above have net detrimental outcomes over the long term.

Although the tools exist to synthetically create antibiotics from a new base core, a vast majority of antibiotics in use are derived from living organisms, mostly molds and other fungi as well as bacteria. The two major reasons synthetic antibiotics have not become prominent are safety and functionality concerns along with the financial cost associated with creating antibiotics versus the “lack” of a market for pharmaceutical profit. One of the most common and reliable ways of producing an antibiotic is biosynthesis, where the specific organisms themselves manufacture the antibiotic under optimized growing conditions with additional elements/stressors that increase the rate and probability of producing the desired compounds.

Industrial mass production of antibiotics through biosynthesis is carried out by fermentation, where the antibiotics, which are typically secondary metabolites, are collected before cell death. The typical isolation process involves killing the cell, so production schemes require large amounts of cell growth to maximize product collection and ensure efficiency and profitability. Collection first involves extraction and then purification into a crystalline product. Organic solvents are used to increase the efficiency of collecting soluble products, but insoluble products must be recovered through additional steps like precipitation, ion exchange and/or adsorption.

The creation of synthetic antibiotics typically follows a standard methodology. First, an existing antibiotic is selected for modification. The reason for selecting an existing antibiotic is two-fold: first, there is already empirical certainty that the selected antibiotic has some form of activity against bacteria, so no money is wasted developing random chemical structures with unknown activity profiles. Second, the existing antibiotic is known to be safe enough for human consumption due to widespread use with relatively well-characterized side effects.

The second step for synthetic generation is to identify the chemical structure of the active portion of the antibiotic in order to determine what structural modifications could change its activity against resistant organisms. Third, the operational pathway of the antibiotic is identified and confirmed. Finally, alterations are made to some part of the structure of the antibiotic to possibly change its response to a given infection. For example, all members of the penicillin family have an identical core ring, but the chemical chain (R group) attached to the ring differs between members of the family, so modification of that particular chain is frequently the target of synthetic strategies for penicillin.

After their production these new synthetic derivatives are tested for effectiveness and to ensure that the changes in chemical structure have not compromised safety. In addition to this general methodology, new techniques are being used to alter the genetic structure of certain bacteria so that the bacteria themselves produce a similar, but different, antibiotic.

Pharmaceutical companies use computers to data mine and test modifications to the ring, screening candidate structures for chemical compatibility and likelihood of activity. A standardized testing structure known as a 'rational design program' is frequently used now; such a program focuses on a more in-depth analysis of how the favorable agent inhibits its specific targets. In addition to making slight changes in chemical composition to increase antibiotic effectiveness against emerging resistance, another common goal of synthetic antibiotic creation is increasing the half-life of the drug so that it lasts longer in the bloodstream, which increases the probability of effective treatment.
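
To see why half-life matters, here is a minimal back-of-the-envelope sketch of my own (assuming simple first-order elimination after a single dose, which not every antibiotic follows; C0 and t_half are just illustrative symbols for the starting blood concentration and the half-life). The concentration remaining after time t is approximately:

C(t) = C0 × (1/2)^(t / t_half)

Under that assumption, the time the concentration stays above any given effective threshold scales directly with t_half, so doubling the half-life roughly doubles the window in which a single dose remains useful.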

Identification of side effects and treatment efficiency is carried out through a screening process: a large number of microorganism isolates are collected, and the secondary products of these organisms are tested in diffusion and growth-limitation studies on test organisms. Molecules that show favorable results in both of these areas are then tested for selective toxicity. Afterwards the best candidates are isolated for more rigorous study, entering the standard three-phase clinical trial methodology used for clinical drugs. However, recently there has been some momentum for changing antibiotic-specific testing protocols to speed the arrival of new antibiotics on the market in an effort to better address increasing bacterial resistance.8

Realistically a large number of natural and synthetic antibiotics have been created at one time or another, but only a select handful have been proven safe and effective. The ideal characteristic of an antibiotic is a selective, bacteria-unique target that negatively influences a critical system required to maintain life. This method of action will lead to the death of the bacteria, but should not interact with any eukaryotic cell targets, thus heavily limiting, if not eliminating, any side effects associated with use of the antibiotic. However, due to the general effectiveness of the immune system, antibiotics that lack selective targeting are still useful if administered in a controlled and proper dose. Even when dosage is proper there remains a real possibility of side effects developing, so these changes must also be monitored.

When determining whether or not to treat with antibiotics, the first step is to identify what type of organism is responsible for the illness. Under normal circumstances, once identification has concluded the available treatment options are clear and treatment is rather straightforward. There are typically two elements that complicate treatment strategies. First, if the cause of the illness is viral, most treatment options are no longer viable, limiting options to a small number of agents such as interferons. Second, if the patient is allergic to the primary treatment option, a secondary and more than likely less effective option will need to be utilized. If the illness is caused by an unidentified pathogen, treatment is usually applied using broad-spectrum antibiotics, a strategy commonly called empiric therapy. Overall, once a treatment is applied its effectiveness is determined by how well the drug is absorbed into the bloodstream, the diffusion rate of the drug and the half-life of the drug.

Some of the more common classes of antibiotics that have been used in the past or are currently in use are:

Penicillins:

Alexander Fleming discovered the penicillin group from the fungus Penicillium in 1928 (penicillin G was isolated first, later followed by procaine penicillin, benzathine penicillin and penicillin V). Penicillin functions by damaging or destroying bacterial cell walls while the bacteria are in the process of reproduction. The mechanism of action is the inactivation of transpeptidase, which is necessary for cross-linking and proper cell wall synthesis. The enzyme accepts penicillin as a substrate analogue, and a nucleophilic oxygen of the enzyme attacks the beta-lactam ring, acylating and inactivating the enzyme. Cell wall construction and maintenance cease, leaving multiple holes in the existing cell wall due to continuing natural degradation. This process is self-reinforcing: increasing osmotic pressure raises the probability of cytolysis, and excess peptidoglycan precursor, due to limited cell wall synthesis, triggers hydrolase and autolysin activation, further breaking down the cell wall.

The antibiotic nature of penicillin is due to the strained beta-lactam ring; when the ring opens, strain is relieved, making penicillin more reactive than ordinary amides. Based on this method of action, penicillins do not act against organisms lacking accessible cell walls, like eukaryotic cells and certain types of bacteria (most Gram-negative species, for instance). Due to their ability to facilitate cell wall breakdown, penicillins work well in combination treatments with intracellular-acting antibiotics that attack DNA, RNA and/or protein synthesis. Unfortunately, due to unsurprising overuse over more than half a century, many bacteria have become resistant on a large scale.

While the lack of cell walls in eukaryotic cells limits the side effects from these antibiotics, some mild side effects can occur from their use, like diarrhea, rashes and hives (the latter usually indicating an allergy). The rarest and most dangerous side effect is an anaphylactic allergy, which results in labored breathing due to airway swelling caused by a severe allergic reaction. Normally an allergic reaction eliminates the availability of a given antibiotic, but if the situation demands the use of an antibiotic that will cause an allergic reaction, a desensitization strategy is used in which the patient receives small, frequent doses with slight increases in the overall dosage.

Cephalosporins:

Along with cephamycins, cephalosporins are a class of beta-lactam antibiotics that originally acted against Gram-positive bacteria, but through modern synthetic chemistry newer formulations have increased effectiveness against Gram-negative bacteria while weakening action against Gram-positive bacteria. While their method of action is similar to penicillin, a small structural difference makes cephalosporins less susceptible to penicillinases, making it more difficult for bacteria to neutralize their bactericidal action. Differing generations of cephalosporins have been created largely by modifying the nucleus to reduce the probability that bacteria develop resistance. Currently, although somewhat disputed, five different generations of cephalosporins have been created, with the fourth and fifth generations classified as having broad-spectrum action.

Quinolones:

Quinolones, the largest sub-class being the fluoroquinolones, are synthetic broad-spectrum antibiotics first developed in 1962 as a byproduct of chloroquine synthesis. Quinolones are typically reserved for secondary application in community-acquired infections due to concerns over peripheral neuropathy and other strong side effects, increasing development of bacterial resistance and an enhanced probability of Clostridium difficile and MRSA infections. When used, however, quinolones function by blocking the action of DNA gyrase in Gram-negative bacteria and topoisomerase IV in Gram-positive bacteria. For treatment, quinolones can stand alone due to their increased probability of cell entry through porins, but in Gram-positive bacteria combination treatment with a beta-lactam antibiotic also improves treatment outcomes. Unfortunately numerous pathogens now demonstrate resistance to quinolones due to overuse, in that they were the principal treatment for Gram-negative bacteria from the 1960s until the 1990s, before the development of synthetic broad-spectrum beta-lactam antibiotics, and they continued to be misused in the 2000s.

Aminoglycosides:

Aminoglycosides are molecules composed of amino-modified sugars and typically act as antibiotics through two different mechanisms. First, they bind irreversibly to the 30S ribosomal subunit in bacteria, interfering with successful mRNA interaction and reducing the effectiveness of translation (i.e. protein synthesis); there is also some evidence that they interfere with proofreading and inhibit ribosomal translocation. Second, aminoglycosides are thought to competitively displace the magnesium and calcium ions that link polysaccharides within lipopolysaccharide groups, creating transient holes that disrupt outer membrane permeability.

Aminoglycosides are primarily effective against Gram-negative aerobes and mycobacteria, but while effective antibiotics they are unfortunately poorly absorbed from the gastrointestinal tract, which rules out oral administration and limits them to IVs or injections. Aminoglycosides also have some fairly damaging side effects, including kidney and auditory damage. Due to these side effects aminoglycosides are typically used as last-resort antibiotics against unknown or multi-drug resistant pathogens.

Polymyxins:

Polymyxins damage the bacterial outer membrane through interaction between its lipopolysaccharide and their hydrophobic tail, in a manner similar to a detergent/soap. Polymyxins appear to work best in combination treatments with intracellular-acting antibiotics, for the disruption of the outer membrane allows easier entry into the cell. Unfortunately, similar to aminoglycosides, polymyxins are typically reserved for late-stage treatments due to serious neurotoxic side effects.

Tetracyclines:

Tetracyclines (tetracycline, doxycycline) bind to the 16S portion of the 30S ribosomal subunit and block polypeptide chain elongation by preventing the attachment of charged aminoacyl-tRNA. They are typically used when treating Gram-positive cocci, chlamydia, mycoplasma and rickettsia. A secondary mechanism involves tetracyclines binding to ions on the membrane and acting as ionophores, disrupting calcium flow into and out of the bacterium. Unfortunately bacterial resistance has developed rapidly against tetracyclines, significantly limiting their usefulness.

Rifampicin:

Rifampicin binds strongly to DNA-dependent RNA polymerase leading to the inhibition of RNA synthesis. It is largely used in combination treatments against Mycobacterium infections most notably tuberculosis. Due to its mechanism of action rifampicin is also used in secondary combination treatments against more resistant bacteria. Combination treatments are the norm with rifampicin because when used alone bacterial resistance builds quickly.

Macrolides:

Macrolides are molecules built around a large macrolide (lactone) ring: a linear precursor molecule is assembled much like a protein, with small units added on a biological assembly line, and is then cyclized into the ring by an enzyme acting at its terminus. These rings are linked through glycosidic bonds to amino sugars, usually cladinose and desosamine. Most macrolides function by interfering with bacterial ribosomes, binding reversibly to the P site on the 50S subunit and preventing ribosomal initiation complex formation or aminoacyl translocation. At low doses they are typically bacteriostatic, but at higher concentrations they become bactericidal, though side effects obviously become more prominent as well. Leukocyte accumulation and interaction aid in transporting macrolides to infection sites, increasing efficacy.

Sulfonamides:

Not all sulfonamides are antibacterial, but those that are typically have structures resembling para-aminobenzoic acid (PABA), a precursor in folic acid synthesis. Sulfonamides compete with PABA (competitive inhibition) for binding to dihydropteroate synthase, an enzyme involved in folate synthesis. Reducing the efficiency of folate synthesis makes sulfonamides bacteriostatic. Sulfonamides are typically used in combination treatment, usually with erythromycin or trimethoprim, due to a recent escalation in resistance. There are few side effects associated with sulfonamide use because humans do not biosynthesize folate; it is instead acquired through diet.

Trimethoprim:

Trimethoprim (TMP) is a synthetic antibiotic. Similar to the sulfonamides, trimethoprim acts on the folate pathway: it selectively inhibits bacterial dihydrofolic acid reductase, which converts dihydrofolic acid to tetrahydrofolic acid, a precursor of purines, thereby stopping DNA synthesis and replication. TMP is normally bacteriostatic but can be bactericidal, and it is commonly used in combination with sulfamethoxazole (SMX). This combination is useful because bacteria that are partly resistant to either TMP or SMX can still be killed by the two together.

Vancomycin:

Vancomycin is a glycopeptide that inhibits cell wall synthesis in Gram-positive bacteria by preventing the incorporation of N-acetylmuramic acid and N-acetylglucosamine subunits into the long peptidoglycan polymers and by preventing polymer cross-linking. Through most of its lifetime vancomycin was viewed as a treatment of last resort due to its poor oral bioavailability requiring intravenous treatment, which reduced the probability of bacterial resistance. Unfortunately in recent years bacterial resistance has grown rapidly, significantly limiting the usefulness of vancomycin.


Resistance has become one of the biggest “rarely talked about” problems in the modern world. A bacterium becomes resistant to an antibiotic when it alters a portion of its genetic structure to neutralize the antibiotic's method of action. The genetic change can neutralize antibiotic activity in multiple ways: the target is eliminated entirely, a new defense element is created that eliminates the antibiotic's ability to reach its target molecule (deactivating it or removing it), or a change occurs in the operational pathway that eliminates the necessity of the target for growth and survival.1,9 There are two major ways in which bacteria can alter their genetic structure to create resistance.

Random genetic mutation is the most basic way resistance is acquired. In the chicken-and-egg question of bacterial resistance, random mutation is effectively the egg, as no other form of resistance could occur without an original underlying mutation. These mutations are random and do not require previous exposure to the antibiotic in order to arise. The spontaneous mutation of a susceptibility gene has a frequency of occurrence of anywhere from 10^-12 to 10^-7, which for some species increases to 10^-7 to 10^-5 under selective pressure during or after exposure to a specific antibiotic.10 This increased resistance probability is what creates concern among certain parties when discussing frivolous antibiotic administration to healthy organisms.
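
To get a feel for what those frequencies mean in practice, here is a rough illustration of my own (the population size is a hypothetical assumption, not a figure from the cited work): a substantial infection can involve on the order of 10^9 bacterial cells. At the upper baseline frequency the expected number of pre-existing resistant mutants is roughly

10^9 × 10^-7 = 100 cells,

while at the lower end it is about 10^9 × 10^-12 = 0.001, essentially none. In other words, at the high end of the range a resistant sub-population likely exists before treatment even begins, which is why the selective pressure applied by careless antibiotic exposure matters so much.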

Another means of mutation that generates resistance is genetic transposition. Transposition is the recombination of genetic material by transposons, which are DNA segments with specific insertion sequences at each end. Using these sequences, transposons are able to migrate or jump between DNA strands in the same bacterium. Sometimes the new sequences created by this movement produce antibiotic resistance.

The second method that leads to resistance is acquisition of a plasmid carrying the appropriate resistance gene. Bacteria commonly exchange circular pieces of DNA smaller than their chromosomes, otherwise referred to as plasmids. Such genetic material is typically acquired in one of three ways: 1) bacteria in close physical contact directly transfer plasmids to one another, a process referred to as conjugation (if the receiving bacterium transfers a plasmid back, the process is referred to as retrotransfer); 2) genetic material, including resistance genes, is carried from one bacterium to another by a bacteriophage, a process referred to as transduction; 3) a bacterium absorbs free DNA fragments, up to a portion of another cell's chromosome, from its environment, a process referred to as transformation.

Overall there are five major mechanisms by which a bacterium uses these genetic additions or mutations to acquire resistance: 1) Antibiotic Modification – the antibiotic is inactivated, typically by the production of a new enzyme that targets the antibiotic and changes its structure; 2) Antibiotic Entry Restriction – modification lowers the probability that the antibiotic is able to enter the cell to reach intracellular targets; this mechanism is more common in Gram-negative bacteria, where the porins that typically allow diffusion of antibiotics through the membrane are altered to disallow antibiotic entry; 3) Enhanced Efflux of the Antibiotic – efflux pumps are augmented or synthesized at faster rates to remove antibiotics from the intracellular space and cytoplasm before they can reach the target molecule; 4) Alteration of Drug Target – the mutation changes the structure of the target molecule so the antibiotic can no longer bind and trigger its effect; 5) Pathway Alteration – the least common mechanism due to its complexity; the bacterium produces an alternate redundant pathway, mitigating the effectiveness of antibiotic blocking of the initial pathway.

One troubling element of resistance is how rapidly it appears to be developing. For example, in the 1940s when penicillin use was first becoming standard protocol, <1% of S. aureus isolates were resistant, but by 1951 approximately 75% of isolates were resistant, which is why synthetic penicillins were created. For S. pneumoniae only 2% of isolates in 1981 were penicillin resistant versus 12% in 1995. It is thought that similar resistance development curves exist for other bacteria and other antibiotics. Of the bacteria that have developed resistance, the Infectious Diseases Society of America (IDSA) believes that Enterococcus, Staphylococcus, Klebsiella, Acinetobacter, Pseudomonas, Enterobacter and E. coli are the most advanced and dangerous.11

The Gram-negative Acinetobacter, Pseudomonas and Enterobacter are becoming resistant to existing antibiotics at a much faster pace than their Gram-positive brethren. According to the CDC, methicillin-resistant Staphylococcus aureus (MRSA) kills approximately 11,000 people per year, more than emphysema, AIDS, Parkinson's disease or homicide.12 An additional 100,000 die from hospital-acquired infections, a vast majority of which are resistant to at least one type of antibiotic.13,14

Antibiotics are misused primarily through excessive use of prophylactic strategies for travelers, failure to prescribe correct dosages based on patient history and weight, failure to complete the prescribed dosage regardless of symptom disappearance and inappropriate prescription of antibiotics for conditions that do not require them. In addition to misuse, the antibiotic resistance crisis has largely come about because a vast majority of pharmaceutical companies have elected not to continue antibiotic research programs. Not surprisingly, without a continuous stream of new antibiotics, especially those with new core structures, increasing bacterial resistance has narrowed the available treatment options for existing bacterial infections. To highlight this deficiency: from 2008 to early 2013 only two systemic antibacterial agents were approved by the FDA, versus sixteen approvals from 1983-1987.14 Of at least 20 pharmaceutical companies that had large antibiotic research and development programs in 1990, only AstraZeneca and GlaxoSmithKline remain today.15,16

One could argue that with the progression of time novel antibiotic discoveries become more difficult, so it is understandable that fewer antibiotics would be developed now versus 20-30 years ago; while this reasoning has merit, a drop-off of 87.5% over a 25-year span cannot be explained by it alone. In fact there are three principal reasons why a vast majority of pharmaceutical companies no longer focus on producing antibiotics.

First, there is little profitability in creating a new antibiotic, largely because its application is a single use over a short period of time versus the blockbuster (billion-dollar-plus per year) drugs that focus on chronic health or lifestyle conditions (statins, erectile dysfunction, high blood pressure, etc.). In modern capitalism businesses prefer to sell a product that stabilizes a negative state/condition rather than cures it, because the stabilizing agent can be sold in perpetuity. However, a counterpoint is that in the current market individuals pay $100,000 for an extra four months of life when facing stage-4 cancer, but balk at paying $125 for antibiotics that will help treat an infection. In this vein consumer priorities do appear misguided, based on fear rather than rationality ($100,000 for four more months and then death versus $125 for potentially decades of more life). This pricing difference is neither rational nor data-driven; there is no cost analysis that supports cancer drug pricing. Rather, drug pricing in the U.S. is based on public perception and fear. People are terrified of cancer, but not terrified by a multi-antibiotic resistant form of TB, which has much higher virulence and a comparable fatality rate.
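
Putting the post's own figures side by side makes the asymmetry stark (the one-year survival credit below is a deliberately conservative assumption of mine, not a clinical estimate): $100,000 for four extra months works out to $100,000 / 4 = $25,000 per month of life gained, while a $125 antibiotic course credited with even a single additional year works out to roughly $125 / 12 ≈ $10 per month, a difference of more than three orders of magnitude.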

Second, drug research in general is expensive, especially now that most of the low-hanging fruit has been picked. The principal antibiotics first utilized in treatment (sulfonamides and penicillin) were derived from other species, making their isolation and manufacture into a human-applicable form straightforward. These compounds also provided effective core/lead compounds that could be slightly manipulated to produce new drugs with the potential to evade certain resistances bacteria had developed against the initial compound. However, the manipulation of those core compounds has run fairly dry, demanding more complexity, data mining and high-throughput screening (i.e. computer power) to develop the next generation of antibiotics. Of course this complexity demands additional resources and financial capital, increasing the cost of even attempting to create new antibiotics.

Third, some individuals believe that the regulatory system, especially in the United States, creates a significant obstacle for pharmaceutical companies. Critics argue that antibiotic approval by the FDA is confusing, unnecessarily complex and does not recognize how antibiotic development differs from other fields of drug development. One of the more problematic elements of regulation is the narrow “approved for use” window assigned to antibiotics. For example, when an anti-cholesterol drug is approved it is approved for broad use, not for high cholesterol in the heart versus in the lower leg; when an antibiotic is approved, it is approved to treat a specific single condition like pneumonia instead of the specific organism that the antibiotic targets.

Due to the volume-based sales strategy for antibiotics and the narrow “approved for use” FDA rules, under which about one of every 72 new antibiotic candidates is approved versus one of every 15 for other types of drugs, pharmaceutical companies are encouraged to develop broad-spectrum antibiotics, which have higher resistance development profiles, the exact opposite of what antibiotic treatment should strive for. Based on all of these elements, sales potential and regulation/research costs, some estimate that in the current market environment the production of a new antibiotic will actually cost a pharmaceutical company millions of dollars over the life cycle of that drug. Part of the reason for the additional regulatory steps demanded by the FDA could be what occurred after the approval of telithromycin (brand name Ketek), which involved the use of fraudulent data to speed its approval and resulted in numerous deaths and liver failures. Due to this incident the FDA has become more cautious when approving new antibiotics.

Passed in 2012, the GAIN Act attempted to address some of the concerns about the deficiency in antibiotic research and development by extending marketing protections for select “qualified infectious disease products” through increased exclusivity (12 years to 17 years, further delaying generics). There is also an additional six months of exclusivity for drugs for which a companion diagnostic test is approved. This additional exclusivity tacks on to any Hatch-Waxman, orphan drug or pediatric exclusivity. The legislation also produced an expedited review process, awarding appropriate antibiotic candidates priority review and fast-track status.

In addition the FDA is responsible for revising guidelines for future antibiotic clinical trials to accurately reflect the latest scientific and clinical knowledge. So far in response to GAIN the FDA has created an antibacterial drug development task force to study future guidelines for drug development, released a list of high-warning pathogens and reviewed and updated numerous antibiotic drug development guidelines.

The problem with the GAIN Act is that it only focuses on one of the two important elements of the bacterial resistance issue: creating new antibiotics. There are no new regulations regarding the stewardship and application of existing and new antibiotics during the course of their administration. Without new rules, new antibiotics will simply suffer the same fate as existing antibiotics and squander a percentage of their potential due to misuse.

Some may argue that FDA documents Final Guidance #209 and Draft Guidance #213 address this second element of usage and administration. However, voluntary recommendations are not effective rules and policy. #209 recommends that food/animal producers stop the use of medically important drugs (chiefly antibiotics) for growth promotion or feed efficiency purposes. It also suggests these producers seek veterinary oversight when any medically important drug is used for treatment or prevention. It stands to reason that, as mere recommendations, these actions will only be applied where profitable. Seeing that a number of individuals/groups tend to ignore the antibiotic resistance problem and even violate existing usage law in the first place, one cannot be confident that most producers will abide by these recommendations.

Draft Guidance #213 recommends that drug companies aid the application of Final Guidance #209 by removing any use suggestions on their drug labels pertaining to growth promotion. However, companies in the future will have the ability to list therapeutic uses, including disease prevention, on their labels. With antibiotic sales as a volume driven business it is difficult to see many companies voluntarily reducing the probability of sales of these products.

The FDA does acknowledge that making these guidelines voluntary is a significant handicap on their potential effectiveness, but claims that creating guidelines with significant penalties for non-compliance would require time-consuming formal hearings and legal processes that would more than likely involve numerous evidentiary hearings and drag out for years, partly because the agency would have to focus on one drug at a time. If this is the case then voluntary recommendations may be the best option for actually producing change.

In an attempt to combat the lack of new antibiotics, the IDSA has proposed a specialized regulatory mechanism exclusively for antibiotics called the Special Population Limited Medical Use (SPLMU) pathway, which the IDSA wanted included in GAIN but which was not. The chief aspect of this plan would change the approval process from two Phase III non-inferiority trials to smaller Phase III trials using narrowly defined test subjects (those suffering from a condition caused by a resistant bacterium). Despite the lack of safety information on these new antibiotic compounds, it is thought that because these patients are already out of options due to resistance, the benefits derived from taking the experimental compound will outweigh the risks. The thought is that through this direct treatment methodology safety and efficacy information can be derived for the experimental compound, hastening the determination of whether or not it would be suitable for large-scale release.

This new testing regimen has been compared to the regimen utilized for development and approval of drugs for “orphan” conditions. However, orphan drugs will always remain a niche treatment area, whereas these new antibiotics would eventually be used by millions of people with wide-ranging characteristics that could produce a variety of side effects. Such a situation forces one of two strategies: 1) conduct secondary testing after initial approval to identify serious potential side effects and address any of these outcomes appropriately, or 2) react to and track new side effects as they occur in broader public administration of the drug. Clearly the first strategy is more costly, for it involves traditional Phase III testing that is in a sense merely postponed until after approval, but the second strategy creates legitimate questions of legal liability for any life-altering side effects.

The two chief problems with this testing strategy are differentiating between drug-related and bacteria-related detrimental effects and off-label distribution. Phase III trials are tightly controlled, especially with regard to patient health, so sound conclusions can be drawn about the side effects associated with the tested drug and its usefulness in overall treatment. Direct treatment of very sick patients will create numerous problems in determining whether future negative conditions arise from the experimental compound, the bacterium or a combination of both.

Off-label usage is a continuous problem as well. Despite the belief that specialized labeling, instructions to physicians and limited marketing should provide a sufficient barrier, the problem is that off-label usage is already widespread with existing drugs, so there is little to prevent the same behavior with these new experimental drugs. Very strict record keeping would be required, including the IDSA-suggested post-market surveillance plan. Off-label application should result in heavy sanctions against the offending party and, if the sponsor company is responsible, immediate cessation of the trial.

Some individuals have suggested that future antibiotic research and development will depend on appropriate incentives to make it economically sustainable. Some of the suggested incentives involve tax credits, special grants and improved intellectual property protections. While enhancing intellectual property protections is suitable and appropriate, simply expecting the government to pay private pharmaceutical companies to develop antibiotics is not. What most individuals who clamor for direct financial incentives forget is that antibiotic development is poor in direct income but rich in indirect income. In short, a pharmaceutical company cannot sell its other drugs to a corpse; keeping the consumer alive is of utmost importance. Therefore, focusing on statins and other chronic no-cure (stasis) drugs over antibiotics is foolish because a dying consumer population would make even the most profitable blockbuster drug unprofitable. In addition, most of the large pharmaceutical companies are not scraping by on razor-thin profit margins, so expecting a government that is still running deficits to incentivize antibiotic production is a strategy that is feasible in the short term but foolish in the long term.

An interesting funding idea would require all pharmaceutical companies that fail to develop new antibiotics to pay a royalty to those that do, in an effort to support antibiotic research and development. This royalty would also allow successful companies to refrain from deploying these new antibiotics until necessary. Furthermore, the royalty could increase for novel mechanisms, which would also receive expedited approval from the FDA via a new safety and efficacy track.

In addition to developing new antibiotics, new usage policies need to be developed to ensure that these new antibiotics remain effective for as long as possible. There are numerous strategies that can and should be applied to limit the probability of resistance to new antibiotics. First, all nations must ban the use of antibiotics on non-human life despite the aforementioned reservations of the FDA. The shallow benefits of injecting mass quantities of antibiotics into healthy farm animals and the like are completely outweighed by the selective pressure these antibiotics provide for a more rapid development of resistance. The European Union has already done its part on this issue; it is time for the rest of the world to follow suit.

Second, diagnostic and metabolic biometrics should be developed to create an “antibiotic floor” that prevents the foolish and unnecessary prescription of antibiotics to patients who should recover in a suitable fashion through standard non-antibiotic therapeutic strategies (hydration, sleep, exercise, etc.). This floor is important because it is more common than it should be for individuals suffering from viral infections to demand antibiotics that will do nothing but either be disposed of or increase the resistance probability of the non-pathogenic bacteria residing in the patient. Third, sewage treatment methodologies should include an additional step or agent that increases the degradation rate of antibiotics in the waste stream or removes them entirely. Fourth, a standardized treatment methodology should be created to understand the timeline of dosage and treatment for how existing and new antibiotics attack a given infection. Establishing such a methodology will increase treatment rates and efficiencies and, working in combination with the second step above, should further limit over-prescription of antibiotics.

One longstanding idea regarding clinical trials in general is the creation of a centralized repository where clinical specimens from patients could be housed during and after clinical trials. These specimens would be used to reduce redundancy, increase efficiency in dosage determination (reducing costs) and allow effective comparisons between drugs and/or control groups. This repository idea has also been proposed by the National Cancer Institute to retain samples from cancer patients undergoing various treatments.

Overall, bacterial resistance is similar to global warming in that it will have a significant detrimental influence on individuals and society as a whole, yet the global public at large does not view either as an important problem that needs to be solved in the near future. Solving bacterial resistance will demand that pharmaceutical companies stop being penny smart and dollar stupid and realize that new antibiotics are essential to the profit maximization of their other blockbuster chronic-condition drugs. The FDA needs to realize that antibiotics require a specific pathway for approval and cannot simply be thrown into a generic approval pathway. Finally, countries and companies have to work together to maximize information exchange to limit the spread of new resistant bacterial outbreaks and to develop new treatments, both antibiotic and non-antibiotic, to limit the probability of future resistance development.

Citations –

1. Walsh, C. Antibiotics: actions, origins, resistance. ASM Press; Washington, D.C: 2003.

2. Mukhtar, T, and Wright, G. “Streptogramins, oxazolidinones, and other inhibitors of bacterial protein synthesis.” Chem Rev. 2005. 105:529–42.

3. Tomasz, A. “The mechanism of the irreversible antimicrobial effects of penicillins: how the beta-lactam antibiotics kill and lyse bacteria.” Annu Rev Microbiol. 1979. 33:113–37.

4. Kohanski, M, Dwyer, D, and Collins, J. “How antibiotics kill bacteria: from targets to networks.” Nat. Rev. Microbiol. 2010. June: 8(6):423-435.

5. Nissen, P, et al. “The structural basis of ribosome activity in peptide bond synthesis.” Science. 2000. 289:920–30.

6. Espeli, O, and Marians, K. “Untangling intracellular DNA topology.” Mol Microbiol 2004. 52:925–31.

7. Campbell, E, et al. “Structural mechanism for rifampicin inhibition of bacterial RNA polymerase.” Cell. 2001. 104:901–12.

8. Food and Drug Administration. FDA's Strategy on Antimicrobial Resistance - Questions and Answers. http://www.fda.gov/AnimalVeterinary/GuidanceComplianceEnforcement/GuidanceforIndustry/ucm216939.htm

9. Coates, A, and Hu, Y. “Novel approaches to developing new antibiotics for bacterial infections.” British Journal of Pharmacology. 2007. 152:1147-1154.

10. Livermore, D. “Bacterial resistance: origins, epidemiology, and impact.” Clinical Infectious Diseases. 2003. 36(Suppl 1):S11-23.

11. Infectious Diseases Society of America’s (IDSA) Statement Promoting Anti-Infective Development and Antimicrobial Stewardship through the U.S. Food and Drug Administration Prescription Drug User Fee Act (PDUFA) Reauthorization Before the House Committee on Energy and Commerce Subcommittee on Health. March 8, 2012.

12. U.S. Department of Health and Human Services - Centers for Disease Control and Prevention. “Antibiotic Resistance Threats in the United States, 2013.” 2013.

13. Eisenstein, B, and Hermsen, E. “Resistant infections: a tragic irony in modern medicine.” APUA Clinical Newsletter. 2012. 30(1):11-12.

14. Centers for Disease Control and Prevention. “Estimating health care-associated infections and deaths in U.S. hospitals.” Public Health Rep. 2002. 122:160-166.

15. Choe, J. “Fewer drugs, more superbugs: strategies to reverse the problem.” APUA Clinical Newsletter. 2012. 30(1):16-19.

16. McArdle, M. “Resistance is futile.” The Atlantic. Oct. 2011. http://www.theatlantic.com/magazine/archive/2011/10/resistance-is-futile/8647/

Saturday, November 30, 2013

Is War Inevitable?

The question of whether or not war among humans is inescapable is a troubling one, for it goes to the core of who we are as a supposedly logical and rational species. Some argue that conflict is inherent in our makeup, with deep biological roots that cannot be overcome. However, others believe that no creature with any level of rationality is destined to yield to its supposed aggressive nature. So who is right, and what elements contribute to such a reality?

The idea that humans have a predisposition towards violence, either intraspecies or interspecies, has been debated for decades. Both sides attempt to utilize the behaviors of other primates as empirical support for their positions, but different primate species demonstrate different levels of innate violence even within the same genus: rhesus macaques are very violent, while stump-tailed macaques are generally non-violent. Overall, the study of primate behavior suggests that environment, rather than genetics, shapes behavior almost exclusively, for even when rhesus macaques are transplanted into a stump-tailed troop the rhesus change and adapt, reducing their violent behavior.1,2

Based on this information there appear to be two driving keys to conflict: survival and the expression of dominance. Interestingly enough, both of these keys involve the acquisition of resources. Initially one could state that all violence and aggression boils down to resource collection. Looking at conflicts over history, rarely, if ever, can one identify a conflict where resources were not the driving force initiating it, regardless of whether the war was one of conquest or independence. The resource motivation for wars of conquest is self-explanatory, whereas the motivation in wars of independence is to “unshackle” resource access controlled by the ruling parties so one can control his or her own future.

If an individual has enough resources to survive, the need for violence is reduced because the weighted necessity of additional resources is reduced. For example, what is the probability of acceptance if someone with 10,000 dollars in the bank is offered 10 dollars to do 100 pushups versus someone who has 100 dollars in the bank? Although the absolute terms of the deal are the same for both individuals, the overall weight is different, for the 10 dollars is more valuable to the second individual than the first; therefore the second individual is more likely to accept the arrangement.
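
To make that weighting explicit with the same numbers: the offered 10 dollars is 10 / 10,000 = 0.1% of the first person's bank balance but 10 / 100 = 10% of the second person's, a hundred-fold difference in relative value even though the absolute payoff and the required effort are identical.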

The same can be said for displays of aggression and conflict engagement. When an individual or group has a lot of resources the costs of war seem greater than when that entity has few resources, even if the costs are the same. This psychology is especially true when resources are scarce to the point where survival is at stake, because the aggressor weighs three options: 1) a low-quality life leading to a hastened death due to lack of resources; 2) a quicker death due to the conflict; 3) a longer life due to new resources acquired through victory. In all of these scenarios death is the worst outcome, but conflict does create a pathway to more resources and continued survival where the status quo does not.

This cost element can also be regarded as only a secondary element in initiating a war. While war is costly, especially in the most extreme fashion (the loss of life), and most cost-benefit analyses reject war, most individuals do not first run a cost-benefit analysis, but instead ask whether or not the war will be successful. What is the point of conducting a war, no matter what the benefits, if one cannot win? A low probability of victory is what stops a vast majority of individuals/groups from engaging in conflict; however, it is also what allows inequality and injustice to persist.

Clearly resource allocation and shortage heavily influence aggressive tendencies in creatures, even those of higher intelligence, because if one has an abundance of resources then violent action offers little benefit; the gains are mitigated and one can solve problems with the available resources. While this reasoning is logical, it still raises the question of why different groups initiate aggressive actions against other parties even when they have enough resources to survive. Even though resources play the major role in aggressive and violent behavior, there are clearly other factors that initiate violence. What are these additional elements?

The secondary elements that catalyze aggressive behavior appear to be pride/ego and freedom. Even if a creature has adequate resources to survive, if it lacks the freedom to use those resources or acquire more, it may resort to aggression to change the situation. One could acknowledge that such action is also influenced by a lack of resources, but it is not isolated to that situation. Pride/ego induces aggression in one of two ways: 1) when one feels wronged on some level by another, which then demands some level of consequence against the offending party; 2) when one wants to prove that he/she is better than other individuals.

Further exploring the human psychological aspect of conflict when resources are not the sole driving force, ego is an interesting element because it is not instinctive; instead it is brought on by cultural or societal triggers. For example, everyone at some level wants to be respected, but not everyone feels the need to consider him/herself better than someone else. The expansion of ego drives the desire to compare one another in an effort to judge self-worth, which then drives conflict. The acquisition of resources, chiefly money, is the principal measuring factor used by humans when distinguishing superiority between individuals. Remove this relative judgment standard in favor of a self-absolute standard and it stands to reason that conflict would decrease. Instead of judging each other on a comparative basis, one would only judge one's possessed resources against a self-imposed standard, ignoring all others. The need to be regarded as better than another human drives people, even when equal, to seek more, and war/conflict is one way to acquire that more. In one respect the problem is not that aggression or violence is in human nature, but rather that it is in our complexity, consciousness and society.

Therein lies the chief problem when attempting to eliminate conflict among humans: resource allocation is a fixable problem, but it is made much more complicated by this personal ego attribute. For example, most societies use capitalism as their economic system, a system that demands competition and inequality between the members of that society due to the necessity of assigning winners and losers. Therefore, in this system, even if resources are abundant enough that everyone could receive what is necessary to survive, individuals can be (and frequently are) outcompeted for those resources by those who already have significant resources. This enhanced competition factor is largely the reason why so much inequality exists in the first place.

Not surprisingly, capitalism has a notorious feedback system in which those who have resources increase their probability of getting more resources while those who lack resources see their probability of acquiring resources in the future decrease. Sadly, while some continue to believe that capitalism is principally a meritocracy, such a characterization can no longer be accurately applied in modern society, for numerous examples exist where smart and determined individuals fail to acquire suitable resources because those who already have resources force the use of special connections and secret handshakes to enhance resource acquisition probabilities. Unfortunately, as long as capitalism retains its competitive characteristic and individuals are free to compete for all resources regardless of their personal resource standing (i.e. no resource acquisition ceiling), there is little reason to expect a lasting and significant reduction in conflict within a capitalistic system.

While human psychology provides a significant barrier to reducing conflict, it can also provide hope through the same mechanisms. Interestingly, studies find that people in general are more cooperative than economic rationality within a capitalistic system would predict,3-6 creating meaningful questions within the study of reciprocity. One theory to explain this gap is that individuals utilize altruism as a signal to indicate their quality.7 Such a signal is meaningful because the demonstration of the altruistic action has a cost to the actor, and this cost is viewed as creating a sense of sincerity. This altruistic action is thought to bridge the gap between strangers who cannot rely on friendships to explain a lack of point-for-point reciprocity.

One can also use the application of altruism to shift the relative nature of superiority that capitalism promotes. Instead of individuals using their acquired wealth and resources as a signal of superiority, they could use their ability to give to others as that signal. Such a mindset has been the general hope of some, but it clearly has not come to pass, possibly because it has only been passively applied by society rather than actively. Unfortunately, while the complete elimination of relative self-worth would be ideal, it appears that transferring how relative self-worth is evaluated is the best one can expect.

Some may argue that threats of war between major powers have declined, demonstrating that aggression is not purely biological. The problem is that these declines can be attributed not to a change in biology, or even one in societal behavior, but to a change in weaponry. The advancement of weaponry used to wage war comes with a conscious understanding that one must use these weapons to win, yet their use reduces the resource gains from war due to excessive levels of collateral damage. Interestingly enough these new weapons make it less likely for a major power to conduct war, but more likely for a single individual or small group to do so. Thus the advancement of weaponry may actually shift the nature of large-scale aggression and conflict. If this shift actually occurs it will displace the long-standing belief that when the benefits of war outweigh the costs, war occurs, and when the costs of war outweigh the benefits, war does not occur. With pride continuing to act as a catalyst the perceived costs of war become almost microscopic, and if we turn that corner as a species, aggression, conflict and war become inevitable.

Overall, ending general conflict among humans will be incredibly difficult because of the competitive nature of capitalism, how that nature interacts with human psychology and social relationships, and the fact that war has become a formal part of human culture. While the frequency and ferocity of war conducted by human ancestors can be debated, it cannot be debated that in recent history war has been conducted constantly on both large and small scales. This empirical application of the process of war makes it more difficult to end because individuals see winners as well as losers emerge from the conflict. It would be easier to argue against the benefits of war if humans had never engaged in it in the first place. However, despite the realities of ego, experience and survival it is possible for humans to throw off the "yoke" of conflict, but it will demand a perspective change and an active effort of utilitarian sacrifice.

==
Citations

1. Kummer, Hans. In quest of the sacred baboon: A scientist's journey. Princeton University Press. 1997.

2. Chaffin, C, Friedlen, K, and De Waal, F. “Dominance style of Japanese macaques compared with rhesus and stumptail macaques.” American Journal of Primatology. 1995. 35(2):103-116.

3. Colman, A. M. “Cooperation, psychological game theory, and limitations of rationality in social interaction.” Behavioral and Brain Sciences. 2003. 26:139–198.

4. Fehr, E, and Fischbacher, U. “The nature of human altruism.” Nature. 2003. 425:785–791.

5. Ostrom, E. A. “Behavioral approach to the rational choice theory of collective action.” American Political Science Review. 1998. 92:1–22.

6. Palameta, B, and Brown, W. “Human cooperation is more than by-product mutualism.” Animal Behaviour. 1999. 57:F1–F3.

7. Roberts, G. "Cooperation through interdependence." Animal Behaviour. 2005. 70:901–908.

Wednesday, November 20, 2013

Advancement of Information and Quality Ideas in Society

Although it is the primary responsibility of those that seek information to properly acquire and utilize it by filtering out bias and misinformation, initial information providers can make these tasks easier and increase the efficiency of information collection by presenting information in a straightforward and organized manner. In general there are three categories of information presentation based on the intent and motivation of the provider: review, problem solving or debate. Each presentation method has its own strengths and weaknesses that need to be explored.

Information in review form is the simplest means of transmission because there is no motivator; the information is simply presented without opinion. In this style the goal of the presenter is to ensure as much accurate information about the given topic as possible is made available to whoever wishes to acquire it. There are no judgments made about the information presented, outside of its factual accuracy, and no significant attempt to apply the information toward any type of solution. There are three key points to maximizing the value of distributing this type of information. First, define the exact context of the subject matter for presentation. Second, create a hierarchical outline of what information is required to, or would aid in, understanding the presented information and what information could be better understood after understanding the presented information. Third, the easiest step to say and most difficult to do, present all of the relevant and accurate information on the defined subject.

Currently the online encyclopedia Wikipedia does an excellent job of covering step one and a good portion of step three, largely because of the efforts of the entire global community who continue to add information and continually check its accuracy. Wikipedia has an advantage in raw information availability over print material, which should be taken advantage of when supplying information without application. This advantage allows Wikipedia users to continue to add information without having to subtract information that may be less important but still relevant.

Thus the information library regarding a particular subject continually grows, increasing the probability that individuals who seek out this information will find few gaps in the material pertinent to the overall subject. Two elements that Wikipedia could incorporate to improve its already significantly useful information distribution system would be, first, to integrate step two by indicating what specific knowledge would enhance a user's ability to understand the current topic, and second, to add a more specific categorization of what information is essential to understanding the basics of the topic at hand and what information is supplemental, along with what exactly that supplemental material teaches.

Problem solving is a motivation that strives to solve a problem by presenting all of the necessary information and methodology to solve it. The goal of problem solving is first to find at least one significant and workable solution and then to identify the optimal solution among all solutions, if multiple ones are found. When presenting information with this motivation, the first and third steps utilized in the review motivation are presented to act as a base for understanding the solutions presented later. Next the provider presents a number of different solutions to the problem, highlighting their strengths and weaknesses. Unlike debate, none of these solutions are presented with bias or any level of emotional favoritism, but instead just "cold hard" facts. Defining strengths and weaknesses for certain solutions can be difficult because responses to certain problems may rely on anticipating how outside parties will react, which, despite what a number of individuals want to believe, is frequently unknown. In such cases it is best to simply present what would be the best course of action for a particular party based on how the presenter interprets that party's behavior. Due to the lack of bias it is important for the presenter to clearly define the boundary conditions that will be applied to the presented solutions.

Solving problems can also be difficult because creating elaborate and specific solutions requires significant detail. For smaller problems that only affect a single individual or a very small community these solutions can be crafted by a single person; however, when addressing large societal problems it is very difficult to expect a single person to devise a detailed solution methodology that will be successful, both because of factors that may evade said person due to a lack of experience and knowledge and because of the pure labor intensity of documenting the steps themselves.

Therefore, to solve the larger, much more meaningful problems successfully, numerous parties need to study the issue and contribute solution pieces which can then be incorporated into a grand solution. Unfortunately this mindset for creating effective solutions has been abandoned in most sectors, especially the political sector, in recent years in favor of entire solutions being created by individuals who derive their experiences from only a small collection of knowledge and life events. Not surprisingly, solutions born from these individuals are ineffective and a waste of time and resources, yet because they speak to a specific group of individuals, egotism allows them to gain undeserved traction.

Currently there is no Wikipedia-like database that focuses on solving problems using the aforementioned methodology, but such a database could and should be created. Overall the creation of such a database would be an extremely useful tool when making decisions and drawing conclusions on complicated matters such as foreign or domestic policy, economics and technology. Access to such a database of potential solutions for various concerns and problems would give the citizenry of a given country more leverage to pressure their government if they believe the government is not attempting to solve a given problem with the best solution. Knowledge is indeed power and it is time that people give themselves the means to acquire and wield that power in a more effective manner.

Debate is obviously driven by the intent to convince others that the viewpoint of the presenter regarding information or a solution is correct. This method can be useful because those who utilize it are typically very passionate about the particular subject at hand and can potentially be receptive to new information about the subject. However, it can also be dangerous and counter-productive, for the probability that the presented information contains unnecessary bias is much higher than with the review or problem solving motivations. Also the presenter may be invested in his/her particular viewpoint to an extent where he/she will not listen to any opposing evidence and/or may even be willing to carry out an inferior solution just to get his/her way, even if it creates negative consequences for others. Unfortunately these negatives have come to cast a large pall over the positives of the debate motivation in modern times.

Since the goal of debate is to convince listening parties that a particular viewpoint is the correct one, the presenter should focus heavily on highlighting key issues that demonstrate the strength of that position relative to the applied realistic boundary conditions. Significant weaknesses should also be identified and their flaws mitigated through the use of facts and logic regarding the boundary conditions that will exist when the solution is applied and during the course of its application. One of the worst things an individual can do when questioned about a weakness in an argument is to utilize cognitive dissonance or something similar to avoid acknowledging it, because it is impossible to improve an idea if one ignores its legitimate problems. The point of a debate is not to "win" the discussion, but to produce the best idea that matches an individual's core beliefs; being stubborn and clinging to a clearly flawed idea disrupts the very point of a debate in the first place and cheapens the beliefs of those who retain these flawed ideas.

Overall more details can certainly be provided in the discussion of these different motivations behind information distribution, but the purpose of this post was simply to introduce the concept and argue for a more public venue of organization for the problem solving perspective. Current organization largely involves various scattered message boards or blogs (like this one) that offer little efficiency for those new to the topic at hand and can become environments of groupthink or needless conflict. Society is facing bigger problems as it continues to grow, both from itself (wealth inequality and food distribution inefficiencies, to name a few) and from the environment (human-induced global warming). These problems demand a concerted effort to develop effective solutions that cannot be solely assigned to academia. Therefore, it is important for the public to actively involve themselves in the development of these solutions, and one of the most important steps is creating a public arena that is able to sort through the bias of arguments and apply the contents to reviews and problem solving.

Tuesday, October 29, 2013

Errors in Medical Treatment

Medical care is simple in theory, but complex in execution due to additional difficulty factors associated with human behavior, thorough record keeping and bureaucratic oversight that go beyond diagnosis and treatment. Therefore, while most lay individuals were surprised when the Institute of Medicine (IOM) estimated an annual death total between 44,000 and 98,000 stemming from medical errors, most individuals with experience in the field were saddened, but not surprised.1 After the IOM issued the estimate in "To Err is Human: Building a Safer Health System" the medical community was initially galvanized, ready to act to correct these unnecessary deaths caused by miscommunication, inefficiency and/or neglect.

Early results from this resolve were positive, such as a reduction in deaths from accidental injections of potassium chloride, lower rates of hospital-acquired non-MRSA infections and fewer warfarin-derived complications, in addition to the Institute for Healthcare Improvement's (IHI) 100,000 Lives Campaign and companion 5 Million Lives Campaign.2-5 However, a number of individuals believe that these successes are marginal because other negligent hospital/physician actions have increased, in addition to those who believe that the initial IOM figure was an underestimate.6-9 Overall, within this environment where various groups "duke it out" over whether or not medical care has become safer from a patient perspective over the last two decades, the real questions are how safety can be improved and why these ideas are not being implemented as effectively as theory suggests.

One of the chief concerns with accurately identifying negative instances or preventable adverse events (PAEs) at hospitals and/or under physician care is the issue of varying conclusions amid a feedback environment. Basically, regardless of what side one takes on the issue of PAEs and their prevalence, that individual can find valid evidence to support his/her position. Therefore, individuals who want to believe that PAEs have increased will principally cite evidence that suggests this reality whereas individuals who believe that PAEs have decreased will principally cite evidence of such a reduction. This researcher behavior creates a feedback environment where neither side will gain much ground because each side is simply reinforcing its own bias. Note that according to the IOM a PAE is defined as an "unintended harm to the patient by act of commission or omission rather than by underlying disease or condition of the patient".

Part of the reason such contrasting realities can exist is that calculated PAEs can either go up or down depending on the type of boundary conditions and assumptions an investigator applies. For example, what element is important when considering PAEs in hospital settings? Should the goal be to lessen the severity of events (reduce the probability of death), to lessen the total number of events, to record every event that occurs, or to determine how much a patient's health must be compromised before an event qualifies as a PAE? Without clearly identifying the expectations of the study and correcting for the goals of the hospital with relation to PAEs, little useful knowledge can be gleaned from these numerous studies about PAEs in hospitals.
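To make the boundary-condition point concrete, here is a minimal sketch (with invented events, an invented 0-10 severity scale and invented thresholds, none drawn from an actual study) showing how the same set of hospital events produces different PAE counts depending on which definition is applied.

```python
# Hypothetical events: (harm severity on a 0-10 scale, was the harm caused by care?)
events = [
    (0, True),   # near-miss caught before reaching the patient
    (2, True),   # minor, temporary harm from a late medication
    (6, True),   # serious harm from a wrong dosage
    (3, False),  # deterioration caused by the underlying disease itself
    (9, True),   # life-threatening harm from a surgical mistake
]

def count_paes(events, min_severity, include_near_misses=False):
    """Count events qualifying as PAEs under one set of boundary conditions."""
    count = 0
    for severity, caused_by_care in events:
        if not caused_by_care:
            continue  # the IOM definition excludes harm from the underlying disease
        if severity == 0 and not include_near_misses:
            continue  # near-misses only count under a broad definition
        if severity >= min_severity:
            count += 1
    return count

# Broad definition: any care-caused event, near-misses included.
print(count_paes(events, min_severity=0, include_near_misses=True))  # 4
# Narrow definition: only care-caused events with serious harm.
print(count_paes(events, min_severity=5))                            # 2
```

Under the broad definition the sketch counts four PAEs while the narrow definition counts only two, which is exactly the sort of divergence that allows opposing camps to each cite "valid" evidence for opposite conclusions.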

Another issue is the separation of responsibility between physician and patient. Rarely does research that concludes an increase in PAEs differentiate fault for particular PAEs between physicians and patients. For these researchers the entirety of the fault for any PAE is placed upon the physician. This mindset is an interesting one and seems to apply a negative bias against physicians (NBAP), which can be highlighted by the following analogy.

Currently in 2013 there are numerous reality television shows that involve a specialist of sorts intervening in a failing business and changing various elements in an attempt to make it a success. Two particular "hosts" have dramatically different mindsets when it comes to the structural and operational methodology of business. In the show "Bar Rescue" Jon Taffer believes that while management leadership, direction and training are important, individuals have pride, self-respect, honor and intelligence, and even in situations where training and leadership are lacking they should be expected to act in a certain manner and perform their job with some level of competency, honesty and dignity. In contrast "Restaurant Stakeout" host Willie Degel seems to believe the opposite, that management and training are the foundation of praise or blame regarding job performance. Basically Mr. Degel seems to believe that employees are sheep without any ethics, ambition or pride, thus if the proper training is not provided it should be expected that these individuals will fail miserably at their jobs.

The mindset of Mr. Degel mimics the one possessed by most NBAP researchers, where the physician is responsible for all mistakes regardless of the behavior or actions of the patient. For example, suppose a patient suffers from an episode of respiratory distress during aerobic activity. After a visit to the hospital the resultant tests fail to find any obvious underlying cause for the episode. Clearly it is not entirely the responsibility of the physician to inform the patient not to engage in aerobic activities in the near future. The patient has to have some understanding that without a clear diagnosis future aerobic activities must be undertaken with significant caution and care, otherwise an even more detrimental condition may arise. If the patient forgoes this common-sense understanding and suffers a more detrimental condition, it should not be regarded as the fault of the physician. Similar blame elements would involve patients not telling physicians about certain allergies or other medications they use, leading to complications brought on by negative reactions between the old medication/allergy and the new medication.

High quality medical care is a partnership between the physician, the hospital and the patient; for those who want to reduce/eliminate medical errors it is necessary to differentiate between errors caused by patients and those caused by physicians/hospitals. This differentiation must not be sought with the intention of assigning blame, but of properly assigning responsibilities to all parties to maximize the probability of eliminating errors. Individuals who refuse to identify events where patient behavior increased the probability of an eventual PAE are doing a disservice to the goal of eliminating medical errors.

Even if individuals differentiate between patient and physician/hospital fault when attempting to determine the cause behind certain PAEs, it is irrational to expect all PAEs to be eliminated due to the simple fact that mistakes will always happen. The general mindset should be to limit the severity of PAEs as much as possible. For example, if the PAE magnitude is converted from 1000 harms with 100 deaths to 1043 harms with 50 deaths, such a change can be considered a short-term victory with more improvement desired in the future; a rough severity-weighted comparison of the two profiles is sketched below.
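The sketch below scores the two profiles by a severity-weighted burden rather than by raw event counts; the choice of treating a death as 20 times worse than a non-fatal harm is an arbitrary assumption for illustration, not a clinical standard.

```python
def weighted_burden(harms, deaths, death_weight=20):
    """Score a PAE profile where a death counts 'death_weight' times worse than a non-fatal harm."""
    non_fatal = harms - deaths
    return non_fatal + death_weight * deaths

before = weighted_burden(harms=1000, deaths=100)  # 900 + 20*100 = 2900
after = weighted_burden(harms=1043, deaths=50)    # 993 + 20*50  = 1993

print(before, after, after < before)  # 2900 1993 True
```

Even though the total number of harms rises from 1000 to 1043, the weighted burden falls from 2900 to 1993 because deaths were halved, which is why the change can reasonably be called a short-term victory.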

Some investigators believe that there are numerous unreported PAEs and justify this belief through the discrepancy between PAEs reported in outpatient surveys and those in official medical reports.10 Weissman and colleagues found that 6 to 12 months after their discharge, patients could recall 3 times as many serious PAEs as were reflected in their medical records. Unfortunately there are some concerns with this study. First, there is no guarantee that patients and physicians/hospitals share a similar definition of a PAE, which could result in patients overestimating the number of PAEs during their treatment. For example, failure to give a certain medication at a certain time could technically be regarded as a medical error, but if said medication is still given within an acceptable time window, no harm should befall the patient. So while such an event could be regarded as an error, it is not a PAE.

Second, patients may regard elements or instances of discomfort through their own personal lens as a PAE. For example, a patient may want a glass of water, but due to nurse/physician preoccupation with other more pressing tasks this individual waits a long time before getting the water and possibly develops a slight case of dehydration while waiting. For the patient such an event could easily be a PAE, but from the perspective of the hospital such an event is irrelevant. Third, patients are not aware of a significant amount of "behind the scenes" actions relative to their treatment, and thus have incomplete information regarding overall treatments and may mischaracterize certain outcomes as PAEs.

Most individuals classify PAEs into four separate categories:

- Diagnostic Errors
- Communication Errors
- Omission Errors
- Commission Errors

Diagnostic errors are rather self-explanatory, but are also the trickiest of the four categories when attempting to distinguish between honest and negligent mistakes. If one could have all possible information available through various diagnostic tests (MRI, CT, PET, etc.), a full and accurate medical history and a patient with a photographic memory who did not withhold information, then it could be determined that all diagnostic errors were the fault of the physician, either due to lack of knowledge or negligence. Unfortunately such environments of complete information are never available; therefore one must judge diagnostic errors within effective probability matrices. For example, if patient A presents with a list of symptoms that suggest a 74.1% probability that condition A is the cause, which has a low probability of significant damage, versus a 4.6% probability that condition B is the cause, which has a high probability of significant health damage, and the physician treats for A instead of B, even if B turns out to be correct the physician should not be viewed as at fault.
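The reasoning in that example boils down to comparing expected harm, sketched minimally below; the 74.1% and 4.6% probabilities come from the example above, while the 0-100 harm scores are invented placeholders.

```python
# Candidate diagnoses: (probability it is the true cause, harm if it goes untreated on a 0-100 scale)
candidates = {
    "condition A": (0.741, 10),  # likely cause, low probability of significant damage
    "condition B": (0.046, 90),  # unlikely cause, high probability of significant damage
}

def expected_untreated_harm(probability, harm):
    """Expected harm contributed by a condition if the physician does not treat for it."""
    return probability * harm

for name, (probability, harm) in candidates.items():
    print(name, round(expected_untreated_harm(probability, harm), 2))
# condition A 7.41  -> leaving A untreated carries the larger expected harm
# condition B 4.14  -> leaving B untreated carries the smaller expected harm
```

Treating condition A first minimizes the expected harm left unaddressed, so even if condition B later proves to be the true cause, the original decision remains defensible within this probability matrix.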

Communication errors are commonly defined as misinformation between two interacting parties resulting in a negative health occurrence. Normally communication errors stem from incomplete or inaccurate patient information due to illegible writing leading to confusion between drug prescriptions, disconnected or fragmented reporting systems and/or inconsistent or repetitive care due to numerous physician interactions. Lack of patient background and disclosure can also be viewed as a communication error originating with the patient. Most believe that the adoption of electronic health records (EHRs) will reduce a significant percentage of communication errors by creating a redundant system of error checking. More than likely this assertion is correct; however, implementation and proper use of EHRs in hospitals nationwide is still problematic and slow.

Commission errors are the easiest errors to detect after the fact, for they involve a physician or hospital taking a form of negligent action: either the wrong methodological action or a right action executed improperly that results in harm to the patient. The most commonly publicized commission errors seem to be surgical mistakes where foreign objects are left in patients, incorrect procedures are carried out (amputating the wrong limb is a noteworthy one) or a part of the body is accidentally damaged (nicking a blood vessel when conducting a heart bypass, etc.). Fortunately commission errors can be managed because most occur due to carelessness born not from physician incompetence, but from other environmental factors like time pressures, fatigue, short-term psychological stress and/or too many patients.

Omission errors are the mirror of commission errors: an obvious action was required for treatment, but was not executed. Similar to commission errors, omission errors can be easily detected after looking at properly compiled medical records. The most common omission errors involve the lack of drug administration to augment a specific treatment, like alpha- or beta-blockers for certain heart procedures.

One of the biggest issues regarding medical errors is physician culture. In the past the physician was largely regarded as a position of great respect held by learned individuals. Compound this element with the fact that most physicians hyperbolize the capacity of their position as "commanding" life and death, and the ego and stress related to being "perfect" are further augmented. When a physician makes a mistake, those who are not complete assholes or in denial take direct damage to their self-image and may attempt to hide the error behind a wall of cognitive dissonance in order to maintain the "perfect" persona. Public acknowledgment of the error limits the effectiveness of this cognitive dissonance strategy, thus could further increase the desire to hide the error. Therefore, for a number of physicians lawsuits may be of secondary concern versus potential psychological damage. This shame factor would probably explain most of the "hidden" errors that many NBAP researchers claim exist.

Unfortunately modern times have only applied more pressure to this "perfect" persona, especially due to the rise of the Internet. Now vast numbers of patients "research" potential causes for their medical conditions on broad kitchen-sink websites like WebMD and make suggestions or even challenge diagnostic conclusions made by physicians, damaging their authoritative specialization. The most problematic element of introducing this "research" into the diagnostic equation is that most patients will credit a much higher than rational occurrence probability to lethal or seriously detrimental conditions. Worse yet, this unsubstantiated fear will be utilized as blame and legal ammunition in the extremely small probability that the cause of said condition was actually a low probability condition and the physician did not prescribe every possible test, regardless of cost and stress to the patient, to deduce it. Such a probabilistically irrational environment adds even more pressure to physicians.

Now some cynics may simply state that physicians need to drop the narcissistic act and man/woman up. Outside of the simple retort that such a "solution" can be levied against numerous problems in various occupations, the problem with this belief is that physician training encourages this narcissistic attitude out of necessity, with a little ego of status mixed in. With regards to necessity, what patient wants an indecisive physician? Physicians need to exhibit confidence, even on the borderline of arrogance (especially surgeons), to assure patients that they have a high probability of certainty regarding the type of ailment and the treatment protocol. Of course the ego element comes from the notoriety of the position in society, despite its erosion in recent decades in part due to the Internet. Overall it can be argued that change is needed, but it would be more useful for cynics to offer a psychological means to drive that change rather than simply state that change should happen.

Another concern is the perception of medical mistakes. As stated above, due to authority and prestige issues most physicians, especially specialists, emerge from medical school as perfectionists. This characterization creates a black-and-white mindset where physicians are either perfect or terrible, which is reinforced by morbidity and mortality review boards. A single mistake, whether understandable or not, can result in a "killer", "007" or some other "clever" characterization for a physician. Most NBAP researchers say they could forgive an "honest" mistake, but a vast majority of those who say this have never been on the receiving end of these types of mistakes, so given their inherent bias against physicians, would this proclamation actually hold true when tested?

Unfortunately some have attempted to adjust these review boards by making them more team centered. However, this strategy seems to be in error because while the motto "we win as a team, we lose as a team" may work for sports, it does not work in medicine. Such a strategy creates an unrealistic micromanagement mentality for the "leaders" of these medical teams due to the level of work that is required. It also dilutes the responsibility for the error, which could reduce the shame individually associated with it, but will also reduce the probability that the individual learns from and corrects the behavior that created the error in the first place. Instead the boards should be teaching environments where blame is neither shamed nor avoided, but identified and corrected. Pursuant to a significant error, individual actions within the appropriate acting team should be identified to determine where the error occurred, what type of error it was and why it occurred. Medical personnel should also be penalized through a clear, transparent evaluation system, which would eventually result in job termination if enough mistakes are made.

One of the major concerns with physician training is that some believe the process of becoming a physician erodes empathy from medical students even though they enter medical school with plenty. Part of the rationale is that the third and fourth years, which are clinical, are disjointed and emasculating due to constant testing and relocation, fostering an attitude of being "just another cog in the machine". Another aspect of training that enhances this mindset is its monotonous nature, with each sleep-deprived day bleeding into the next.

However, empathy is a two-way street. While a number of physicians have problems properly exhibiting it, if they have it at all, patients also fail to appreciate what physicians have to deal with in an average day. From the patient's mindset he/she only cares about getting attention and an effective analysis and conclusion regarding diagnosis and treatment. He/she does not care that the physician has other patients, who expect the same, and other tasks that need to be completed beyond his/her medical treatment. Certainly one can somewhat understand this mindset, as patient attitude can range from frustration due to being sick to panic due to the potential of having a serious medical condition, but empathizing with this attitude is no excuse for the hypocritical reasoning of criticizing lack of empathy in physicians while ignoring it in patients.

One way to potentially manage the empathy erosion that occurs in medical school is to break up the process itself. In most medical school training the first two years involve classroom and lab study, with the second two years involving clinical training leading into residency. This process should be adjusted. In a new system the first year would involve basic medical training. Remember that almost all medical students are "pre-med", so they have rudimentary medical knowledge about biology and chemistry; thus this first year should not be review, but should involve new medical training and procedure information. In the second year students would enter a clinical environment to apply their knowledge and get a feel for their occupational requirements. The third year would involve returning to the classroom to acquire more advanced medical techniques, organization skills and logistics, and psychological coping techniques. Introducing these advanced techniques after exposure to a clinical environment will increase the probability that medical students appreciate their importance and can better incorporate them into their behavior. The final year would involve selecting a specialization (or general practice) and tailoring their clinical experience towards this specialization to lead into residency.

A further concern with the education structure for physicians is the lack of a well-integrated and effective occupational education system. Basically after a physician graduates from medical school and residency it is "assumed" that the physician possesses all requisite knowledge for the remainder of his/her medical career. Of course such an assumption is wrong, especially when dealing with technology, so why are there not small periods of time where physicians are required to update their skill sets (similar to a sabbatical)? Some would argue that due to general physician shortages (especially at the general practitioner level) there is little ability to develop effective downtime to acquire new skills or update existing ones. Regrettably this is a relevant retort, and when one adds the concern that an additional 35 to 50 million previously uninsured individuals, at least 15 million of whom have chronic and/or pre-existing conditions, have now acquired or will soon acquire health insurance, finding time for physicians to enhance their skills becomes even more difficult. Unfortunately there may not be an effective solution to this issue beyond simply creating more physicians, and hospitals may have to augment physicians with hired secondary help for specific technological skills.

Numerous suggestions have been made in the past through various communication mediums regarding how to reduce the probability of medical errors, and below will be no exception. Clearly one of the most obvious strategies is to create checklists for each type of medical procedure and diagnostic exam ensuring that proper methodologies are followed, which will heavily reduce the probability of any stupid careless mistakes. An additional easily applicable safety measure is redundant question asking of patients among physicians, nurses and pharmacists to confirm unanimous agreement regarding action among relevant parties. Clearly the introduction of electronics, from physician order entry (POE) for prescriptions to EHRs, will reduce transcription and medication assignment, cross-reaction and dosage errors. Finally, creating better ways for physicians to manage fatigue through efficient on-call rotations and nutrition/diet recommendations should reduce errors.
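As a small illustration of the checklist strategy, here is a minimal sketch of a verifier that refuses sign-off until every required step has been confirmed; the checklist items and the procedure they describe are hypothetical, not drawn from any actual hospital protocol.

```python
# Hypothetical pre-procedure checklist; real checklists would be procedure-specific.
PROCEDURE_CHECKLIST = [
    "confirm patient identity",
    "verify procedure and surgical site",
    "confirm allergies and current medications",
    "apply full sterile barrier precautions",
    "count instruments and sponges before closing",
]

def missing_steps(checklist, confirmed):
    """Return every required step that has not been confirmed as completed."""
    confirmed = set(confirmed)
    return [step for step in checklist if step not in confirmed]

confirmed_steps = [
    "confirm patient identity",
    "verify procedure and surgical site",
    "apply full sterile barrier precautions",
]

gaps = missing_steps(PROCEDURE_CHECKLIST, confirmed_steps)
if gaps:
    # In practice this would block sign-off rather than merely print a warning.
    print("Do not proceed; unconfirmed steps:", gaps)
```

The value of the approach lies less in the code than in the forcing function: the procedure cannot be signed off while any step remains unconfirmed.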

Another issue with correcting medical errors comes from addressing the "perfection" mindset by creating a more transparent legal avenue for redress stemming from negligent behavior. Part of the problem is that some individuals who experience a PAE in a hospital feel that it is their right to receive some monetary compensation for their negative experience; a sincere apology will not be sufficient. Some have attempted to address this "sue-happy" litigation culture through the use of monetary malpractice caps, but this response is not structured properly because the ceiling is universally blind; it does not appreciate that certain medical errors impact an individual both physically and psychologically in different ways and more severely than others.

To better address the issue of medical errors and their proper disclosure one has to focus on the environment of perfection and fear. Beyond the perfectionist characteristics of physicians there is some natural fear as well when interacting with patients that have experienced a PAE. The first element that drives this fear is the natural aversion most individuals have to confronting angry individuals when it is suspected that the anger will be justifiably directed at them. The second is that admitting to the mistake will open the door for a lawsuit through the general admission of guilt when discussing the error. This lawsuit would then result in potential lost employment and increased malpractice insurance premiums. This aspect of the fear is the driving force behind the common “deny and defend” strategy utilized by hospitals in response to lawsuits. Finally due to the “deny and defend” and “sue-happy” environments an inherent adversarial relationship has developed between lawyers and physicians.

A cap limit on medical malpractice claims is not the only legal reform attempted to ease the difficulties between patient and physician interaction after an error. Thirty-six states have passed “apology laws”, which limit the admissibility of physician statements of a sympathetic nature in a civil lawsuit regarding the incident for which the statements were made.11

One theoretical success of apology laws is that they allow honest communication between patient and physician. For example, one study determined that one of the primary factors behind attorney acquisition is patient frustration born from a lack of adequate answers to their questions about significant negative experiences during a hospital visit.12 Some patients even think that they are helping future patients by demonstrating to the hospital/physicians that there are consequences to an avoidance and denial strategy. For these patients an honest and clear explanation along with an apology would have eliminated the need to seek the services of outside counsel.

The overall evidence on whether more disclosure catalyzed by apology laws would increase or decrease total liability remains mixed. Some risk managers still feel that while specific statements will be excluded due to apology laws, the admission itself still confirms that the hospital/physician made an error, opening the door for a lawsuit.13-15 However, other studies have demonstrated that within certain operational parameters a disclosure program can be successful without increasing liability claims or costs.11,16

So what should be the proper procedure when a medical error is discovered?

1. Unless the medical error is one of life-threatening immediacy the medical team treating the individual should be gathered and informed of the medical error with instruction regarding how it will influence current and future treatment. The gathering should not occur immediately after the error is discovered (due to the potential for other ongoing obligations), but should occur in a timely fashion. Immediately after coming to the necessary conclusions regarding future actions, those actions should be executed.

2. Assign an individual to officially document the medical error and report it to the appropriate authoritative body. In addition, add the information to the patient's official medical records so future healthcare providers are aware of its occurrence and it can be utilized in potential future diagnostics.

3. Inform the patient and any potential medical proxies about the error itself, why it occurred, how it occurred and what future steps are now recommended because of the error. Inform the patient and appropriate parties of their role, responsibilities and rights in addressing the reporting of the error. Finally sincerely apologize for the error and communicate what type of consequence was applied to the offending party.

4. During the next morbidity and mortality board all recorded medical errors are reviewed with the intent to explain to all other parties what the error was, why it occurred and what actions were taken after the mistake was identified. There is no need to identify the responsible parties behind the mistake because those parties have already been briefed on the mistake and its consequences to their careers in the earlier meeting. Due to this previous briefing there is little reason to officially "introduce" the individuals behind the error to the rest of the hospital for the general purpose of ridicule.

The advantages of such a system for addressing medical errors are numerous: 1) the patient receives prompt information regarding his/her current situation, transparent facts about the reasons behind any errors and the consequence to the responsible party; 2) all relevant hospital staff are informed of the error and instructed to learn from it without direct reputation consequence to the offending party (whether or not staff hear something through the "grapevine" is uncontrollable); 3) the error is addressed openly, thus reducing the probability of escalating any detrimental condition derived from the error because corrective treatment will begin much more quickly.

While there is still robust debate about how many individuals unfortunately lose their lives in hospital settings due to some significant form of medical error, the response to these events should be the same because a large number are born from negligence either on the side of the patient or the physician/hospital. There is broad agreement regarding what needs to be done, but both meaningful and superficial obstacles remain in the way. Meaningful obstacles involve the incorporation and proper use of technology, getting patients and insurance companies to buy into the advantages of evidence-based medicine and creating effective networks between medical institutions and home life for various patients. Superficial obstacles are largely psychological and cultural, involving the egos and expectations of both patients and society. Much to the chagrin of some individuals, the expectation of privately driven compliance and evolution of medical safety in hospitals is short-sighted due to a lack of motivation; therefore, the intervention of penalties and an organized central entity should be encouraged to catalyze the administration of safety procedures in medical institutions.

--
Citations –

1. Kohn, L, et Al. "To Err is Human: Building a safer health system." Washington D.C.: National Academy Press. 1999.

2. Joint Commission on Accreditation of Healthcare Organizations. Sentinel event trends: potassium chloride events by year.

3. Kelly, J, et Al. “Patient Safety Awards: safety, effectiveness, and efficiency: a Web-based virtual anticoagulation clinic.” Jt Comm J Qual Saf. 2003. 29:646-651.

4. Whittington, J, and Cohen, H. “OSF Healthcare’s journey in patient safety.” Qual Manag Health Care. 2004. 13:53-59.

5. Leape, L, and Berwick, D. "Five years after 'To Err is Human': What have we learned?" JAMA. 2005. 293:2384-2390.

6. James, J. “A new, evidence-based estimate of patient harms associated with hospital care.” Journal of Patient Safety. 2013. 9(3):122-128.

7. Landrigan, C, et Al. "Temporal trends in rates of patient harm resulting from medical care." N Engl J Med. 2010. 363:2124-2134.

8. Hayward, R, and Hofer, T. "Estimating hospital deaths due to medical errors." JAMA. 2001. 286:415-420.

9. Brennan, T, et Al. "Incidence of adverse events and negligence in hospital patients: results of the Harvard Medical Practice Study." N Engl J Med. 1991. 324:370-376.

10. Weissman, J, et Al. "Comparing patient-reported hospital adverse events with medical records reviews: Do patients know something that hospitals do not?" Ann Intern Med. 2008. 149:100-108.

11. Kachalia, A, et Al. "Liability claims and costs before and after implementation of a medical error disclosure program." Ann Intern Med. 2010. 153:213-221.

12. Vincent, C, et Al. "Why do people sue doctors? A study of patients and relatives taking legal action." Lancet. 1994. 343:1609-1613.

13. Butcher, L. “Lawyers say ‘sorry’ may sink you in court.” Physician Exec. 2006. 32:20-4.

14. Kachalia, A, et Al. “Does full disclosure of medical errors affect malpractice liability? The jury is still out.” Jt Comm J Qual Saf. 2003. 29:503-11.

15. Studdert, D, et Al. “Disclosure of medical injury to patients: an improbable risk management strategy.” Health Aff (Millwood). 2007. 26:215-26.

16. Boothman, R, et Al. “A better approach to medical malpractice claims? The University of Michigan experience.” J. Health and Life Sci. 2009. 2(2):125-159.