Wednesday, December 30, 2009

Changing the Strategy – Planning for the Future

In the past months, including at COP15 in Copenhagen, a large number of individuals have called for mitigation and remediation of global CO2 emissions on a scale that would rapidly reduce the atmospheric concentration to 350 ppm or lower. The 350 target was selected, in part but not entirely, due to the findings of Hansen et al.,1 which considered the current influence of climate forcings and what climate changes human civilization could adapt to at reasonable financial and human cost. Other goals have been proposed around 450 ppm, where it is believed that the average global temperature increase would not exceed 2 degrees C, allowing maintenance of a generally stable human society with proper adaptation strategies. However, the 350 camp adamantly believes that stabilization at 450 ppm would not be compatible with the comfortable survival of the human species on Earth; thus, although 450 ppm might be acceptable for a very brief period of time, 350 ppm must be the plateau for the final atmospheric CO2 concentration.

Depending on which measurements one uses, the current atmospheric concentration of CO2 resides between 386 and 388 ppm.1,2 Unfortunately, most proponents of the 350 movement fail to realize that this number represents CO2 concentration and its resultant climate forcing alone, not the other greenhouse gases, which can be measured together with CO2 as a CO2 equivalency. The CO2-equivalent concentration is higher still, very likely in the low to mid 400s ppm. Overall, the 350 ppm goal must be framed in terms of CO2 equivalency and not CO2 alone; otherwise the goal is structured so that achieving it directly may not deliver the intended result.

Now one may find fault with the statement that CO2 equivalency is over 400 ppm, so how is that figure derived? The IPCC uses the following formula to calculate CO2 equivalency:

Total Climate Forcing (W/m^2) = 5.35 × ln(CO2 equivalency / CO2 pre-industrial)

First, note that CO2 pre-industrial is the atmospheric CO2 concentration before humans began emitting large amounts into the atmosphere with the Industrial Revolution and beyond; this concentration is commonly taken as 278-280 ppm.1 Total climate forcing is usually read as the forcing from all significant greenhouse gases, i.e., those covered by the Kyoto Protocol (CO2, CH4, N2O, HFCs, CFCs, etc.).

Using these elements alone, a total climate forcing relative to 2007 of approximately 2.71 W/m^2 can be calculated, which leads to a CO2 equivalency of approximately 461.35 ppm to 464.67 ppm. However, this methodology does not account for the other forcings that also influence the climate, such as aerosols, surface albedo, clouds and ozone. The IPCC 4th Assessment Report, released in 2007, used empirical information from 2006 and earlier because of its filing deadline. That information produced a forcing map defining a total climate forcing of approximately 1.6-1.7 W/m^2 when all relevant factors are taken into consideration. The figure below outlines these forcings.3



These climate forcing numbers result in a CO2 equivalency of approximately 374.9 ppm to 382 ppm (very similar to the atmospheric CO2 concentration at the time). The CO2 equivalency drops significantly because of the negative forcing assigned to aerosols, clouds and surface albedo, among other elements. Unfortunately, since the publication of the 2007 IPCC report, new empirical evidence has re-evaluated the forcing influence of aerosols, calculating a smaller negative climate forcing than previously thought.4 New information has also emerged regarding clouds and the durability of their impact on climate forcing. Like aerosols, clouds provide a negative climate forcing that reduces the overall rate of increase in surface temperatures; however, the new information suggests that as sea surface temperatures increase, low-level stratiform clouds decrease in both size and frequency.5 Thus the ability of clouds to reduce the severity of climate change will wane significantly as temperatures increase.
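To make the arithmetic behind the last two paragraphs concrete, the short sketch below (illustrative Python, not drawn from the IPCC or any cited source) simply inverts the forcing formula, CO2 equivalency = CO2 pre-industrial × e^(forcing/5.35), and plugs in the forcing values discussed above.

```python
import math

def co2_equivalent_ppm(total_forcing_wm2, co2_preindustrial_ppm=278.0):
    """Invert Total Forcing = 5.35 * ln(C_eq / C_0) to recover C_eq in ppm.

    5.35 W/m^2 is the IPCC simplified forcing coefficient; 278-280 ppm is
    the commonly used pre-industrial CO2 concentration. Illustrative only.
    """
    return co2_preindustrial_ppm * math.exp(total_forcing_wm2 / 5.35)

# Greenhouse-gas-only forcing of ~2.71 W/m^2 relative to 2007:
print(co2_equivalent_ppm(2.71, 278.0))  # ~461 ppm
print(co2_equivalent_ppm(2.71, 280.0))  # ~465 ppm

# Net forcing of ~1.6-1.7 W/m^2 once negative forcings (aerosols, clouds,
# surface albedo) are included:
print(co2_equivalent_ppm(1.6, 278.0))   # ~375 ppm
print(co2_equivalent_ppm(1.7, 278.0))   # ~382 ppm
```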

In addition to the waning influence of aerosols and clouds, surface albedo on both land and sea, especially sea, has been taking a beating in recent years, reducing its ability to limit the rate of climate change. Finally, simple intuition applied to current empirical evidence suggests an atmospheric CO2 equivalency higher than the current atmospheric CO2 concentration itself. For example, the rates of ice melt in the Arctic, Greenland, and Western and even Eastern Antarctica significantly eclipse the predictions made in the IPCC 4th Assessment Report, which suggests either incorrect assumptions regarding climate forcing or the exclusion of a significant factor influencing climate change in a warming direction. Given the size of the error bars associated with previous climate forcing calculations, the first option seems more probable. Overall, with such rapid and damaging changes to the climate, everyone had better hope that CO2 equivalency is in the 400s and not the 300s; otherwise the situation is much worse than anyone previously thought.

With all this said, a number of people believe the 350 ppm target is unrealistic, arguing that humans do not possess the necessary tools and/or determination to accomplish such a goal and would be better off preparing for a world with an average temperature at least 2 degrees C warmer. Proponents counter that a phase-out of coal over the next 20 years and aggressive anti-deforestation and reforestation programs would go a long way toward the 350 goal at modest cost. Unfortunately for the 350 ppm proponents, the real failure to maintain a familiar ecosystem and environment may come not from a failure of human will but from a failure of tactics based on inaccurate information. The chief concern is that improper tactics are being suggested to reach the goal because the information describing the warming trend is incomplete.

There are two crucial elements bearing on the probability of achieving a specific ceiling and stabilization of global surface temperatures: the climate sensitivity of the Earth and the atmospheric concentration of CO2 and other greenhouse gases. Climate sensitivity describes how surface temperature changes in response to a sustained doubling of atmospheric CO2; it matters because it attempts to provide a direct correlation between surface temperature and changes in CO2 concentration. Basically, climate sensitivity describes the influence of greenhouse gases on temperature change. In 2007 the IPCC 4th Assessment Report placed climate sensitivity between 2 and 4.5 degrees C.3 For 350 and other temperature-ceiling movements such a range should be troubling, because the lower bound was raised by 0.5 degrees C from the IPCC 3rd Assessment Report, which placed climate sensitivity between 1.5 and 4.5 degrees C;3 in only a few years the estimated floor jumped 33%.

The primary means of deducing climate sensitivity is correlating known past temperature trends with changes in atmospheric CO2 concentration. The best historical data come from the Last Glacial Maximum because of the size and accuracy of the temperature and CO2 concentration shifts. During the Last Glacial Maximum the CO2 concentration was approximately 180 ppm, versus 280 ppm for typical pre-industrial times and the 386-388 ppm that currently exists.6 Average surface temperatures were about 7 degrees C lower in association with this CO2 concentration, which yields a climate sensitivity of 11.2 degrees C.6 Despite this number, most climatologists consider it flawed because of questions about how feedbacks such as sea ice, clouds and water vapor, along with other particulates, were factored into its calculation. Most believe these feedback elements were more pronounced during the Last Glacial Maximum than they are now, which significantly reduces climate sensitivity in the present.
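As a rough check on that 11.2 degrees C figure, the back-of-the-envelope calculation below converts the LGM-to-pre-industrial CO2 change into equivalent doublings and divides the observed temperature difference by it. This is only the naive arithmetic implied by the paragraph above, not the method of Köhler et al., whose fuller radiative-forcing treatment yields a slightly different value.

```python
import math

co2_lgm_ppm = 180.0            # Last Glacial Maximum CO2 concentration
co2_preindustrial_ppm = 280.0  # typical pre-industrial concentration
delta_t_c = 7.0                # approximate cooling relative to pre-industrial, degrees C

# Number of CO2 doublings represented by the LGM-to-pre-industrial change
doublings = math.log(co2_preindustrial_ppm / co2_lgm_ppm, 2)   # ~0.64

# Naive sensitivity: observed temperature change per doubling of CO2
naive_sensitivity = delta_t_c / doublings                      # ~11 degrees C per doubling
print(round(naive_sensitivity, 1))
```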

Overall the most widely accepted value for climate sensitivity comes from Charney, who calculated a sensitivity of 3 degrees C when incorporating fast feedbacks.7 Unfortunately this calculation assumed an instantaneous doubling of CO2 with no surface changes, and eliminating surface changes limited its accuracy. Hansen et al.1 used paleoclimate data including the slow surface albedo feedback, assuming a first-order relationship between surface/sea-ice area and global temperature, to calculate a climate sensitivity approximately double Charney's (6 degrees C vs. 3 degrees C). Normally such a calculation would not be a huge deal because slow feedbacks operate over centuries to millennia, but those timescales were only ever experienced through natural cycles, not with humans dumping hundreds of gigatons of CO2 into the atmosphere. Thus it is difficult to rule out these slower feedbacks exerting an influence after decades instead of centuries, and the rapidly melting surface ice in the Arctic is empirical evidence that they already matter.

Determining a reasonable climate sensitivity matters because it is a principal element in how predictions are made about future changes in surface temperature and the climate in general. Predictions about the future climate come from many different climate models together with current empirical observations. Modeling the climate is incredibly complicated, requiring thousands of variables and hundreds to thousands of interactions between those variables just to generate results that can be considered 'in the ballpark'. Applying all of these interactions and variables places a tremendous demand on computing time and energy. Therefore, to ensure that generating a single result does not take weeks or months, certain elements are removed from consideration. In addition, there are gaps in knowledge about how certain variables interact, and to preserve some level of accuracy those interactions are also removed, or estimated as best one can and modeled accordingly.

Unfortunately these omissions limit a model's ability to predict how the climate will change relative to how it actually changes. It is these omissions that most climate skeptics attack, claiming that because model x is potentially inaccurate the very essence of climate change is wrong. Of course, any rational person realizes such claims are utter nonsense: the valid empirical evidence still demonstrates that climate change is occurring almost entirely due to human actions, and the lack of a completely accurate model, something that probably will never exist in the first place, does nothing to taint that evidence. However, in the past few people considered that climate model predictions might be incorrect on the other side of the coin, underestimating the rate of climate change. Due to new empirical evidence, largely surrounding the much more rapid ice and glacier melt in the Arctic,8 more individuals are questioning whether the results of the IPCC 4th Assessment Report were inaccurate, predicting too slow a surface temperature shift.

When predicting future temperature changes, climate models tend to generate either a linear or a quasi-exponential increase in average global air temperature over future years. For example, after modeling four distinct scenarios of future human action, the IPCC 4th Assessment Report illustrated surface temperature changes as shown in the graph below.3


At first glance such predictions may seem reasonable, based largely on how much CO2 and other greenhouse gases humans continue to emit through future action. However, when all of the potential environmental feedbacks that could trigger during warming are considered, such results seem less and less plausible. The more noteworthy feedback factors with a high probability of contributing to additional future temperature increase include: increased water evaporation leading to more water vapor in the atmosphere,9,10 CO2 and methane release from melting permafrost,11,12 nitrous oxide release from peat sources,13 decreased surface albedo as reflective Arctic ice melts into open ocean,3 new cloud formation or disappearance at different altitudes,5,14 longer rainforest dry seasons reversing sink-to-source behavior,15,16 and increased ocean temperatures converting the ocean from sink to source. Although all of these feedbacks demand concern, permafrost melt and ocean out-gassing demand the most, due to the sheer amount of CO2 that either process could eventually release into the atmosphere. Both problems have previously been addressed on this blog at the following links:

Permafrost: http://bastionofreason.blogspot.com/2009/08/permafrost-and-carbon-stores.html

Ocean Out-Gassing: http://bastionofreason.blogspot.com/2009/09/ocean-acidity-danger-and-remediation.html

It must be noted that the IPCC report acknowledges potential inaccuracy in its conclusions due to feedback processes that were not included in the modeling. The decision to exclude most of this feedback information seems to stem from the lack of conclusive and accurate empirical data on those processes. Basically, the mindset is that 'some inaccuracy from not including feedback process A is better than gross inaccuracy from interpreting that feedback process incorrectly.'

Even though the potential inaccuracy is discussed, it appears that increased water vapor was the only significant feedback actually attempted, with varied results.3 A discussion was also given of the potential reduction of land and oceanic sinks due to surface temperature increases, but no direct comments were made about sink-to-source transformation. The inaccuracy of the models used by the IPCC, and of some of its conclusions, has become quite evident, most notably in the rapid pace of Arctic ice melt and new conclusions that the Arctic may be completely free of summer ice by 2015-2020 instead of 2080-2100. The most unfortunate element in all of this is that most climate advocates themselves do not incorporate these potential feedbacks into their strategies for limiting surface temperature increases to a given ceiling. Overall, once feedbacks are included, future average global surface temperature increases will more than likely follow a more severe trend than shown in the above graph.

The assumption of a more severe warming trend finds support when one considers the ocean's role in the carbon cycle. In large respects the ocean can be viewed as a dynamic, replenishing buffer of sorts. Various denizens of the ocean, most notably phytoplankton, absorb CO2 either directly from the atmosphere or from seawater for photosynthesis. When these organisms die, the carbon used in photosynthesis is typically confined to the bottom of the ocean in sediment, and after that confinement the capacity of the ocean to absorb CO2 increases. In short, through the action of oceanic organisms the ocean is able to continually maintain or even increase its ability to draw CO2 from the atmosphere.

Unfortunately, buffers can only neutralize pH changes up to certain concentrations. When a counter-agent (acid or base) is added at a high enough concentration, the buffer collapses and the pH shifts. The same is true of the ocean and its ability to absorb CO2. As atmospheric CO2 increases, largely due to human activities, the ocean continues to absorb it, but the rate of absorption outpaces the pathway responsible for burying CO2 in sediment. Thus CO2 builds up in the ocean, both reducing the ocean's ability to absorb further CO2 from the atmosphere and decreasing the efficiency of the removal pathway by limiting the organisms responsible for that removal. The number of those organisms is limited because rising acidity leaves less carbonate available for certain food-chain-critical organisms to build calcium carbonate structures. When these creatures, like coral, cannot create calcium carbonate shells, it negatively affects large portions of the oceanic food chain, including organisms that aid in CO2 removal. Eventually this process concludes at an equilibrium point where the ocean can no longer absorb CO2 from the atmosphere, eliminating its capacity as a CO2 sink.

While the loss of the ocean as a sink is bad enough, there is a very real possibility that the ocean will eventually become a source of atmospheric CO2 instead of a sink. Most of the warming due to excess CO2 in the atmosphere has occurred not on land but in the ocean. This warming matters because gas solubility in a liquid decreases as temperature increases: higher temperature means more kinetic energy and more molecular motion, which increases the probability that dissolved gas escapes solution. Therefore, as the ocean continues to warm, its maximum capacity for CO2 storage in dynamic equilibrium with the atmosphere will decrease, causing it to release CO2 until a new, lower storage equilibrium is established. Although it is unclear how much CO2 could be released by such a solubility shift, the facts that the ocean's carbon inventory has increased by 118 ± 19 gigatons over the last 200 years17 and that it absorbs approximately 8-10 gigatons of CO2 a year from the atmosphere paint a dreary picture. Overall, although out-gassing would be horrible, the loss of the ocean as a CO2 sink would be far worse over the course of decades.
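The direction of that solubility effect can be illustrated with the standard van 't Hoff temperature correction to the Henry's law constant for CO2. The values below are generic textbook figures for fresh water (roughly 0.034 mol/(L·atm) at 25 degrees C with a temperature coefficient of about 2400 K); real seawater also involves carbonate chemistry, salinity and mixing, so treat this strictly as a directional sketch.

```python
import math

def co2_henry_constant(temp_c, k_h_25c=0.034, van_t_hoff_k=2400.0):
    """Approximate Henry's law constant for CO2 in mol/(L*atm).

    Uses the van 't Hoff form k_H(T) = k_H(298.15 K) * exp(C * (1/T - 1/298.15)).
    Textbook freshwater values; a directional illustration, not an ocean model.
    """
    temp_k = temp_c + 273.15
    return k_h_25c * math.exp(van_t_hoff_k * (1.0 / temp_k - 1.0 / 298.15))

for t in (5, 15, 25):
    print(t, round(co2_henry_constant(t), 4))
# Output falls from ~0.061 at 5 C to ~0.034 at 25 C: warmer water holds
# less CO2 at a given atmospheric partial pressure, which is the basis of
# the out-gassing concern described above.
```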

Tie the feedback-driven loss of the ocean as a CO2 sink, along with potential out-gassing, to the prospect of a continual release of CO2 and methane from the roughly 1,672 gigatons of carbon stored in permafrost,18,19 and those two feedbacks alone could create a huge shift in surface temperatures regardless of what humans emit in the decades to come.
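To put that permafrost pool in perspective, a commonly used conversion is roughly 2.13 gigatons of carbon per 1 ppm of atmospheric CO2. The sketch below applies it purely as an upper-bound thought experiment; in reality only a fraction of permafrost carbon would reach the atmosphere, some of it as methane rather than CO2, and ocean and land sinks would absorb part of any release.

```python
GT_CARBON_PER_PPM_CO2 = 2.13   # approximate gigatons of carbon per ppm of atmospheric CO2

permafrost_carbon_gt = 1672.0  # estimated permafrost carbon pool (refs. 18, 19)

# Hypothetical ceiling: the ppm rise if the entire pool reached the atmosphere as CO2
ceiling_ppm = permafrost_carbon_gt / GT_CARBON_PER_PPM_CO2
print(round(ceiling_ppm))      # ~785 ppm, an upper bound that will not be fully realized
```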

Suppose one rejects the above contention of rapid, severe warming due to these feedback effects? Even if that rejection turns out to be correct, those wishing to hold temperature increases to a ceiling of 2 degrees C still face the problem of 'backwash warming': warming that has yet to catch up with the current level of climate forcing. Basically, the climate forcing already applied to the environment has not been fully expressed as a change in average surface temperature, largely because of slow feedbacks. That is, if all human CO2 emissions ceased tomorrow, the average global temperature would still increase by some additional amount. Although it is not clear how much warming will occur through this 'backwash', Hansen et al.1 estimate an additional global temperature increase of approximately 1.4 degrees C. Add that to the existing increase from pre-industrial times of 0.6 to 0.9 degrees C (depending on which record is used) and an increase of 2 degrees C becomes extremely probable regardless of what actions humans take. The graph used by Hansen et al.1 to illustrate this additional future warming is shown below.


Regardless of these concerns, there is still time to derail significant climate change, but only if the proper strategy is taken. Although they are the hot topic, the trendier and more popular geo-engineering strategies are unlikely to prove useful in the short term because of two significant flaws. First, they do relatively little, if anything, to alter the concentration of CO2 in the atmosphere; instead they mask some percentage of the warming driven by that concentration. Second, they do nothing to change the concentration of CO2 in the ocean, which sustains ocean acidification and limits the ocean's ability to remove CO2 from the atmosphere. Therefore, if geo-engineering is to be utilized, the strategy must attack one of these two issues or it will lack usefulness. Strategies like reforestation or bio-char would help, but would fall far short of drawing down the required CO2. It is also unlikely that exporting CO2 into the upper atmosphere and eventually into space would prove useful or even viable.

Reduction or mitigation of future emissions is an important element in limiting the total temperature increase, but it is clear that the governments of the world are unwilling to make cuts deep enough for mitigation to stand alone without further technological intervention. Currently, despite the cries and curses from the environmentalist movement, it is unlikely that enough viable trace/zero-emission energy can be generated to compensate for a draw-down of coal and natural gas at the speed required. Also, regardless of how some environmentalists, like Joe Romm of Climateprogress, spin it, China issuing a non-binding pledge to reduce carbon intensity is rather meaningless: given the economic growth still available to China, reducing carbon intensity instead of doing nothing is like getting a 32% on a test instead of a 19%; it is still failure. As it currently stands, if the cuts required to avoid significant increases in surface temperature (2-3 degrees C) were made, the reduction in available energy would be a significant detriment to overall global economic output. The energy gap that emission mitigation would create for the United States was previously discussed in detail here:

http://bastionofreason.blogspot.com/2009/07/emission-adherence-in-2020-and-2030.html

So if mitigation is not occurring rapidly enough and generic geo-engineering tactics are basically worthless, what is to be done? The principal action beyond mitigation must be to draw CO2 out of the atmosphere by technological means, simply because natural methods are not fast enough. Not only are current carbon sinks becoming compromised by the warming already under way,20 but it is highly unrealistic that enough trees can be planted in the near future to enhance land sink capacity, especially when REDD, the most promising anti-deforestation proposal, has yet to expand in any significant capacity. Soil strategies using various tilling methods or bio-char may increase sink capacity by minute amounts, but nothing close to the level required. Thus, technology must be used.

In short, research funds that individuals want directed toward point-source carbon capture (a.k.a. clean coal) must instead be directed to non-point-source carbon capture (a.k.a. air capture). This blog has previously discussed the outstanding concerns with air capture, and they are significant, but realistically the only way humans stop an increase in average surface temperature of even 3 degrees C, short of a miracle, is a combination of reduced carbon emissions and some form of air capture. Finally, a technology that could draw CO2 out of the ocean would be an exceptionally useful tool for furthering mitigation by restoring the sink capacity of the ocean.

Overall the environmentalist movement needs to shift gears. It is somewhat humorous that its members vent frustration at the portrayal of global warming as a legitimate debate, yet these same individuals perpetuate that framing by continuing to engage species annihilators (global warming deniers) in an attempt to convince them. At this point the debate is over; anyone who believes humans are not the driving force behind climate change will not change his or her opinion regardless of what facts and evidence are highlighted, and it is not worth wasting more time trying. The only thing that will convince these individuals is negative climate events that directly affect them, which is nothing environmentalists can provide. In addition, further discussion and proposal of foolish, inefficient boycotts of high-emitting 'bad guys' like Exxon should cease, because such a strategy would be ineffective and would only take time and personnel away from more meaningful and effective endeavors. Instead it is time to move into research and innovation mode.

Solutions and strategies need to be prepared before they are needed. An honest assessment must identify which energy technologies will be needed to replace coal and natural gas. A quick note for wind supporters: wind will not come close to providing the necessary energy for global growth, or even growth in the United States, especially if wind speeds continue to fall. Is nuclear really so expensive as to prevent widespread adoption, or is the expense contingent on using 2nd-generation technology instead of 3rd or 4th generation? What new energy strategies will need to be researched? There are many more questions beyond these few that demand discussion and attention. These discussions cannot be broad-based with weak statements like 'oh, all sorts of trace-emission energy sources like wind, solar, nuclear and geothermal will be needed for the future.' They must be full of details and specifics, so that businesses and researchers know exactly what future markets will demand and expect.

In the end, although mitigation is important, remediation is also important because the environment has reached a point where nature cannot restore the balance on its own. There are important questions to be asked about remediation, and it is time for the environmental movement to start focusing on those questions rather than lamenting or championing the latest meaningless poll on the public's view of global warming, clean energy or whatever else is the subject of the poll du jour.

==
1. Hansen, James, et al. "Target Atmospheric CO2: Where Should Humanity Aim?" The Open Atmospheric Science Journal. 2008. 2: 217-231.

2. Tans, Pieter. NOAA/ESRL (www.esrl.noaa.gov/gmd/ccgg/trends).

3. Climate Change 2007: Synthesis Report. Intergovernmental Panel on Climate Change.

4. Myhre, Gunnar. "Consistency Between Satellite-Derived and Modeled Estimates of the Direct Aerosol Effect." Science. June 18, 2009. DOI: 10.1126/science.1174461.

5. Clement, Amy, Burgman, Robert, and Norris, Joel. "Observational and Model Evidence for Positive Low-Level Cloud Feedback." Science. July 24, 2009. 325: 460-464. DOI: 10.1126/science.1171255.

6. Köhler, Peter, et al. "What caused Earth's temperature variations during the last 800,000 years? Data-based evidence on radiative forcing and constraints on climate sensitivity." Quaternary Science Reviews. 2009. 1-17. doi:10.1016/j.quascirev.2009.09.026.

7. Charney, J. "Carbon Dioxide and Climate: A Scientific Assessment." National Academy of Sciences Press: Washington DC. 1979. 33.

8. Hawkins, Richard, et al. "In Case of Emergency." Climate Safety. Public Interest Research Centre. 2008.

9. Santer, B, et al. "Identification of human-induced changes in atmospheric moisture content." PNAS. 2007. 104: 15248-15253.

10. Dessler, A, et al. "Water-vapor climate feedback inferred from climate fluctuations, 2003-2008." Geophysical Research Letters. 2008. 35: L20704.

11. Åkerman, H, and Johansson, M. "Thawing permafrost and thicker active layers in sub-arctic Sweden." Permafrost and Periglacial Processes. 2008. 19: 279-292.

12. Jin, H.-j, et al. "Changes in permafrost environments along the Qinghai-Tibet engineering corridor induced by anthropogenic activities and climate warming." Cold Regions Science and Technology. 2008. 53: 317-333.

13. Dorrepaal, E, et al. "Carbon respiration from subsurface peat accelerated by climate warming in the subarctic." Nature. 2009. 460: 616-619.

14. Booth, B, et al. "Global warming uncertainties due to carbon cycle feedbacks exceed those due to CO2 emissions." Geophysical Research. 2009. 11: 4179.

15. Cook, K, and Vizy, E. "Effects of Twenty-First Century Climate Change on the Amazon Rain Forest." Journal of Climate. 2008. 21: 542-560.

16. Phillips, O, et al. "Drought sensitivity of the Amazon rainforest." Science. 2009. 323: 1344-1347.

17. Sabine, C, et al. "The Oceanic Sink for Anthropogenic CO2." Science. 2004. 305: 367-371.

18. Schuur, E, et al. "Vulnerability of permafrost carbon to climate change: Implications for the global carbon cycle." BioScience. 2008. 58: 701-714.

19. Tarnocai, C, et al. "Soil organic carbon pools in the northern circumpolar permafrost region." Global Biogeochemical Cycles. 2009. 23: GB2023.

20. Canadell, J, et al. "Contributions to accelerating atmospheric CO2 growth from economic activity, carbon intensity, and efficiency of natural sinks." PNAS. 2007. 104: 18866-18870.





The Value of Economic Sanctions

The administration of economic sanctions by a country or group of countries against another country as a means of inducing certain policy changes has been a staple of foreign policy for decades. Sanctions are popular because they are viewed as a viable alternative to costly and frequently unpopular military action. One powerful preconceived notion about economic sanctions seems to be that they offer the greatest probability of success at the lowest possible cost. Decades ago there was a significant probability that economic sanctions could effectively influence policy or strategy because the global economy still relied on a small number of countries. However, due to globalization and the emergence of a significant number of new economic players, it is now much more difficult to influence policy change through economic sanctions. The biggest problem with modern economic sanctions is that their structure and execution did not evolve as the global economic and political community evolved. This lack of evolution has left economic sanctions that may have worked in the 1950s and 60s flawed, with a much lower chance of success in the present day. The continued use of economic sanctions with little alteration in application demonstrates a large disconnect between theory and reality for those responsible for administering them.

Improperly designed economic sanction policies actually have a greater probability of doing harm than good, creating a greater divide in the relationship between the sanctioning and sanctioned countries and further reducing the probability of success in future diplomatic engagements. This possibility is one reason the application of economic sanctions can be detrimental: they frequently act as an accelerant in conflict between two countries. Typically economic sanctions are levied by one country against another because the sanctioned country is taking an action, be it economic, political or military, that the sanctioning country disapproves of and possibly believes to be dangerous. In large part economic sanctions can be regarded as the last step before military conflict or intervention, so when a country resorts to them it normally does not have a diplomatic relationship with the sanctioned country in which rationality or even quid pro quo diplomacy can be utilized.

Unfortunately, when friction exists between two countries their governments use political propaganda to further increase their citizens' distrust of the opposing country. This propaganda can facilitate a nationalist response to economic sanctions from a country viewed as the enemy: instead of capitulating to the threat or reality of sanctions, the citizenry of the sanctioned country unites in defiance with their leaders. Such an attitude leads those citizens to dig in even harder rather than be characterized as weak, which would establish a precedent that they can be pushed around at the sanctioning country's whim.

The two most notable historical examples illustrating the failure of economic sanctions involve the United States and Cuba and the United States and Iraq. For over 40 years economic sanctions have been the policy of the United States towards Cuba, and yet a Castro still remains in power (Fidel to Raul) and the economic system is still socialist (those who refer to the economic system of Cuba as Communism need to actually research the definition of Communism). Realistically there are only two reasons economic sanctions remain on Cuba. First, simple vengeance: the initial reason for enacting the sanctions was Fidel Castro illegally seizing U.S. property and assets in Cuba when he took power, and clearly those who owned those assets did not look upon such an action favorably. Second, a fanatical anti-Castro movement of Cuban-Americans in Florida and elsewhere in the U.S. continues to apply heavy political pressure on the federal government to continue the sanctions; ironically, the government caves to this pressure for no legitimate reason, because the voting power of these individuals as a whole is rather pathetic.

Economic sanctions on Iraq lasted for over a decade and still did nothing to induce a revolution to overthrow Saddam Hussein or to change Saddam's policy towards his own citizens or the rest of the world. Instead, military force was used to remove him from power. In both situations all economic sanctions really did was hurt the citizens of the sanctioned countries as well as the economy of the United States. Most free-market proponents believe that capitalism and free trade would have produced the desired political change much faster and more efficiently by providing incentives to maintain a positive relationship through the purchase of goods. Basically, if Country A sells Country B a lot of product x and product x is valued, Country B will have a friendlier relationship with Country A. So why do policy makers still believe that economic sanctions under their current design are a viable strategy for inducing foreign policy changes?

Why has the generic blueprint for economic sanctions remained the same for over six decades, with the only improvement being the advent of 'smart sanctions', which have had mixed results at best? One possible explanation is that in the early post-World War II period economic sanctions, most initiated by the United States, were successful a vast majority of the time. Unfortunately that early success turned out to be a negative influence on future results, convincing U.S. authorities that economic sanctions would frequently work and that the structure of those post-war sanctions was the proper structure for success. In addition, most of those successful sanctions were unilateral, further fostering confidence in the superiority of the U.S. economic system and its influence abroad. Interestingly, this mindset has yet to truly change despite significant failures of unilateral U.S. economic sanctions to generate change from the late 1970s to the present.

Overall there appears to be no good explanation for why the basic structure and methodology behind economic sanctions has not changed significantly. The sad reality is that even when economic sanctions succeed they rarely induce long-term, significant change; instead the successes derived from economic sanctions are short-term, typically single-event victories. So realistically economic sanctions can only achieve moderate goals in the first place, and even those goals have become less attainable in modern society. Perhaps the reality is that world leaders have concluded that no variation in sanction design will significantly change the low probability of success, so they have elected not to waste time and resources modifying the old strategy. Instead, economic sanctions are treated as a policy designed to make the public believe that something is being done about country A, which is perceived to be a threat for given reason B, when in reality nothing of significance is being done. Of course, if this is true, economic sanctions are a rather harmful 'do nothing' policy, for they are detrimental to U.S.-based exporters and in some respects U.S.-based importers.

If economic sanctions are ever going to become a viable alternative to military action, their failings must be identified and corrected. The chief problem is that there was originally motivation to facilitate such sanctions through the United Nations; in fact that facilitation was part of the motivation for establishing the United Nations, so that sanctions would have multi-national support, but such a strategy was never really established. Therefore most economic sanction strategies became unilateral, or at best the work of a small network of countries seeking a common goal. Through most of the 20th century, especially after World War II, the U.S. and the Soviet Union had the economic clout to influence policy through economic sanctions, but globalization has increased the number of effective economies in the global trading game, reducing the overall influence of any single economy. Now unilateral sanctions, regardless of who imposes them, have a much smaller probability of success.

Unilateral sanctions are difficult to administer successfully on two counts. First, the country being sanctioned needs to have a strong economic relationship with the country initiating the sanction, or the sanction will have little to no effect. Second, the sanctioned country must not be able to acquire enough resources elsewhere to neutralize the sanction's effect. Failure on this second criterion is why most unilateral sanctions fail: even if a powerful economy cuts trade ties with a country, only in the rare situation of a global product monopoly will the sanctioned country be unable to obtain the necessary resources from another country. Countries, like individuals, tend to look out for their own interests first, and there is no reason for these other trading partners to agree with the policies suggested by the sanctioning country without analyzing how such a policy change would affect them. It is irrational for the countries pushing sanctions to expect other trading partners to abide by them simply because the desired policy is righteous and correct.

Additionally, if the first condition is met, the sudden depression in supply to certain resource sectors of the sanctioned economy can create new opportunities for existing and potential future trade partners to capitalize on the product shortfall. Instead of adhering to the sanction, these countries could look to make up the lost supply, carving out a piece of a foreign market that might not otherwise have been available. Understanding these realities leads to the reasonable conclusion that a sanction will rarely achieve its goal if it is politically unilateral. The best way to circumvent the economic advantage non-sanctioning countries receive by ignoring the sanction is to convince them that the policy the sanction seeks to change is critical to their own national interest. Basically, the sanctioning countries should try to convince other possible trading partners that without a policy change in the sanctioned country the future detriment will exceed any short-term economic gain.

If the policy the sanctioning country wants changed is not critical to the additional trading partners, or it is but they cannot be convinced of it, then the best way to establish policy change is for the sanctioning country to remove a sufficient number of the other potential trading partners from the equation by offering them a better deal. To eliminate the possibility that other countries trade with the sanctioned country, the sanctioning country should negotiate short-term trade agreements with those partners on the condition that they will not trade with the sanctioned country. These trade agreements should have structures and product movement similar to what would otherwise exist between the partner in question and the sanctioned country. Note that only enough trading partners need to be neutralized for the sanctions to negatively affect the sanctioned country's economy. Of course such trade agreements will probably increase the total cost of the sanctions for the sanctioning country as well, so the targeted policy change must continue to exceed these costs in overall benefit. Fortunately, these trade agreements may also mitigate some of the job and capital losses from ceasing trade with the sanctioned country.

In the end, even if the proper allies are secured, the overall policy change goal must be rational when the ramifications of the economic sanctions are considered. A good example of an irrational goal is part of the rationale behind the economic sanctions placed on Iraq by the United States. Part of the reasoning was to induce rebellion in the Iraqi populace in order to overthrow Saddam; however, such a revolution is improbable when your rebels are sick and dying because they are not getting enough food and medicine as a result of the sanctions placed on their country. Remember that under economic sanctions product distribution will typically run from top to bottom: the have-nots will be hurt more often and more severely than the haves. Planners need to keep these realities in mind when determining goals for economic sanctions.

Overall before embarking on a proposal to engage economic sanctions against another country it would be wise to consider the following factors:

1. Make sure the eventual policy change goal of the economic sanction is rational and attainable;
2. Ensure enough economic clout to negatively affect the economy of the target country;
3. Neutralize other possible trading partners by convincing them that it is in their best interest to join the sanction or by entering into short-term trade agreements to neutralize the economic gain from trading with the sanctioned country;
4. Immediately cease trade with the sanctioned country as the faster and more forceful the disconnect the greater the probability that the sanction will work at a reasonable cost to the sanctioning country;
5. Do not be afraid to negotiate with the sanctioned country after a fixed period of time under the sanction; the policy change goal does not need to be all or nothing.

Monday, December 7, 2009

A Brief Discussion about Change

An old cliché holds that change is not easy, and like most clichés it is true: there are significant obstacles to transformation, especially in behavior or thinking. First, an individual has to believe there is a problem with his or her behavior or thought process that demands change. Belief in such a reality is complicated by differing perspectives on how one should think. One must also consider the absolute level of change demanded of a particular individual; some individuals simply may not be able to think at an advanced level due to limitations beyond their control, for not everyone in life is a philosopher.

However, the biggest obstacle to overcoming this condition is the simple fact that most people do not like to admit when they are wrong about something, even when presented with overwhelming evidence to the contrary. There are two basic options for defeating this psychological imperative. The first is to tap into personal pride: sticking to one's 'guns' even when wrong should be viewed in a poor light, not revered as it tends to be. Basically, an individual should feel shame for being wrong and resolve to correct the error. The second option is to penalize the individual for being wrong. Thinking that 2 + 2 = 5 should come with a large financial or personal penalty, because if an individual does not have enough pride to accept being wrong, then society must correct that behavior not through incentive but through punishment.

Second, the participants have to have the capacity for change. Change typically requires strength of will, a characteristic that is not commonplace. Something can be said for blind faith as a driver of change, but the outcomes generated by blind faith are more randomized than typically desired. Strength of will is required because change demands, as previously mentioned, admitting that the current course of action or behavior is wrong. Humans tend to shun the prospect of confirming that they are wrong about something, so one needs a psychological means of overcoming this distaste and the cognitive dissonance defense mechanism; hence the strategy of tapping into personal pride.

Third, even if an individual is aware of his or her superficial or inappropriate behavior and/or erroneous thought process, it may be by design. For example, the individual could have tried to be him/herself at one point in time and was simply unhappy for some reason: either he/she did not like him/herself or society did not like him/her, and behaving in a disingenuous manner is a mechanistic attempt at creating happiness. Of course an argument can be made that such happiness is simply a lie, but that leads to the deeper question: is it better to live a happy lie or an unhappy truth?

The second rationale for behaving against one's true nature may be to advance in a particular social or occupational structure. It cannot be intelligently argued that human society does not favor certain characteristics and attitudes over others; although society claims to desire diversity, reality paints a different picture. Thus, an individual may think the best course of action is to change behavior to represent something society or a potential mate wants instead of who they truly are. Unfortunately such action seems rather detestable, in that to achieve such standing a person has to destroy the sense of self. Which is worse: an individual behaving and thinking in a despicable manner, or an individual play-acting in a despicable manner solely to advance in society?

Such a question is interesting in that some could argue that clearly the former is worse because if one is clever enough to realize how a certain aspect of society is exploitive and chooses to exploit that aspect to his/her own advantage that individual should be praised. However, the thought of such praise leaves a bad taste in the mouth because such action does not change the fundamental flaw in the system instead it indirectly supports that flaw.

For example, one is reminded of the verbal confrontation between Jon Stewart and Tucker Carlson on the now-defunct CNN show "Crossfire" in 2004. Mr. Stewart opened the question-and-answer portion of the show with an unexpected and significant criticism of the validity of "Crossfire" as a genuine debate and information source. In response, Mr. Carlson criticized the seriousness and usefulness of the news commentary provided by "The Daily Show", which Mr. Stewart hosted at the time and still hosts. Mr. Stewart appeared to view this criticism as comical because "The Daily Show" is supposed to be satirical and comedic in nature; bolstering this point, Mr. Stewart noted that at the time a show about puppets making crank phone calls was slotted right before "The Daily Show", whereas CNN claimed, and still does, to be the most trusted name in news, and Mr. Stewart's criticism was directly related to not living up to such a boast.

By criticizing "The Daily Show" instead of defending "Crossfire", Mr. Carlson was basically telling the public that both shows were ill equipped to properly disseminate and analyze the news, limiting the standing of either show. Mr. Carlson should instead have defended "Crossfire" in an effort to demonstrate its importance in the news community, simultaneously hurting Mr. Stewart's credibility to lodge similar criticism in the future. One can somewhat understand the attack strategy, as the criticism of "Crossfire" could have been taken by Mr. Carlson as a criticism of himself because of his direct involvement with the show, but such an emotional response is typically more detrimental than beneficial. This example relates back to the exploitation situation in that society as a whole is better served if the exploitation is brought to light rather than kept in the shadows because a particular individual wants to use it to his or her advantage. Basically, a wrong should be corrected, not defended through manipulation or by citing wrong in that which is trying to correct it.

The fourth barrier to behavioral change is knowledge of how to change. One may want to build a better clock, but if one does not know how to build clocks, that desire is wasted. The knowledge could come from the individual, but not all individuals have the experience or intelligence to know what needs to be done. Therefore, if an individual is to change, a means of initiating that change must be outlined for that individual; otherwise how will change occur?

Environmentalists have this problem in that they frequently suggest individuals cut their carbon footprints in an effort to derail the prospects of serious global warming. Some even go into more specifics on how to achieve a reduction, such as improving the energy efficiency of their homes. Unfortunately that is where almost all of them stop, failing to realize that improving energy efficiency is still a rather complicated endeavor when factoring in which contractors to use, how to get started, rates of return on investment, etc. Perhaps more people would pursue energy efficiency if someone created almost a step-by-step guide. Paging the authors of the 'for dummies' books: the environmentalists need you.

Due to these four elements, a simple 'call to arms' statement for change, again something commonly used by environmentalists, will more than likely be ineffective. Each of these elements must be dealt with effectively if any real and lasting change in behavior is to occur. Logic seems to dictate that the second and third barriers account for the principal reasons individuals behave or think in a certain way. Unfortunately such a situation creates a significant problem. Regarding the third barrier, the individual is playing a disingenuous role that creates a net advantage over acting in a genuine manner. It makes no logical sense for an individual to exhibit his or her genuine character if the individual can effectively manage life as that fake person. Therefore, the only means of initiating change in those individuals is to remove the greater benefit from the false persona.

The problem with fostering change when faced with the first aspect of the third barrier is the psychological phenomenon of inclusion. Recall that although individuals say the politically correct things, in reality life and society do not like diversity (otherwise most of the issues involving ethnicity and race would have ended a long time ago); instead they are like chemistry: like dissolves like. People prefer dealing with people who do not exist outside the realm of their own expectations. Interacting with those who do makes them uncomfortable and typically drives them away from those individuals towards safer and more familiar pastures.

Equally influential is the prospect of being alone, a powerful negative force that again only the strong can truly neutralize as a mover in their life. Humans are by evolutionary nature social beings. If one acts in a way that limits the potential for social interaction, it is understandable, although unfortunate, that the individual would readily abandon the true self to ensure positive social interaction. Tying the issue of change to the field of relationships: despite fairy tales to the contrary, there is not someone out there for everyone, and a vast number of individuals do not recognize 'Boulevard of Broken Dreams' as their personal theme song. In short, it is difficult to be you when life kicks you in the teeth for being you.

In a large context, conquering the third barrier may be simple relative to conquering the second. The third barrier involves balancing any differential rewards to a given personality or mindset regardless of factual or empirical correctness: one mindset cannot be rewarded over another on a given issue without empirical evidence demonstrating factual superiority on that issue. Basically, being right and logical needs to be the driver of how a particular mindset is rewarded; the problem is prying away enough power from those who prefer to be wrong for such a change to occur. For the second barrier, people must be instilled with a level of personal pride such that they rebuke anyone who does not accept them for themselves within the context of social norms and the legal system.

Make no bones about it: change is indeed a tall order. However, the second barrier cannot be fully conquered until society as a whole can understand the unique offerings of each individual and how they can contribute positively to society. Otherwise the probability of any successful mass unmasking of those hiding behind facades unfortunately remains low. Thus steps must be taken to facilitate and nurture the potential for this new mindset. Overall, society must demand that individuals cast aside those elements that are statistically false, which will involve change for many, in order to optimize not only present benefits but future benefits as well.

Monday, November 30, 2009

Addressing the Problems in Health Care: Where Should Reform Aim?

For the past 6-8 months the United States has been abuzz with rancorous arguments about proposed health care reform legislation. In fact, as of this post the House of Representatives has already passed a reform bill. Unfortunately the refrain has morphed from getting health care reform right to just getting something passed, because both the Republican and Democratic parties have, not surprisingly, forgotten the true purpose behind health care reform and have instead boiled the issue down to a political question. If any form of health care reform is signed into law, Democrats will sing its praises and pronounce victory regardless of whether it is effective reform; whereas if no reform is signed into law, Republicans will praise themselves for upholding the current system and, in their minds, saving capitalism. Of course both of these stances are complete garbage. There is no victory in baselessly opposing health care reform and defending the current system. However, there is also no victory in passing a bill that does not address the key problems in the current health care design. Genuine reform is needed because it is obvious, based on every projection available, that the health care system in the United States, if left unchecked, will go bankrupt somewhere in the mid-2020s, if not sooner. When beginning a discussion about reform it is important to identify the key problems plaguing health care so that proper legislation can be developed to address them.

The chief problem in health care is the question of future financial solvency. Clearly, regardless of what the health care system does or how it is designed, if it is not self-sustainable there is no point in its continuation. With this in mind, it is important that any health care reform result in a reduction of projected future costs versus the projected future costs of the current system. This is far and away the most important element of health care reform, because if reform results in higher costs the changes become generally meaningless; the system will simply collapse at a faster rate. The first set of reforms does not have to be self-sustainable as long as it buys the system more time. It is unlikely that a perfectly self-sustainable system will be designed in a single round of reform, largely due to random outliers and uncertainties, most notably human behavior, which may require more exotic responses that could not be predicted before such outcomes are witnessed empirically. Thus, as long as more time is added to the solvency of the system so these disturbances can come to light, health care reform accomplishes this first important goal.

Does the health care bill passed in the House of Representatives achieve this goal? At the moment it is difficult to tell, not just because of the generic ‘it is tough to project budgetary expectations out over 10 years…’ commentary, but more so because the House bill does not directly deal with Medicare reimbursement in a way that ensures lower costs vs. what would be seen if no reform is passed. The current House Resolution (H.R.) 3961, which can be viewed as a companion bill to address Medicare reimbursement, attempts to remedy this problem, but the solution, which will be described later, appears to push the entire reform to greater expense than the current system.

Budgetary issues aside, what are the other prominent health care problems that health care reform should seek to address? The list below outlines these problems:

1) 40-50 million individuals are uninsured;

2) Low reimbursement rates for doctors treating Medicare patients;

3) Low efficiency due to written records (lack of electronic record keeping);

4) Over-use of services (both by doctors and patients) leading to increased costs;

5) Little to zero patient education programs;

6) Medical malpractice lawsuit concerns or obstacles;

7) ER slowdown due to limited beds and turnover;

Identifying why there are so many uninsured individuals is easy: these individuals cannot afford health insurance, because if insurance were affordable they would have it, as medical bills can become quite expensive over even a short period of time. However, solving that problem is much more difficult because it involves devising an acceptable methodology for providing lower insurance premiums for a given insurance plan or increasing salaries. Increasing salaries in lieu of employer provided health care, or just in general, can create problems for a number of reasons. First, employers have an advantage from a negotiating standpoint with insurance companies over the generic individual due to the size of the employee roll, which provides a greater ability to spread risk, with lower average premiums being the quid pro quo for that increase in the population pool. Second, increasing salaries would only work for those that are already receiving insurance coverage from their occupation. Any salary increase from jobs that do not offer insurance would simply add to employer costs, which would not be well received. Third, increasing salary only works for those that have jobs, and it can be argued that increasing salaries without a comparable reduction in costs elsewhere will reduce the probability that jobless individuals will acquire jobs due to the additional costs against employers, which would limit their ability to hire new employees.

These complications with increasing salaries transfer any real solution for the uninsured problem to reducing premiums. Insurance is a risk-driven business based largely on the volume of participants. The more individuals paying premiums, the lower each individual premium payment can be, versus the opposite situation where fewer individuals paying premiums demands higher individual premiums to ensure available funds for coverage of medical costs. Unfortunately the most obvious answer, putting everyone on the same insurance plan, seems unrealistic at the moment due to either logistics or monopoly concerns. Thus, due to these limitations the best way to generate lower premiums is to create a competitive environment where insurance companies compete for healthy individuals to bolster their population pools without increasing their associated payout probabilities. An important component of fostering competition would be to allow the government to provide insurance on the open market (basically a public option), because there is not sufficient competition between existing insurance companies, most of which have carved out spheres of influence in given regions of the country.
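
To see why pool size matters so much, consider a minimal back-of-the-envelope sketch in Python. The claim probability, average claim size and solvency target below are entirely hypothetical; the point is only that the expected cost per member stays the same while the risk buffer each member must fund shrinks as the pool grows:

import math

def required_premium(pool_size, claim_prob=0.1, avg_claim=20000.0, z=2.33):
    """Rough per-member premium: expected payout plus a solvency buffer of
    z standard deviations of total claims (~99th percentile), treating claims
    as independent yes/no events. All numbers are illustrative, not actuarial."""
    expected = claim_prob * avg_claim  # expected payout per member
    sd_total = avg_claim * math.sqrt(pool_size * claim_prob * (1 - claim_prob))
    buffer_per_member = z * sd_total / pool_size
    return expected + buffer_per_member

for n in (100, 10_000, 1_000_000):
    print(n, round(required_premium(n), 2))
# The $2,000 expected-cost portion never changes, but the per-member risk
# buffer shrinks with pool size, which is why bigger pools can charge less.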

Some would argue that such an idea could spell the end of the private insurance industry, but such a concern is rather stupid because any government policy could be given ground rules that would prevent it from ending private insurance, like a premium floor so the government could not offer an obscenely low premium that would redirect all customers to the government plan. Also no government plan could cover all conditions, hence private insurance plans would find specialized niches to cover those gaps (remember even in single-payer Europe, private insurance companies exist).

However, a government plan has its own pitfalls in that any expansion of Medicare would be problematic due to dropping reimbursement rates. In an effort to avoid any significant level of debt from Medicare, the Centers for Medicare & Medicaid Services (CMS) must review and fix, if necessary, any relative values of service that are shown to be inappropriate, with a boundary condition of up to 20 million dollars.1 Basically the total correction of the relative values of service must be lower than 20 million dollars. If not, other service fees must go down until the total change meets this 20 million dollar condition. Thus, without a significant expansion of the Medicare enrollment population, a reality that most lawmakers would not allow due to the above silly yet powerful notion of a government takeover of health care, reducing premiums for any government plan would require a reduction in reimbursement to physicians.

Unfortunately current reimbursement rates are already critically low and require yearly reprieves to avoid imposing significant losses on physicians who treat Medicare patients. Things have gotten quite troublesome in that even prestigious medical centers like the Mayo Clinic have started to turn away Medicare patients. Therefore, until the reimbursement problem can be worked out, a government option will probably have only limited success at insuring those that currently do not have insurance.

The reason reimbursement rates are important for a public option is that if reimbursement rates create negative returns for physicians during treatment, those physicians will not treat patients covered by the public option unless the patient pays out of pocket. It is reasonable to expect individuals that do not receive effective coverage from an insurance plan to abandon that plan, thus there will be little long-term enrollment in the public option. Without the potential to generate a large list of enrollees, the public option will provide no competitive counter-weight against private companies and their corresponding premium rates. Under this scenario the public option would be rendered completely worthless and just a waste of time.

Speaking of the reimbursement problem, partially due to the “budget neutrality” (20 million) feature of Medicare, reimbursement payments have decreased in recent years. The basic explanation for this change is that all individuals over the age of 65 who are citizens of the United States are automatically enrolled in Medicare. Unfortunately, on average these individuals appear to be costing Medicare more than they add to it through their Part B premium payments and previously paid Part A FICA taxes, driving a reduction in reimbursement rates for physicians for various procedures. The chief problem with significantly raising Part B premiums as a means to raise reimbursement rates is that premiums are taken from Social Security, and most would argue that Social Security does not pay enough to begin with; thus increasing the amount taken for Medicare premiums would be disastrous for those that rely on Social Security, which turns out to be a large number of the elderly. Part B premiums have increased over the years, but the rate has been small and stable. Increasing FICA taxes is probably a non-starter. Increasing reimbursement without increasing available revenue does not appear to be a good idea either because it would significantly increase government debt.

To better understand why reimbursement rates are a problem, one needs to understand how they are currently calculated. Medicare uses a standard formula to calculate the requisite relative value units (RVUs) for resource-based medical practice expenses to determine the amount of money that a physician will receive for a given procedure. RVUs are fixed values determined by the CMS. There are three specific RVUs that make up the reimbursement formula: the relative value of a physician’s work (RVUw), the relative value of the practice expenses (RVUpe) and the relative value for malpractice (RVUm). Note that each of these RVUs is relative to what is expected or average for the given procedure.2,3

Of course there are regional cost differences that need to be considered when calculating reimbursement because clearly it is unfair to compensate a physician that works in West Lafayette, Indiana the same as one that works in San Diego, California. Thus there are regional cost modifiers or geographic practice cost indices (GPCIs) for each RVU category (GPCIw, GPCIpe and GPCIm) that are used in calculating reimbursement. So one of the formulas used by Medicare to determine an aspect of reimbursement is:

Total RVU Amount = [(RVUw x GPCIw) + (RVUpe x GPCIpe) + (RVUm x GPCIm)];

However, as can be plainly seen, the total RVU amount is not a dollar figure. The conversion of the total RVU amount to an actual dollar payment, which will be covered both by Medicare and the patient (remember that after a Medicare patient reaches the $135 deductible for Part B, a 20% co-insurance with Medicare kicks in), is determined through multiplication of the total RVU amount by a conversion factor (CF). The current calculation of the CF generates a significant amount of dissatisfaction in the medical community and is thought to be a principal reason for the reduction in reimbursement, largely because of the inclusion and unfavorable structure of the sustainable growth rate (SGR).
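
As a rough illustration of the payment calculation just described, the following Python sketch multiplies the geographically adjusted RVU total by a conversion factor and splits the result into the Medicare and patient portions. Every RVU, GPCI and CF value below is made up for illustration; the real values are published by CMS per procedure and locality:

def medicare_payment(rvu_w, rvu_pe, rvu_m, gpci_w, gpci_pe, gpci_m, cf):
    """Dollar amount allowed for a procedure under the formula above."""
    total_rvu = (rvu_w * gpci_w) + (rvu_pe * gpci_pe) + (rvu_m * gpci_m)
    return total_rvu * cf

# Hypothetical office visit: after the Part B deductible is met, Medicare
# covers 80% of the allowed amount and the patient owes 20% co-insurance.
allowed = medicare_payment(1.42, 1.10, 0.07, 1.00, 1.05, 0.95, 38.09)
print(f"allowed: ${allowed:.2f}, Medicare: ${0.8 * allowed:.2f}, patient: ${0.2 * allowed:.2f}")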

The calculation of the CF is rather complicated in that it is influenced by the Medicare Economic Index (MEI) and the Update Adjustment Factor (UAF). The MEI consists of the weighted average price change for the various inputs involved in producing physician services; the UAF compares actual and target expenditures for the current year, what Medicare has paid out vs. its targets since 1996, and the SGR for the coming year.4 1996 was established as the range floor by the Balanced Budget Act of 1997.4 The official formula for the calculation of the UAF is shown below:

UAF_x = ((Target_(x-1) – Actual_(x-1)) / Actual_(x-1)) × 0.75 + ((Target_(4/96 to 12/x) – Actual_(4/96 to 12/x)) / (Actual_(x-1) × (1 + SGR_x))) × 0.33

Note that boundaries are placed on the UAF where the percentage must be between negative 7% and positive 3%.4 If the calculated value exceeds one of these boundaries it becomes the boundary (for example, positive 5.8% would not be used; instead positive 3% would be used). Normally this calculated UAF value would be added to the MEI to calculate the percentage used to determine the new conversion factor for year x. Originally just the UAF and MEI made up the conversion factor adjustment, but recent legislation, as mentioned, has added a budget neutrality factor for the coming year as well as a 5-year budget neutrality adjustment.
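
The clamping and update behavior described above can be sketched in a few lines of Python. The spending totals, MEI, SGR and starting conversion factor are entirely hypothetical, and the budget neutrality adjustments just mentioned are ignored:

def update_adjustment_factor(target_prev, actual_prev, target_cum, actual_cum, sgr_next):
    """UAF per the formula above, clamped to the -7% / +3% boundaries.
    Inputs are spending totals; sgr_next is a decimal (e.g. 0.04 for 4%)."""
    prior_year_term = 0.75 * (target_prev - actual_prev) / actual_prev
    cumulative_term = 0.33 * (target_cum - actual_cum) / (actual_prev * (1 + sgr_next))
    return max(-0.07, min(0.03, prior_year_term + cumulative_term))

def new_conversion_factor(cf_current, mei, uaf):
    """The update percentage is roughly MEI plus UAF (budget neutrality ignored)."""
    return cf_current * (1 + mei + uaf)

# Hypothetical numbers where actual spending has run ahead of the targets,
# so the UAF hits its -7% floor and the conversion factor falls.
uaf = update_adjustment_factor(80e9, 85e9, 900e9, 960e9, 0.04)
print(round(uaf, 4), round(new_conversion_factor(38.09, 0.021, uaf), 2))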

Unfortunately, over the last decade the actual expenditures within Medicare have exceeded the expected targets, which would have resulted in reductions of the conversion factor, and thus lower reimbursement rates, had Congress not passed temporary conversion factor adjustments to prevent those drops. Of course these temporary conversion factor adjustments only last for one year and do not solve the problem; instead the problem compounds, as the conversion factor reduction percentage for the next year absorbs the postponed reduction from the previous year and grows larger.

For example, suppose an adjustment percentage of –4% is calculated, which would lower the CF from $39 to $37.44. Clearly Congress does not want to lower physician reimbursement, otherwise physicians may begin to turn away Medicare patients, so it passes a law that changes the adjustment percentage for that year from –4% to 1%, raising the CF from $39 to $39.39. However, the new law does not wipe the –4% clean; it just masks the percentage, so next year's calculated adjustment will be even more negative than –4%.
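
A stylized sketch (not the statutory formula) of how this masking compounds is shown below: each year the formula hypothetically calls for roughly a –4% update while Congress grants +1%, and the cut needed to get back to the formula's conversion factor keeps growing:

cf_paid = 39.00     # CF actually used after the congressional override
cf_formula = 39.00  # CF the update formula would have produced
for year in range(1, 4):
    cf_formula *= 1 - 0.04  # formula keeps calling for roughly -4% per year
    cf_paid *= 1 + 0.01     # Congress instead grants +1%
    implied_cut = (cf_formula - cf_paid) / cf_paid
    print(year, round(cf_paid, 2), round(cf_formula, 2), f"{implied_cut:.1%}")
# The percentage cut required to return to the formula's CF grows every year
# the correction is postponed.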

Another example goes a long way toward explaining why these negative percentages appear in the first place. Suppose Person A is given a $500,000 check and needs to use that money to survive for a 10-year period. Thus Person A budgets $50,000 for each year. The first two years go by and Person A is still on budget (still has $400,000), but in year 3 Person A’s costs are $70,000, not the budgeted $50,000. Person A cannot renege on the additional $20,000, so he pays it. However, now due to this unanticipated expense the money allocated for each additional year needs to be adjusted. So instead of having an average of $50,000 available for each year, the new yearly allocation is $47,142.86. The excess payouts in year 3 changed the amount of money available for the rest of the 10-year period.
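
The arithmetic in this example can be checked with a quick Python sketch:

def remaining_allocation(total, years, spent_by_year):
    """New even yearly allocation for the years that remain after the
    spending listed in spent_by_year."""
    remaining_funds = total - sum(spent_by_year)
    remaining_years = years - len(spent_by_year)
    return remaining_funds / remaining_years

# $500,000 over 10 years with a $70,000 overrun in year 3 -> $47,142.86/year
print(round(remaining_allocation(500_000, 10, [50_000, 50_000, 70_000]), 2))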

The above example is similar to how Medicare works in that the $500,000 in the example represents both the Part A FICA taxes and the Part B premiums and other fees. Originally the total number of individuals in Medicare and their medical costs were low. However, increasing populations and associated medical costs, largely due to more costly diagnostic technologies, have eaten into the payment pool created by these taxes and premiums, causing a decrease in payout (i.e. reimbursement rates) to ensure the continued solvency of the system. Basically this system is largely referred to as “pay as you go” (PAYGO): the money needs to be available in some context (if spending increases it has to decrease elsewhere accordingly). The system cannot rely on the prospect of future debt to fund itself (basically no ‘I.O.U.s’).

There are other arguments regarding reimbursement problems which do not reflect back on the SGR, like concerns relating to the GPCI constants. Certain regions are short-changed relative to what they actually cost, because there are a number of fixed overhead costs involved in running a medical practice that do not relate back to the cost of living in a particular region. This complaint has become even more relevant because of concerns regarding the stoppage of bonus payments to physicians in physician-scarce regions. Others believe that there are other inherent flaws in the formula itself, but these flaws are more related to new individuals entering Medicare costing more money than they add in revenue (through their premiums and previous FICA taxes). One of the problems is that although medical costs are rising across the board, Medicare costs are increasing faster because individuals on Medicare are older than the average individual and are more than likely going to have more health problems simply because of their advanced age.

So the underlying problem with correcting reimbursement is that the two most popular strategies for increasing revenue in the insurance industry, increasing premiums or increasing volume (number of patients), are not initially available. There appear to be three remaining main strategies for bridging the gap between reimbursement and revenue in a government system like Medicare. The easiest means would be to eliminate coverage of, or ration, certain procedures, thus freeing up the money that would normally be devoted to those procedures to increase the amount paid for other procedures. It is likely that this strategy would face significant opposition from Congress due to extensive lobbying from groups like AARP. Also there would be questions regarding the morality of generating such cuts in Medicare.

Rationing might be required in this instance because rearranging the reimbursement scale would not be effective without inflation; the total allotment of funds to draw from to create the reimbursement scale is rather stable. Without inflation it does not matter where the capital is distributed if the total amount of capital available for distribution does not change and all of the procedures are legitimate. Basically, it does not matter if a cake is cut into 17 slices or 4 slices, the same total amount of cake is available, just in smaller or larger individual quantities. Note that inflation in this instance describes the administration of unnecessary tests for the sake of collecting high service payments from Medicare.

However, if there is inflation in the system then reorganization could work on some level, not in increasing the total average reimbursement, but in closing the standard deviation between physicians. For example, if patients were demanding unnecessary tests and/or physicians were administering unnecessary tests, such actions would place an irrational strain on the reimbursement system which would skew the capital allotment within the system. Once these unnecessary procedures are removed from the reimbursement allotment, the capital that was assigned to pay for those tests can be redistributed to other more reliable categories of reimbursement. This redistribution would typically increase the total reimbursement for Medicare patients due to less manipulation and waste in the system. However, it must be remembered that total reimbursement across the board does not change even in this situation because the total amount of capital available for distribution does not change. Rationing basically works the same regardless of whether or not inflation exists, which may be necessary because reliably identifying inflation can be problematic.

Eliminating inflation is a valuable tool though because it levels the playing field so more honest and/or successful physicians are better rewarded for those attributes. Basically the point of redistribution through the elimination of inflation is to reduce the profit made by a select number of physicians and increase the total amount of money reimbursed to physicians doing meaningful work. This redistribution can also be directed against specialized fields that are receiving more compensation than is thought rational due to the statistical relevance of their testing procedures. For example consider a scenario where three doctors receive $100,000 from Medicare, partially due to over-treatment, and three other doctors receive $60,000. Redistribution could be applied to narrow that gap where the first three are only receiving $82,500 and the second three are receiving $77,500 due to the elimination of those over-treatments and increasing reimbursement for general practices.
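
A stylized Python sketch of this redistribution, using the illustrative dollar figures above and assuming the entire $17,500 per higher-paid physician represents over-treatment, might look like this:

# Total reimbursement is fixed, so stripping over-treatment payments from one
# group frees capital that can be redistributed to the other group.
high_paid = [100_000] * 3  # totals that include over-treatment
low_paid = [60_000] * 3
overtreatment_per_doctor = 17_500

freed = overtreatment_per_doctor * len(high_paid)
high_after = [p - overtreatment_per_doctor for p in high_paid]
low_after = [p + freed / len(low_paid) for p in low_paid]

assert sum(high_paid) + sum(low_paid) == sum(high_after) + sum(low_after)
print(high_after, low_after)  # -> [82500, 82500, 82500] and [77500.0, ...]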

A second option in lieu of rationing is to improve the overall health of new Medicare enrollees. The continuing problem in the United States with more and more individuals being either obese or overweight is that these individuals more than likely have a negative effect on the general health costs of the populace. Therefore, if steps can be taken to reduce the probability that these individuals become obese or overweight in the first place, such action should reduce the probability of increased losses when these individuals join the Medicare roster. Unfortunately there does not appear to be any easy way to accomplish this goal because individuals have their own free will and probably would react negatively to any limitations or rules restricting their eating habits; also, nutritional education does not appear to be working well either, as indicated by the climbing rates of unhealthy individuals.

The third option involves changing the formula used to calculate reimbursement. This option is what H.R. 3961 proposes, replacing the 1996 baseline for the SGR with a new 2009 baseline which is thought to ‘better represent’ the current costs of medical treatment.5 Unfortunately this change absorbs the gap in cost between the 1996 and 2009 baseline. Normally such a debt allotment would not be valid under the most current PAYGO statute, newly passed in July, without decreased spending elsewhere. However, H.R. 3961 circumvents PAYGO through the addition of the text from H.R. 2920 thus the change in reimbursement would not be considered a ‘new’ measure of spending that would have to abide by PAYGO.5

Resetting the baseline may be a valid strategy, but not by itself. The concern is that by de-linking the statute from the PAYGO provision such a change will result in significant debt. Changing the baseline does nothing to actually reduce rising medical costs, which means continued increasing costs into the future and more money required to pay those costs through reimbursement. Basically, circumventing the previously operating ‘pay-as-you-go’ aspect of Medicare is risky because, depending on how medical costs change in the future, it could easily eliminate the solvency of Medicare much faster than is currently projected. Once Medicare solvency is gone the entire funding base of Medicare becomes tied to deficit spending, which removes almost all stability from the system and would more than likely dramatically increase the national debt.

One strategy that has been proposed to ward off dependence on debt is creation of private medical insurance accounts similar to the private accounts proposed for Social Security. Unfortunately the ability of private accounts to be part of the solution to future shortfalls, especially when used for both Social Security and Medicare, is becoming less and less probable. Thus the above problem of debt reliance once again becomes an issue making an adjustment of baseline without corresponding effective measures to reduce costs less and less attractive for anyone that cares about financial solvency.

H.R. 3961 tries to make some serviceable improvements in reimbursement by creating a new payment category for preventative care with its own conversion factor.5 Unfortunately there is also reason to believe that abandonment of PAYGO in the above respect will result in a more rapid increase in Part B premiums, which, as previously discussed, will place a greater burden on Social Security recipients. Also, on a side note, H.R. 3961 appears to solidify the Stark rules, which prevent physicians from making self-referrals in order to collect more fees.5

With these roadblocks, the best strategy may be a combination of rationing and improving the general health of society: for example, limited rationing through decreasing payment on procedures more widely associated with poor health (vs. random chance) while increasing payment on procedures associated with ‘normal wear-and-tear’ and general care. Realizing that certain conditions will afflict an individual regardless of health, rationing would not be appropriate to exclude any conditions outright, but to limit certain conditions to a predetermined amount of covered treatments per year. Such a strategy would be preferable if applicable, i.e. if there are a number of individuals who receive multiple passes of expensive treatments that would fall into the rationed category. However, if there is little repeated treatment then rationing in such a strategy may not be effective. Realistically the best bet for generating a method that will allow for an increase of physician reimbursement without excess debt is increasing the overall health of society, but if such a thing is not possible then rationing on some level may be the only response.

Unfortunately there is another problem altogether with reimbursement, one relating to the core of its existence as a fee-for-service payment methodology. There is a real concern that because physicians are compensated based on the amount and type of service rendered, not only are unnecessary tests administered to patients, but also some physicians practice a more sinister attitude of not focusing on curing the patient in a single treatment, instead offering piece-meal treatments to maximize profit. Clearly both of these elements are going to have a negative influence on the total cost pool, reducing the total effective reimbursement available from programs like Medicare and even private insurance companies and forcing premium increases. Although the solution seems obvious, reward physicians for proper diagnosis and treatment while applying appropriate cost measures, the problem is how to identify those elements in a fair and rational manner. As mentioned, H.R. 3961 creates a new payment category relating to preventative treatment, but how exactly one would measure success in this new category is still unclear.

Most medical experts look at the Mayo Clinic methodology as the new standard for treatment that should be applied in all hospitals, where there is no fee-for-service pay structure; instead all physicians and staff receive a salary. Without pay tied to the number and type of services rendered, physicians have no incentive to do anything other than diagnose and treat patients as efficiently as possible. In fact such a system creates a negative incentive for errors and disingenuous action because more time would be spent treating a given patient instead of on other endeavors. Under such a system doctors are also more inclined to help each other with diagnosis, which creates a wealth of new ideas and reduces the average number of tests that need to be run in order to successfully diagnose and treat a patient, reducing costs.

A salaried system also removes any incentive to cherry-pick patients that have high quality insurance that could afford more expensive tests. Basically the general credo of the Mayo Clinic is that patient needs come first and financial endeavors come second. Of course this philosophy can only go so far because a hospital does need capital to operate, thus a hierarchy of treatment does exist. This hierarchy is why Mayo Clinics are beginning to turn away Medicare patients due to lack of reimbursement, because absorbing too much debt will put any institution out of business and that hurts many more individuals than those that are initially turned away. In many respects the transition away from an individual fee-for-service reimbursement structure to a cooperative salaried structure would be the best thing the medical community could do to reduce the escalating costs of medicine in modern society. The interesting problem with such a change is more psychological than logistical: in a capitalistic society, can a certain profession voluntarily handicap its ability to make money for the good of society, especially when entrance into the profession requires such a high initial capital and opportunity investment? It would go a long way if the government could make such a change easier through financial incentives.

The lack of electronic records is probably the easiest of the big problems in health care to solve, and many individuals have pointed out that the lack of electronic records is irrational and unacceptable for a society that is supposed to be modern. Although electronic records are important, the upgrading process has been quite slow. One reason for the slow transition from paper records to electronic records is the cost involved. Although approximately 19 billion dollars was allocated for the expansion of electronic records in hospitals through the recent stimulus package, when actually dividing the total amount among all of the available hospitals that would use it to update their systems, the amount per hospital is only somewhere from 4 to 6 million dollars, enough for some of the smaller hospitals, but not nearly enough for the larger hospitals.6,7

Remember, although proponents say that electronic record keeping would eventually pay for itself, individuals and companies still do not like to foot the initial investment, especially when rate of return values are uncertain; this mentality is largely why energy efficiency has not really taken off yet outside of government grants and loans. That philosophy could also explain why only approximately 17% of U.S. physicians use a minimally functional or a comprehensive electronic records system.8 It should be noted that this percentage was collected before the passage of the American Recovery and Reinvestment Act of 2009 [a.k.a. the stimulus package], thus there is no current information regarding how these numbers have changed; it is probable that there has been some progress, but probably nothing dramatic.

Utilization of electronic records makes sense largely due to a reduction in errors and testing repetitiveness as well as reduced treatment time. Unfortunately it is highly probable that the dollar figures attributed to what could be saved via electronic record keeping are overestimated because of the difficulty of separating the specific savings provided by electronic records from other non-related test mediums. For example, proponents like to talk about the increasing costs of imaging modalities and the positive influence that electronic records could have on reducing those costs. However, they rarely, if ever, attempt to separate the costs associated with conducting repetitive tests due to lost/misplaced images vs. necessary repetition due to changing circumstances in a patient’s condition. It seems reasonable to suggest that the former makes up a very small percentage of costs associated with medical imaging, which is all electronic record keeping would really influence.

Error reduction largely comes in the form of ensuring accurate information pertaining to the patient’s medical history and family history. Electronic records eliminate the need for the nurse to take a family history and ask about allergies or other potential cross-reactions with other medications at each visit. Also, electronic records limit miscommunications between physicians and pharmacists due to poor physician penmanship when prescribing drugs, which will reduce a large percentage of the adverse effects from drug mix-ups and incompatible drug combinations.

In addition to a perceived lack of funds, there are also problems in the application of the technology. Few hospitals actually have the personnel that could install such a system, and private contractors that could do so are not as plentiful as needed. Therefore, even with the appropriate funds, finding the proper personnel to install the system in a timely manner may be difficult. One option to speed up the process in this respect may be to create state sponsored technician groups that would be responsible for installing electronic record keeping first in all government sponsored hospitals (VAs, etc.), then in private hospitals that wish for electronic record keeping. Such a mechanism eliminates some of the hassles for hospital administrators associated with making the switch from paper to electronic records while making sure the work is actually done and done properly. Psychologically this may be a very important issue because one of the hardest things when making a change is actually finding someone that can facilitate the change.

Another significant problem relating to the application of electronic record keeping is changing the behavior of physicians in general. A number of practicing physicians have practiced for a reasonably long period of time and are rather set in their ways. Add that psychological mindset to a profession that is already typically over-worked and one can anticipate little initial success when trying to convince these individuals to spend significant amounts of time learning a new electronic system, especially the older physicians that may already struggle when acclimating to modern technology. Such a belief is not inherently irrational because these physicians may have a routine and certain way of doing things that may not seem efficient from the outside looking in, but for these individuals their way of doing things is extremely efficient for them. Unfortunately it is highly probable that this mindset will create a problem in the mixed environment of larger hospitals between physicians that would use electronic record keeping and those that would not, especially in the same discipline like radiology. Thus it would be up to the younger physicians to convince the older physicians that electronic record keeping is an advantage not a detriment.

Another problem that may prevent hospitals from incorporating electronic record keeping is the fear that such records may be used against them and their physicians as evidence in malpractice lawsuits. Such a fear is warranted, but misses the point: if an electronic record used as evidence properly implicates malpractice by a physician, then the physician screwed up and should be sued. The real problem is that malpractice lawsuits still revolve around a jury system that does not understand the practice of medicine very well. This unfamiliarity means that juries do not have a sufficient understanding of what is and what is not reasonable in diagnosis and/or treatment; they typically only understand extreme conditions (person gets the wrong foot amputated, etc.). Thus they have to rely on lawyers to frame the arguments, which is not a good thing as the lawyers on both sides obviously have their own agendas. Any malpractice concerns involving electronic record keeping will reasonably vanish when the legal system can better operate in the realm of medicine. Remember, if a physician genuinely screws up he/she deserves to be sued; some people seem to have forgotten that whole thing about taking responsibility for one’s actions.

One advantage to a national network based electronic record keeping system is that it could also be used as a data mining center where standards of care for a given disease could be established which would act as a guide for diagnosis and treatment of a given disease. In addition the effectiveness of various treatments could be weighted so physicians know the relative probabilities of success when prescribing a given therapy. Basically the electronic system would better tie evidentiary medicine with the actual practice of medicine.

Unfortunately this advantage could also be viewed as a double-edged sword in that physicians could be wary of such a design. Such a system would limit/eliminate any individual feeling or instinct in the diagnosis and/or treatment as the record would act as a form of game plan that could not be deviated from under any circumstance, basically making a generation of robot physicians. Also there is concern that a precedent could be established where deviation would result in an open-and-shut malpractice lawsuit. Such a possibility would require a set of rules to govern the use of any such database created through electronic record keeping.

One strategy that could be explored to ease the transition for older doctors, especially those with children or grandchildren, is to develop/improve a physician-based video game that incorporates electronic record keeping into the play. Such a game would acclimate the physician to real-world type situations in a forgiving environment where mistakes would not cause problems. Such a game may be interesting for new physicians as well exposing them to real-world situations where any neophyte diagnostic mistakes could be better controlled.

Overall there are more advantages than disadvantages when considering the installation of an electronic record keeping system, but the transition from a paper system to an electronic system involves much more than simply making the physical shift, considerations for the psyche and attitudes of the attending physicians must be made as well. Just throwing money at the problem of installation almost guarantees failure. An important step would be establishing a universal protocol that administrators can review to determine the best course of action when planning to install their electronic record keeping system.

Overuse of medical procedures and treatments has generated significant additional costs for the medical establishment. Both the patient and the physician perpetuate this overuse, although the patient probably bears more of the blame. For instance, WebMD and other medical websites have given rise to self-diagnosing patients who lack the expertise to properly identify correlations between symptoms and cause. Instead of realizing that there is a 99% chance that symptom x is caused by common condition y, these patients treat common condition y and rare condition z with equal weight and thus demand expensive tests to better verify the correct condition. This behavior is aided by the general lack of statistical understanding among the public.
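
A minimal base-rate sketch (with entirely hypothetical prevalence and symptom probabilities) shows the reasoning such patients skip; restricting attention to just the two candidate conditions, the common one dominates even though the symptom fits both:

# Hypothetical numbers: how likely each condition is before testing, and how
# often each condition produces the symptom in question.
prevalence = {"common condition y": 0.05, "rare condition z": 0.0005}
p_symptom_given = {"common condition y": 0.80, "rare condition z": 0.90}

joint = {c: prevalence[c] * p_symptom_given[c] for c in prevalence}
total = sum(joint.values())
for condition, weight in joint.items():
    print(condition, f"{weight / total:.1%}")  # ~98.9% vs ~1.1%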

This mindset harkens back to an underlying problem in society, the ‘normal expert’: the stance that expertise is only valid when said expert agrees with the opinion of the individual petitioning the expert. Basically the patient disregards the diagnostic expertise possessed by the physician and places his/her own diagnostic ability at an equal level, eliminating trust and leading to the demand for expensive tests. Unfortunately there is little the physician can do, because if he/she elects not to put the patient through these tests using the reasoning that it is a waste of time and money, the patient will more than likely leave and visit another physician until he/she finds one that will perform the tests. Therefore, because performing the tests for patients of such psychological standing is a foregone conclusion, the initial physician might as well administer the tests to collect the associated service fee.

An additional driver of overuse stems from a sense of equal importance or equality. The richer the individual, the more likely he/she is to have insurance that will cover almost any medical procedure; thus these rich individuals will almost always undertake medical care, even if it is unnecessary or irrational based on a probability/statistical assessment. These actions tend to influence other individuals of lower income through the thought process ‘if test x is good enough for person y and I have the same general symptoms, I want test x too.’ Rarely will logic be an effective means of defusing the situation (you do not need test x, person y is stupid and wasting money undertaking test x), thus the doctor is left with a patient demanding that he/she be treated the same as the richer patient.

Physicians tend to overuse medical technology in one of two ways: for financial gain or for legal protection. There is little argument that imaging modalities like MRI and PET have large price tags associated with their application and serve as an effective means of profit for the physician/hospital, especially those that own their own machine, so associated use fees do not need to be paid. For some physicians, putting a patient with appropriate insurance through an MRI (a relatively painless procedure) even if no logical reason exists spells big bucks.

The second category is defensive medicine. Note that defensive medicine will be defined as: ‘the administration of a statistically irrelevant medical procedure for the sole purpose of lowering the future probability of success by a plaintiff taking specific legal action against the physician.’ The realm of medical malpractice is a tricky one because perception and reality differ in large respects. The perception of medical malpractice seems to be a sense of entitlement in that errors are unacceptable regardless of the circumstances and if a physician makes one then that physician should be kicked out of the profession forever and be sued costing both his insurance and the hospital millions of dollars. Again with a change in the way the legal system handles malpractice lawsuits this perception will change and more than likely lead to a reduction in both lawsuits and premiums.

While waiting for the legal system to evolve, the best way to eliminate patient demand for, and physician application of, unnecessary tests is to use evidence-backed medicine as a guide. The medical procedures that are covered by insurance companies must be statistically relevant for a given condition. If empirical research and testing has demonstrated that a given medical procedure does not statistically aid diagnosis and/or treatment of a suspected condition, then there is little benefit to administering such a procedure. Basically, remove the ability of physicians to increase their fee-for-service salaries through running needless tests and return the expertise factor to the practice of medicine, so a patient can no longer automatically conclude that because symptom x fits an obscure disease that was found online, test y needs to be run to rule it out. In this situation a serious discussion with the physician will be required, because if test y is still desired then the fee comes directly from the patient’s pocket.

Note that such a strategy may work even better in a mixed insurance environment. For instance, a government-based public option could enforce these statistical standards where a private insurance company may not, with the private company charging a higher premium for “peace of mind”, however irrational, by covering statistically unnecessary tests. There may be some level of initial backlash to this statistical strategy because lower income individuals may believe that they are unfairly targeted, since richer individuals will be able to afford the tests that are no longer covered by insurance. However, the key fact to remember in this situation is that these tests are not statistically significant for diagnosis or treatment of a given condition, thus in every instance the richer individual is simply wasting his/her money by authorizing and paying for the test. In short, genuine medical treatment is not being withheld from lower income individuals.

Also expansion of pre-certification testing and proper accreditation would more than likely have a positive influence on lowering the costs associated with imaging modalities by removing unnecessary testing and errors during the imaging process. Overall although overuse of medical procedures is a significant problem there are clear strategies that can be used to neutralize the causes of the overuse problem.

It was previously mentioned that increasing physician reimbursement would require increasing the premium pool while reducing the amount of services that those in the premium pool consume. One suggested method to accomplish this goal was to ration health care services. However, such a response is favorable to no one, for limiting an individual to something like 2 MRIs a year could seriously limit the ability of hospitals to treat patients and would also reduce the amount of potential money paid to physicians for services. Fortunately rationing is not the only way to change the premium/cost ratio from less than 1 to greater than 1; one can instead improve the health of the individuals entering Medicare or any government based health care system. The problem with executing this strategy is that the general health of society has been going down in recent years, not up, despite increases in life expectancy. Thus not only will this trend need to be reversed, but the reversal will need to be amplified significantly.

One of the biggest reasons individuals suffer from controllable ill health is the lack of quality/healthy food. There are four primary reasons that individuals lack quality/healthy food: lack of desire to eat healthy, lack of access, lack of knowledge regarding what is and is not healthy, and a cost gap (eating healthy is too expensive). Of the four, the cost gap and lack of access seem to be the most correctable because they depend the least on the individual doing the eating.

A lack of access is seen in both urban and rural regions of the country. In certain parts of these regions supermarkets have become less and less of a factor, leaving convenience stores or fast food restaurants as the primary food supplier. Unfortunately the few healthy options these establishments offer are more expensive than the less healthy options and are of low quality. Without access to better variety and quality, the populations in these environments, which some refer to as ‘food deserts’, are almost forced to select the unhealthy options at further detriment to their health. Somewhat ironically, a big reason why supermarkets and other large food distribution dealers are not present in these communities is that they do not believe it would be profitable to do so because of the poor eating habits of those in the community. Whether or not these eating habits are derived from the lack of food options is unclear. However, one step the government can take to encourage risk-taking in these areas, in an attempt to change these habits, is to provide tax incentives to individuals that want to open supermarkets.

If tax incentives are provided to entrepreneurs it would probably help with the initial capital costs of constructing the supermarket and leasing the land or building required for the supermarket, but probably would not influence food prices. Such a result is unfortunate because as the recent recession demonstrated, eating healthy is typically more expensive than eating poorly. Therefore, establishing access to the healthy food may not be enough to truly change the eating habits and health demographics of the given region in a positive direction.

Access will make it easier for the wealthy of the region to eat healthier, but most food deserts lie in below-average socioeconomic regions, thus another strategy must be implemented to aid both the supermarket and its potential consumers. As previously mentioned, the supermarket is in a difficult position because lowering prices for healthy foods may create a problem with respect to current and future profitability unless its supplier lowers prices in turn, which is unlikely because that supplier's own supplier would then have to lower prices, and so on and so forth. Increasing the prices on unhealthy food to make it more expensive relative to healthy food as a means to entice purchase of healthy food would not work, because such a situation would just price unhealthy food out of the purchasing power of the local populace. If they cannot afford healthy food, making the unhealthy food more expensive than the healthy food does nothing to change that fact.

Therefore, it may make sense for the federal government to get involved again, providing subsidies to supermarkets that lower the prices of certain healthy foods, mostly fruits and vegetables, past a certain price point, in an effort to recoup profits that would have been made on sales at the original price. It would be irrational for supermarket proprietors not to accept such a strategy because it is highly likely they would make more money under this program (a greater volume of sales at an equivalent effective price). Based on simple preventative treatment, these subsidies would hopefully pay for themselves through a reduction in medical care for individuals in the region.

Once access and economic barriers are overcome, the next two significant barriers can be addressed: lack of desire to eat healthy and lack of knowledge regarding what is healthy. Unfortunately, lack of proper knowledge regarding health has become much more troublesome due to the influence of industry lobbyists on Congress. This influence prevents proper scientific labeling on foods, allowing foods to skate by without consequence or challenge when claiming food x has mineral y which improves heart health, lowers cholesterol, increases muscle flexibility or whatever else, regardless of whether or not unbiased clinical tests have demonstrated such a fact. Therefore, with food manufacturers allowed to plaster unsubstantiated claims on labels, the art of label reading, which used to be the only thing that was really needed to differentiate healthy and unhealthy food, is further complicated. The problem with this complexity is that most individuals have neither the time nor the inclination to scientifically identify which aspects of a label are valid and which are invalid. With this mindset any solution to this problem must be simple, but effective.

A potential solution to the above problem could be for each supermarket to distribute a weekly 2-4 page leaflet that targets 2-3 health-related issues. The leaflet would be printed on recycled paper to ensure limited environmental impact. Nutritional information in the leaflet will be characterized at a high school level of readability with a more specific focus on the relevant issues/foods at hand, basically simple language with focused analysis. Footnotes can be placed at the bottom of the leaflet to provide more detailed information for readers that want it. Note that there is no reason for these leaflets to be restricted to these new supermarkets located in food deserts, they could be distributed to a much wider audience. Also an electronic station can be set up in the supermarket so consumers can access archived information or look up a quick reference regarding the nutritional elements in a particular food or just in general.

With regards to the final problem of creating a desire to eat healthy, there is little that can be done from outside beyond providing the means, through cookbooks or other recipe mediums, to improve/mask the taste of foods that individuals believe to be unappetizing. Understand that this masking needs to be done in a healthy way, not ‘hey, just put a bunch of melted cheese on broccoli’. Unappealing taste is one of the few remaining logical factors preventing healthy eating once access, cost and knowledge barriers are lowered. Also, parents need to be a prime motivator in getting their kids to eat healthy, especially when the kids are very young, for eating patterns are best developed early in life. That means parents need to be parents and not buy unhealthy snack food, instead substituting fruits, vegetables and other healthy alternatives.

Education regarding healthy eating is only one portion of the education that must be provided on some level to society. There are basic health care prevention protocols that people should follow to reduce the probability of ill-health. These simple elements do not include getting various tests for various conditions, but instead the basic health care steps that most people already know about, but for some reason some fail to perform consistently, like washing hands, getting an annual physical when past a certain age, consistently brushing the teeth, exercising, etc.

Also, although most of the time preventative testing is a benefit, there are times when it is inappropriate and wasteful. Incorporating family histories as well as lifestyle choices is important when considering whether or not to undertake a test for a given condition, because if the individual in question does not have any factors that predispose him/her to the tested condition, then the test itself can be viewed as wasteful. Unfortunately it is difficult for the public to appreciate this waste because of a general lack of statistical knowledge, as demonstrated by the uproar surrounding the new recommendations recently announced for mammograms in relation to breast cancer.

As previously mentioned, there are concerns about how health care reform would affect medical malpractice lawsuits. Medical malpractice insurance premiums have been increasing steadily since the late 90s and early 2000s.9,10,11 These increases have led to concerns regarding either a reduction in the ability of physicians to practice medicine due to a lack of malpractice insurance or an increase in medical costs due to the practice of defensive medicine. Some believe that a federal law, similar to those in some states that place a monetary cap on non-economic damages, usually of $250,000, should be passed to limit premium growth. Recall that economic damages are related to medical bills and lost wages, but not psychological pain and suffering or quality of life issues. Unfortunately such reasoning is an over-reaction to the increase in premiums, probably born of a lack of understanding regarding its origin.

Medical malpractice premium increases can be attributed to four causes.9,11 First, from 1998 to 2001 a vast number of insurers experienced decreased investment income due to reduced interest rates on their investment bonds, which made up a significant portion of their investment portfolios (80%).11 This reduction in earnings forced insurers to increase premiums as a means to ensure adequate funds to cover any payouts and other costs associated with being an insurance company. Unfortunately, due to the recession, bond interest rates have been threatened again, which may repeat the insurer behavior seen earlier in the decade. Second, there was an increase in reinsurance rates for insurers which raised overall costs that needed to be recouped through premium increases. Third, competition led some insurers to short-sell insurance policies that failed to sufficiently cover all probable losses, which led to insolvency for some of the providers. Fourth, in certain regions of the country malpractice claims increased rapidly, forcing a justifiable increase in premiums. Unfortunately a significant lack of data regarding medical malpractice claims, and regarding the division between economic and non-economic plaintiff losses, significantly reduces the ability to effectively analyze the true cost of these payouts.

Although cycles permeate the insurance industry, similar to most other industries, cycles in medical malpractice are more extreme than in other insurance markets because of the randomized average time required to determine the outcome of a trial and any possible payment.11 Also, the uncertainty surrounding both the number of suits and the amount of payment from those suits creates problems for insurers when creating future budget projections. These elongated cycles explain why premiums did not fall accordingly when interest rates went up in the middle of the decade. However, one problem may be that premiums are rarely raised against a single physician; instead the insurance company tends to raise premiums across the board.

There are two problems with the strategy to implement a federal cap on all medical malpractice redress. The first problem is that a fixed cap does not appropriately address the needs of individuals that are actually victims of legitimate medical malpractice. If a physician amputates the wrong arm how is it morally justifiable to say to the wronged individual that the loss of that arm is only worth a maximum of $250,000? It is impractical and unfair to penalize an individual that has genuinely been mistreated by a physician solely to ward against the fear that less scrupulous lawsuits would result in inflated and unjustified monetary awards.

The second problem is that the lack of malpractice information has made it difficult to determine whether malpractice caps directly correlate to lower premium rates or whether other factors are involved in lower premiums. Both the CBO and the GAO, two organizations with no bias regarding the issue of malpractice, failed to determine conclusively that malpractice caps were responsible for lower premium rates or a change in healthcare spending (lower probability of elimination of care due to malpractice premiums).9,10,11 In some instances caps decreased premiums whereas in other instances caps had no influence or even increased premiums.11 Savings from particular caps are also difficult to analyze due to the unreliability of quantifying the actual costs derived from the practice of defensive medicine. To date, no study of the costs of defensive medicine has avoided questions regarding its bias or validity.

One potential explanation for lower premiums in states with medical malpractice caps could be a rather underhanded tactic by malpractice insurers: purposely keeping rates low in cap states, exploiting the uncertainty involved in malpractice litigation, to perpetuate the belief that caps work and thereby maximize profit. Therefore, until definitive evidence can be obtained regarding the true effectiveness of medical malpractice caps, it does not appear warranted to pass a federal medical malpractice cap in an effort to corral malpractice premiums.

Instead a more appropriate and malleable solution, if one were really interested in limiting needless medical malpractice lawsuits, would be to evaluate the credibility of a malpractice suit before it could proceed to trial. In essence create an independent state or federal board that would review the merits of a medical malpractice suit and could either find evidence supporting the argument of the patient or find that a unique set of circumstances, not neglect, was responsible for the event that is triggering the suit. Basically this board would act as a grand jury of sorts for the civil division with regards to medical malpractice suits; they would either allow the case to proceed to trial or ‘no true bill’ the case, which would preclude the case from proceeding to trial without new evidence. Unlike the cap proposal, this method treats each situation as unique and judges on a case-by-case basis instead of classifying everything under the same umbrella. Such a system has been previously described on this blog.

Abortion has always been and probably always will be a contentious issue, and not surprisingly health care reform does not escape that conflict. One of the main arguments against providing coverage for abortions under a new mandated public option type structure is the belief that some/many (depending on perspective) taxpayers would not want their tax dollars paying for abortions. Unfortunately for those that use this argument, such reasoning is complete and utter nonsense. Tax dollars are not earmarked, nor are they sorted by individual; they cannot be reserved to pay for only certain government-funded projects. A taxpayer cannot declare that he/she will only pay taxes if those taxes are used solely for project x, y or z but not project d. Thus, the only thing the above argument accomplishes, when made in an effort to prevent federal funding of abortion through a federal medical insurance program, is to embarrass and make a fool of the individual making it.

In fact there is no real argument for denying coverage of a majority of abortions in a public option type plan. The vast majority of abortions take place in the 1st trimester, where no one can intelligently argue from a scientific perspective that a human life is definitively being terminated; one can only argue from the position of 'potential human life,' which from a legal standpoint is not enough. No 'potential' anything has the expectation of any form of legal protection. The only categorical position that can be taken against 1st or 2nd trimester abortions is one of a religious nature, from the perspective that life begins at conception rather than at birth. Unfortunately for those making this argument, the 1st Amendment separates church and state, preventing the passage of laws based on specific religious beliefs or scriptures. Thus any law preventing coverage of abortion by federal funds in general should be viewed as unconstitutional. If anti-abortion proponents wanted to limit only 3rd trimester abortions, it would be more difficult to argue that a singular religious belief construct was the sole mindset behind the legislative opposition.

One of the lingering issues regarding health care reform that seems to have been ignored, which is funny because its link to the uninsured is critical, is emergency room reform. It would be very difficult to argue that the current state of ER operation in the United States is not at a crisis point. Overcrowding and a lack of resources have pushed wait times before treatment to multiple hours, which hardly allows the ER to live up to its name. Therefore, unless the problems within the ER infrastructure are directly addressed there is little reason to think the situation will get better; in fact the odds are it will continue to get worse.

One of the biggest prevailing misconceptions regarding ER overcrowding is that most individuals visiting the ER have no insurance and thus no primary care physician, resulting in ER visits for everything from genuine emergencies to benign, non-threatening conditions. Belief in this myth is largely drawn from misreading “The Emergency Medical Treatment and Active Labor Act of 1986”12 (surprise surprise, individuals have misinterpreted a piece of legislation and proceeded to make complete fools of themselves by over-exaggerating that misinterpretation).

The misinterpretation involves the erroneous belief that ERs are required to treat any individual that comes through their doors for free. Ha, really just think about that logically for one moment: how long would ER hospitals stay in business if that were actually true… 5 seconds, maybe 10? What “The Emergency Medical Treatment and Active Labor Act of 1986” actually states is that ERs cannot withhold medical treatment from an individual in need of emergency attention solely based on the ability to pay.12 Individuals who visit the ER without insurance still receive a bill for services rendered; the ER simply is not allowed to demand insurance or cash upfront and throw the person out the door if they have neither. Of course the individual can skip out on the bill, but as anyone who has suffered from the inability to pay a medical bill knows, such an action rarely works out in one's favor.

Therefore, the uninsured actually visit the ER less (as a percentage of their demographic) than the average person with insurance, because while a 300 dollar a month premium and maybe a 50 dollar co-pay are annoying, a 6,000 dollar ER bill is crippling. Thus, the uninsured tend to avoid going to the ER for as long as possible, hoping that their bodies will be able to neutralize a detrimental condition without the assistance of modern medicine. Blaming overcrowding on the uninsured is simply not logical or intelligent. Realistically, overcrowding is a simple matter of increased input and decreased output.
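To put that asymmetry in rough numbers, here is a minimal back-of-the-envelope sketch in Python; the dollar figures are the illustrative ones from the paragraph above and the visit count is an assumption, not data from any cited source:

    # Rough annual cost faced by an insured patient versus a single uninsured ER bill.
    # All figures are illustrative assumptions, not sourced data.
    MONTHLY_PREMIUM = 300     # assumed monthly premium ($)
    COPAY_PER_VISIT = 50      # assumed co-pay per visit ($)
    ROUTINE_VISITS = 4        # assumed routine visits per year (hypothetical)
    UNINSURED_ER_BILL = 6000  # assumed bill for one uninsured ER visit ($)

    insured_annual = 12 * MONTHLY_PREMIUM + ROUTINE_VISITS * COPAY_PER_VISIT
    print("Insured, full year of premiums and visits:", insured_annual)   # 3800
    print("Uninsured, one ER visit:", UNINSURED_ER_BILL)                  # 6000
    print("Ratio:", round(UNINSURED_ER_BILL / insured_annual, 2))         # ~1.58

Under these assumptions a single uninsured ER bill costs more than an entire year of premiums and co-pays, which is exactly why the uninsured put off the ER for as long as possible.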

One unfortunate driver of overcrowding is the simple fact that people are living longer and suffering more injuries, the same general problem afflicting the medical community as a whole. A lack of primary care physicians also puts more pressure on the ER to fill in the gaps for those who feel sick but are unable to see a non-emergency physician within a period of time reasonable for the condition at hand. For example, it may be fine to wait 2 weeks for a physical when nothing appears to be wrong, but when afflicted with a spreading rash, 2 weeks is too long to wait.

These two elements largely account for the increased input component of ER overcrowding. Unfortunately little can be done about the first element other than to further promote a healthy lifestyle of quality food, realistically low stress and exercise. The second element would obviously be best handled by increasing the number of primary care physicians. Unfortunately such an endeavor is difficult because primary care physicians are like division commanders in the health care army: they have to do a lot of work, more than specialists, but get paid less money. If a medical student is confronted with the choice between job A and job B, where job B pays more money for fewer hours, job B is obviously going to be more appealing.

The shortage of primary care physicians is especially important to the issue of the uninsured. Recall that the uninsured typically wait too long to go to the ER to receive medical treatment, largely because they do not have insurance. However, even if these individuals were given insurance, a continuing lack of primary care physicians would produce only a very small shift in the probability that they avoid the ER, because of the previously noted time discrepancy. In fact, there is reason to believe that insuring the uninsured without any change in the number of available primary care physicians would increase ER overcrowding, because the newly insured could have conditions treated at earlier, less advanced stages at a much lower personal cost, with most of the bill charged against the insurance company, and the ER would be the only place able to see them promptly.

One strategy for closing this gap is, once again, government involvement. Medicare already funds a significant portion of physician residency training through subsidies to teaching hospitals. Transferring some of those funds into specialized government grants that subsidize a significant portion of medical school costs for individuals who commit to practicing primary care for a pre-determined period of time may improve the ratio of medical students choosing primary care over a particular specialty. Hopefully, in the long term, the addition of new primary care physicians will result in fewer ER visits across the board as well as a generally higher level of health, reducing costs to both private insurance companies and Medicare while easing ER overcrowding by shrinking a portion of the input factor.

The second component of overcrowding is a decrease in output speed. One of the primary reasons for reduced turnover in recent years has been the lack of available nurses. Keeping with the military analogy above, nurses are the infantry of the health care army: they typically have to do more work than primary care or emergency care physicians, get paid even less, and must acquire a significant portion of the education required of a physician. Under those conditions it is not surprising that nursing is prone to shortages. Regardless of any other factor, processing time suffers if there is no one to do the processing and assist with in-patient care.

Another important factor in the lack of turnover is the lack of beds for those who need to remain in the hospital for further observation and/or treatment. Obviously not everyone that comes into the ER has a condition that allows for a same-day discharge; common sense implies that many ER visitors, especially the elderly, will not. For example, suppose someone is rushed to the ER after an automobile accident; that individual will clearly need to stay at least one night in the in-patient unit. However, if there are no available beds in the in-patient unit, that individual will have to stay in the ER or be moved somewhere else after the initial round of treatment, which will slow recovery and contribute to overcrowding.

Some seem to think that the adoption of an electronic record-keeping system would go a long way toward ending overcrowding in ERs. Although incorporating electronic records would more than likely create net positive benefits, it is questionable how useful such a system would be at actually reducing overcrowding. The problem is that electronic records have no influence on how many patients show up at an ER on a given day, nor do they have any real influence on how fast an individual in in-patient care will heal to the point where the bed can be given up to another patient. Basically, electronic records would only avoid processing errors and smooth out some rough edges in possible time gaps between patients.
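The input/output framing can be made a bit more concrete with Little's Law from queueing theory, which says the average number of patients present equals the arrival rate multiplied by the average time each patient spends in the system. The Python sketch below uses purely illustrative numbers, assumptions rather than measured ER data, to show why a modest processing saving of the kind electronic records might provide moves occupancy far less than a rise in arrivals or a bed/nursing bottleneck that lengthens time in the system:

    # Little's Law: average patients present L = arrival rate (patients/hr) * average hours in system.
    # All scenario numbers below are illustrative assumptions, not measured ER data.
    def er_occupancy(arrivals_per_hour, hours_in_system):
        return arrivals_per_hour * hours_in_system

    baseline    = er_occupancy(8.0, 5.0)   # 40 patients present on average
    more_input  = er_occupancy(10.0, 5.0)  # 50: more arrivals (aging population, no primary care access)
    slower_exit = er_occupancy(8.0, 6.5)   # 52: nursing/bed shortage lengthens time in system
    e_records   = er_occupancy(8.0, 4.8)   # 38.4: small processing savings from electronic records

    print(baseline, more_input, slower_exit, e_records)

In other words, the levers that matter most are the ones that change arrivals or length of stay, which is the point about electronic records made above.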

Of course there are other problems besides the ones discussed above, such as gaps in coverage, backed by little rationality, that may force patients to absorb large costs. However, such micro concerns are second-tier concerns in that their solutions are limited by the larger macro problems faced by the health care system in general.

Although the plight of the uninsured receives far and away the most attention, there are a number of other problems with health care that are intertwined with the uninsured and that prevent the uninsured problem from being solved directly by itself. Recognizing these additional problems and how they influence each other is an important consideration when attempting to solve any of the main problems in health care. Unfortunately, it appears that most of the power players in the United States do not understand this interconnection and are plowing ahead trying to solve each problem one at a time in isolation.

The current health care bill that passed the House of Representatives does not appear to effectively control future costs, nor does it find a way to affordably provide health insurance while properly and cost-effectively reimbursing physicians who treat Medicare patients. Instead of doing something just for the sake of doing something, how about taking a step back and actually solving a problem? Of course such a tactic is difficult when Republicans appear to have less of a clue than Democrats and for political reasons choose to reject everything possible. Whether or not individuals want to admit it, a strong universal federally run public option is essential to reducing medical care costs. However, the public option also needs to address the problems discussed above, otherwise it will be meaningless.


==
1. Centers for Medicare & Medicaid Services. www.cms.hhs.gov/

2. Preece, Derek. “The ABC’s of RVU’s.” The BSM Consulting Group. 2007-2008.

3. Centers for Medicare & Medicaid Services. Medicare Claims Processing Manual: Chapter 12 – Physicians/Non-physician Practitioners. Rev. 1716: 4-24-09.

4. Clemens, Kent. “Estimated Sustainable Growth Rate and Conversion Factor for Medicare Payments to Physicians in 2009.” CMS: Office of the Actuary. November 2008.

5. House Resolution 3961: Medicare Physician Payment Reform Act of 2009
http://www.opencongress.org/bill/111-h3961/text

6. Kluger, Jeffrey. “Electronic Health Records: What’s Taking So Long?” Time Magazine. March 25, 2009. http://www.time.com/time/health/article/0,8599,1887658,00.html

7. Jha, Ashish, et al. “Use of Electronic Health Records in U.S. Hospitals.” New England Journal of Medicine. 2009. 360(16): 1628-1638.

8. DesRoches, Catherine, et al. “Electronic Health Records in Ambulatory Care — A National Survey of Physicians.” New England Journal of Medicine. 2008. 359: 50-60.

9. “Medical Malpractice Tort Limits and Health Care Spending.” Congressional Budget Office Background Paper. April 2006.

10. “Medical Malpractice: Implications of Rising Premiums on Access to Health Care.” Government Accountability Office. August 2003.

11. “Medical Malpractice Insurance: Multiple Factors Have Contributed to Premium Rate Increases.” Government Accountability Office. October 2003.

12. The Emergency Medical Treatment and Active Labor Act of 1986. http://www.cms.hhs.gov/emtala/