Monday, November 30, 2009

Addressing the Problems in Health Care: Where Should Reform Aim?

For the past 6-8 months the United States has been abuzz with rancorous arguments about proposed health care reform legislation. In fact, as of this post the House of Representatives has already passed a reform bill. Unfortunately the perceived refrain has morphed from getting health care reform right to just getting something passed, because both the Republican and Democratic parties have, not surprisingly, forgotten the true purpose behind health care reform and have instead boiled the issue down to a political question. If any form of health care reform is signed into law, Democrats will sing its praises and pronounce victory regardless of whether or not it is effective reform; whereas if no reform is signed into law, Republicans will sing praise to their upholding of the current system and, in their minds, saving capitalism. Of course both of these stances are complete garbage. There is no victory in baselessly opposing health care reform and defending the current system. However, there is also no victory to be had through passage of a bill that does not address the key problems in the current health care design. Genuine reform is needed because it is obvious, based on every projection available, that the health care system in the United States, if left unchecked, will go bankrupt somewhere in the mid-2020s, if not sooner. When beginning a discussion about reform it is important to identify the key problems plaguing health care so proper legislation can be developed to address those problems.

The chief problem in health care is the question of future financial solvency. Clearly, regardless of what the health care system does or how it is designed, if it is not self-sustainable then there is no point in its continuation. With this issue in mind it is important that any health care reform results in a reduction of projected future costs vs. the projected future costs of the current system. This result is far and away the most important element of health care reform because if the reform results in higher costs, the changes become generally meaningless: the system will simply collapse at a faster rate. The first set of reforms does not have to be fully self-sustainable as long as it buys the system more time. It is unlikely that a perfectly self-sustainable system will be designed in a single set of reforms, largely due to randomized outliers and uncertainties, most notably human behavior, which may require more exotic responses that could not be predicted before empirically witnessing such outcomes. Thus as long as reform adds time to the solvency of the system, allowing these disturbances to come to light, it accomplishes this first important goal.

Does the health care bill passed in the House of Representatives achieve this goal? At the moment it is difficult to tell, not just because of the generic ‘it is tough to project budgetary expectations out over 10 years…’ commentary, but more so because the House bill does not directly deal with Medicare reimbursement appropriately to ensure lower costs vs. what would be seen if no reform were passed. The current House Resolution (H.R.) 3961, which can be viewed as a companion bill addressing Medicare reimbursement, attempts to remedy this problem, but its solution, which will be described later, appears to push the entire reform to greater expense than the current system.

Budgetary issues aside, what are the other prominent health care problems that health care reform should seek to address? The list below outlines these problems:

1) 40-50 million individuals are uninsured;

2) Low reimbursement rates for doctors treating Medicare patients;

3) Low efficiency due to written records (lack of electronic record keeping);

4) Over-use of services (both by doctors and patients) leading to increased costs;

5) Little to zero patient education programs;

6) Medical malpractice lawsuit concerns or obstacles;

7) ER slowdown due to limited beds and turnover.

Identifying why there are so many uninsured individuals is easy: these individuals cannot afford health insurance, because if insurance were affordable they would have it, as medical bills can become quite expensive over even a short period of time. However, solving that problem is much more difficult because it involves devising an acceptable methodology for lowering insurance premiums for a given insurance plan or increasing salaries. Increasing salaries, whether in lieu of employer-provided health care or just in general, creates problems for a number of reasons. First, employers have a negotiating advantage with insurance companies over the generic individual due to the size of the employee roll, which provides a greater ability to spread risk, with lower average premiums being the quid pro quo for that increase in the population pool. Second, increasing salaries would only work for those that are already receiving insurance coverage from their occupation. Any salary increase at jobs that do not offer insurance would simply add to employer costs, which would not be well received. Third, increasing salaries only works for those that have jobs, and it can be argued that increasing salaries without a comparable reduction in costs elsewhere will reduce the probability that jobless individuals will acquire jobs, because the additional costs against employers would limit their ability to hire new employees.

These complications with increasing salaries transfer any real solution for the uninsured problem to reducing premiums. Insurance is a risk-driven business based largely on the volume of participants. The more individuals paying premiums, the lower each individual premium payment can be; conversely, fewer individuals paying premiums demands higher individual premiums to ensure available funds for coverage of medical costs. Unfortunately the most obvious answer, putting everyone on the same insurance plan, seems unrealistic at the moment due to either logistics or monopoly concerns. Thus, due to these limitations, the best way to generate lower premiums is to create a competitive environment where insurance companies compete for healthy individuals to bolster their population pools without increasing their associated payout probabilities. An important component of fostering competition would be to allow the government to provide insurance on the open market (basically a public option), because there is not sufficient competition between existing insurance companies; most have carved out spheres of influence in given regions of the country.

Some would argue that such an idea could spell the end of the private insurance industry, but such a concern is rather stupid because any government policy could be given ground rules that would prevent it from ending private insurance, like a premium floor so the government could not offer an obscenely low premium that would redirect all customers to the government plan. Also, no government plan could cover all conditions, hence private insurance plans would find specialized niches to cover those gaps (remember, even in single-payer Europe private insurance companies exist).

However, a government plan has its own pitfalls in that any expansion of Medicare would be problematic due to dropping reimbursement rates. In an effort to avoid any significant level of debt from Medicare, the Centers for Medicare & Medicaid Services (CMS) must review and fix, if necessary, any relative values of service that are shown to be inappropriate, with a boundary condition of up to 20 million dollars.1 Basically, the total correction of the relative values of service must be lower than 20 million dollars; if not, other service fees must go down until the total change meets this 20 million dollar condition. Thus, without a significant expansion of the Medicare enrollment population, a reality that most lawmakers would not allow due to the above silly, yet powerful notion of a government takeover of health care, reducing premiums for any government plan would require a reduction in reimbursement to physicians.

Unfortunately current reimbursement rates are already critically low and require yearly reprieves to avoid imposing significant losses on physicians after treating Medicare patients. Things have gotten troublesome enough that even prestigious medical centers like the Mayo Clinic have started to turn away Medicare patients. Therefore, until the reimbursement problem can be worked out, a government option will probably have only limited influence in insuring those that currently do not have insurance.

The reason reimbursement rates are important for a public option is that if reimbursement rates create negative returns for physicians during treatment, those physicians will not treat patients covered by the public option unless the patient pays out of pocket. It is reasonable to expect individuals that do not receive effective coverage from an insurance plan to abandon that plan, thus there will be little long-term enrollment in the public option. Without the potential to generate a large list of enrollees, the public option will provide no competitive counter-weight against private companies and their corresponding premium rates. Under this scenario the public option would be rendered completely worthless and just a waste of time.

Speaking of the reimbursement problem, partially due to the “budget neutrality” (20 million) feature of Medicare, reimbursement payments have decreased in recent years. The basic explanation for this change is that all individuals over the age of 65 who are citizens of the United States are automatically enrolled in Medicare. Unfortunately, on average these individuals appear to be costing Medicare more than they add to it through their Part B premium payments and previously paid Part A FICA taxes, influencing a reduction in reimbursement rates for physicians for various procedures. The chief problem with significantly raising Part B premiums as a means to raise reimbursement rates is that premiums are taken from Social Security, and most would argue that Social Security does not pay enough to begin with; thus increasing the amount taken for Medicare premiums would be disastrous for those that rely on Social Security, which turns out to be a large number of the elderly. Part B premiums have increased over the years, but the rate has been small and stable. Increasing FICA taxes is probably a non-starter. Increasing reimbursement without increasing available revenue does not appear to be a good idea either because it would significantly increase government debt.

To better understand why reimbursement rates are a problem, one needs to understand how they are currently calculated. Medicare uses a standard formula to calculate the requisite relative value units (RVUs) for resource-based medical practice expenses to determine the amount of money that a physician will receive for a given procedure. RVUs are fixed values determined by the CMS. There are three specific RVUs that make up the reimbursement formula: the relative value of a physician’s work (RVUw), the relative value of the practice expenses (RVUpe) and the relative value for malpractice (RVUm). Note that each of these RVUs is relative to what is expected, or the average, for the given procedure.2,3

Of course there are regional cost differences that need to be considered when calculating reimbursement because clearly it is unfair to compensate a physician that works in West Lafayette, Indiana the same as one that works in San Diego, California. Thus there are regional cost modifiers or geographic practice cost indices (GPCIs) for each RVU category (GPCIw, GPCIpe and GPCIm) that are used in calculating reimbursement. So one of the formulas used by Medicare to determine an aspect of reimbursement is:

Total RVU Amount = [(RVUw x GPCIw) + (RVUpe x GPCIpe) + (RVUm x GPCIm)];

However, as can be plainly seen, the total RVU amount is not a dollar figure. The conversion of the total RVU amount to an actual dollar payment, which will be covered both by Medicare and the patient (remember that after a Medicare patient reaches the $135 deductible for Part B, a 20% co-insurance with Medicare kicks in), is determined through multiplication of the total RVU amount by a conversion factor (CF). The current calculation of the CF generates a significant amount of dissatisfaction in the medical community and is thought to be a principal reason for the reduction in reimbursement, largely because of the inclusion and unfavorable structure of the sustainable growth rate (SGR).
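To make the arithmetic concrete, the two calculations above can be sketched in a few lines of Python. All numeric values here (the RVUs, GPCIs, and conversion factor) are made-up placeholders for illustration, not actual CMS figures.

```python
# Sketch of the Medicare payment calculation described above.
# All numbers (RVUs, GPCIs, conversion factor) are illustrative
# placeholders, not actual CMS values.

def total_rvu(rvu_w, rvu_pe, rvu_m, gpci_w, gpci_pe, gpci_m):
    """Geographically adjusted total RVU for one procedure."""
    return (rvu_w * gpci_w) + (rvu_pe * gpci_pe) + (rvu_m * gpci_m)

def payment(total_rvus, conversion_factor):
    """Dollar payment: total RVUs multiplied by the CF."""
    return total_rvus * conversion_factor

# A hypothetical procedure in a hypothetical locality:
rvus = total_rvu(rvu_w=1.5, rvu_pe=2.0, rvu_m=0.1,
                 gpci_w=1.0, gpci_pe=1.1, gpci_m=0.9)
print(round(rvus, 2))                 # → 3.79
print(round(payment(rvus, 39.0), 2))  # → 147.81
```

The point of the sketch is simply that the GPCI-weighted sum and the CF multiplication are two separate steps: the first adjusts for geography, the second converts relative units into dollars.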

The calculation of the CF is rather complicated in that it is influenced by the Medicare Economic Index (MEI) and the Update Adjustment Factor (UAF). The MEI consists of the weighted average price change for various inputs involved in producing physician services; the UAF compares actual and target expenditures for the current year, what Medicare has paid out vs. its targets since 1996, and the SGR for the coming year.4 1996 was established as the range floor by the Balanced Budget Act of 1997.4 The official formula for the calculation of the UAF is shown below:

UAF_x = 0.75 × [(Target_(x−1) − Actual_(x−1)) / Actual_(x−1)] + 0.33 × [(Target_(4/96 to 12/x) − Actual_(4/96 to 12/x)) / (Actual_(x−1) × (1 + SGR_x))]

Note that boundaries are placed on the UAF: the percentage must be between negative 7% and positive 3%.4 If the calculated value exceeds one of these boundaries, the boundary value is used instead (for example, positive 5.8% would not be used; positive 3% would be used in its place). Normally this calculated UAF value would be added to the MEI to calculate the percentage used to determine the new conversion factor for year x. Originally just the UAF and MEI made up the conversion factor adjustment, but recent legislation, as mentioned, has added a budget neutrality factor for the coming year as well as a 5-year budget neutrality adjustment.
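A short sketch shows how the UAF formula and its boundaries interact. The expenditure figures below are entirely hypothetical; the structure (two weighted ratio terms, then a clamp to the −7%/+3% bounds) follows the formula above.

```python
# Sketch of the UAF calculation with its -7%/+3% boundaries.
# All expenditure figures are hypothetical.

def uaf(target_prev, actual_prev, target_cum, actual_cum, sgr):
    """Update Adjustment Factor, clamped to [-7%, +3%]."""
    annual = (target_prev - actual_prev) / actual_prev
    cumulative = (target_cum - actual_cum) / (actual_prev * (1 + sgr))
    raw = 0.75 * annual + 0.33 * cumulative
    return max(-0.07, min(0.03, raw))  # apply the statutory bounds

# Actual spending exceeded targets both last year and cumulatively,
# so the raw value is a deep negative and the -7% floor takes over:
print(uaf(target_prev=95.0, actual_prev=100.0,
          target_cum=900.0, actual_cum=960.0, sgr=0.04))  # → -0.07
```

Note how the clamp dominates in practice: once cumulative spending runs well past the targets, the raw value can be far below −7%, but only −7% is ever applied, and the remainder carries into later years.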

Unfortunately, over the last decade the actual expenditures comprising Medicare have exceeded the expected targets, which would have resulted in reduced conversion factors and lower reimbursement rates had Congress not passed temporary conversion factor adjustments to prevent those drops. Of course these temporary conversion factor adjustments only last for one year and do not solve the problem; instead the problem compounds, as the conversion factor reduction percentage for the next year absorbs the reduction from the previous year and grows larger.

For example, suppose an adjustment percentage of –4% is calculated, which would lower the CF from $39 to $37.44. Clearly Congress does not want to lower physician reimbursement, otherwise physicians may begin to turn away Medicare patients, so it passes a law that changes the adjustment percentage for that year from –4% to 1%, raising the CF from $39 to $39.39. However, the new law does not wipe the –4% clean, it just masks the percentage, so next year the adjustment percentage is larger in magnitude than –4%.
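The conversion factor arithmetic in this example can be checked directly:

```python
# The conversion factor arithmetic from the example above.

cf = 39.00
scheduled = cf * (1 - 0.04)   # the -4% cut the formula calls for
legislated = cf * (1 + 0.01)  # the 1% increase Congress substitutes

print(round(scheduled, 2))    # → 37.44
print(round(legislated, 2))   # → 39.39
```

The 5-point gap between the scheduled −4% and the legislated +1% is what gets deferred rather than erased, which is why each subsequent year's calculated cut grows.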

Another example goes far toward cementing understanding of why these negative percentages appear in the first place. Suppose Person A is given a $500,000 check and needs to use that money to survive for a 10-year period. Thus Person A budgets $50,000 for each year. The first two years go by and Person A is still on budget (still has $400,000), but in year 3 Person A’s costs are $70,000, not the budgeted $50,000. Person A cannot renege on the additional $20,000, so he pays it. However, due to this unanticipated expense the money allocated for each additional year needs to be adjusted. So instead of having an average of $50,000 available for each year, the new yearly allocation is $47,142.86. The excess payouts in year 3 changed the amount of money available for the rest of the 10-year period.
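The Person A arithmetic works out as follows:

```python
# Reproducing the Person A example: a $500,000 pool budgeted at
# $50,000 per year for 10 years, with a $70,000 overrun in year 3.

pool = 500_000
spent_years_1_to_3 = 50_000 + 50_000 + 70_000  # years 1-3
remaining_years = 7

new_allocation = (pool - spent_years_1_to_3) / remaining_years
print(round(new_allocation, 2))  # → 47142.86
```

One year's overrun permanently lowers every remaining year's allocation; the pool is fixed, so the adjustment has nowhere else to go.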

The above example is similar to how Medicare works in that the $500,000 represents both the Part A FICA taxes and the Part B premiums and other fees. Originally the total number of individuals in Medicare and their medical costs were low. However, increasing populations and associated medical costs, largely due to more costly diagnostic technologies, have eaten into the payment pool made by these taxes and premiums, causing a decrease in payout (i.e. reimbursement rates) to ensure the continued solvency of the system. This system is largely referred to as “pay as you go” (PAYGO): the money needs to be available in some context (if spending increases it has to decrease elsewhere accordingly). The system cannot rely on the prospect of future debt for funding (basically no ‘I.O.U.s’).

There are other arguments regarding reimbursement problems which do not reflect back on the SGR, like concerns relating to the GPCI constants: certain regions are snubbed in what they actually cost, because there are a number of fixed overhead costs involved in running a medical practice that do not relate back to the cost of living in a particular region. This complaint has become even more relevant because of concerns regarding stoppage of bonus payments to physicians in physician-scarce regions. Others believe that there are other inherent flaws in the formula itself, but these flaws are more related to new individuals entering Medicare costing more money than they are adding in revenue (through their premiums and previous FICA taxes). One of the problems is that although medical costs are rising across the board, Medicare costs are increasing faster because individuals on Medicare are older than the average individual and are more than likely going to have more health problems simply because of their advanced age.

The underlying problem with correcting reimbursement is that the two most popular strategies for increasing revenue in the insurance industry, increasing premiums or increasing volume (number of patients), are not initially available. There appear to be three remaining main strategies for bridging the gap between reimbursement and revenue in a government system like Medicare. The easiest means would be to eliminate coverage of, or ration, certain procedures, thus freeing up the money that would normally be devoted to those procedures to increase the amount paid for other procedures. It is likely that this strategy would face significant opposition in Congress due to extensive lobbying from groups like AARP. There would also be questions regarding the morality of such cuts in Medicare.

Rationing might be required in this instance because rearranging the reimbursement scale would not be effective without inflation, since the total allotment of funds from which the reimbursement scale is drawn is rather stable. Without inflation it does not matter where the capital is distributed if the total amount of capital available for distribution does not change and all of the procedures are legitimate. Basically, it does not matter if a cake is cut into 17 slices or 4 slices; the same total amount of cake is available, just in smaller or larger individual quantities. Note that inflation in this instance describes the administration of unnecessary tests for the sake of collecting high service payments from Medicare.

However, if there is inflation in the system then reorganization could work on some level, not in increasing the total average reimbursement, but closing the standard deviation between physicians. For example if patients were demanding unnecessary tests and/or physicians were administering unnecessary tests, such actions would place an irrational strain on the reimbursement system which would skew the capital allotment within the system. Once these unnecessary procedures are removed from the reimbursement allotment, then the capital that was assigned to pay for those tests can be redistributed to other more reliable categories of reimbursement. This redistribution would typically increase the total reimbursement for Medicare patients due to less manipulation and waste in the system. However, it must be remembered that total reimbursement across the board does not change even in this situation because the total amount of capital available for distribution does not change. Rationing basically works the same regardless of whether or not inflation exists, which may be necessary because reliably identifying inflation can be problematic.

Eliminating inflation is a valuable tool though because it levels the playing field so more honest and/or successful physicians are better rewarded for those attributes. Basically, the point of redistribution through the elimination of inflation is to reduce the profit made by a select number of physicians and increase the total amount of money reimbursed to physicians doing meaningful work. This redistribution can also be directed against specialized fields that are receiving more compensation than is thought rational given the statistical relevance of their testing procedures. For example, consider a scenario where three doctors receive $100,000 from Medicare, partially due to over-treatment, and three other doctors receive $60,000. Redistribution could be applied to narrow that gap so that the first three receive only $82,500 and the second three receive $77,500, due to the elimination of those over-treatments and increased reimbursement for general practice.
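The key constraint in this redistribution example, that the total pool never changes, is easy to verify:

```python
# The redistribution example above: the pool is fixed; only the
# split between the two groups of physicians changes.

before = [100_000] * 3 + [60_000] * 3  # over-treaters vs. the rest
after = [82_500] * 3 + [77_500] * 3    # after redistribution

print(sum(before))                # → 480000
print(sum(before) == sum(after))  # the total never changes: True
```

Both splits sum to the same $480,000; redistribution narrows the gap between the groups without adding a dollar to total reimbursement.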

A second option in lieu of rationing is to improve the overall health of new Medicare enrollees. The continuing problem in the United States with more and more individuals being either obese or overweight is that these individuals more than likely have a negative effect on the general health costs of the populace. Therefore, if steps can be taken to reduce the probability that these individuals become obese or overweight in the first place, such action should reduce the probability of increased losses when these individuals join the Medicare roster. Unfortunately there does not appear to be any easy way to accomplish this goal because individuals have their own free will and would probably react negatively to any limitations or rules restricting their eating habits; nutritional education does not appear to be working well either, as indicated by the climbing rates of unhealthy individuals.

The third option involves changing the formula used to calculate reimbursement. This is what H.R. 3961 proposes: replacing the 1996 baseline for the SGR with a new 2009 baseline which is thought to ‘better represent’ the current costs of medical treatment.5 Unfortunately this change absorbs the gap in cost between the 1996 and 2009 baselines. Normally such a debt allotment would not be valid under the most current PAYGO statute, newly passed in July, without decreased spending elsewhere. However, H.R. 3961 circumvents PAYGO through the addition of the text from H.R. 2920, thus the change in reimbursement would not be considered a ‘new’ measure of spending that would have to abide by PAYGO.5

Resetting the baseline may be a valid strategy, but not by itself. The concern is that de-linking the statute from the PAYGO provision will result in significant debt. Changing the baseline does nothing to actually reduce rising medical costs, which means continued increasing costs into the future and more money required to pay those costs through reimbursement. Basically, circumventing the previously operating ‘pay-as-you-go’ aspect of Medicare is risky because, depending on how medical costs change in the future, it could easily eliminate the solvency of Medicare much faster than is currently projected. Once Medicare solvency is gone, the entire funding base of Medicare becomes tied to deficit spending, which removes almost all stability from the system and would more than likely dramatically increase the national debt.

One strategy that has been proposed to ward off dependence on debt is creation of private medical insurance accounts similar to the private accounts proposed for Social Security. Unfortunately the ability of private accounts to be part of the solution to future shortfalls, especially when used for both Social Security and Medicare, is becoming less and less probable. Thus the above problem of debt reliance once again becomes an issue making an adjustment of baseline without corresponding effective measures to reduce costs less and less attractive for anyone that cares about financial solvency.

H.R. 3961 tries to make some serviceable improvements in reimbursement by creating a new category of care for preventative care with its own CF.5 Unfortunately there is also reason to believe that abandonment of PAYGO in the above respect will result in a more rapid increase in Part B premiums, which, as previously discussed, will place a greater burden on Social Security recipients. On a side note, H.R. 3961 also appears to solidify the Stark rules, which prevent physicians from making self-referrals in order to collect more fees.5

With these roadblocks the best strategy may be a combination of rationing and improving the general health of society. For example, limited rationing could decrease payment on procedures more widely associated with poor health vs. random chance while increasing payment on procedures associated with ‘normal wear-and-tear’ and general care. Realizing that certain conditions will afflict an individual regardless of health, rationing should not exclude any conditions outright, but instead limit certain conditions to a predetermined number of covered treatments per year. Such a strategy would be preferable if applicable, i.e. if there are a number of individuals who receive multiple passes of expensive treatments that would fall into the rationed category. However, if there is little repeat treatment then rationing in such a strategy may not be effective. Realistically the best bet for generating a method that will allow for increasing physician reimbursement without excess debt is increasing the overall health of society, but if such a thing is not possible then rationing on some level may be the only response.

Unfortunately there is another problem altogether with reimbursement, one relating to the core of its existence as a fee-for-service payment methodology. There is a real concern that because physicians are compensated based on the amount and type of service rendered, not only are unnecessary tests administered to patients, but some physicians also practice a more sinister strategy of not focusing on curing the patient in a single treatment, instead offering piecemeal treatments to maximize profit. Clearly both of these elements have a negative influence on the total cost pool, reducing the total effective reimbursement available from programs like Medicare and even private insurance companies, forcing premium increases. Although the solution seems obvious, reward physicians for proper diagnosis and treatment while applying appropriate cost measures, the problem is how to identify those elements in a fair and rational manner. As mentioned, H.R. 3961 creates a new payment category relating to preventative treatment, but how exactly one would measure success in this new category is still unclear.

Most medical experts look at the Mayo Clinic methodology as the new standard for treatment that should be applied in all hospitals: there is no fee-for-service pay structure; instead all physicians and staff receive a salary. Without pay tied to the number and type of services rendered, physicians have every incentive to work as hard and efficiently as possible to diagnose and treat patients as soon as possible. In fact such a system creates a negative incentive for errors and disingenuous action because more time would be spent treating a given patient instead of on other endeavors. Under such a system doctors are also more inclined to help each other with diagnoses, which creates a wealth of new ideas and reduces the average number of tests that need to be run in order to successfully diagnose and treat a patient, reducing costs.

A salaried system also removes any incentive to cherry-pick patients that have high quality insurance that could afford more expensive tests. Basically, the general credo of the Mayo Clinic is that patient needs come first and financial endeavors come second. Of course this philosophy can only go so far because a hospital does need capital to operate, thus a hierarchy of treatment does exist. This hierarchy is why Mayo Clinics are beginning to turn away Medicare patients due to lack of reimbursement: absorbing too much debt will put any institution out of business, and that hurts many more individuals than those that are initially turned away. In many respects the transition away from an individual fee-for-service reimbursement structure to a cooperative salaried structure would be the best thing the medical community could do to reduce the escalating costs of medicine in modern society. The interesting problem with such a change is more psychological than logistical: in a capitalistic society, can a certain profession voluntarily handicap its ability to make money for the good of society, especially when entrance into the profession requires such a high initial capital and opportunity investment? It would go a long way if the government could make such a change easier through financial incentives.

The lack of electronic records is probably the easiest of the big problems in health care to solve, and many individuals have pointed out that the lack of electronic records is irrational and unacceptable for a society that is supposed to be modern. Although electronic records are important, the upgrading process has been quite slow. One reason for the slow transition from paper records to electronic records is the cost involved. Although approximately 19 billion dollars was allocated for the expansion of electronic records in hospitals through the recent stimulus package, when actually dividing the total amount among all of the available hospitals that would use it to update their systems, the amount per hospital is only somewhere from 4 to 6 million dollars, enough for some of the smaller hospitals, but not nearly enough for the larger hospitals.6,7

Remember, although proponents say that electronic record keeping would eventually pay for itself, individuals and companies still do not like to foot the initial investment, especially when rate-of-return values are uncertain; this rationale is largely why energy efficiency has not really taken off outside of government grants and loans. That philosophy could also explain why only approximately 17% of U.S. physicians use a minimally functional or comprehensive electronic records system.8 It should be noted that this percentage was collected before the passage of the American Recovery and Reinvestment Act of 2009 [a.k.a. the stimulus package], thus there is no current information regarding how these numbers have changed; it is probable that there has been some progress, but probably nothing dramatic.

Utilization of electronic records makes sense largely due to a reduction in errors and testing repetitiveness as well as reduced treatment time. Unfortunately it is highly probable that the dollar figures attributed to what could be saved via electronic record keeping are overestimated because of the difficulty of separating the specific savings provided by electronic records from other non-related factors. For example, proponents like to talk about the increasing costs of imaging modalities and the positive influence that electronic records could have on reducing those costs. However, they rarely, if ever, attempt to separate the costs associated with conducting repetitive tests due to lost/misplaced images vs. necessary repetition due to changing circumstances in a patient’s condition. It seems reasonable to suggest that the former makes up a very small percentage of costs associated with medical imaging, which is all electronic record keeping would really influence.

Error reduction largely comes in the form of ensuring accurate information pertaining to the patient’s medical and family history. Electronic records eliminate the need for a nurse to take a family history and ask about allergies or potential cross-reactions with other medications at each visit. Electronic records also limit miscommunications between physicians and pharmacists due to poor physician penmanship when prescribing drugs, which will reduce a large percentage of the adverse effects from drug mix-ups and incompatible drug combinations.

In addition to a perceived lack of funds, there are also problems in the application of the technology. Few hospitals actually have the personnel that could install such a system, and private contractors that could do so are not as plentiful as needed. Therefore, even with the appropriate funds, finding the proper personnel to install the system in a timely manner may be difficult. One option to speed up the process in this respect may be to create state-sponsored technician groups that would be responsible for installing electronic record keeping first in all government-sponsored hospitals (VAs, etc.), then in private hospitals that want electronic record keeping. Such a mechanism eliminates some of the hassles hospital administrators associate with making the switch from paper to electronic records and ensures the work is actually done, and done properly. Psychologically this may be a very important issue because one of the hardest things when making a change is actually finding someone that can facilitate the change.

Another significant problem relating to the application of electronic record keeping is changing the behavior of physicians in general. A number of physicians have practiced for a reasonably long period of time and are rather set in their ways. Add that psychological mindset to a profession that is already typically overworked and one can anticipate little initial success when trying to convince these individuals to spend significant amounts of time learning a new electronic system, especially the older physicians that may already struggle when acclimating to modern technology. Such a belief is not inherently irrational because these physicians may have a routine and a certain way of doing things that may not seem efficient from the outside looking in, but for these individuals their way of doing things is extremely efficient. Unfortunately it is highly probable that this mindset will create a problem in the mixed environment of larger hospitals between physicians that would use electronic record keeping and those that would not, especially within the same discipline like radiology. Thus it would be up to the younger physicians to convince the older physicians that electronic record keeping is an advantage, not a detriment.

Another problem that may prevent hospitals from incorporating electronic record keeping is the fear that such records may be used against them and their physicians as evidence in malpractice lawsuits. Such a fear is warranted, but misses the point: if an electronic record is used as evidence that properly implicates malpractice by a physician, then the physician screwed up and should be sued. The real problem is that malpractice lawsuits still revolve around a jury system that does not understand the practice of medicine very well. This unfamiliarity means that juries do not have a sufficient understanding of what is and what is not reasonable in diagnosis and/or treatment; they typically only understand extreme situations (a person getting the wrong foot amputated, etc.). Thus they have to rely on lawyers to frame the arguments, which is not a good thing as the lawyers on both sides obviously have their own agendas. Any genuine liability concerns involving electronic record keeping will reasonably vanish when the legal system can better operate in the realm of medicine. Remember, if a physician genuinely screws up he/she deserves to be sued; some people seem to have forgotten that whole thing about taking responsibility for one’s actions.

One advantage to a national network based electronic record keeping system is that it could also be used as a data mining center where standards of care for a given disease could be established, which would act as a guide for diagnosis and treatment of that disease. In addition, the effectiveness of various treatments could be weighed so physicians know the relative probabilities of success when prescribing a given therapy. Basically the electronic system would better tie evidentiary medicine to the actual practice of medicine.

Unfortunately this advantage could also be viewed as a double-edged sword in that physicians could be wary of such a design. Such a system could limit or eliminate any individual feeling or instinct in diagnosis and/or treatment, as the record would act as a game plan that could not be deviated from under any circumstance, basically creating a generation of robot physicians. There is also the concern that a precedent could be established where any deviation would result in an open-and-shut malpractice lawsuit. Such possibilities would require a set of rules to govern the use of any database created through electronic record keeping.

One strategy that could be explored to ease the transition for older doctors, especially those with children or grandchildren, is to develop/improve a physician-based video game that incorporates electronic record keeping into the play. Such a game would acclimate the physician to real-world type situations in a forgiving environment where mistakes would not cause problems. Such a game may be interesting for new physicians as well, exposing them to real-world situations where any neophyte diagnostic mistakes could be better controlled.

Overall there are more advantages than disadvantages when considering the installation of an electronic record keeping system, but the transition from a paper system to an electronic system involves much more than simply making the physical shift; considerations for the psyche and attitudes of the attending physicians must be made as well. Just throwing money at the problem of installation almost guarantees failure. An important step would be establishing a universal protocol that administrators can review to determine the best course of action when planning to install their electronic record keeping system.

Overuse of medical procedures and treatments has generated significant additional costs for the medical establishment. Both the patient and the physician perpetuate this overuse, although the patient probably bears more of the blame. For instance WebMD and other medical websites have given rise to self-diagnosing patients that lack the expertise to properly identify correlations between symptoms and cause. Instead of realizing that there is a 99% chance that symptom x is caused by common condition y, these patients treat common condition y and rare condition z with equal weight and thus demand expensive tests to verify the correct condition. This behavior is aided by the general lack of statistical understanding in the public.
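The base-rate reasoning above can be made concrete with a small sketch; the probabilities are purely illustrative, not real diagnostic statistics.

```python
# A sketch of the base-rate argument: given symptom x, the prior
# probabilities of the common condition y and the rare condition z
# differ so much that treating them with "equal weight" is statistically
# unjustified. These numbers are illustrative assumptions only.
p_common = 0.99   # P(common condition y | symptom x)
p_rare = 0.01     # P(rare condition z | symptom x)

# Odds in favor of the common explanation over the rare one
odds = p_common / p_rare
print(f"The common condition is {odds:.0f}x more likely")  # 99x more likely
```

A patient who weighs y and z equally is, in effect, ignoring a 99-to-1 prior, which is exactly the statistical blind spot that drives the demand for expensive confirmatory tests.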

This mindset harkens back to an underlying problem in society, that of the ‘normal expert’: the stance that expertise is only valid when said expert agrees with the opinion of the individual petitioning the expert. Basically the patient disregards the diagnostic expertise possessed by the physician, placing his/her own diagnostic ability at an equal level, which eliminates trust and leads to the demand for expensive tests. Unfortunately there is little the physician can do, because if he/she elects not to put the patient through these tests, reasoning that it is a waste of time and money, the patient will more than likely leave and visit another physician until he/she finds one that will perform the tests. Therefore, because performing the tests for patients of such psychological standing is a foregone conclusion, the initial physician might as well administer the tests and collect the associated service fee.

An additional driver of overuse stems from a sense of equal importance or equality. The richer the individual, the more likely he/she is to have insurance that will cover almost any medical procedure; thus these rich individuals will almost always undertake medical care, even if it is unnecessary or irrational based on a probability/statistical assessment. These actions tend to influence other individuals of lower income through the thought process ‘if test x is good enough for person y and I have the same general symptoms, I want test x too.’ Rarely will logic be an effective means of defusing the situation (‘you do not need test x, person y is stupid and wasting money undertaking test x’), thus the doctor is left with a patient demanding that he/she be treated the same as the richer patient.

Physicians tend to overuse medical technology in one of two ways: for financial gain or for legal protection. There is little argument that imaging modalities like MRI and PET have large price tags associated with their application and serve as an effective means of profit for the physician/hospital, especially those that own their own machine so that associated use fees do not need to be paid. For some physicians, putting a patient with appropriate insurance through an MRI (a relatively painless procedure) even when no logical reason exists spells big bucks.

The second category is defensive medicine. Note that defensive medicine will be defined here as: ‘the administration of a statistically irrelevant medical procedure for the sole purpose of lowering the future probability of success by a plaintiff taking specific legal action against the physician.’ The realm of medical malpractice is a tricky one because perception and reality differ in large respects. The perception of medical malpractice seems to carry a sense of entitlement: errors are unacceptable regardless of the circumstances, and if a physician makes one then that physician should be kicked out of the profession forever and be sued, costing both his/her insurer and the hospital millions of dollars. Again, with a change in the way the legal system handles malpractice lawsuits this perception will change and more than likely lead to a reduction in both lawsuits and premiums.

While waiting for the legal system to evolve, the best way to eliminate patient demand for and physician application of unnecessary tests is to use evidence-backed medicine as a guide. The medical procedures that are covered by insurance companies must be statistically relevant for a given condition. If empirical research and testing have demonstrated that a given medical procedure does not statistically aid diagnosis and/or treatment of a suspected condition, then there is little benefit to administering such a procedure. Basically, remove the ability of physicians to increase their fee-for-service salaries by running needless tests and return the expertise factor to the practice of medicine, where a patient can no longer automatically conclude that because symptom x fits an obscure disease found online, test y needs to be run to rule it out. In this situation a serious discussion with the physician will be required, because if test y is desired then the fee comes directly from the patient’s pocket.

Note that such a strategy may work even better in a mixed insurance environment. For instance a public, government-based option could enforce these statistical standards where a private insurance company may not, instead covering statistically unnecessary tests and charging a higher premium for “peace of mind”, however irrational that may be. There may be some initial backlash to this statistical strategy because lower income individuals may believe that they are unfairly targeted, since richer individuals will still be able to afford the tests that are no longer covered by insurance. However, the key fact to remember in this situation is that these tests are not statistically significant for diagnosis or treatment of a given condition; thus in every instance the richer individual is simply wasting his/her money by authorizing and paying for the test. In short, genuine medical treatment is not being withheld from lower income individuals.

Also, expansion of pre-certification testing and proper accreditation would more than likely have a positive influence on lowering the costs associated with imaging modalities by removing unnecessary testing and errors during the imaging process. Overall, although overuse of medical procedures is a significant problem, there are clear strategies that can be used to neutralize its causes.

It was previously mentioned that increasing physician reimbursement would require increasing the premium pool while reducing the amount of services that those in the premium pool consume. One suggested method to accomplish this goal was to ration health care services. However, such a response is favorable to no one, for limiting an individual to something like 2 MRIs a year could seriously limit the ability of hospitals to treat patients, and it also reduces the amount of potential money paid to physicians for services. Fortunately rationing is not the only way to change the premium/cost ratio from less than 1 to greater than 1; one can instead improve the health of the individuals entering Medicare or any government-based health care system. The problem with executing this strategy is that the general health of society has been declining in recent years, not improving, despite increases in life expectancy. Thus this trend not only needs to be reversed, the reversal needs to be amplified significantly.
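The premium/cost ratio mentioned above can be sketched in a few lines; all dollar figures here are hypothetical and exist only to show how healthier entrants move the ratio above 1 without rationing.

```python
# Illustrative sketch of the premium/cost solvency ratio discussed above.
# A system is sustainable when premiums collected exceed the cost of
# services consumed (ratio > 1). All figures are hypothetical.
def premium_cost_ratio(total_premiums, total_service_costs):
    """Ratio of money flowing into the pool to money flowing out."""
    return total_premiums / total_service_costs

# Current trajectory: more consumed than collected -> insolvent (ratio < 1)
print(premium_cost_ratio(800e9, 1000e9))   # 0.8

# Healthier entrants consume fewer services at the same premium level,
# pushing the ratio above 1 without rationing anyone's care
print(premium_cost_ratio(800e9, 700e9))    # ~1.14
```

The point of the sketch is that the ratio can be improved from the denominator side (lower consumption via better population health) rather than by capping services.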

One of the biggest controllable reasons individuals suffer from ill health is the lack of quality/healthy food. There are four primary reasons that individuals lack quality/healthy food: lack of desire to eat healthy, lack of access, lack of knowledge regarding what is and is not healthy, and the cost gap (eating healthy is too expensive). Of the four, the cost gap and lack of access seem the most correctable because they rely the least on the individual doing the eating.

A lack of access is seen in both urban and rural regions of the country. In certain parts of these regions supermarkets have become less and less of a factor, leaving convenience stores or fast food restaurants as the primary food suppliers. Unfortunately the few healthy options these establishments offer are more expensive than the less healthy options and are of low quality. Without access to better variety and quality, the populations in these environments, which some refer to as ‘food deserts’, are almost forced to select the unhealthy options at further detriment to their health. Somewhat ironically, a big reason why supermarkets and other large food distributors are not present in these communities is that they do not believe it would be profitable, because of the poor eating habits of those in the community. Whether or not these eating habits are derived from the lack of food options is unclear. However, one step government can take to encourage risk-taking in these areas, in an attempt to change these habits, is to provide tax incentives to individuals that want to open supermarkets.

Tax incentives provided to entrepreneurs would probably help with the initial capital costs of constructing the supermarket and leasing the land or building it requires, but probably would not influence food prices. Such a result is unfortunate because, as the recent recession demonstrated, eating healthy is typically more expensive than eating poorly. Therefore, establishing access to healthy food may not be enough to truly change the eating habits and health demographics of a given region in a positive direction.

Access will make it easier for the wealthy of the region to eat healthier, but most food deserts lie in below-average socioeconomic regions, thus another strategy must be implemented to aid both the supermarket and its potential consumers. As previously mentioned, the supermarket is in a difficult position because lowering prices on healthy foods may create a problem for current and future profitability unless its supplier lowers prices in turn, which is unlikely because that supplier’s supplier would then have to lower prices, and so on. Increasing the prices of unhealthy food to make it more expensive relative to healthy food as a means to entice purchase of healthy food would not work, because such a move would just price unhealthy food out of the purchasing power of the local populace. If people cannot afford healthy food, making the unhealthy food more expensive than the healthy food does nothing to change that fact.

Therefore, it may make sense for the federal government to get involved again, providing subsidies to supermarkets that lower the prices of certain healthy foods, mostly fruits and vegetables, past a certain price point in an effort to recoup the profits that would have been made on sales at the original price. It would be irrational for supermarket proprietors not to accept such a strategy because they would very likely make more money under this program (a greater volume of sales at an equivalent effective price). Based on simple preventative-care reasoning, these subsidies would hopefully pay for themselves through a reduction in medical care costs for individuals in the region.
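The "greater volume at an equivalent price" claim can be worked through with hypothetical numbers; every figure below is an assumption for illustration, not data.

```python
# Hedged illustration of the subsidy argument: if a federal subsidy covers
# the gap between the original shelf price and a reduced consumer price,
# the supermarket's per-unit revenue is unchanged while the lower shelf
# price drives higher sales volume. All figures are hypothetical.
original_price = 2.00     # original per-unit price of the produce
subsidized_price = 1.40   # reduced shelf price paid by the consumer
subsidy = original_price - subsidized_price  # federal payment per unit sold

units_before = 1000       # weekly sales at the original price (assumed)
units_after = 1400        # assumed higher sales at the lower shelf price

revenue_before = units_before * original_price
revenue_after = units_after * (subsidized_price + subsidy)

print(revenue_before, revenue_after)  # 2000.0 2800.0: more total revenue
                                      # at the same effective per-unit price
```

If demand rises at all when shelf prices fall, the supermarket comes out ahead, which is why refusing such a subsidy program would be irrational for the proprietor.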

Once access and economic barriers are overcome, the next two significant barriers can be addressed: lack of desire to eat healthy and lack of knowledge regarding what is healthy. Unfortunately, lack of proper knowledge regarding health has become much more troublesome due to the influence of industry lobbyists on Congress. This influence prevents proper scientific labeling on foods, allowing food makers to skate by without consequence or challenge when claiming that food x has mineral y which improves heart health, lowers cholesterol, increases muscle flexibility or whatever else, regardless of whether unbiased clinical tests have demonstrated such a fact. With food manufacturers allowed to plaster unsubstantiated claims on labels, the art of label reading, which used to be the only thing really needed to differentiate healthy and unhealthy food, is further complicated. The problem with this complexity is that most individuals have neither the time nor the inclination to scientifically identify which aspects of a label are valid and which are invalid. With this mindset, any solution to this problem must be simple, but effective.

A potential solution to the above problem could be for each supermarket to distribute a weekly 2-4 page leaflet that targets 2-3 health-related issues. The leaflet would be printed on recycled paper to ensure limited environmental impact. Nutritional information in the leaflet would be written at a high school level of readability with a more specific focus on the relevant issues/foods at hand: basically simple language with focused analysis. Footnotes can be placed at the bottom of the leaflet to provide more detailed information for readers that want it. Note that there is no reason for these leaflets to be restricted to the new supermarkets located in food deserts; they could be distributed to a much wider audience. Also, an electronic station can be set up in the supermarket so consumers can access archived information or look up a quick reference regarding the nutritional elements in a particular food or in general.

With regards to the final problem of creating a desire to eat healthy, there is little that can be done from an outside influence beyond providing the means, through cookbooks or other recipe mediums, to improve/mask the taste of foods that individuals believe to be unappetizing. Understand that this masking needs to be done in a healthy way, not ‘hey, just put a bunch of melted cheese on broccoli’. Unappealing taste is one of the few remaining logical factors preventing healthy eating once access, cost and knowledge barriers are lowered. Also, parents need to be a prime motivator in getting their kids to eat healthy, especially when they are very young, for eating patterns are best developed early in life. That means parents need to be parents and not buy unhealthy snack food, instead substituting fruits, vegetables and other healthy alternatives.

Education regarding healthy eating is only one portion of the education that must be provided on some level to society. There are basic preventative health protocols that people should follow to reduce the probability of ill health. These simple elements do not include getting various tests for various conditions, but instead the basic health care steps that most people already know about but for some reason fail to perform consistently, like washing hands, getting an annual physical past a certain age, consistently brushing teeth, exercising, etc.

Also, although preventative testing is a benefit most of the time, there are times when it is inappropriate and wasteful. Incorporating family histories as well as lifestyle choices is important when considering whether or not to undertake a test for a given condition, because if the individual in question does not have any conditions that predispose him/her to the tested condition, then the test itself can be viewed as wasteful. Unfortunately it is difficult for most to appreciate the waste because of a general lack of statistical knowledge, as demonstrated by the uproar surrounding the recently announced recommendations for mammograms in relation to breast cancer.

As previously mentioned, there are concerns about how health care reform would affect medical malpractice lawsuits. Medical malpractice premiums have been increasing steadily since the late 90s and early 2000s.9,10,11 These increases have led to concerns regarding either a reduction in the ability of physicians to practice medicine due to a lack of malpractice insurance or an increase in medical costs due to the practice of defensive medicine. Some believe that a federal law, similar to those in some states that place a monetary cap on non-economic damages, usually of $250,000, should be passed to limit premium growth. Recall that economic damages cover medical bills and lost wages, but not psychological pain and suffering or quality-of-life issues. Unfortunately such reasoning is an over-reaction to the increase in premiums, probably born of a lack of understanding regarding its origin.

Medical malpractice premium increases can be attributed to four causes.9,11 First, from 1998 to 2001 a vast number of insurers experienced decreased investment income due to reduced interest rates on their investment bonds, which made up a significant portion (80%) of their investment portfolios.11 This reduction in earnings forced insurers to increase premiums as a means to ensure adequate funds to cover payouts and the other costs associated with being an insurance company. Unfortunately, due to the recession, bond interest rates have been threatened again, which may repeat the insurer behavior seen earlier in the decade. Second, there was an increase in reinsurance rates for insurers, which raised overall costs that needed to be recouped through premium increases. Third, competition led some insurers to sell under-priced policies that failed to sufficiently cover all probable losses, which led to insolvency for some providers. Fourth, in certain regions of the country malpractice claims increased rapidly, forcing a justifiable increase in premiums. Unfortunately a significant lack of data regarding medical malpractice claims, and regarding the division between economic and non-economic plaintiff losses, significantly reduces the ability to effectively analyze the true cost of these payouts.

Although cycles permeate the insurance industry, as they do most other industries, cycles in medical malpractice are more extreme than in other insurance markets because of the highly variable time required to determine the outcome of a trial and any resulting payment.11 Also, the uncertainty surrounding both the number of suits and the amount of payment from those suits creates problems for insurers when making future budget projections. These elongated cycles explain why premiums did not fall accordingly when interest rates went up in the middle of the decade. However, one problem may be that premiums are rarely raised against a single physician; instead the insurance company tends to raise premiums across the board.

There are two problems with the strategy to implement a federal cap on all medical malpractice redress. The first problem is that a fixed cap does not appropriately address the needs of individuals that are actually victims of legitimate medical malpractice. If a physician amputates the wrong arm, how is it morally justifiable to say to the wronged individual that the loss of that arm is only worth a maximum of $250,000? It is impractical and unfair to penalize an individual that has genuinely been mistreated by a physician solely to guard against the fear that less scrupulous lawsuits would result in inflated and unjustified monetary awards.

The second problem is that the lack of malpractice information has made it difficult to determine whether malpractice caps directly correlate with lower premium rates or whether other factors are involved. Both the CBO and the GAO, two organizations with no bias regarding the issue of malpractice, failed to determine conclusively that malpractice caps were responsible for lower premium rates or for a change in healthcare spending (a lower probability of care being eliminated due to malpractice premiums).9,10,11 In some instances caps decreased premiums, whereas in other instances caps had no influence or even increased premiums.11 Savings from particular caps are also difficult to analyze due to the unreliability of quantifying the actual costs of practicing defensive medicine. To date, no study of defensive medicine costs has avoided questions regarding its bias or validity.

One potential explanation for lower premiums in states with medical malpractice caps could be the rather underhanded tactic of malpractice insurers purposely keeping rates low in cap states, exploiting the uncertainty involved in malpractice litigation to perpetuate the belief that caps work and thereby maximize profit. Therefore, until definitive evidence regarding the true effectiveness of medical malpractice caps can be found, it does not appear warranted to pass a federal medical malpractice cap in an effort to corral malpractice premiums.

Instead a more appropriate and malleable solution, if one were really interested in limiting needless medical malpractice lawsuits, would be to evaluate the credibility of a malpractice suit before it could proceed to trial. In essence, create an independent state or federal board that would review the merits of a medical malpractice suit and either find evidence supporting the argument of the patient or find that a unique set of circumstances, not negligence, was responsible for the event triggering the suit. Basically this board would act as a grand jury of sorts for the civil division with regards to medical malpractice suits; it would either allow the case to proceed to trial or ‘no true bill’ the case, which would preclude the case from proceeding to trial without new evidence. Unlike the cap proposal, this method treats each situation as unique and judges on a case-by-case basis instead of classifying everything under the same umbrella. Such a system has been previously described on this blog.

Abortion has always been and probably will always be a contentious issue, and not surprisingly that conflict does not escape health care reform. One of the main arguments against providing coverage for abortions under a new mandated public option type structure is the belief that some/many (depending on perspective) taxpayers would not want their tax dollars paying for abortions. Unfortunately for those that use this argument, such reasoning is complete and utter nonsense. Tax dollars are not earmarked, nor are they sorted by individual; they cannot be reserved to pay for only certain government funded projects. A taxpayer cannot state that he/she will only pay taxes if those taxes are used for projects x, y or z but not project d. Thus the only thing accomplished by making the above argument, in an effort to prevent federal funding of abortion through a federal medical insurance program, is to embarrass and make a fool of the individual making it.

In fact there is no real argument for preventing coverage of a majority of abortions in a public option type plan. A vast majority of abortions take place in the 1st trimester, where no one can intelligently argue from a scientific perspective that a human life is definitively being terminated; one can only argue from the position of ‘potential human life’, which from a legal standpoint is not enough. Nothing ‘potential’ has the expectation of any form of legal protection. The only categorical position that can be taken against 1st or 2nd trimester abortions is one of a religious nature, with the perspective that life begins at conception, not at birth. Unfortunately for those making this argument, the 1st Amendment separates church and state, preventing the passage of laws based on specific religious beliefs or scriptures. Thus any law preventing coverage of abortion by federal funds in general should be viewed as unconstitutional. Now, if anti-abortion proponents wanted to attempt to limit only 3rd trimester abortions, it would be more difficult to argue that a singular religious belief is the sole mindset behind the legislative opposition.

One of the lingering issues regarding health care reform that seems to have been ignored, which is odd because its link to the uninsured is critical, is emergency room reform. It would be very difficult to argue that the current state of ER operation in the United States is not a crisis. Overcrowding and a lack of resources have pushed wait times before treatment to multiple hours, which hardly allows the ER to live up to its name. Therefore, unless the problems within the ER infrastructure are directly addressed, there is little reason to think that the situation will get better; in fact, odds are it will continue to get worse.

One of the biggest prevailing misconceptions regarding ER overcrowding is that most individuals visiting the ER have no insurance and thus do not see a primary care physician, resulting in ER visits ranging from genuine emergencies to benign non-threatening conditions. Belief in this myth is largely drawn from misreading “The Emergency Medical Treatment and Active Labor Act of 1986”12 (surprise, surprise: individuals have misinterpreted a piece of legislation and proceeded to make complete fools of themselves by exaggerating that misinterpretation).

The misinterpretation involves the erroneous belief that ERs are required to treat any individual that comes through their doors for free. Really, just think about that logically for one moment: how long would ER hospitals stay in business if that were actually true… 5 seconds, maybe 10? Actually “The Emergency Medical Treatment and Active Labor Act of 1986” states that ERs cannot withhold medical treatment from an individual who is in need of emergency attention solely based on his/her ability to pay.12 Individuals who visit the ER without insurance still receive a bill for services rendered; the ER is simply not allowed to demand insurance or cash upfront and throw the person out the door if he/she has neither. Of course the individual can welsh on the bill, but as anyone that has suffered from the inability to pay a medical bill knows, such an action rarely works out in one’s favor.

Therefore, the uninsured actually visit the ER less (on a percentage basis within their demographic) than the average person with insurance, because while a 300 dollar a month premium and maybe a 50 dollar co-pay are annoying, a 6,000 dollar ER bill is crippling. Thus, the uninsured tend to avoid going to the ER for as long as possible, hoping that their bodies will be able to neutralize a detrimental condition without the assistance of modern medicine. Blaming overcrowding on the uninsured is simply not logical or intelligent. Realistically overcrowding is a simple matter of increased input and decreased output.

One unfortunate aspect of overcrowding is the simple fact that people are living longer and suffering more injuries, the same general problem afflicting the medical community as a whole. Also, a lack of primary care physicians is putting more pressure on the ER to fill in the gaps for those that feel sick but are unable to see a non-emergency care physician within a reasonable period of time for the condition at hand. For example it may be fine to wait 2 weeks for a physical when nothing appears to be wrong, but when afflicted with a spreading rash, a 2 week wait is too long.

These two elements largely account for the increased input component responsible for ER overcrowding. Unfortunately little can be done about the first element other than to further promote a healthy lifestyle of quality food, realistically low stress and exercise. The second element would obviously be best handled by increasing the number of primary care physicians. Unfortunately such an endeavor is difficult because primary care physicians are like division commanders in the health care army: they have to do a lot of work, more than specialists, but get paid less money. Clearly if a medical student is confronted with the choice of job A vs. job B, where job B pays more money for fewer hours, job B is going to be more appealing.

The shortage of primary care physicians is especially important to the issue of the uninsured. Recall that the uninsured typically wait too long to go to the ER to receive medical treatment, largely because they do not have insurance. However, even if these individuals were given insurance, a continued lack of primary care physicians would produce only a very small shift in the probability that these individuals avoid the ER, because of the previously noted time discrepancy. In fact there may be reason to believe that insuring the uninsured without any change in the number of available primary care physicians will increase ER overcrowding, because with insurance an ER visit costs the individual far less out of pocket, making it rational to seek treatment for conditions at earlier, less advanced stages.

One strategy for closing this gap is once again the involvement of government. Medicare already funds a significant portion of physician residency training through subsidies to teaching hospitals. Transferring some of those funds into specialized government grants that subsidize a significant portion of medical school costs for individuals who commit to practice as primary care physicians for a pre-determined period of time may improve the ratio of medical students that become primary care physicians rather than specialists. Hopefully, in the long term, the addition of new primary care physicians will result in fewer ER visits across the board as well as a generally increased level of health, reducing costs to both private insurance companies and Medicare while reducing ER overcrowding by shrinking a portion of the input factor.

The second component in overcrowding has to do with a decrease in output speed. One of the primary reasons for the decrease in turnover in recent years has been the lack of available nurses. Keeping with the above military analogy, nurses are like the infantry of the health care army. They typically have to do more work than primary care or emergency care physicians, get paid even less and must complete a significant portion of the education that is required of a physician. With these conditions it is not surprising that the occupational field of nursing is prone to shortages. Regardless of any other factor, processing time suffers if there is no one to conduct the processing and aid in the in-patient care.

Another important factor in the lack of turnover is the lack of beds for those that need to remain in the hospital for further observation and/or treatment. Obviously not everyone that comes into the ER has a condition that allows for a same day discharge. In fact common sense would imply that such a reality should not be the case for a number of individuals visiting the ER, especially the elderly. For example suppose someone is rushed to the ER after an automobile accident; clearly that individual will need to stay at least one night in the in-patient unit. However, if there are no available beds in the in-patient unit, that individual will have to stay in the ER or be moved somewhere else after the initial round of treatment, which will reduce the rate of recovery and contribute to overcrowding.

Some seem to think that the adoption of an electronic record keeping system would go a long way toward ending overcrowding in ERs. Although the incorporation of electronic records would more than likely create net positive benefits, it is questionable how useful such a system would be at actually reducing overcrowding. The problem is that electronic records have no influence on how many patients show up to an ER on a given day, nor do they have any real influence on how fast an individual in in-patient care will heal to the point where the bed can be given up to another individual. Basically electronic records would only avoid processing errors and smooth out some rough edges in possible time gaps between patients.

Of course there are other problems besides the ones discussed above, such as gaps in coverage backed by little rationality that may force patients to absorb large costs. However, such micro concerns are second tier concerns in that their solutions are limited by larger macro problems faced by the health care system in general.

Although the plight of the uninsured receives far and away the most attention, there are a number of other problems with health care that are intertwined with the uninsured and that prevent directly solving the uninsured problem by itself. Recognition of these additional problems and how they influence each other is an important consideration when attempting to solve any of the main problems in health care. Unfortunately it appears that most of the power players in the United States do not understand this interconnection and are plowing ahead trying to solve each problem one at a time in isolation.

The current health care bill that passed the House of Representatives does not appear to effectively control future costs, nor does it find a means to both affordably provide health insurance and properly reimburse physicians treating Medicare patients in a cost-effective manner. Instead of doing something just for the sake of doing something, how about taking a step back and actually solving a problem? Of course such a tactic is difficult when Republicans appear to have less of a clue than Democrats and for political reasons choose to reject everything possible. Whether or not individuals want to admit it, a strong universal federally run public option is essential to medical care cost reduction. However, the public option needs to also control for the problems discussed above, otherwise it would be meaningless.

1. Centers for Medicare & Medicaid Services.

2. Preece, Derek. “The ABC’s of RVU’s.” The BSM Consulting Group. 2007-2008.

3. Centers for Medicare & Medicaid Services. Medicare Claims Processing Manual: Chapter 12 – Physicians/Non-physician Practitioners. Rev. 1716: 4-24-09.

4. Clemens, Kent. “Estimated Sustainable Growth Rate and Conversion Factor for Medicare Payments to Physicians in 2009.” CMS: Office of the Actuary. November 2008.

5. House Resolution 3961: Medicare Physician Payment Reform Act of 2009

6. Kluger, Jeffrey. “Electronic Health Records: What’s Taking So Long?” Time Magazine. March 25, 2009.

7. Jha, Ashish, et al. “Use of Electronic Health Records in U.S. Hospitals.” New England Journal of Medicine. 2009. 360:16 1628-1638.

8. DesRoches, C., et al. “Electronic health records in ambulatory care — a national survey of physicians.” New England Journal of Medicine. 2008. 359: 50-60.

9. “Medical Malpractice Tort Limits and Health Care Spending.” Congressional Budget Office Background Paper. April 2006.

10. “Medical Malpractice: Implications of Rising Premiums on Access to Health Care.” Government Accountability Office. August 2003.

11. “Medical Malpractice Insurance: Multiple Factors Have Contributed to Premium Rate Increases.” Government Accountability Office. October 2003.

12. The Emergency Medical Treatment and Active Labor Act of 1986.

Wednesday, November 18, 2009

Improving the News Part 1

Previously this blog hypothesized that one of the principal elements required for the survival of newspapers was to initiate a form of ‘smart’ revolution. Unfortunately the success of such a strategy is not only contingent on the actions of the newspaper industry, but also on those of the cable news industry. If the cable news industry does not embark on its own ‘smart’ revolution, its ubiquitous nature and its continuing influence on the lack of general knowledge possessed by the average U.S. citizen regarding domestic and international issues will certainly reduce the probability of success for any ‘smart’ revolution and further contribute to the continuing deterioration of U.S. authority on the world stage.

The rise of cable news created a significant advantage over the classic nightly national news provided by the big three national networks: time. Typically, barring special events, the national news was allotted only a half-hour to cover all of the relevant news of the day; thus a majority of that news, especially international news not directly related to the United States, was left on the cutting room floor, and even the news that aired lacked depth due to the time constraints.

Cable news networks are not burdened by this critical handicap; they have 24 hours of programming devoted to the news. Unfortunately, where it might have been anticipated that without a time burden these networks would endeavor to dig much deeper into a myriad of topics both covered and not covered by the network news, no such mindset emerged. Instead cable news networks elected to copy the general format of the national news (perhaps because it was viewed as successful) and simply repeat the formula every half-hour. Even so-called specialty shows follow this formula, although they dress it up a bit with snazzy bells and whistles.

One potential reason for utilizing such an inferior methodology is that the cable news genre relies on people tuning in solely for the news; thus a loop strategy increases the probability that at any given moment an individual who tunes into the particular channel will continue watching, because the bulk of the news is either on or will be in the very near future. The problem is that this loop strategy creates an environment of watered-down sound bites that have little meaning and seem more successful at spreading panic and anger through incomplete information and/or misinformation than at spreading calm and understanding through logic and rationality.

The recent coverage of the so-called ‘balloon’ boy is a perfect representation of this model. [Note that even the title of balloon boy is a misnomer because the boy was never in the balloon and for all intents and purposes probably did not even release the balloon.] All of the major cable news networks latched on to the ‘drama’ of the story when it first hit the wire. However, the coverage quickly became stale and meaningless after spending the first 90 seconds summarizing the story.

This lack of developing details forced entry into what can only be viewed as a contorted version of the movie ‘Speed’: a repetitive loop of the same coverage that was viewed just 90-120 seconds ago, with the station manager mumbling to him/herself, ‘If our station doesn’t continue to cover this story despite the sheer lack of new developments we will certainly go out of business.’ At least that must have been the mindset, for otherwise there is no logical reason to have continuous real-time coverage of a story where the only real event of note that could have occurred would be the descent of the craft.

This attitude seems derived from the inability to properly address the mindset of the news consumer. Only irrational consumers select a news network because that particular network beat the other stations to the punch by some arbitrarily short period of time like 5 seconds. It is difficult to conceptualize an individual furiously changing channels back and forth between different news networks in an effort to determine which station was covering a given story with the greatest amount of turnover. For example, if CNN loses more than 0.001% of its audience because MSNBC achieves a 10% faster update turnover, then society has a much bigger problem regarding where the populace chooses to get its news.

News networks need to explore being much more thorough in reporting various topics. CNN seems in prime position to execute such a strategy because of the existence of Headline News. Originally Headline News was the standard repetitive news channel, with identical 30-minute cycles over a given time period.

However, due to competition brought on by MSNBC and Fox, Headline News changed format to include more talking heads. Instead of maintaining this change, CNN should return Headline News to its original format, where viewers can drop in to look for quick updates on the generic news of the day. Then CNN would no longer feel obligated to cover those popular sound bite stories and instead focus on going deeper on given issues. Recall that the best way to beat competition is to give the consumer something beneficial they cannot get with other products of the same ilk.

For example CNN could select three issues and devote three hours of expert qualitative and quantitative analysis to the various elements that make up those issues. Then CNN could select a fourth issue that may not be in the news, but that members of CNN’s news staff believe is a lingering problem, and propose various solutions to that problem. For the most part news organizations should think master’s or doctoral thesis rather than 5-minute oral report for a class of fifth graders. For those that say such a format is not what the public wants: how could anyone know, when all news organizations ever seem to offer is sound bite, sound bite, talking head, sound bite? It is difficult to rationalize that an individual likes or dislikes food x if food x is never offered.

Another trend that has emerged recently is the incorporation of instant response technologies in an effort to get viewers more involved, which is understandable from a ratings standpoint, but the methodology behind the participation serves little purpose. For example CNN encourages viewers to make comments on stories using Facebook, Twitter, Myspace and iReport; however, these comments typically serve no significant purpose, acting like a voice in a crowd randomly shouting for the sole sake of shouting in some diluted attempt to matter. Instead of simply reading the comments with no feedback, commentators should pre-select random comments to reply to on-air.

For example before reviewing any comments the commentator will elect to supply feedback to comments number 3, 5, 8, 11 and 14 out of 20 comments read. Remember these comments are not pre-screened, thus commentary cannot be determined prior to their on-air reading and cannot be cherry-picked so that only a certain type of comment is read. An interesting aspect of this procedure is that if the comment consists of no real substance, the commentator should be free to ridicule the comment. Note such an opinion needs to be carefully monitored because at no time should a comment that disagrees with the personal viewpoint of the commentator be ridiculed solely because of that disagreement.
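
The pre-commitment idea above can be made concrete: the indices of the comments to respond to are drawn before any comments are seen, so the selection cannot be cherry-picked. A minimal sketch, where the function name and default counts are hypothetical illustrations:

```python
# Pre-commit to which comment positions get an on-air response, before any
# comments are read. Drawing the indices first is what prevents cherry-picking.
import random

def precommit_indices(total=20, respond_to=5, seed=None):
    """Pick `respond_to` distinct positions out of `total` comments, in order."""
    rng = random.Random(seed)  # seed only for reproducible demos
    return sorted(rng.sample(range(1, total + 1), respond_to))

print(precommit_indices())  # e.g. [3, 5, 8, 11, 14]
```

A fixed seed makes the draw auditable after the fact, which fits the spirit of the proposal: the selection can be verified as having been made blind.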

For example if the topic is abortion and the commentator is one that favors restrictions on abortion a comment of ‘abortion stupid’ would be ripe for ridicule because it offers no substance, no rationality to why the individual that provided the comment believes abortion to be stupid. One means of ridicule would be to comment that the individual in question should be forced to repeat 1st grade English to acquire the requisite knowledge about proper sentence structure.

However, a comment such as ‘individuals that wish to place restrictions on abortions do so under strict religious pretense, which flies in the face of the First Amendment and should not be tolerated’ should not be ridiculed; instead the commentator should attempt to explain, if possible, how religious views are not the sole factor in creating legislation to oppose abortion. That way the commentator’s response could be used as a jumping-off point to continue the debate with regards to the validity of the response.

Another means of improving the interaction between network and viewer is to encourage viewers to ask questions regarding more complicated issues, like health care, climate change or energy generation. At various points in the day, the question that has been asked most often in some form and has yet to be addressed will be directly addressed and answered in an analytical, logical and objective fashion. At the end of the day (10:00 pm/11:00 pm) an hour-long recap can be aired covering the questions that were addressed and answered during the viewing day. Such a recap can easily replace one of the thirteen different in name, yet remarkably the same in content, talking head shows that cable news networks run in a given day.

Initially it appears that diversification of the guest pool for headliner shows (shows with a single moderator who typically has his/her name in the show title) would also improve the discourse and quality of the show. For if one is truly serious, how devoid of substance must the conversations between the moderator and a guest be if that same guest is on the show two or three times a week? However, there is the underlying concern that diversification of the guest pool would be in name only, not ideology.

If host A invites guest B instead of guest A, but both guests have similar viewpoints on the issue at hand, inviting guest B over guest A serves little purpose. Of course inviting a guest with a viewpoint that differs from that of the host would spark debate; however, the fine art of debate has fallen so far into the dregs of modern society that it is unclear any substance would be gleaned from these conversations, as the likelihood is that the conversation would simply devolve into one individual shouting down the other.

News organizations also need to stop behaving illogically by giving equal weight or time to sides of an issue that are not equal. An excellent example is the question of the role of humans in accelerated climate change. Giving equal credence to both sides of this issue (humans are responsible vs. humans are not responsible) is akin to giving equal credence to the statements ‘2 + 2 = 4’ vs. ‘2 + 2 = 5,679’. Clearly one answer is right and one answer is wrong (humans are the principal driver of global warming and climate change), thus debating the topic is fruitless and can only do harm. These are not questions like whether dogs or cats are better, which has almost no empirical standing, but issues where some answers/conclusions are clearly wrong based on the boundary conditions that apply to their very nature.

Overall there are a number of positive steps that cable news networks can take to foster a more intelligent and meaningful society. However, these steps will require hard work in their application and diligence to ensure that proper coverage and objectivity are given to stories that deserve it. Although cliché it is appropriate to state that cable news networks as well as newspapers can either be part of the solution or part of the problem.

Friday, November 13, 2009

A Method to Reduce Ocean Acidity

Background or part 1 for this post can be viewed here:

The current infeasibility of available oceanic remediation mechanisms is troubling because, as previously discussed, it does not appear that global CO2 neutrality will be achieved at any point in the near future. This lack of neutrality will lead to further ocean acidification, raising the probability of catastrophic loss of ocean biodiversity. Therefore, new strategies need to be proposed in an effort to alleviate the problem of ocean acidity. Note that this proposal is theoretical and has not been tested in any way, shape or form.

As the situation currently stands it appears that the most viable economic route to CO2 removal would be to design a piece of technology that could somehow remove the unassociated CO2 from the ocean by facilitating a chemical reaction to bind it and then dissociate from the CO2 at a later time, making the material reusable. This strategy eliminates various problems with the catalytic option used in iron fertilization by anchoring any catalyzing agent to a device and even if needed sequestering it away from any detrimental elements. It also redirects the limiting factor of CO2 turnover to the material that is absorbing the CO2, which is more controllable and can be manipulated more easily than biological organisms or limestone deposits. In addition if the material can be manufactured at reasonable cost, the material being the limiting factor in CO2 absorbed would only be a minor inconvenience. Although such a strategy seems daunting, there is reason to be optimistic. Below is a description of the type of device that may accomplish the desired reduction in acidity.

Considering the solubility factors and the role of the natural carbon cycle, it appears that withdrawing CO2 closer to the surface is preferable. Therefore, it would be useful for the device to behave in similar fashion to a buoy in that the absorption portion of the device would be submerged below the surface, but most of the device remains above the surface. The main reason for this strategy is the fact that permanently submerging the entire unit may be counter-productive as the non-submerged portion could be used to support a solar panel system to power any autonomous actions for the device or some other non-aquatic advantage. Also salvaging the system after reaching maximum CO2 storage would be made more complicated if it were fully submerged.

Due to the sheer size of the ocean, reducing total average acidity without significant removal of atmospheric CO2 is rather far-fetched. The principal idea behind the device presented here is not to reduce the acidity of the entire ocean, but instead to focus on small critical portions to delay or even prevent the erosion of oceanic biodiversity and food chains. For example it would be difficult to argue that some portion of the Pacific Ocean in the middle of nowhere is of equal importance to oceanic biodiversity as the Great Barrier Reef. Granted, due to the mixing differential of the ocean a point-location reduction of acidity would likely not be straightforward, but by applying continuous acidity-reducing pressure at a particular point, there is a high probability that acidity levels at that point will fall faster than they are recouped by mixing. In fact there is a small probability that if enough points of action are established, total average ocean acidity throughout the system will be reduced. However, such reduction would not be of any significance beyond the point locations.
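
The point-reduction argument can be illustrated with a toy box model: local dissolved CO2 falls under a constant removal rate while mixing partially refills the gap from ambient water. All rate constants here are illustrative assumptions, not measured oceanographic values:

```python
# Toy box model: a device removes a fixed amount of dissolved CO2 per step,
# while mixing pulls the local concentration back toward the ambient level.
# The local value settles at a steady state below ambient, illustrating the
# claim that removal can outpace replenishment at a point location.

AMBIENT = 100.0   # ambient dissolved CO2, arbitrary units (assumption)
MIX_RATE = 0.05   # fraction of the gap refilled by mixing per step (assumption)
REMOVAL = 2.0     # units withdrawn by the device per step (assumption)

def simulate(steps):
    """Local dissolved CO2 after `steps` removal/mixing cycles."""
    c = AMBIENT
    for _ in range(steps):
        c = max(0.0, c - REMOVAL)        # device withdraws CO2
        c += MIX_RATE * (AMBIENT - c)    # mixing pulls back toward ambient
    return c

# Steady state solves c = 0.95*(c - 2) + 5, i.e. c = 62: well below ambient.
print(round(simulate(500), 1))
```

The fixed point exists only while the removal rate exceeds what mixing can replenish at that concentration, which is exactly the continuous-pressure condition described above.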

There are a number of different materials that are known to interact with and bind CO2, either as a catalyst or in a chemical reaction (NaOH, various resins, amines, aqueous ammonia, ionic liquids, membranes, etc.). Most of these have been explored or are currently being used in the design of carbon capture mechanisms for coal power plants. Unfortunately very few of these options have been tested in aquatic conditions, for the obvious reason that power plant work focuses on source capture. Another problem is that most of these processes are scaled up to function over a much larger area than would be economically feasible for an ocean CO2-absorbing device. However, metal-organic frameworks (MOFs), hybrid materials constructed from metal oxide clusters with organic linkers,1,2 appear to be a possibility. The reason MOFs are an attractive option is that they do not appear to require as much supporting infrastructure as other CO2 absorption materials. Also MOFs have a fairly unique selectivity for CO2, which may increase the efficiency of ‘filtering’ CO2 from other molecules in the ocean while also reducing the probability of contamination and fouling.1

The selectivity of MOFs for CO2 is derived from their ability to interact with the large quadrupole moment possessed by CO2.1,3 Although CO2 carries no net dipole, its electrons are unevenly distributed along the molecule (the quadrupole moment), and depending on the molecular arrangement of the particular species of MOF this allows the CO2 to bind to the framework. Technically the quadrupole moment for CO2 is thought to be –4.1 to –4.4 x 10^-26 e.s.u. cm^2.4,5 Another useful attribute of MOFs is that although selective, the bond with CO2 is still rather weak; therefore, less heat and pressure are required to remove the CO2 from the MOF vs. other processes (most notably amine CO2 binding).3 However, it must be noted that, similar to the methods listed above, MOFs have yet to be tested in an aqueous environment, so there could definitely be some future concerns.

While there are a wide variety of MOFs to choose from, the best option appears to be MOF-177 because so far in empirical studies it has the greatest surface area of all MOF and MOF-like compounds and the highest CO2 capacity among them.6 MOF-177 has a BET surface area of 4,508-4,750 m^2/g, a bulk density of 0.43 g/cm^3 and absorbs CO2 at a capacity of 1,470 mg/g.6 Note that if covalent organic frameworks (COF) 102 and 103 prove much cheaper to produce, they may become viable alternatives to MOF-177.6
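
To give a sense of scale, the quoted MOF-177 figures can be turned into a rough sizing calculation. A minimal sketch, where the 100 kg per-cycle CO2 target is purely an illustrative assumption:

```python
# Back-of-the-envelope sizing for a MOF-177 absorption bed, using the figures
# quoted above: capacity 1,470 mg CO2 per g of MOF-177, bulk density 0.43 g/cm^3.

CAPACITY_MG_PER_G = 1470.0   # mg CO2 absorbed per g of MOF-177
BULK_DENSITY_G_CM3 = 0.43    # bulk density of MOF-177, g/cm^3

def mof_bed_size(co2_target_kg):
    """Return (MOF mass in kg, bed volume in m^3) needed for a CO2 target."""
    co2_mg = co2_target_kg * 1e6
    mof_g = co2_mg / CAPACITY_MG_PER_G
    volume_cm3 = mof_g / BULK_DENSITY_G_CM3
    return mof_g / 1000.0, volume_cm3 / 1e6  # convert to kg and m^3

mass_kg, volume_m3 = mof_bed_size(100.0)  # hypothetical 100 kg CO2 per cycle
print(f"{mass_kg:.1f} kg of MOF-177 occupying {volume_m3:.2f} m^3")
```

Roughly 68 kg of material in under a fifth of a cubic meter per 100 kg of CO2, which is what makes a buoy-sized device plausible in the first place.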

Because the target CO2 is dissolved in ocean water, the water needs to make contact with the binding material (probably a MOF); attempting to collect only out-gassed CO2 would be a rather inefficient means of reducing ocean acidity. There are two primary ways to accomplish this interaction: passive or active. Passive interaction would rely on the natural movement of the water to initiate contact with the material. Active interaction would create some form of pressure difference to draw the water over the material, so the material would be in contact with water at controlled periods of time instead of random ones. For the sole purpose of driving the reaction between the CO2 and the material there does not appear to be a significant difference between passive and active interaction, except that active interaction would require additional energy and/or complexity to power the pump or other drawing mechanism.

Similar to CO2 absorption through technological means via either point source capture or air capture, ocean CO2 absorption has the important lingering question of where to transport the CO2 after absorption. It makes little economic sense to keep the CO2 bound to the material in question; therefore, the CO2 needs to be relocated to an environment where it will not easily re-enter either the atmosphere or the ocean. This question has always been somewhat problematic because there are few options for the collected CO2. As previously discussed in the air capture/sequestration post, some would like to utilize captured CO2 in industrial applications like making carbon-neutral fuel, enhancing oil retrieval or augmenting greenhouse-based crop growth; however, none of these options are viable long-term at the volumes of CO2 that would be collected, and it is difficult to view anything but enhanced oil retrieval as viable in the short-term. Due to the lack of a viable long-term industrial application and the sheer amount of CO2 that needs to be sequestered, most view storage in natural sinks as the best option.

Based on the specific location of the acidity reduction, storage in sinks could be useful for implementation of such a device. However, if the device is floating on the surface transport to an appropriate storage site could require either a long transfer line or increasing the depth of operation. A short transfer line would not be a significant problem, but when considering that the device will be at a depth of 5-20 ft when on the surface and the typical storage region will have a depth ranging from 5,000-10,000+ ft one could understand how such a long transfer line/pipe would be cumbersome. Therefore, it seems reasonable that the device would have to change depth.

Unfortunately storage in this manner directly from the device is highly unlikely, because oceanic sequestration requires that the CO2 be in liquid form, which involves applying a significant amount of heat and pressure within the device, probably isolated to a specific compartment. This phase change would increase design complexity because in addition to a separate storage area for the CO2 in gaseous form, a storage area would be required for CO2 in liquid form, as well as the means to generate the necessary levels of heat and pressure. These additional pieces would increase the total weight of the device, reducing the maximum capacity of CO2 acquisition, and increase the number of things that could go wrong with the device in general. However, as will be seen, the idea involving a depth-changing cycle is still viable.

If oceanic sequestration direct from the device is not rational, then the CO2 collected by the device will need to be manually retrieved and taken to a processing plant to be prepared for sequestration. If this is the case then it is important that the device have as high a maximum capacity for CO2 absorption as possible. It is unlikely that such a capacity can be achieved if passive interaction is used, because too little of the material would be in contact with water at a given time. Therefore, it would be wise to create isolated compartments where a large percentage of the environment could contain the material and react with CO2 from water moved through these areas by active interaction. However, the attempt to maximize CO2 capacity means that mass and density shifts are to be expected in the device, creating depth changes.

So how will the change in depth be achieved in a device whose primary function is to float on the surface of the water while maximizing CO2 capacity? To best illustrate the process, first consider how a ship floats on water. Basically a ship floats because the bulk density of the ship is less than the bulk density of the liquid supporting it (i.e. the water). For reference, recall that density is defined as the mass of an object divided by its volume. A ship will no longer float when its density becomes greater than the density of water; most notably this change occurs when the ship’s hull is breached and water begins to flow into the ship, increasing its mass. In normal function a ship will sink to an overall depth relative to its density vs. the density of the water (the closer its density is to water, the lower it will ride). Note that overall object buoyancy is more complex than a simple relationship between densities (displacement depends on an object’s shape as well as its weight), but for general practice, restricting the discussion to density is fine for non-exotically shaped objects.
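
The floating condition described above can be checked numerically. A minimal sketch, assuming a hypothetical 2 m^3 hull with a 1,500 kg dry mass (illustrative values, not a real design):

```python
# Floats-or-sinks check for the ballast-water scheme: the device floats while
# its bulk density (total mass / hull volume) is below that of seawater.

SEAWATER_DENSITY = 1.025  # g/cm^3, typical surface seawater

def bulk_density(dry_mass_g, ballast_water_g, hull_volume_cm3):
    """Bulk density of the device with a given mass of ballast water aboard."""
    return (dry_mass_g + ballast_water_g) / hull_volume_cm3

def floats(dry_mass_g, ballast_water_g, hull_volume_cm3):
    return bulk_density(dry_mass_g, ballast_water_g, hull_volume_cm3) < SEAWATER_DENSITY

# Hypothetical 2 m^3 hull, 1,500 kg dry: floats empty (density 0.75 g/cm^3),
# sinks once more than ~550 L of ballast water is pumped in.
print(floats(1.5e6, 0.0, 2.0e6))    # empty ballast compartments
print(floats(1.5e6, 0.6e6, 2.0e6))  # 600 L of ballast water aboard
```

Pumping water in and out moves the device across the seawater-density threshold, which is exactly the sink/ascend mechanism proposed in the following paragraphs.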

Clearly it would be a mistake to breach any portion of the device to induce sinking; however, the idea of adding water to change the device's density is instructive. A controlled rate of water acquisition would require compartmentalization and a form of active transport, which is exactly what was proposed above to increase CO2 absorption capacity and efficiency. Water would be driven by a pump into an alternative compartment (or compartments) in the device, which would also contain the absorption material. As water continues to flow into these compartments, the device should begin to sink. Once the compartments fill and enough time has been allotted for the binding reactions, the device can eject the stored water, both lowering its density and creating a concentrated jet-propulsion stream to hasten its ascent to the surface.

Upon returning to the surface the material should have absorbed a significant amount of CO2 from the water. Regardless of the material, to release the CO2 a considerable amount of heat (temperature increase) will need to be applied. This temperature increase can be achieved through activation of heating units placed on the wall opposite the material. Once released the CO2 will be drawn into a gaseous CO2 storage compartment, which is normally restricted via a valve or some other obstruction. Then the process begins anew with the device changing depth and sinking again.

For this device to function in such a capacity it needs a significant level of autonomy. Various sensors and valves (for restricting access), in addition to a centralized computer system, would be required. Although difficult to accomplish, autonomous action also facilitates repeated passes, which increases the probability of reaching the maximum level of CO2 extraction before the CO2 is collected for sequestration. Fortunately the autonomous elements of the device do not have to operate blind. Information can be acquired regarding maximum depth, submergence time, device surface area and volume, etc., which allows versatile designs for a given region and reduces the work required to attain autonomy. For example, because the deployment depth is reasonably well known, timing mechanisms can be used to start and end processes such as pump action, heating, and valve opening and closing.
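The timer-based approach described above can be sketched as a fixed phase schedule: each stage of the dive cycle ends on a preset clock rather than on sensing, which keeps the autonomy requirements modest. The phase names and all durations below are placeholders; real values would come from the empirical study of a given deployment region.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    duration_s: float

# Hypothetical phases run in a fixed order, each ended by a preset timer.
DIVE_CYCLE = [
    Phase("open_valves_and_pump_in", 600),   # take in water, begin descent
    Phase("soak_at_depth", 3600),            # let CO2 bind to the material
    Phase("pump_out_and_ascend", 600),       # eject water, rise to surface
    Phase("heat_and_desorb", 1800),          # release CO2 from the material
    Phase("transfer_to_storage", 300),       # move CO2 into the storage unit
]

def run_cycle(clock=0.0):
    """Walk through one dive cycle, recording when each phase starts."""
    schedule = []
    for phase in DIVE_CYCLE:
        schedule.append((clock, phase.name))
        clock += phase.duration_s
    return schedule, clock

schedule, total = run_cycle()
print(total)  # 6900.0 seconds for one full cycle with these placeholder times
```

A real controller would layer sensor overrides (e.g. the capacity sensor mentioned later) on top of this schedule, but the timer skeleton alone already yields repeatable multi-pass operation.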

In some respects think of this device as a significantly more complicated APEX-type profiling float. The pump would have to be of greater horsepower and the communication systems more advanced, but the general descent and ascent properties would operate in a similar capacity. The biggest difference is that instead of using the pump to transfer fluid to and from a hydraulic bladder, the pump transfers ocean water to and from the MOF absorption regions.

With all that has been said, an example description of how such a device would operate is given below:

The device consists of four units: one main unit and three wing units. The main unit is a sealed, airtight rectangular housing constructed of titanium or some other non-corrosive metal, and it contains all of the electronics that issue the commands facilitating autonomy. The electronics in the main unit are powered either by a lithium-ion battery or by a series of solar cells positioned on top of the main unit.

The wing units are attached to the main unit so as to form a tripod base, which aids stability and uniformity of shape both on the surface and while sinking; they connect to the main unit through ascending sealed pipes/tubes. The volume of each wing unit is approximately 30%-90% that of the main unit, depending on how much space the main unit requires for the necessary electronics. Each wing unit has a spherical bottom and a rectangular top, with a centralized ascending pipe, sealed by mechanical valves, rising from the spherical bottom. Spherical bottoms are used because the rounded shape further aids stability and buoyancy. Behind the valves, grated sieves cover the pipes, allowing an influx of water while excluding elements of significant size such as various forms of marine life.

A wing unit has a secondary compartment, containing the absorption material, that can be sealed off from the main portion of the wing unit. The material, MOF-177 in this example, lines all of the sidewalls of the rectangular portion of the wing unit. As water fills the wing unit it comes into contact with the MOF-177. Test results demonstrate that MOF-177 binds CO2 better as pressure increases to about 30-40 bars.1,6 However, if such a pressure increase proves too complicated or too detrimental within the device (there is a sufficient probability that it may), MOF-177 can still capture CO2 at atmospheric pressure, although at a very significant efficiency loss. The external pressure during submersion could aid the reaction process, but the extent of that aid, if it is significant at all, is unclear.
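It is worth noting roughly what depth would supply 30-40 bars ambient if the device relied on hydrostatic pressure alone rather than an internal pressurization mechanism. A back-of-envelope estimate, assuming typical seawater density and hydrostatic pressure only:

```python
SEAWATER_DENSITY = 1025.0  # kg/m^3, typical seawater
G = 9.81                   # m/s^2, gravitational acceleration
ATM_PA = 101325.0          # Pa, surface atmospheric pressure

def depth_for_pressure(pressure_bar):
    """Depth (m) at which total pressure (atmosphere + water column)
    reaches the given value: P = P_atm + rho * g * d."""
    pressure_pa = pressure_bar * 1e5
    return (pressure_pa - ATM_PA) / (SEAWATER_DENSITY * G)

print(round(depth_for_pressure(30)))  # ~288 m
print(round(depth_for_pressure(40)))  # ~388 m
```

So the favorable 30-40 bar range corresponds to dives on the order of 300-400 m, which is one reason the dive depth and submergence time figure into the design parameters discussed above.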

Fortunately, the repetitive action of the device should compensate for this efficiency loss. Heating units are sandwiched between the inner wall of the wing unit and an outer wall that shields them from the external environment. These heating units raise the temperature of the inner wall to at least 70 °C to facilitate separation of the CO2 from the MOF. The heating units automatically shut off after a preset time determined through empirical study.
The newly freed gaseous CO2 will then be moved to a storage unit attached to the top of each wing unit. These storage units will have a sensor reporting to a base station when the unit is full and will also be detachable so that recovery crews can remove the collected CO2 and transfer it to a storage unit on the recovery ship. The storage unit will then be reattached to the device and the device can be reinitialized. If such a design proves too cumbersome, there is the possibility of storing the CO2 in the main unit with the electrical equipment, but a minor concern of long-term corrosive damage would need to be addressed.

A summary of the lifecycle of the device:

- The device is placed in the water at a point of interest for ocean acidity reduction where it floats/bobs like a buoy on the surface

- after an initial acclimation time the valves at the bottom of the wing units open and the corresponding pumps activate, increasing the uptake of water and the mass of the device and causing it to sink

- during the uptake of water and the descent of the device, the pressure of the water within each wing unit increases, improving the efficiency of the interaction between the material and the CO2

- once the carrying capacity of the wing units is reached (identified by a sensor), the pumps reverse action and push the water out of the wing units resulting in the device ascending back to the surface

- once on the surface, heating units opposite the material activate, separating the CO2 from the material

- after a pre-determined time the heating units turn off, triggering vacuums that transfer the free gaseous CO2 from the wing units to the storage units

- the storage units are sealed and the process begins anew.
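The lifecycle above amounts to a small state machine. The sketch below mirrors the bullet summary; the state names and the triggers (timers, the capacity sensor, a surface-detection event) are illustrative assumptions, not part of any actual control specification.

```python
# Each state maps to (expected trigger, next state), following the
# bullet summary of the device lifecycle. All names are hypothetical.
TRANSITIONS = {
    "floating":     ("acclimation_timer", "filling"),
    "filling":      ("capacity_sensor",   "ascending"),
    "ascending":    ("surface_reached",   "heating"),
    "heating":      ("heater_timer",      "transferring"),
    "transferring": ("storage_sealed",    "floating"),  # cycle repeats
}

def step(state, event):
    """Advance to the next state only if the event matches the trigger."""
    trigger, next_state = TRANSITIONS[state]
    return next_state if event == trigger else state

state = "floating"
for event in ["acclimation_timer", "capacity_sensor", "surface_reached",
              "heater_timer", "storage_sealed"]:
    state = step(state, event)
print(state)  # "floating": one full cycle completed, ready to begin anew
```

Ignoring unexpected events, as `step` does, is a simplification; a deployed controller would log them and fall back to a safe surfaced state.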

1. Walton, Krista, et al. “Understanding Inflections and Steps in Carbon Dioxide Adsorption Isotherms in Metal-Organic Frameworks.” Journal of the American Chemical Society. 2008. 130: 406-407.

2. Long, Jeffrey, and Yaghi, Omar. “The pervasive chemistry of metal–organic frameworks.” Chemical Society Reviews. 2009. 38: 1213-1214.

3. Voosen, Paul. “New Material Could Vastly Improve Carbon Capture.” Scientific American Online. June 30, 2009.

4. Buckingham, A, and Disch, R. “The Quadrupole Moment of the Carbon Dioxide Molecule.” Proceedings of the Royal Society of London. Mathematical and Physical Sciences. 273(1353): 275-289.

5. Xu, Ruren, Chen, Jiesheng, Gao, Zi, and Yan, Wenfu. From Zeolites to Porous MOF Materials. The 40th Anniversary of International Zeolite Conference. Vol. 170. 2009.

6. Furukawa, Hiroyasu, and Yaghi, Omar. “Storage of Hydrogen, Methane, and Carbon Dioxide in Highly Porous Covalent Organic Frameworks for Clean Energy Applications.” Journal of the American Chemical Society. 2009. 131: 8875-8883.