Tuesday, August 27, 2013

Is Curing Cancer the Appropriate Step? (Preparing for a World without Cancer)

Since 1971, when then-President Nixon declared war on cancer, society has awaited the breakthrough that would eliminate cancer as a societal and psychological bogeyman. Interestingly, the pursuit of this accomplishment has produced a certain viewpoint of how the future would operate after success, with few individuals discussing how the world would change in both positive and negative ways. As counter-intuitive as it may seem, would identifying a completely effective cancer treatment leave the world worse off than not identifying one at all?

At first thought almost everyone would find the above question preposterous, citing both qualitative and quantitative rationales for its ridiculousness. For example, the National Institutes of Health (NIH) estimated the annual costs of cancer to society in 2008 at $201.5 billion ($77.4 billion in direct medical costs and $124 billion in indirect mortality costs);1 those costs increased further in 2010 to $290 billion ($154 billion in direct medical costs and $146 billion in indirect mortality costs).2 In 2007 cancer was responsible for approximately 13% of all human deaths globally (7.9 million),3 and in 2008 cancer killed another 7.6 million,4 with overall occurrence rates increasing due to general increases in global life expectancies, changes in diets and lifestyles in the developing world, and changes to the environment. In addition, there is the emotional damage that cancer inflicts on individuals who do not even have the condition (friends and family), which is not calculated in the mortality costs regardless of whether or not the afflicted dies, raising the economic and societal damage further.

Initially one may assume that the enormity of these numbers should dispel any argument that a cure for cancer would be a bad thing, but the problem is that those numbers are only viewed through a positive lens, not a realistic one. For example, modeling the economic costs associated with any condition or disease is currently simplistic and heavily dependent on certain assumptions. There are three common methods for estimating costs associated with various health conditions: cost of illness (COI), value of lost output (VLO) and value of statistical life (VSL).2

COI is the most common analysis method for calculating the economic impact of an illness and assigns costs based on the sum of the direct and indirect medical costs associated with the illness. Direct medical costs are sub-categorized as: diagnosis, drugs, hospital stays, surgeries, etc. Indirect mortality costs are sub-categorized as: transportation, income losses, pain and suffering, reduced productivity, education, etc.5 Note that productivity losses (associated with indirect mortality costs) are typically modeled through the human capital approach, which calculates the potential production of an individual based on average wages adjusted for household productivity.5
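The human capital approach described above can be sketched as a simple present-value calculation of forgone wages; the wage, discount rate and household-productivity adjustment below are hypothetical illustration values, not figures from any cited study.

```python
# Minimal sketch of the human capital approach: productivity loss as the
# discounted present value of the wages an individual would have earned,
# adjusted upward for unpaid household productivity. All inputs are
# hypothetical illustration values.

def human_capital_loss(annual_wage, years_remaining, discount_rate=0.03,
                       household_adjustment=1.15):
    """Present value of forgone production over remaining working years."""
    adjusted = annual_wage * household_adjustment
    return sum(adjusted / (1 + discount_rate) ** t
               for t in range(1, years_remaining + 1))

# e.g. a worker earning $45,000 with 15 working years remaining:
loss = human_capital_loss(45_000, 15)
print(round(loss))
```

Note how sensitive the result is to the discount rate and the assumed remaining working years, which is one reason the text calls this kind of modeling simplistic and assumption-heavy.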

The human capital approach is not the only method for calculating productivity; some analyses use the friction cost method instead. Friction cost estimates the indirect costs of an illness relative to the amount of time it takes the employer/society to substitute the production lost from the individual who has taken ill.6 Basically, costs are estimated by multiplying the number of days required for recovery by income and including an elasticity element for annual labor time versus labor productivity.6 Normally there is some form of time boundary (typically some number of days) associated with friction cost, pertaining to the average expected recovery time for the illness. If recovery exceeds this boundary, the excess costs are not counted towards the estimated production costs/losses. For example, suppose person x contracts disease y, which has an average recovery time of 30 days. If person x is not available to work for 17 days, the productivity losses from all 17 days are counted. However, if person x takes 43 days, only the first 30 are counted. This condition frequently results in lower estimated costs for a given disease because it typically does not encapsulate the more severe and costly cases of the disease.
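The person x / disease y example above can be sketched in a few lines; the $200 daily income and 0.8 elasticity factor are hypothetical illustration values.

```python
# Sketch of the friction cost method with its recovery-time boundary.
# The friction period caps how many absence days are counted.

def friction_cost(days_absent, daily_income, friction_period=30,
                  labor_elasticity=0.8):
    """Cost = capped absence days * daily income * elasticity element
    for annual labor time versus labor productivity."""
    counted_days = min(days_absent, friction_period)
    return counted_days * daily_income * labor_elasticity

# The worked example from the text, at a hypothetical $200/day:
print(friction_cost(17, 200))  # all 17 days counted
print(friction_cost(43, 200))  # capped at the 30-day friction period
```

The cap is why friction cost estimates tend to come in lower than human capital estimates: the 13 days beyond the boundary simply vanish from the total.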

COI also has two analysis “viewpoints”: prevalence and incidence. Prevalence estimates costs over a specific time period, normally averaged to an annual cost, but does not focus on costs over the duration of the illness.5 Incidence estimates costs over the expected lifetime of an illness. Savings borne from prevention strategies are derived from incidence-based analysis because of its timeline estimates over cross-sectional costs.5 While incidence estimates provide more information, they also require more assumptions and information, which can increase the probability of inaccuracy in the results.

VLO estimates costs based on the impact on GDP with regard to lost capital, production, efficiency and other factors that affect commerce.2 VSL is categorized by two different analysis methods: the human capital method and the mortality risk method.7,8 The human capital method calculates the present value of future income forgone due to death (similar to the method for COI). However, this method does not include the intangible psychological effects on the family and friends of the deceased and ignores the non-working members of society who do not significantly act as human capital in the labor market.8 The mortality risk method calculates what costs members of society are willing to incur to change mortality risks from given illnesses. For example, the World Bank classifies VSL as how people’s preferences affect the measurement of change (increase or decrease) in human well-being relative to the change in mortality risk, based on the amount of money they would pay.9 Basically, how much money would a person pay to reduce the chance of dying from disease A from 40% to 30%?
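The closing question above is essentially the whole mortality risk calculation: VSL is the payment divided by the absolute risk reduction it purchases. The $25,000 willingness-to-pay figure below is a hypothetical illustration.

```python
# Sketch of the mortality risk (willingness-to-pay) approach to VSL,
# using the disease A example from the text.

def value_of_statistical_life(willingness_to_pay, baseline_risk, reduced_risk):
    """VSL = payment divided by the absolute mortality risk reduction."""
    risk_reduction = baseline_risk - reduced_risk
    return willingness_to_pay / risk_reduction

# If a person would pay a hypothetical $25,000 to cut the chance of
# dying from disease A from 40% to 30%:
print(round(value_of_statistical_life(25_000, 0.40, 0.30)))
```

The implied VSL here is $250,000; real survey-based estimates aggregate many such answers across a population, which is why the results vary so widely between studies.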

The reason these three methods exist is that they all examine economic costs relative to illness from different perspectives [public versus private (COI), individual versus social (VLO), or some aspects of both (VSL)] over different time periods. However, because of these different analysis perspectives, along with complications due to co-morbidities, it is difficult to compare these methods directly.

Regardless of method, calculating the possible savings from indirect costs is tricky. First of all, productivity losses are more than likely overestimated because, unlike in the past, the current labor market functions on a shortage of jobs, not a shortage of labor. Of course some may argue against this point, citing some of the higher paying jobs that have been vacant for years, but these jobs are highly specific, demanding education and experience levels that are not available to a vast majority of the population; thus these jobs are radical outliers. Most modern vacant jobs are lower paying but have high levels of competition, including many individuals who are overqualified. Thus, most of the production from labor that is lost to cancer is more easily replaced now than in the past, making the savings from curing cancer less than expected.

Another question is how the overall economy would be affected by the redirection of indirect cancer costs. Current indirect cancer costs are high scale with a narrow focus (i.e. large amounts of money applied to a small section of the economy). If cancer is cured, those funds will more than likely be diverted at a smaller scale due to greater sector spread. Will this change in fund distribution result in a net gain or net loss for the economy? The loss of large-scale funds in the cancer treatment industry will result in job loss and contraction in that industry. The concern is whether the redirection of the funds will be too scattered to significantly promote job growth in other fields, resulting in a negative economic impact. One counter-argument is that the new survivors will spend their money, bolstering economic activity. While possible, the real validity of this counter-argument is contingent on what happens to the assets these individuals would otherwise have passed on in probate, and it raises the question of whether government or private control of funds provides better stimulation to the economy.

Another concern is the increasing intra-country inequality gap. While globalization has decreased the inequality gap between developed and developing countries, it has increased the gap between rich and poor individuals within both developed and developing countries. Assuming that the general death total associated with cancer remains similar over the next 10 years to the 2007 figures cited above, then if a cure for cancer is discovered in 2015, over the next nine years 71.1 million people who would have died will live (and that does not include any offspring these individuals may have).
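The 71.1 million figure is simple arithmetic on the 2007 death total, sketched here as a sanity check:

```python
# Back-of-envelope check of the survivor estimate in the text: roughly
# 7.9 million annual cancer deaths (the 2007 figure), held constant and
# avoided each year for nine years after a hypothetical 2015 cure.

annual_deaths = 7.9e6   # 2007 global cancer deaths per year
years = 9               # "over the next nine years"
survivors = annual_deaths * years
print(f"{survivors / 1e6:.1f} million")  # → 71.1 million
```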

Note that the above estimate is probably on the low side because the World Health Organization (WHO) estimates that by 2030 at least 13.1 million people worldwide will die from cancer annually.4 Based on existing inequality levels, most of these survivors will not have vast amounts of wealth (a sensible assumption given that 70% of all cancer deaths in 2008 occurred in low- and middle-income countries),4 but instead will compete with other individuals for a pool of resources that is at best static and at worst declining. This competition could place a greater strain on a vast majority of parties, including those who developed cancer but did not die, lowering the quality of life of a greater number of individuals.

Some may argue that while the savings born from indirect costs may be inaccurate and possibly non-existent, the savings from direct costs are certainly real and substantial. Direct cost savings are real, but more than likely will not be useful to society. The reason for this judgment is the commerce arrow that governs cancer treatment. When individual x develops cancer and receives treatment (note that second part), most of the time he/she has health insurance where, after a deductible is paid, the insurance company covers a vast majority of the costs associated with the cancer treatment (80-100%, depending on the specifics of the plan). This payment typically goes to a hospital (most doctors do not own their own treatment centers), and the hospital uses some of those funds to purchase elements of the treatment from various companies. Overall most of the costs associated with cancer treatment are borne by the insurance companies; whether or not those costs are passed on to consumers is unclear, but it is definitely a possibility.

If a cancer cure is developed, the direct costs of treating cancer will significantly drop, reducing the amount of money that insurance companies pay to hospitals. Therefore, insurance companies will gain most of the savings from a cancer cure. However, the benefit to society from a standpoint of direct costs depends on what the insurance companies do with these saved funds. There are ten common actions that a company can take with its earned profits:

1) Keep it on their balance sheet to pay for future expenses (i.e. suppose next year is not profitable);
2) Pay off existing debt;
3) Invest in new equipment or other modernization upgrades to increase production and/or lower costs;
4) Issue a dividend or increase the dividend to stockholders;
5) Hire more employees;
6) Increase wages and/or pay bonuses to various employees;
7) Buyback stock;
8) Invest in other companies either as a passive investment or a takeover;
9) Provide some benefit for the consumer (in the case of insurance this would most likely be reducing premiums);
10) Donate the money to a charitable cause.

There is little reason to suspect that insurance companies would pledge profit neutrality, lowering insurance premiums to correspond to the savings garnered by a cancer cure versus current cancer treatment. Whether legitimate or not, the failure to lower premiums would be justified by pointing to the prevalence of other chronic conditions like high blood pressure, obesity and high cholesterol. Any other use of the saved money that would benefit society directly or even indirectly, like donating the money to charity, increasing stock dividends or hiring more employees, seems even more far-fetched than lowering premiums.

With the most likely use of the saved funds going into the coffers of the insurance companies to be used for CEO bonuses and other frivolous actions to bolster the already wealthy (modern reality dictates that this will happen), the newly saved direct cancer costs will not be reinvested into society at any real magnitude. Any rational person realizes that, with decades of empirical evidence against it, supply-side economics (a.k.a. trickle-down) does not work; thus increasing CEO bonuses is rather irrelevant to societal economic benefit. In addition, other economic changes would lead various hospitals to lose millions of dollars (it stands to reason that they charge more for cancer treatments than for the elements used in those treatments), and the medical companies that provide current cancer treatments will more than likely lose money as well because longer-term treatments yield more revenue than shorter-term treatments (which is what a cancer cure is assumed to be). Whether or not these losses will result in further expanded economic damage to society, through things like a cutback in the existing workforce, is unclear.

It is also important to address the fate of money devoted to researching a cancer cure. A vast majority of the money that funds cancer research comes from the government through the National Institutes of Health (NIH), somewhere in the neighborhood of at least $5.6 billion annually.10 Therefore, it is reasonable to suggest that this money could be redirected from the NIH general fund to other worthy grant applicants. However, whether or not this will occur is unknown because of possible future cutbacks in NIH funds. For example, some members of Congress may simply decide to eliminate a large amount of past cancer funding from the NIH in an inaccurate and foolish attempt to “cut government pork/waste”. While it is difficult for a legislator to cut funds for cancer research from a public relations standpoint, the public would not be so resistant to a cut in cancer funds that would have been redirected to, say, Chronic Fatigue Syndrome or something else.

The money derived from charitable organizations like the American Cancer Society is less likely to be recovered. Clearly the development of a cancer cure would heavily limit the continued functionality of such organizations. Most people who donate to cancer charities do so because of a personal connection to cancer: either they had it or they know someone who has/had it. Therefore, it is unlikely that these individuals will donate to other charities if such a personal connection does not exist; they do not have a “charity donation quota” that needs to be met each year. Overall it stands to reason that most of the government funds for future cancer research will be directed to other medical research (unless Congress gets involved) and most of the private funds that would have gone to cancer research will not go towards other medical research. Some of these private funds could be invested in the economy, yielding positive results, but the extent of benefit from such investment is unclear.

Part of the economic analysis associated with the possible societal influence of a cure for cancer is based on the assumptions that the cure would be widely available and would treat most, if not all, of the major forms of cancer. Initially it could be argued that the wide-availability assumption is not appropriate. However, the reasoning behind including this assumption is as follows. In the developed world it stands to reason that major insurance companies would cover the cost of the cure. This reasoning makes sense on two fronts: first, it is highly probable that a cure for cancer would cost less than the existing standard cancer treatment, thus it makes direct business sense for insurance companies that cover existing cancer treatments to shift coverage plans to cover the cancer cure.

Second, from a public perception standpoint, any insurance company not providing coverage for a cancer cure would more than likely be effectively ostracized from the marketplace, with consumers avoiding that company in favor of one that does provide cancer cure coverage (recall that the assumption for this discussion is a cancer cure, not a cancer vaccine, thus people will still develop cancer). For the developing world it seems likely that various NGOs and charities would cover a majority of cancer cases within significant cost parameters.

So the general summary of changes to society with the development of a cancer-cure-like treatment is as follows:

- Approximately 7.6 - 7.9 million people per year no longer die (assuming that the cure is available to all; that estimate may be on the low end if cancer rates and deaths increase in the future, as they are expected to);
- A vast majority of the direct costs associated with cancer treatment become increased profit for various insurance companies and will not result in an increased benefit to society;
- The growth/loss transfer to the general economy associated with indirect costs is unclear due to the simplistic level of modeling associated with cost replacement in the models themselves;
- The increased population will increase strain on existing resources, especially in developing countries, which have the majority of cancer deaths, as well as increase competition between individuals more than likely decreasing the quality of life of a larger number of individuals;
- There may be a contraction in employment at hospitals and other medical service companies, especially in high healthcare cost countries like the United States;
- Redirection of government grants and other monetary awards from cancer research to other medical conditions/treatments should occur for government funds (again depending on how Congress acts), but not for private funds.

Understand that this blog post is not suggesting that the medical research community stop pursuing better cancer treatments, including one that may result in a cure, but instead is raising the important discussion point of how society may change in response to a cure. These changes are important to consider because of the momentous influence cancer, as a physical disease and a psychological condition, has on modern society. Cancer is in a rather unique position as a disease because, while increasing age does technically increase the probability of development, no other disease permeates human health across all age groups at such a scale; most other major diseases chiefly afflict the elderly population. Therefore, eliminating this influence of cancer will cause significant change that must be accounted for and properly addressed to maximize the benefit of curing cancer.

Overall it is troubling that no one really discusses these potential changes. Perhaps society is functioning under the belief that a cure for cancer is still decades away, so addressing potential problems stemming from a cure is not necessary, or that there will be no problems stemming from a cure. Neither of these explanations makes sense, because there will be problems, and even if a cure is decades away, developing a methodology for analyzing how society could change is beneficial. Therefore, there is no excuse for the lack of attention being paid to possible negative societal changes that could be brought on by the development of a cure for cancer. As a starting point, the most pressing issue for study is how the increased population will increase competition and possibly influence society as a whole more negatively than if cancer is not cured.

Citations –

1. American Cancer Society. Cancer Facts & Figures 2012. Atlanta: American Cancer Society; 2012, page 1.

2. Bloom, D, et al. “The Global Economic Burden of Non-communicable Diseases.” Harvard School of Public Health and World Economic Forum. 2011.

3. Wikipedia Entry – Cancer. 2013.

4. World Health Organization Fact Sheet N-297. Cancer. January 2013. In conjunction with Globocan 2008, IARC, 2010. http://www.who.int/mediacentre/factsheets/fs297/en/index.html

5. Corso, P, Soyemi, A, Lane, R. “Part II: Economic Impact Analysis. Cost of Illness.” Centers for Disease Control and Prevention, Heart Disease and Stroke Prevention.

6. Hutubessy, R, et al. “Indirect costs of back pain in the Netherlands: a comparison of the human capital method with the friction cost method.” Pain. 1999. 80:201-207.

7. Johansson, P. “Is there a meaningful definition of the value of a statistical life?” J Health Econ. 2001. 20(1):131-139.

8. Viscusi, W, and Aldy, J. “The Value of a Statistical Life: A Critical Review of Market Estimates Throughout the World.” Journal of Risk and Uncertainty. 2003. 27(1):5-76.

9. World Bank. “The Effects of Pollution on Health: The Economic Toll.” Pollution Prevention and Abatement Handbook. World Bank, Washington, DC. 1998.

10. National Institutes of Health. Estimates of Funding for Various Research, Condition and Disease Categories (RCDC) between 2009 and 2014. April 2013. http://report.nih.gov/categorical_spending.aspx

Tuesday, August 20, 2013

Methane, Siberia and Bubbles

One of the more dynamic issues regarding the progression of global warming and its ability to induce detrimental effects on society is the role of methane trapped in permafrost on land and methane hydrates in the ocean, and their release into the atmosphere as surface and ocean temperatures increase. Methane garners such attention because some are concerned that once a significant and consistent amount of methane starts to discharge from natural sources, a runaway effect will begin, dramatically increasing the probability of detrimental environmental damage. Unfortunately the estimates surrounding this “tipping point” vary considerably, with large uncertainty, because no one actually understands how the environment will respond once these methane sources start emitting.

While methane trapped in surface permafrost is important, the near-term focus should be placed on the ocean because the ocean is absorbing most of the initial additional heat created by the combustion of fossil fuels. Due to millions of years of methane accumulation1-3 and sea level change during the last glacial maximum, it is known that large amounts of methane are trapped in hydrates (clathrate structures in which methane molecules are caged within water ice) on or beneath the sea floor; however, the actual amounts, in both quantity and stability, are unknown (methane estimates range from 700 to 10,000 Pg of C).1,4-6 The stability may be unknown, but increasingly frequent plumes of methane release, especially around the Eastern Siberian Arctic Shelf (ESAS), have begun to worry scientists.7,8 The excessive greenhouse potential of large-scale methane release should be a concern, thus it is important to determine a counter-strategy for reducing the probability that significant, “tipping point” amounts of methane are released from these hydrates.

Typical hydrate formation was driven by pressure, as the melting temperatures of given compounds increase with pressure. Most of the hydrates that formed over time did so at large ocean depths (a few hundred meters below the sea floor) due to this melting temperature principle.1 The ocean water column also experiences a reduction in temperature with an increase in depth. Therefore, most methane hydrates are “protected” by a double security blanket: the higher melting point and the cooler deep ocean temperatures, further buffered by a sediment layer. There are two types of methane hydrate deposits, stratigraphic and structural, with a majority of the formations being stratigraphic, which appear to contain less methane than structural ones.9

The ESAS is especially important in the issue of methane release because its hydrates are located in much shallower water (45-50 meters) than most others: instead of relying on pressure to induce melting-temperature changes for formation, the ESAS, as well as some other parts of the Arctic, simply used existing lower temperatures as a driver for hydrate formation.1,8 Unfortunately, now with the Earth influenced by global warming, these shallower hydrates have a much higher probability of releasing methane than their deeper counterparts. Also, while some climate scientists have shown concern about the land-based permafrost in Siberia, the average temperature of the ESAS bottom seawater is 12-17 degrees C warmer than the average surface temperature over land-based permafrost,8,10,11 making methane release from the ocean more probable than release from the land.

Of course methane release from the hydrates is only the first step in producing additional atmospheric methane and aggravating global warming. There are additional “safeguards” even after methane bubbles have formed from the hydrates. First, the production of bubbles associated with melting acts to destabilize the sediment column, but fortunately the depth of sediment packing prevents such a catastrophic occurrence, typically limiting the rate of release.12 Second, because the sediment column does not collapse, it acts as a physical barrier and typically remains cold enough that methane bubbles migrating through it are dispersed.1 Third, free-flowing sulfate creates a chemical barrier that can oxidize the methane.1 Fourth, methanotrophic bacteria can react with the methane, converting it to CO2 (clearly not an ideal situation).

Not surprisingly, though, these “safeguards” are not able to neutralize all of the released methane. The probability of successful migration is largely dependent on bubble volume, as larger bubbles can create a larger pressure differential between the sediment and the water versus the bubble at its top and bottom.1,8,9 Unfortunately this critical element of bubble volume is difficult to measure or model; this is one of the elements that make it difficult to accurately portray methane release.

Despite the lack of good information for creating accurate models of methane hydrate release, there is little uncertainty that continued warming of the ocean, by releasing larger concentrations of CO2 and other greenhouse gases into the atmosphere, will result in large amounts of methane release from methane hydrates. One of the trickier elements of this situation is that CO2 mitigation is a long-term solution, not a short-term one, while methane release may be an all-term problem. The reason for this concern is that while reducing CO2 emissions will eventually result in a cooling atmosphere, oceanic release of heat through convection should proceed at a much slower pace, maintaining the threat of methane release for a considerable period of time after CO2 mitigation is completed. Therefore, a strategy for mitigating this release probability beyond mitigation of CO2 emissions must be developed.

CO2 emission reduction is a longer-term strategy, even if it occurs rapidly, because of the physics of ocean heating and cooling. The surface layer of the ocean is warmed by sunlight penetration, typically increasing the ocean surface temperature beyond that of the atmosphere above, leading to heat loss. The rate of heat transfer is determined by the temperature gradient of the “cool skin layer”, a thin viscous region of the ocean (0.1 to 1 mm thick) that is in contact with the atmosphere.13,14 Due to the heat transfer between the atmosphere and the cool skin layer, water molecules are forced together in a more organized formation, limiting heat transfer to conduction only. When dealing with the thermodynamics of conduction, temperature gradients are critical.

The addition of excess greenhouse gases to the atmosphere traps heat and redirects a portion of that heat back to Earth, including the ocean surface. This heat only penetrates the “cool skin layer”, warming the top portion of the layer and changing the temperature gradient.13,14 The change decreases the gradient between the atmosphere and the top portion of the “cool skin layer” and increases the gradient between the top and bottom portions of the layer. Due to these gradient changes, heat travels between the top and bottom portions of the “cool skin layer”, reducing the probability that heat is expelled back into the atmosphere. Thus the greenhouse gases have to be eliminated (through technological or natural processes) after mitigation to allow the ocean-atmosphere gradient to normalize so the ocean can start expelling heat into the atmosphere on a consistent basis.
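The gradient argument above can be illustrated with Fourier's law of conduction, q = k·ΔT/d; the layer thickness and the temperature differences below are hypothetical round numbers chosen only to show the direction of the effect, not measured values.

```python
# Illustration of conduction across the "cool skin layer": heat flux
# is proportional to the temperature gradient, so narrowing the
# ocean-to-atmosphere gradient cuts the outgoing flux.

def conductive_flux(k, delta_t, thickness):
    """Fourier's law: heat flux (W/m^2) = k * delta_T / thickness."""
    return k * delta_t / thickness

K_WATER = 0.6     # W/(m*K), approximate thermal conductivity of water
D = 0.5e-3        # 0.5 mm, within the 0.1-1 mm skin-layer range above

# Hypothetical 0.3 K ocean-to-atmosphere gradient across the layer:
baseline = conductive_flux(K_WATER, 0.3, D)
# Greenhouse warming of the top of the layer shrinks that gradient,
# e.g. to 0.2 K, reducing heat expelled to the atmosphere:
warmed = conductive_flux(K_WATER, 0.2, D)
print(baseline, warmed)
```

Because the flux scales linearly with the gradient, even a small warming of the top of the skin layer measurably slows the ocean's ability to shed heat, which is the mechanism the paragraph describes.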

There are two general short-term strategies for reducing the probability of methane release into the atmosphere: prevent the methane hydrate from melting in the first place, or prevent the methane from reaching the surface and entering the atmosphere after melting. Despite potential protests from certain parties, the execution of these strategies will entail technological techniques that can be regarded as geoengineering. Two strategies come to mind for preventing the hydrates from melting: cloud thickening and increasing ocean surface albedo. One important aspect of strategy selection is to focus on locality to limit costs and increase efficiency. Such a consideration handicaps the injection of sulfuric aerosols into the atmosphere to promote cooling because, over a significant period of time (multiple years), it is nearly impossible to maintain localization of these aerosols due to wind currents; thus the aforementioned two options become the most attractive.

Fortunately, because late fall, winter and early spring temperatures in the ESAS pose no threat of inducing methane hydrate melting, any executed strategy would only need to be administered at most five months per year (early May to early October). The chief advantage of cloud thickening is its easy execution utilizing wind-propelled ships with reactants that are not environmentally detrimental, and in very limited testing it seems to reduce atmospheric temperatures. The chief disadvantage of cloud thickening is that it is a catalytic agent in that it only thickens existing clouds; it cannot create clouds in clear skies. Therefore, while this catalytic element is not a significant problem when considering cloud thickening for a global solar radiation management strategy, its inconsistency could be significantly detrimental for a local strategy.

Increasing ocean surface albedo is a little trickier because there are two chief possibilities: increase ice coverage or increase wake formation. Increasing ice coverage is almost a non-starter because it would involve fighting against decades of additional absorbed oceanic heat that has been reducing Arctic ice coverage, including in the ESAS. Therefore, the increased albedo must come from something else; one possibility is increasing wake formation. While the process of creating a wake is theoretically simple (propeller-generated vortices pressurize air, creating submerged bubbles that rise to the surface),15,16 its overall reliability is questionable. For example, generating the speed necessary to produce wakes from ships would be counterproductive due to the negative elements associated with fueling those ships (relying on wind would not be appropriate).

Therefore, instead of directly applying a wake, one can indirectly create a wake through the production of a surface bubble layer. Bubbles require little energy to create, thus the operational costs for such a system are low.17,18 Bubbles increase ocean surface albedo by increasing the reflected solar flux, providing voids that backscatter light.17 In addition, modeling the reflective behavior of bubbles is similar to that of aerosol water drops because light backscattering is cross-sectional rather than mass or volume dependent and the spherical voids in the water column have the same refractive index characteristics. Note that ocean surface albedo varies with the angle of solar incidence. Common values are less than 0.05 at noon, below 0.1 at a 65-degree solar zenith angle, and a maximum albedo ranging from 0.2 to 0.5 at a solar zenith angle of 84 degrees.19-22
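As a rough illustration of how these zenith-dependent albedo values interact with the weakening incident sunlight at low sun angles, the sketch below scales a hypothetical 1000 W/m^2 surface flux by the cosine of the zenith angle; the albedo pairings are round numbers drawn from the ranges quoted above.

```python
import math

# Reflected flux = albedo * incident flux, with incident flux scaled
# by the cosine of the solar zenith angle. The 1000 W/m^2 figure is a
# hypothetical round number for clear-sky surface irradiance.

def reflected_flux(zenith_deg, albedo, surface_irradiance=1000.0):
    """Reflected solar flux (W/m^2) at a given solar zenith angle."""
    incident = surface_irradiance * math.cos(math.radians(zenith_deg))
    return albedo * incident

# Noon (zenith ~0), 65 degrees, and 84 degrees, using albedos from
# the ranges quoted in the text:
for zen, alb in [(0, 0.05), (65, 0.1), (84, 0.35)]:
    print(zen, round(reflected_flux(zen, alb), 1))
```

The takeaway is that the high albedo near the horizon acts on a much smaller incident flux, so raising the midday albedo (which is what a bubble layer targets) matters most in absolute terms.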

Experiments have already demonstrated the creation of hydrosols from the expansion of air-saturated water moving through vortex nozzles, which apply the appropriate level of shearing force to create a swirling jet of water.18 Also, by using an artificial two-phase flow, smaller microbubbles can be created, to the point of even forming interfacial films through ambient fluid pressure reduction.19 Microbubbles can possibly form these films because they typically last longer than visible whitecap bubbles, which rise and burst in seconds. Note that whitecaps are froth created from breaking waves and can increase ocean albedo up to 0.22 from the common 0.05-0.1 values.23

While whitecaps from waves and wakes do provide increased surface albedo, the effect is ephemeral. Microbubble lifespan can be influenced by local surfactant concentration, and fortunately the ESAS has a limited surfactant concentration, thus granting more control in the process of creating those bubbles (fewer outside factors that could unduly influence bubble lifespan). For example, if these bubbles are created through technological means, additional elements can be added to the reactant water, like a silane surfactant, that could add hours to the natural lifespan.24 Bubble lifespan is probably the most important characteristic for this form of ocean albedo increase.
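Bubble lifespan is tied to how quickly bubbles rise out of the surface layer. A back-of-the-envelope Stokes-drag estimate (a standard low-Reynolds-number approximation; the seawater property values below are assumed, not from the cited sources) shows why micron-scale bubbles persist far longer than millimeter-scale ones:

```python
# Back-of-the-envelope sketch: Stokes rise velocity for small bubbles.
# Valid only at low Reynolds number (it overestimates speed for the
# largest size shown); property values are typical for cold seawater.

G = 9.81            # gravity, m/s^2
RHO_WATER = 1027.0  # seawater density, kg/m^3
RHO_AIR = 1.2       # air density, kg/m^3
MU = 1.8e-3         # dynamic viscosity of cold seawater, Pa*s

def stokes_rise_velocity(radius_m):
    """Terminal rise velocity (m/s) of a small spherical bubble, Stokes regime."""
    return (2.0 / 9.0) * (RHO_WATER - RHO_AIR) * G * radius_m**2 / MU

for r in (10e-6, 50e-6, 500e-6):  # 10 um, 50 um, 0.5 mm radii
    v = stokes_rise_velocity(r)
    print(f"radius {r*1e6:6.0f} um -> rise ~{v*1000:.2f} mm/s")
```

Because rise velocity scales with the square of the radius, a 10-micron bubble rises orders of magnitude more slowly than a whitecap-scale bubble, which is consistent with the longer residence times noted above.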

Another method for creating microbubbles comes from the biomedical engineering and biology arena, where microfluidic procedures and sonication are used to enhance surfactant monolayers that stabilize microbubble formation.25 However, there are two common concerns about this method. First, it is used primarily in the laboratory, largely for diagnostic and therapeutic applications, not in the field; therefore there may be questions about the transition to field deployment. Second, while sonication increases stabilization time, it limits control of the microbubble size distribution, which limits the total reflectiveness of the bubbles.26,27

An expanded and newer laboratory technique, electrohydrodynamic atomization, generates droplets of liquid and applies coaxial microbubbling to facilitate control over microbubble size. Unfortunately, one concern with this technique is that while, as mentioned above, the ideal bubble size is on the order of microns, it is currently only able to create single-digit-millimeter-sized bubbles.25 However, the increased size may be offset by the increased stability of the bubble (less overall reflection, but longer residence times). Comparison testing will be required to make the appropriate judgment.
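The size-versus-reflectivity trade-off can be sketched from the cross-section argument made earlier: for a fixed volume of injected air, the total geometric cross-section, and hence (under a simple geometric-optics assumption) the backscatter, scales inversely with bubble radius:

```python
# Illustrative scaling argument (assumption: backscatter proportional to
# total geometric cross-section): for a fixed volume of injected air,
# total bubble cross-section scales as 1/radius, so micron-scale bubbles
# reflect far more per unit of air than millimeter-scale ones.

import math

def total_cross_section(air_volume_m3, radius_m):
    """Summed geometric cross-section (m^2) of equal-sized bubbles
    holding a fixed total air volume."""
    n_bubbles = air_volume_m3 / ((4.0 / 3.0) * math.pi * radius_m**3)
    return n_bubbles * math.pi * radius_m**2  # simplifies to 3V/(4r)

v = 1.0  # 1 m^3 of injected air
micron = total_cross_section(v, 10e-6)  # 10 um bubbles
milli = total_cross_section(v, 1e-3)    # 1 mm bubbles
print(f"10 um: {micron:.0f} m^2, 1 mm: {milli:.0f} m^2, ratio ~{micron/milli:.0f}x")
```

This is why the millimeter-scale output of electrohydrodynamic atomization matters: per unit of air, millimeter bubbles present roughly a hundredth of the reflective area of ten-micron bubbles, so their extra stability must compensate for a large optical penalty.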

Initially the idea of cloud brightening was dismissed above because it acts catalytically rather than as an inherent driving reactant, but this dismissal was based on cloud brightening as a standalone application in the ESAS. However, cloud brightening could be a useful secondary component to a microbubble system. Another point of note is that over time microbubble surface application will hasten cooling of the ocean, especially the surface, which should increase CO2 retention capacity. Basically, increasing ocean albedo should result in a very small localized increase in CO2 absorption, slightly increasing ocean acidity.

Most of the above discussion has centered on preventing the hydrates from thawing rather than preventing the released methane from reaching the atmosphere. The reason for this focus is quite obvious: preventing thawing is easier than preventing released methane from reaching the atmosphere, especially in the oceanic environment itself. There are two main methane "removal" reactions utilized by nature, and neither one is appealing for eliminating ocean-borne methane.

The first and principal method of elimination involves the reaction of methane with a free hydroxyl radical (•OH) in the troposphere or stratosphere, creating water vapor and a methyl radical (•CH3). This methyl radical usually undergoes further reactions to form formaldehyde. While this reaction almost exclusively occurs in the upper atmosphere, transferring it to the ocean in some form would not improve the situation. Methane can also react with natural chlorine gas to produce chloromethane and hydrochloric acid (free radical halogenation), but this is another atmospheric reaction that probably cannot be effectively transferred to an ocean medium.

The chief ocean methane reaction involves metabolization by microorganisms known as methanotrophs (or methanophiles). There are two major types of methanotrophs (ribulose monophosphate users and serine carbon assimilators) divided into numerous additional groups, which use two principal reactions with selectivity governed by the availability of oxygen.28 Note that methanotrophs are also found in soils and landfills, and aerobic and anaerobic methanotrophs belong to different families.28 Both the aerobic and anaerobic basic reactions are shown below (the reactions have numerous intermediates, and their efficiency is largely based on what type of methane monooxygenase (MMO) enzyme is utilized):28,29

CH4 + 2O2 → 2H2O + CO2 (1)

CH4 + SO4(2-) → HCO3- + HS- + H2O (2)

The aerobic reaction is the principal reaction of the two, but it has two drawbacks. First, its consumption of oxygen limits its ability to scale to large amounts of methane released from melting hydrates due to the creation of oxygen-limited regions (a.k.a. dead zones). However, the magnitude of this drawback is limited in the ESAS because of the limited amount of life there by scale. Second, the reaction produces CO2, which may be a net detriment overall because of the lifespan of CO2 and increasing ocean acidity. Unfortunately this drawback is not as limited as the first because ocean mixing will de-localize the increase in acidity. The scaling efficiency of the anaerobic reaction is the chief problem with its use because most of the oceanic available SO4 is located near the ocean floor, which limits its usefulness once the methane release concentration begins to increase significantly. Therefore, it does not appear that either relying on existing methanotrophic or other methane-oxidizing bacteria or attempting to increase their numbers will be an effective strategy for addressing methane hydrate melting.
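The oxygen and CO2 drawbacks of the aerobic pathway can be quantified directly from the stoichiometry of reaction (1). A quick sketch (standard molar masses; ignores intermediates and incomplete oxidation):

```python
# Stoichiometry check for reaction (1): CH4 + 2O2 -> CO2 + 2H2O.
# Per tonne of methane oxidized aerobically, how much O2 is consumed
# and how much CO2 is produced? Molar masses in g/mol.

M_CH4, M_O2, M_CO2 = 16.04, 32.00, 44.01

def aerobic_oxidation_budget(tonnes_ch4):
    """Return (tonnes of O2 consumed, tonnes of CO2 produced)."""
    mol_ch4 = tonnes_ch4 * 1e6 / M_CH4   # tonnes -> grams -> moles
    o2_t = 2 * mol_ch4 * M_O2 / 1e6      # 2 mol O2 per mol CH4
    co2_t = mol_ch4 * M_CO2 / 1e6        # 1 mol CO2 per mol CH4
    return o2_t, co2_t

o2, co2 = aerobic_oxidation_budget(1.0)
print(f"per tonne CH4: ~{o2:.2f} t O2 consumed, ~{co2:.2f} t CO2 produced")
```

Every tonne of methane oxidized this way draws down roughly four tonnes of dissolved oxygen and adds nearly three tonnes of CO2, which is the quantitative basis for both the dead-zone and acidification concerns above.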

There is some question whether methane in the ESAS is a genuine threat. Air sampling surveys have revealed large variability in methane concentrations versus the standard global background concentration of 1.85 ppm, with average increases of 5-10%. Some have also calculated a total methane flux from the ESAS of 10.64 million tons of methane per year.30 However, modeling studies suggest that permafrost lags behind changes in surface temperature, thus current outgassing is tied to long-lasting warming initiated by permafrost submergence approximately 8,000 years ago rather than to recent Arctic warming.31 Such a conclusion is possible, but the rationale seems far-fetched due to the magnitude of the time lag.
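For scale, the quoted flux can be converted into an annual contribution to the global mean methane mixing ratio, assuming (unrealistically) instantaneous uniform mixing and ignoring atmospheric sinks:

```python
# Rough, assumption-laden conversion: if the ESAS released 10.64 million
# tonnes of CH4 per year and it mixed uniformly through the atmosphere,
# what would the annual change in the global mean mixing ratio be?
# Sinks (notably the OH reaction discussed above) are ignored.

ATM_MASS_KG = 5.15e18  # approximate total mass of the atmosphere
M_AIR = 28.97          # mean molar mass of dry air, g/mol
M_CH4 = 16.04          # molar mass of methane, g/mol

def flux_to_ppb_per_year(megatonnes_ch4):
    """Annual well-mixed mole-fraction increase, in ppb."""
    mol_ch4 = megatonnes_ch4 * 1e12 / M_CH4  # Mt -> g -> mol
    mol_air = ATM_MASS_KG * 1e3 / M_AIR      # kg -> g -> mol
    return mol_ch4 / mol_air * 1e9

dppb = flux_to_ppb_per_year(10.64)
print(f"~{dppb:.1f} ppb per year against a ~1850 ppb background")
```

A few ppb per year is small against the 1.85 ppm background, which is consistent with the view that the current flux is a warning sign rather than an emergency; the concern is the much larger hydrate inventory behind it.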

Overall the threat of significant methane release in the ESAS is legitimate, but not one that demands immediate strategy implementation. While immediate implementation is not required, strategies to address this melting possibility must be studied now to ensure a solid and effective plan when the time for implementation comes, which appears to be soon. Currently, local production of microbubbles by some form of floating device (a buoy, for example) initially appears to be the best strategy for preventing methane release, but as mentioned, future study must be conducted to confirm this promise.

Works Cited

1. Archer, D, Buffett, B, and Brovkin, V. “Ocean methane hydrates as a slow tipping point in the global carbon cycle.” PNAS. 2009. 106(49): 20596-20601.

2. Davie, M, and Buffett, B. “A numerical model for the formation of gas hydrate below the seafloor.” J Geophys Res. 2001. 106:497–514.

3. Dickens, G. “Natural Gas Hydrates: Occurrence, Distribution and Detection.” (American Geophysical Union, Washington, DC). 2001. 124:19–38.

4. Milkov, A. “Global estimates of hydrate-bound gas in marine sediments: How much is really out there?” Earth-Sci Rev. 2004. 66:183–197.

5. Dickens, G. “The potential volume of oceanic methane hydrates with variable external conditions.” Org Geochem. 2001. 32:1179–1193.

6. Holbrook, W, et al. “Methane hydrate and free gas on the Blake Ridge from vertical seismic profiling.” Science. 1996. 273:1840–1843.

7. Archer, D. “Destabilization of methane hydrates: a risk analysis.” A Report Prepared for the German Advisory Council on Global Change. 2006. (40pp).

8. Shakhova, N, et al. “Extensive methane venting to the atmosphere from sediments of the East Siberian Arctic Shelf.” Science. 2010. 327:1246-1250.

9. Milkov, A, and Sassen, R. “Economic geology of offshore gas hydrate accumulations and provinces.” Mar Petrol Geol. 2002. 19:1–11.

10. Romanovskii, N, et al. “Offshore permafrost and gas hydrate stability zone on the shelf of the East Siberian Seas.” Geo-Marine Letters. 2005. 25:167-182.

11. Flemings, B, Liu, X, and Winters, W. “Critical pressure and multiphase flow in Blake Ridge gas hydrates.” Geology. 2003. 31:1057–1060.

12. Kayen, R, and Lee, H. “Pleistocene slope instability of gas hydrate-laden sediment of Beaufort Sea margin.” Mar Geotech. 1991. 10:125–141.

13. Fairall, C, et al. “Cool-skin and warm-layer effects on sea surface temperature.” J. of Geophysical Research. 1996. 101(C1):1295-1308.

14. Painting, R. “How increasing carbon dioxide heats the ocean.” Skeptical Science. October 18, 2011. http://www.skepticalscience.com/print.php?n=939

15. Reed, A. and Milgram, J. “Ship wakes and their radar images.” Annu. Rev. Fluid Mech. 2002. 34:469–502.

16. Gatebe, C, et al. “Effects of ship wakes on ocean brightness and radiative forcing over ocean.” Geophysical Research Letters. 2011. 38:L17702.

17. Seitz, F. “On the theory of the bubble chamber.” Physics of Fluids. 1958. 1: 2-10.

18. Seitz, F. “Bright Water: hydrosols, water conservation and climate change.” 2010.

19. Evans, J.R.G, et al. “Can oceanic foams limit global warming?” Clim. Res. 2010. 42:155-160.

20. Davies, J. “Albedo measurements over sub-arctic surfaces.” McGill Sub-Arctic Res Pap. 1962. 13:61–68.

21. Jin, Z, et al. “A parameterization of ocean surface albedo.” Geophys Res Letters. 2004. 31:L22301.

22. Payne, R. “Albedo of the sea surface.” J Atmos Sci. 1972. 29:959–970.

23. Moore, K, Voss, K, and Gordon, H. “Spectral reflectance of whitecaps: Their contribution to water-leaving radiance.” J. Geophys. Res. 2000. 105:6493-6499

24. Johnson, B, and Cooke, R. “Generation of Stabilized Microbubbles in Seawater.” Science. 1981. 213:209-211

25. Farook, U, Stride, E, and Edirisinghe, J. “Preparation of suspensions of phospholipid-coated microbubbles by coaxial electrohydrodynamic atomization.” J.R. Soc. Interface. 2009. 6:271-277.

26. Wang, W, Moser, C, and Weatley, M. “Langmuir trough study of surfactant mixtures used in the production of a new ultrasound contrast agent consisting of stabilized microbubbles.” J. Phys. Chem. 1996. 100:13815–13821.

27. Borden, M, et al. “Surface phase behaviour and microstructure of lipid/PEG emulsifier monolayer-coated microbubbles.” Colloids Surf. B: Biointerfaces. 2004. 35:209–223.

28. Lösekann, T, et al. “Diversity and abundance of aerobic and anaerobic methane oxidizers at the Haakon Mosby Mud Volcano, Barents Sea.” Applied and Environmental Microbiology. 2007. 73(10):3348-3362.

29. Wikipedia Entry – Methanotroph.

30. Shakhova, N, et al. “Anomalies of methane in the atmosphere over the East Siberian shelf.” Geophysical Research Abstracts. 2008. 10:EGU2008-A-01526.

31. Dmitrenko, I, et al. “Recent changes in shelf hydrography in the Siberian Arctic: Potential for subsea permafrost instability.” Journal of Geophysical Research. 2011. 116:C10027.

Tuesday, August 13, 2013

Addressing Improper Patent Litigation

Non-practicing entities (NPEs) or patent assertion entities (PAEs) are the organizations most commonly lumped into the debate about patent trolls. Patent trolls are regarded as individuals or companies that have little to no research base, but purchase patents from other companies and then enforce those patents against alleged infringers in an overly aggressive and illegitimate manner because the patent holder has no intention of making commercial use of the patent themselves. Instead the threat of litigation is typically used to extort higher-than-appropriate licensing fees so the NPE can collect a quick dollar. While some praise NPEs for allowing startups and other small businesses an opportunity to better enforce their patents, enforcing a patent with no intention to commercialize or license the patented idea/technology at a fair price defiles the very purpose of the patent and is an inherent detriment to society both economically and culturally. Unfortunately there are too many individuals/companies that care only about their personal finances, not about the positive development of society; therefore, it is important to develop strategies that eliminate the benefits of behaving like a patent troll. Note that not all NPEs are patent trolls.

In addition to erecting an unnecessary cultural and creative developmental barrier, the quantitative economic costs facilitated by patent trolls are enormous, estimated at 29 billion dollars in the U.S. in 2011 (note this estimate does not include opportunity costs, only direct capital costs).1 In fact “patent trolling” was thought to make up approximately 61% of all patent litigation in 2011 and 2012.2 However, this economic damage could have been worse if not for the ruling in eBay v. MercExchange in 2006, which eliminated blanket permanent injunctions simply for patent infringement, instead reaffirming the necessity of the four-factor judgment when determining the level and length of injunction, if any at all.3 This decision limited the ability of patent trolls to threaten permanent injunctions as a negotiation tool during possible licensing agreements, which is thought to have limited the boldness of settlement demands.

Another problem with patent trolls is that the structure of their organization limits the “defensive” options of the alleged infringer. Because the troll does no manufacturing, the alleged infringer can only monitor competitor activity and existing patents by searching patent databases, thus infringement might only be recognized after significant investment is made in production and infrastructure. Also, counter-suits typically lack significance against individuals/companies that generate income from litigation rather than direct commerce. Litigation costs for these types of plaintiffs are typically less than for the defense, limiting mutually-assured-destruction tactics (i.e. litigation costs bankrupting both parties). Finally, patent misuse claims are also difficult because they require anti-trust violations born from manufacturing dominance, which a patent troll lacks due to its limited or non-existent manufacturing base.

In one attempt to combat patent trolls, in September 2011 the Leahy-Smith America Invents Act (AIA) officially became law, with its central provisions going into effect last March (officially March 16, 2013). The most important features of this legislation are a change in patent prominence from a “first to invent” (FTI) to a “first inventor to file” (FITF) system, which also eliminated interference proceedings, and an expansion of post-grant opposition options for outside parties. While this legislation has good intentions, there is little reason to suspect that it will significantly reduce improper patent litigation over the coming years; in fact it may create more problems than solutions in the overall patent environment.

For example, switching prominence from FTI to FITF demonstrates no real advantages (it is somewhat ironic that FITF is the system utilized by the rest of the world sans the Philippines). The original idea was to eliminate interference costs and procedures as well as reduce costs and increase efficiency in acquiring foreign patents. FTI focuses on when a patent applicant can prove “invention” of the idea through documentation of when it was put into practice (i.e. prototype construction). If two inventors file patent applications on the same invention, an interference hearing is conducted to determine which inventor conceived of the patentable concept first in appropriate form. Basically, even if inventor A applied for a patent a month after inventor B, if inventor A created a prototype two years before inventor B, inventor A would be first in line for receiving the patent. Interestingly enough, while interference hearings can be expensive, they are actually quite rare relative to the number of patent applications filed each year.

Under FITF, interference costs are eliminated because the date of idea conception is no longer relevant; only the official date of application for the patent matters. Unfortunately this system gives rise to derivation proceedings, which can be more expensive than interference hearings. Derivation proceedings are exactly what they sound like: individual/company A has a patent application challenged by individual/company B on the grounds that individual/company A did not independently conceive of the patentable idea. Most derivation proceedings will focus on the petitioner claiming that the defendant developed the idea from ideas conceived by the petitioner.

One of the chief problems with FITF is that the need for the inventor to file first creates a “race to the Patent Office” mentality, which more than likely will decrease patent quality (ideas must be kept secret for fear of being scooped on the patent) and increase overall work for the USPTO, as applicants file early and broadly for fear that similar ideas may cast a wide net eliminating the patentability of their own. In these patent office races, larger companies with in-house patent lawyers have a huge advantage over smaller companies that have to contract out patent applications to independent patent lawyers. The idea of “best mode” disclosure is also handicapped by FITF. Also, new prior art published after the invention date but before the filing date could eliminate the validity of otherwise patentable ideas. Finally, it will be interesting to see whether any patent attorneys are sued by a prospective patent holder for filing the appropriate patent application paperwork after a second company despite receiving the material earlier.

Some could counter that FITF eliminates hidden prior art and its ability to unfairly restrict other patent applications. Hidden prior art consists of inventions that have no patent application filing, thus cannot be found through a database search, but under FTI would force the rejection of all relevant patents issued before the filing but after the invention. However, addressing this fear by changing FTI to FITF is similar in ridiculousness to how Republicans try to deal with alleged voter fraud by forcing the use of voter IDs. Basically, both systems punish millions of individuals/companies for the bad behavior of, say, seven people because both incidents of violation are so incredibly rare. In both situations there are much better and fairer solutions if one is genuinely interested in addressing those rare outliers. Overall, changing FTI to FITF was perplexing and more than likely will create more problems than solutions.

Some believe that the AIA policy of restricting plaintiffs from filing large multi-defendant lawsuits, forcing them instead to file multiple serial lawsuits, will limit litigation abuse of bad patents. Other elements thought to be limiting are executive orders requiring the USPTO to create a structured policy governing more frequent updates to patent ownership and requiring NPEs to publicly file their demand letters. Unfortunately there are problems with expecting significant limitation from these new policies.

First, NPEs have already adjusted to the lack of multi-defendant filing capacity by asking courts to consolidate pre-trial proceedings through multi-district litigation filings (a strategy allowed by the United States Judicial Panel on Multidistrict Litigation), which limits the disadvantages of disallowing multi-defendant suits. Second, the public demand letter issue is rather irrelevant because NPEs can simply file a lawsuit before sending a demand letter, eliminating any detriment born from a public announcement, especially because the threat of a lawsuit is the power behind the letter and lawsuits are much less expensive for NPEs. Third, these changes do not influence the use of “exclusive licensees” as the motivator behind the initiation of infringement litigation.

The expansion of post-grant opposition options also appeared positive initially, but has not produced and probably will not produce the anticipated results. The principal element of the post-grant opposition changes is allowing a third party to challenge patent validity. There are three significant elements to this third-party challenge potential: 1) pre-issuance third-party submissions; 2) post-grant review; 3) inter partes review. Unfortunately, despite these changes, it is questionable whether they will reduce the approval of bad patents or litigation involving bad patents.

First, for pre-issuance third-party challenges, it is rare that a third party will have sufficient knowledge of ongoing patent applications, unless that third party is a plant or sponsored by some organization, because such applications are difficult to locate and track. Second, most patents only become relevant after an individual or company begins producing a marketable product, not while a patent is pending. This behavior also limits the effectiveness of the time period (nine months) associated with the post-grant review process. After the expiration of the nine-month initial post-grant review period, the AIA allows for inter partes review. However, this type of review is not the same as the current inter partes review in the reexamination process, as judgment responsibilities are transferred from the USPTO to administrative law judges on the Patent Trial and Appeal Board and the review standard is raised from “substantial new question of patentability” to “reasonable likelihood” with regard to whether or not the patent will be overturned upon actual reexamination.4 These two changes should reduce the number of inter partes reexaminations granted due to third-party request.

Furthermore, reexamination requests are typically utilized as a supplement to patent litigation, not a substitute for it. For example, in 2008, 30% of ex parte reexaminations and 62% of inter partes reexaminations were enacted due to pending litigation.5 Also, expansion of post-grant reexaminations could hurt the patent process as companies gain greater opportunity to file oppositions for the purpose of wasting the time and financial resources of their competitors, reducing their ability to compete. A quick side note: ex parte reexaminations of a given patent are triggered by petition of a third party, the patent holder, or the director of the USPTO, but once the examination begins, only the holder and the assigned patent examiner conduct the reexamination. Inter partes reexaminations can be triggered similarly, but third parties are allowed to participate in the reexamination.

Finally, one of the most important aspects of these new conditions is that there is no change to, or hastening of, the review process itself. While the legislation attempts to install a maximum time period of 18 months for disposition, few actually believe the USPTO will be able to conduct an effective review in such a time period. Existing inter partes reviews typically take 34 to 53 months for non-reworked patents without appeals to be settled and five to eight years for appealed cases.6 The demanded “special dispatch” status of inter partes reexaminations is irrelevant because the USPTO is overworked and because the AIA expands inter partes review to include discovery and a hearing before the Patent Trial and Appeal Board, further increasing the time taken to resolve these types of reviews.4,6

In addition, when infringement litigation and inter partes reviews are co-pending, federal courts can grant a stay (the probability that a stay is granted depends largely on the court), which eliminates the ability of the patent owner to collect damages from the potential infringer in the interim. Thus smaller companies/individuals may be placed under great financial strain and have to declare bankruptcy before a ruling, thereby mitigating any value in eventually receiving redress from victory in the infringement litigation. Overall, while some thought the AIA would be useful for battling NPEs, most of its provisions appear to be more detrimental than beneficial.

One proposed solution to patent trolling litigation that has been floated is to invoke a mandatory patent reexamination for all contested patents that are the subject of litigation.4 In this proposal the plaintiff bringing an infringement lawsuit would be responsible for all elements associated with the reexamination, including the fee. Proponents believe that the “extortion” effect of trial would be heavily mitigated by the specter of reexamination revoking the patent, thus reducing the probability that bad patent holders attempt to litigate. While this belief is true for bad patents, there are a few concerns with such a strategy as well. First, the costs associated with a reexamination are not trivial: an ex parte reexamination costs $2,250 and an inter partes reexamination costs $8,800.4 Fortunately this cost is not a large issue for most patent suit plaintiffs.

Second, it would delay the ability of the patent holder to enforce his/her patents against infringing parties. As highlighted above, both ex parte and inter partes reviews take years to complete, which could significantly handicap start-ups and small businesses against larger competitors. During this delay, larger competitors with greater manufacturing, marketing and distribution capacity could completely sweep smaller competitors from the market after copying the patented technology and/or process, crippling them to the point that even after reexamination and litigation are completed (5+ years later) the plaintiff company would be unable to recover. This scale effect is also why the elimination of process patents, championed by some, is foolish if one wants to create an environment where smaller companies have any chance to succeed against the “big boys”.

Some proponents have proposed additional safeguards, including vacating the mandatory reexamination if a preliminary injunction can be obtained or an initial showing can be made that the patent is being asserted against a direct competitor or potential competitor.4 The immediate concern is how this second voiding strategy is constructed: while most NPEs initially lack the manufacturing capacity to compete with most of the companies they litigate against, depending on how the provision is written they could “dummy” up some manufacturing to achieve this second condition, thus allowing bad patents to skip the mandatory reexamination. The key question is how this provision can be attained by a small business or start-up that may not have any manufacturing yet, but plans to in the future, while still excluding NPEs that have no future intention of manufacturing a product that utilizes the patent in question.

Third, there is a minor concern about court venue after reexamination, where the plaintiff may no longer have the ability to determine venue, instead initiating a “race to the courthouse” mentality. Fortunately this “concern” is rectified in another suggestion later.

Fourth, reexaminations do not have the same level of stringency as court proceedings regarding the types of invalidity that can be applied. For example, under 37 CFR 1.552, prior art rejections are only made on the basis of prior art patents or printed publications, not prior use, sale or inventorship. Therefore, reexamination may miss certain issues of invalidity, giving both the plaintiff and defendant the false impression that the patent is high quality entering into litigation.

Fifth, similar to the second concern, mandatory reexamination proposals would force more work on an already overtaxed USPTO. Some argue that because on average only about 2,500 patent infringement lawsuits are filed each year,7 the additional work is minimal relative to the overall number of patent applications filed each year [500,000-600,000 (applications have increased significantly in recent years)].8 Unfortunately the problem with this mindset is that the USPTO is already supersaturated in its workload, thus even a small increase in work does not produce a linear increase in processing time, but a disproportionately large one.
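The supersaturation argument can be illustrated with a toy queueing model (an M/M/1 mean-delay factor; purely illustrative, not based on actual USPTO workload data): near full utilization, the same small workload increase produces a much larger jump in delay than it does at moderate utilization:

```python
# Toy queueing illustration: in an M/M/1 system the mean time in system
# scales as 1/(1 - rho), where rho is utilization. This is a simplifying
# modeling assumption used only to show the nonlinearity, not a claim
# about actual USPTO queue behavior.

def delay_factor(utilization):
    """Relative mean time in an M/M/1 system versus an idle one."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

# Same +2 percentage-point workload bump at low vs high starting utilization:
print(delay_factor(0.52) / delay_factor(0.50))  # ~1.04x slower
print(delay_factor(0.97) / delay_factor(0.95))  # ~1.67x slower
```

An office running at 50% capacity barely notices a 2% workload bump; the same bump at 95% capacity inflates delays by two-thirds, which is the intuition behind calling even "minimal" added reexamination work a serious burden.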

Some have argued that the USPTO could address these additional issues by increasing fees for these mandatory reexaminations. There are two concerns with such a strategy: first, again, small businesses may have trouble paying the fee when enforcing their patents if unable to acquire a waiver. Second, the total additional revenue from this increase is dynamic, thus there is no guarantee regarding funding. What happens if revenue falls short in a given year; will various patent examiners become yo-yos, hired and fired on a yearly basis? Overall, a mandatory reexamination is an interesting idea, but there are some concerns, chiefly how to manage the additional workload for the USPTO and how to differentiate between patents owned by start-ups and patents owned by NPEs with no manufacturing intent.

Clearly another means to address future bad patent litigation is to reform the USPTO itself so bad patents are not granted in the first place. While this suggestion is obvious, the key to USPTO reform is simply funding. There is not enough money to hire or retain enough quality patent examiners, especially when considering the cost of living in Alexandria, Virginia and surrounding areas. The planned PTO “branch offices” are uncertain in their ability to alleviate the patent backlog. Unfortunately the U.S. Congress has not demonstrated any desire to increase USPTO funding; in fact, as time passes Congress has become more likely to cut USPTO funding than increase it. Due to this lack of funding the USPTO invoked a hiring freeze in 2009 and expanded that freeze in 2013, partially due to the Federal Government sequestration, which left the USPTO facing an approximately 140 million dollar shortfall (requiring further cuts).9

If improving monetary availability is not practical, then systematic changes within the USPTO can be administered to reduce the probability that bad patents are left to the courts for termination. For example, Congress can grant the USPTO the mandate to reject the use of broad/generic language in an application when the disclosure only focuses on a very specific improvement to an existing art. Under such a mandate the highly criticized “sealed crustless sandwich” patent would have been rejected during the initial examination instead of during reexamination. Also, the office could reorganize so that two examiners need to sign off on an application: one examiner would do all of the standard work and the second examiner would “edit” that work, so to speak. A second perspective can catch things that might have been missed or question the methodology of the first examiner. Such editing would be cursory, thus not adding significant time expenditure.

Expansion and reorganization of the USPTO search database would help both examiners and applicants regarding existing prior art and the probability that a patent will be/should be approved. Also the official declaration of specific terms for given subjects (think a glossary of sorts) should dramatically reduce the ability of certain patent holders to hide behind ambiguity to avoid prior art eliminations or expand their patent to cover more than it should.

Additionally, Congress could convene panels of industry experts to review all existing patents in their respective fields, judge whether each patent is legitimately proprietary, and recommend whether a reexamination of the given patent should occur. Industry, with its considerable profits, could fund such panels if needed, because industry is most affected by patent trolls. Such a process could clarify the legality of numerous patents, greatly increasing patent litigation efficiency and limiting total patent trolling in those subjects.

One misconception is that patent trolls prey on large corporations looking for the big payoff; in fact, because most patent trolls do not want to actually go through the process of litigation, about 55% of NPE defendants have a net worth of $10 million or less.10 The reason for this targeting is officially unknown, but presumably these smaller companies have less experience and fewer financial resources with which to engage in litigation and are more likely to avoid it by paying a large licensing fee. The large costs associated with defending an infringement suit and the uncertainty of the outcome are the chief problems with bad patents. For NPEs, plaintiff litigation costs are minimized because the lack of manufacturing and business operations limits the work hours associated with discovery and data mining. Counter-suits are typically not available to defendants, further lowering NPE costs. NPEs also consistently employ lawyers who work on contingency rather than hourly fees11 in these infringement suits, and they sue multiple defendants with the same patent infringement claim (while they can no longer sue multiple defendants in the same action, litigation “reach” has not changed).

While NPEs use the above strategies to reduce their own costs, they also apply various legal strategies to increase the defendant’s costs. First, NPEs demand maximum discovery even when most of the request is irrelevant.11 Second, they frequently assert infringement of multiple patents, which pressures the defense into greater resource expenditure. Third, because patent infringement is a federal matter the plaintiff controls the choice of jurisdiction and can select an inconvenient venue to increase transportation costs. In practice, however, this strategy is typically foregone in favor of the Eastern District of Texas: officially because of its expertise in patent issues, but realistically because it tends to favor plaintiffs in patent-based litigation.

A survey by the American Intellectual Property Law Association determined that for smaller companies a claim that would net at most $1 million generally costs $650,000 to defend, a $1 to $25 million claim generally costs $2.5 million, and a claim exceeding $25 million costs approximately $5 million.12 With these benchmarks as expected litigation costs, it is not surprising that numerous defendants sign licensing agreements at above-market rates to avoid them; even a defendant who is in the right could be crippled by the costs of patent litigation.
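The settlement pressure behind these numbers can be made concrete with a little back-of-the-envelope arithmetic. In the sketch below only the $650,000 defense cost comes from the survey cited above; the licensing demand, win probability, and damages figure are hypothetical values chosen for illustration:

```python
def rational_to_settle(defense_cost: float, licensing_demand: float,
                       win_probability: float, damages_if_lose: float) -> bool:
    """Settle whenever the demand is below the expected cost of fighting.

    Under standard U.S. practice each side pays its own legal fees, so the
    expected cost of defending = defense cost + P(lose) * damages.
    """
    expected_fight_cost = defense_cost + (1 - win_probability) * damages_if_lose
    return licensing_demand < expected_fight_cost

# A defendant 90% sure of winning a claim capped at $1 million still settles
# for a hypothetical $300k demand, because the $650k in fees alone exceeds it.
print(rational_to_settle(650_000, 300_000, 0.9, 1_000_000))  # True
```

This is why a troll’s demand is so often pitched just below the defense-cost tier: the defendant’s confidence in winning barely matters.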

One means to counter high defendant litigation costs: if the defendant is found not to have infringed and the plaintiff is found to have lacked cause to bring the suit in the first place, the plaintiff would be obligated to pay the defendant one and a half times the defendant’s court costs. This idea has been floated before, but usually only as the English system of redress, where the loser pays the court costs of the winner. The concern with limiting reimbursement to “at cost” instead of some multiplier of cost is that a majority of charged defendants would have to defend themselves and win before the rule produced a meaningful deterrent, which even with “loser pays” would still be risky given the uncertainty of court decisions. Another concern is that recently suggested legislation (i.e. SHIELD) would restrict “loser pays” to corporations, not individuals. Although it is difficult to speculate at this time, such a condition could completely negate the benefits of the system: NPEs could sell patents to individuals associated with the company, and those individuals could then file the infringement suits, avoiding the “loser pays” provision.
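The deterrence argument for a multiplier can be sketched numerically. The following is an illustrative expected-value model, not data from any study; every dollar figure and probability below is a hypothetical assumption:

```python
def npe_expected_value(p_win: float, award: float, own_cost: float,
                       defendant_cost: float, multiplier: float) -> float:
    """Expected value to an NPE of litigating one suit to judgment.

    If the NPE loses it must reimburse the defendant's costs times the
    fee-shifting multiplier: 1.0 is plain 'loser pays', 1.5 is the
    proposal above. All dollar figures are hypothetical.
    """
    return p_win * award - own_cost - (1 - p_win) * multiplier * defendant_cost

# Hypothetical suit: 45% chance of a $2M award, $200k of the NPE's own
# (already-minimized) costs, and a defendant spending $1M on its defense.
plain = npe_expected_value(0.45, 2_000_000, 200_000, 1_000_000, 1.0)  # positive
stiff = npe_expected_value(0.45, 2_000_000, 200_000, 1_000_000, 1.5)  # negative
print(plain, stiff)
```

Under these assumptions plain loser-pays still leaves the suit profitable in expectation, while the 1.5× multiplier flips it negative, which is the whole point of the multiplier.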

The federal government could also establish a specific loan for individuals/companies at 3% interest to cover court fees pertaining to patent litigation (clearly each loan would need to be evaluated for bad-patent potential). The projected interest costs should not be a significant concern for those defending against genuine patent trolls if some form of “loser pays” system exists, because even with no multiplier the interest cost will typically be less than the licensing fee demanded by the plaintiff. Another possibility is for the federal government to provide lawyers on retainer (working for the federal government) to mitigate costs for needy individuals/companies (these lawyers would also receive a 3% commission from won cases). This lawyer pool could be an interesting idea because the vast majority of costs associated with patent litigation are lawyer fees.
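A rough sense of why the interest cost is minor: assuming simple interest (a hypothetical loan structure, since the proposal does not specify one) on the $650,000 defense-cost figure cited above over a two-year case:

```python
def loan_interest_cost(principal: float, annual_rate: float, years: float) -> float:
    """Simple-interest carrying cost of a government litigation loan.

    A real program might compound, but simple interest is enough to show
    the order of magnitude.
    """
    return principal * annual_rate * years

# Borrowing $650k at 3% for two years costs roughly $39k in interest,
# well below the six-figure licensing demands described above, and it is
# recoverable anyway under any 'loser pays' arrangement.
print(loan_interest_cost(650_000, 0.03, 2))
```

In other words, the loan converts a potentially crippling lump-sum defense cost into a carrying cost an order of magnitude smaller than the typical settlement demand.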

As mentioned above, litigation history has demonstrated that plaintiffs overwhelmingly prefer the United States District Court for the Eastern District of Texas, largely because it tends to favor plaintiffs in patent-based litigation. Therefore, one means to discourage patent trolling is random selection from the venue pool, eliminating the ability of either the plaintiff or the defendant to select the venue. Removing the plaintiff’s “home court advantage” would inherently reduce the attractiveness of a frivolous patent lawsuit. Finally, one means for the government to “strong-arm” a reduction in bad patent litigation is to force licensing arbitration between conflicting parties, limiting overall extortion and litigation costs. However, such a strategy would have to be carefully constructed, because numerous groups would seek to manipulate it if enacted.

The costs of patent trolls and their actions are a drain on society from both a creative and an economic standpoint. However, one must be careful when dealing with patent trolls: while most patent trolls are NPEs, not all NPEs are patent trolls. NPEs can provide a useful service as middlemen, giving smaller companies a better opportunity to enforce their patents. Dealing with patent trolls demands dealing with bad patents. One critical element is neutralizing bad or obvious ideas before they become patents in the first place, hence more vigilance at the USPTO and the improved resources that allow for such vigilance. However, USPTO reform will not address existing bad patents. Mandatory reexaminations are one means of addressing those, but the better way may be through litigation under some form of loser-pays system with government-loaned funds. Overall, addressing patent trolls appears straightforward; society simply must be realistic about doing what needs to be done.

Citations –

1. Bessen, J and Meurer, M. “The direct costs from NPE disputes.” Boston Univ. School of Law, Law and Economics Research Paper No. 12-34 - Cornell Law Review. 2014. Vol:99. Forthcoming.

2. Goldman, D. “Patent troll: ‘I’m ethical and moral.’” CNN. July 2, 2013.

3. Wikipedia – “eBay Inc. v. MercExchange, L.L.C.” Accessed August 9th, 2013.

4. Bradford, B and Durkin, S. “A proposal for mandatory patent reexaminations.” The Intellectual Property Law Review. 2012. 52(2): 135-166.

5. Guest Post: Hot Topics in US Patent Reexamination. Patentlyo. http://www.patentlyo.com/patent/2009/03/guest-post-hot-topics-in-us-patent-reexamination.html?cid=6a00d8341c588553ef011168d0dd50970c

6. Reexamining Inter partes Reexam. Institute for Progress. http://www.iam-magazine.com/blog/IAMBlogInterPartesReexamWhitepaper.pdf

7. Lemley, M. “Where to file your patent case.” AIPLA. 2010. 401(38): 404.

8. U.S. Patent Statistics Chart Calendar Years 1963 – 2012. http://www.uspto.gov/web/offices/ac/ido/oeip/taf/us_stat.htm

9. Welcome Back Fee Diversion: USPTO Likely to Begin Sending Collected Fees back to Treasury. Patentlyo. April 2013. http://www.patentlyo.com/patent/2013/04/welcome-back-fee-diversion-uspto-likely-to-begin-sending-collected-fees-back-to-treasury.html

10. Chien, C. “Patent Trolls by the Numbers.” Patentlyo. Mar. 14, 2013. http://www.patentlyo.com/patent/2013/03/chien-patent-trolls.html

11. Sudarshan, R. “Nuisance-value patent suits: an economic model and proposal.” Santa Clara Computer and High Tech L.J. 2008. 159(25):160.

12. American Intellectual Property Law Association. “Patent litigation costs: Report of the Economic Survey.” 2011. http://www.aipla.org/learningcenter/library/books/econsurvey/2011/Pages/default.aspx