Wednesday, April 30, 2014

Changing the way drugs are patented

One of the big problems with drug research is the conflict between producing low-cost drugs that can manage or treat various medical conditions and allowing drug manufacturers to remain in business by ensuring a profit. These conflicting objectives are further stressed in third-world and developing countries that do not have robust middle-class populations able to afford high-priced medication or insurance that will cover it. However, on the issue of drug company profits, it also must be understood that drug development successes have become more difficult to achieve in modern times and that these successes must pay for the research and development behind both the successes and the failures.

Originally, full utility patents (as opposed to provisional patents) offered 17 years of market exclusivity absent negotiated licenses. However, patents stemming from applications filed on or after June 8, 1995 now carry a term of 20 years. In addition, there are various options for extending the intellectual property protection of a patent based on how long the FDA takes to approve the drug for marketing and sale, or on whether the drug falls into a specific category of treatment.

Drug companies also have other “less genuine” strategies for extending the market exclusivity of a drug, most notably “evergreening.” Evergreening typically involves making minute changes to a drug’s formulation, such as isolating a single enantiomer of a chiral drug (its left- or right-handed isomer), using different inactive components, or patenting specific hydrate forms. Both the initial exclusivity period and evergreening have been widely criticized by generic drug advocates looking to hasten the emergence of lower-cost options in the marketplace. Evergreening is viewed as especially troublesome because it is frequently thought of as gaming the system, a form of patent trolling that produces a weak secondary patent or change to prop up a nearly expired primary patent.

Unfortunately for generic drug advocates, it is difficult to expect a significant change in the general length or structure of patent protections without a trade-off because of the costs associated with drug development. Generic drug manufacturers do not have to absorb the costs associated with drug discovery, which typically involves high-throughput active compound screening and lengthy clinical trials. The overall cost of drug development is incredibly controversial, with some estimates in the upper hundreds of millions or even billions of dollars and others in the lower hundreds of millions; still, no rational person disputes that it costs at least tens to hundreds of millions of dollars to produce a new drug that fails in a phase 3 clinical trial, and hundreds of millions of dollars to produce a successful new drug. Therefore, drug manufacturers must have sufficient opportunity to recoup those costs, plus general overhead, with revenue from their successes. While the principal creator of a drug can license production to another company, the lower prices that emerge, in combination with the time limit associated with the patent, greatly limit revenue and overall profitability.

Notwithstanding the fairness element in drug manufacturing profitability, waiting 15+ years for lower-cost versions of various new drugs is too long, especially in a world where bacterial resistance to existing antibiotics and understanding of the biological mechanisms behind neurological diseases have advanced at a rapid pace. Therefore, a compromise needs to be reached that will allow drug companies to recoup their R&D losses and produce sufficient revenue for future research, yet still hasten the arrival of low-cost generics in the marketplace. The best strategy may be to build a compulsory license into the patent system for drugs, one that automatically triggers after a shorter market exclusivity period.

For example, instead of a 20-year exclusivity period, what if drug patents conferred 5 years of market exclusivity followed by a perpetual 10% royalty on the price of all generics based on that patent? This strategy would allow lower-cost generics to enter the marketplace roughly 15 years earlier than they do now. While this arrangement works better for those who want access to pharmaceuticals for poorer individuals in both developed and developing countries, does it allow pharmaceutical companies to cover the costs of R&D and continued expansion? This is a difficult question to answer in all situations because there are various moving parts, but a general sense of the idea’s efficacy can be demonstrated through an example.

Suppose company A produces an effective suppressive treatment, drug A, for condition A. Note that this treatment must be taken continuously over a period of time, i.e. it is not viewed as a “cure.” This characterization is not surprising because a vast majority of pharmaceutical drugs commonly consumed in modern society share this feature; one prescription of statins, for instance, does not permanently reduce cholesterol. Because drug A has to be taken continuously, multiple prescriptions will be filled over the course of a year; for this example the number of prescriptions per year for drug A will be 4. However, due to the cost of non-generic drugs, certain consumers “split pills,” stretching a prescription over a longer period. Due to this behavior, the example will assume that 3.5 prescriptions will cover a single year for drugs under patent. The price of drug A while on patent is 150 dollars per prescription with a manufacturing cost of 20 dollars, yielding a profit of 130 dollars per prescription.

Typically, due to the high prices of on-patent drugs, the marketplace is limited to the more developed world. In this example the marketplace for drug A will be 900 million individuals, and if condition A has an occurrence rate of 2% that creates a potential customer base of 18 million individuals. Over time this customer base will increase as other individuals develop condition A while current consumers remain on drug A; this increase will equal 1% of the current customer base (i.e. last year’s customer base) per year. As a new drug, drug A will not acquire its full market share in the first year of its introduction. The market share will be 5% for the first year, increasing to 10% for year 2, 20% for year 3, 40% for year 4, and leveling off at 70% for year 5 and beyond. For the current on-patent method it will be assumed that no other company will be licensed to manufacture drug A and that after the patent expires in 20 years drug A will be pushed out of the marketplace entirely by lower-priced generics.

Under the proposed patent system, after 5 years on patent a 10% royalty on all generics would activate. Access to generics would open the entire rest of the world to treatment of condition A by drug A (another approximately 6.1 billion individuals). Since a number of chronic conditions are influenced by excess food consumption and lack of exercise, it stands to reason that the occurrence rate for condition A in developing and third-world countries will be lower. For the purpose of this example the occurrence rate for these new potential consumers will equal 50% of the occurrence rate in the developed world (i.e. 1%). This scenario produces an additional 30.5 million potential consumers for drug A. Generic prices are significantly lower than on-patent prices, so for this example the average generic price, which will apply to all individuals after year 5, even those in the developed world, will be 80% lower (i.e. 30 dollars per prescription). Due to the lower price, consumers will not feel the need to “split pills,” so the recommended 4 prescriptions per year will be observed. It is assumed that charitable organizations and NGOs will assist low-income consumers with purchases in poor countries. However, due to continuing income gaps, alternative treatment strategies entrenched while drug A was under patent, and distribution concerns in countries where market stability is in question, the global marketplace penetration of generic drug A will be assumed to be 50%.

Table 1 below summarizes the major features of each strategy:

Table 1 – Important Initial Condition Elements

Current Patent Method:

Initial Customer Base – 18 million
On-Patent Years – 20
New Customers – 1% growth per year
Price – $150 per prescription
Occurrence Rate – 2%

New Patent Method:

Initial Customer Base – 18 million
On-Patent Years – 5
New Customers – 30.5 million after year 5, plus 1% growth per year
Price – $30 per prescription
Occurrence Rate – 1%
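
For readers who want to experiment with these assumptions, below is a minimal simulation sketch of the two regimes in Python. The post does not fully specify its bookkeeping (for example, whether “revenue” nets out the 20-dollar manufacturing cost, or exactly how the consumer counts in Table 2 are accumulated), so the sketch will not reproduce Table 2’s figures; it only organizes the stated parameters so they can be varied:

SHARE_RAMP = {1: 0.05, 2: 0.10, 3: 0.20, 4: 0.40}  # 70% from year 5 onward

def market_share(year):
    """On-patent market share ramp assumed in the example."""
    return SHARE_RAMP.get(year, 0.70)

def revenue_stream(years, patent_years, royalty=0.0):
    """Yearly revenue to company A under either patent regime."""
    base = 18e6                   # developed-world customer base
    stream = []
    for year in range(1, years + 1):
        if year <= patent_years:  # on patent: $130/script margin, 3.5 scripts/yr
            stream.append(base * market_share(year) * 3.5 * (150 - 20))
        elif royalty > 0:         # proposed method: royalty on $30 generics
            if year == patent_years + 1:
                base += 30.5e6    # generics open the rest of the world
            stream.append(base * 0.50 * 4 * royalty * 30)  # 50% penetration
        else:
            stream.append(0.0)    # current method: generics displace drug A
        base *= 1.01              # 1% yearly growth of the customer base
    return stream

current = revenue_stream(40, patent_years=20)
proposed = revenue_stream(40, patent_years=5, royalty=0.10)
print(f"current method,  40-year total: ${sum(current):,.0f}")
print(f"proposed method, 40-year total: ${sum(proposed):,.0f}")

# Breakeven: first year the proposed method's cumulative revenue overtakes
# the current method's (may not occur under every set of assumptions).
cum_cur = cum_pro = 0.0
for year, (c, p) in enumerate(zip(current, proposed), start=1):
    cum_cur, cum_pro = cum_cur + c, cum_pro + p
    if cum_pro >= cum_cur:
        print("breakeven year:", year)
        break
else:
    print("no breakeven within the 40-year horizon for these assumptions")

Varying the penetration, royalty rate, or share ramp in this skeleton makes the sensitivity claims discussed below (for example, the effect of 70% penetration on the breakeven year) easy to test.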


After analysis of the above scenario, the major results are summarized in the table below:

Table 2 – Important Financial Results

Current Method:

Total Revenue after 20 years - $9,952,552,601
Total Revenue after 40 years - $9,952,552,601
Total Consumers per year after 20 years - 4,754,302
Total Consumers per year after 40 years - 0

New Method:

Total Revenue after 20 years - $5,211,049,656
Total Revenue after 40 years - $10,260,542,661
Total Consumers per year after 20 years - 74,872,229
Total Consumers per year after 40 years - 93,272,020

Breakeven Year - 39

Not surprisingly, after 20 years the current method produces larger total revenue for company A than the proposed method. However, after 40 years the proposed method generates more revenue, and after both 20 and 40 years the number of consumers aided by the new method is significantly larger. The most debilitating controllable factor limiting yearly revenue under the new method is market penetration. In the above example the 50% market penetration was viewed as conservative; if market penetration were 70%, the breakeven year would fall between year 30 and year 31. Of course, this example is heavily influenced by its assumptions, so the result cannot be taken as typical of all situations.

Clearly the new method is superior for drugs that do not have a high initial price point, like antibiotics. Another advantage of using this method for antibiotics is durability of demand: while there exists a small probability that human beings will one day no longer suffer from chronic conditions (high cholesterol, high blood pressure, erectile dysfunction, etc.), heavily reducing the need for those types of drugs, it is very unlikely that humans, sans becoming cyborgs, will be able to escape pathogenic infection and the need for antibiotics. Therefore, this new method should work better for antibiotic research at long-standing companies than the current system does.

It should be noted that at least two important elements with significant economic impact were excluded from the above analysis due to a lack of specifics and pertinent information. First, the principal pharmaceutical developer, company A, will typically spend millions of dollars on direct-to-consumer advertising and hundreds of millions marketing to physicians. It is rational to expect that a disproportionate amount of these advertising dollars will be spent at the initial launch of the drug to create interest and market share. However, it stands to reason that at least 50% of the total money spent on advertising would occur between patent years 6 and 20. Under the new system this money would not be spent on advertising and could be directed toward other activities like future research.

Second, as discussed above, various pharmaceutical companies devote significant resources, both researcher labor and money, to extending the length of their patents through evergreening. Under this new system pharmaceutical companies would not have the ability to extend the patent, and thus would not devote financial capital and man-hours to trying. The total value of these resources is unknown, but a reasonable estimate is millions of dollars in direct capital, plus the unknown opportunity cost of devoting valuable research staff to saving the patent on an expiring drug rather than researching a new one.

Another side benefit of this system should be a reduction in the counterfeit drug market. The principal reason counterfeit drugs are a desirable criminal enterprise is their high per-unit profit margins. With only five years of patent protection, the longevity of the counterfeit marketplace is significantly eroded, making it financially risky for individuals to build a supply chain for forging these types of pharmaceuticals. Even if individuals do counterfeit these drugs, the overall potential for harm is significantly lessened because of the small patent window before safe generics can enter the market and replace the counterfeits.

However, one of the biggest concerns with this new suggestion is that while it will work for “blockbuster” drugs and should provide a boon to critical new antibiotic research, smaller-market drugs, most notably those for orphan conditions, would be hurt due to their limited volume-based profit potential. Both blockbuster drugs and antibiotics work in this new system because of the volume of individuals who will take them over a decades-long lifespan. Drugs for orphan conditions by definition do not have a large volume of potential consumers. This lack of a customer base is why orphan conditions have typically been neglected for so long in general practice. Those companies that do attempt to create drugs for these conditions are motivated largely by the ability to corner a market with small volume but large per-unit profit margins. Cutting patent protection for these orphan drugs by 75% would be devastating for their profitability, which would lead companies not to attempt to develop them in the first place. Therefore, if the above suggestion is incorporated into new patent enforcement, a special condition must be made for orphan condition research.

Changing the operational nature of patents cannot be taken lightly, especially in such a volatile market and given the critical role of pharmaceuticals in human longevity. The suggestion above attempts to reconcile the two conflicting forces in the field: it allows drug developers to capitalize on their successes and produce the revenue necessary for additional successes, while also honoring the humanitarian morality that drives the desire to place quality pharmaceuticals in the hands of those in need at affordable prices. The above financial analysis identifies the 5-year exclusivity, lifetime 10% royalty patent idea as a viable alternative to the current patent structure. However, as noted above, an exception clause must be made for drugs being developed for orphan conditions, lest those diseases continue to be neglected as financial losers. Overall, while some financial elements were only generally covered in the above analysis due to a lack of information and differing situations, it stands to reason from a logical perspective that changing the patent enforcement rules for pharmaceutical drugs could be a win-win for both global consumers and pharmaceutical companies.

Tuesday, April 22, 2014

Restoring the Arctic

There are numerous environmental concerns surrounding the progression of human-derived global warming. One of the most pressing is the persistent loss of Arctic ice. Because a vast majority of global warming related heat is absorbed by the ocean, oceanic temperatures have increased regardless of location, with the Arctic receiving the greatest temperature increase due to its lower base temperature. The increase has been significant enough that the ice extent at the summer minimum, which consistently occurs in September, has declined at a net rate of 11% per decade since 1979, with a loss of 1.1 meters of mean ice thickness between 1980 and 2000.1,2 This loss of thickness has produced a general shift from older multi-year ice to new single-year ice, with about 40% of the thick, old multi-year ice replaced by single-year ice.3 Coinciding with this empirical evidence, various global and regional climate models have predicted that the situation will only get worse in the future.4

The chief purpose of ice in the Arctic, from a global warming standpoint, is to increase ocean albedo, owing to its reflective surface versus the darker surface of the water itself. When sunlight strikes the transparent/white surface of ice, a vast majority of it is reflected back into the atmosphere. When sunlight strikes the dark blue, sometimes black, surface of Arctic water, a vast majority of the light and its associated heat content is absorbed by the ocean rather than reflected. This heat absorption drives a positive feedback: the more heat absorbed, the more ice melts, leading to even more heat absorbed, and so on. Normally the ocean and its system of currents operate as a heat sink that moderates surface and atmospheric temperatures; however, this massive heat absorption reduces sink efficiency, allowing more heat to remain in the atmosphere and increasing the detrimental effects associated with global warming. A secondary effect is that greater ice melt will increase global sea level rise, placing more coastal and even slightly inland cities at risk, and will negatively affect Arctic wildlife by eliminating “land” surfaces for hunting and habitation.

With these near-future negative environmental events born from a lack of Arctic ice, it is important to find and execute a methodology that would increase Arctic ice volume and longevity. The most obvious means of increasing Arctic ice would be to eliminate the human-derived excess heat, which would restore the typical Arctic ocean temperatures seen in the 1950s and 1960s and earlier. One means of accomplishing this goal is simply to reverse the actions that led to the heating. While reducing global carbon emissions is an important and critical step in addressing global warming, the realistic timetable for cooling the Arctic through carbon mitigation followed by reliance on natural processes spans decades, if not more than a century. Based on the rate of melting, a more immediate solution will be required.

Recalling the albedo-heat feedback cycle from above, one method to break that cycle would be to increase the albedo of the ocean. It is nearly impossible to change the natural color of the ocean itself due to its size and natural mixing, so human intervention must target the surface albedo of the Arctic ocean. The easiest method is to mimic nature and increase surface ice by enhancing ice formation. Obviously, enhancing ice formation will require large amounts of water; fortunately, meeting this supply requirement is not a problem because water can be taken from the ocean itself and re-deposited on existing ice.

One of the principal reasons this strategy works is that ice is a quality thermal insulator, which can increase the speed at which water freezes on it. In addition, nucleation may also play a role in this ice formation enhancement: ice-forming nuclei tend to trigger freezing of under-cooled water droplets at higher temperatures when in solid contact versus liquid immersion.5-7 While the reason for this enhancement is unknown, it is suspected that thermodynamically favorable interactions at the air-water interface8,9 lead to contact nucleation as a manifestation of an enhanced surface nucleation rate.5 Basically, the liquid environment reduces the uniformity of the air-water interface, reducing the efficiency of nucleation. Another influencing factor may be that nucleation near the surface benefits from greater freedom of motion, making the kinetic rate coefficient larger at the surface than in the bulk (regardless of whether that bulk is solid or liquid); this matters because the nucleation rate depends exponentially on the activation energy between phase changes.5 Overall, the important point is that water sprayed onto the surface of ice has a higher probability of freezing into new ice than water remaining adjacent to or beneath the ice (all things being equal).

However, increasing ice formation will require managing the temperature increases that have led to the reduced ice in the first place. There are two chief methods for addressing this temperature question. The first method is to take water from the ocean and run it through a heat exchanger to remove enough heat to produce an appropriate freezing probability. The chief drawbacks to this method are the energy required to operate the heat exchanger and what to do with the heat absorbed from the water. The heat exchanger needs to be powered by an energy source with a very small carbon footprint; otherwise the CO2 added to the atmosphere will more than likely outweigh the benefit of adding more Arctic ice. In addition, the heat removed from the water must be stored properly, because if it is released to the environment it will enter either the atmosphere or the ocean, and either result would largely negate the advantage of increasing Arctic ice.

The second method involves drawing ocean water not from the surface but from deeper water near the bottom of the thermocline, where the average temperature is much lower. The weakness of the first method is its reliance on the heat exchanger and its energy demands. Unfortunately, while the second method eliminates the heat exchanger, it cannot eliminate the need for additional energy, because a pump is required instead. The open question is which method will require more energy. Overall, unless the first method is significantly more energy efficient, the second method should be favored because there is no excess heat to manage. While the power requirements for the pump and its eventual energy consumption are easy to calculate, experimentation will be needed to identify the appropriate pumping rate, spray volume, and spray angle.
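
As a back-of-the-envelope illustration of the two methods’ energy profiles, the sketch below computes the thermal load of cooling a surface water stream versus the hydraulic power of pumping deep water. Every numerical input (flow rate, temperature drop, effective head, pump efficiency) is an assumption for illustration, not a figure from this post:

RHO = 1025.0   # seawater density, kg/m^3
C_P = 3990.0   # seawater specific heat, J/(kg*K)
G = 9.81       # gravitational acceleration, m/s^2

def heat_exchanger_load(flow_m3_s, delta_t_k):
    """Thermal power (W) that must be removed to cool the flow by delta_t_k.
    Note this is a thermal load, not the machine's electrical input."""
    return flow_m3_s * RHO * C_P * delta_t_k

def pump_power(flow_m3_s, head_m, efficiency=0.7):
    """Hydraulic power (W) to move the flow against an effective head."""
    return flow_m3_s * RHO * G * head_m / efficiency

# Cooling 1 m^3/s of surface water by 2 K: ~8.2 MW of heat to remove.
print(heat_exchanger_load(1.0, 2.0) / 1e6, "MW thermal")

# Pumping 1 m^3/s against an assumed 10 m effective head (spray height
# plus friction losses): ~0.14 MW of hydraulic power.
print(pump_power(1.0, 10.0) / 1e6, "MW hydraulic")

Even granting that a thermal load is not the same thing as electrical input, the gap of more than an order of magnitude under these assumed numbers illustrates why the pumping option, with no waste heat to manage, looks attractive.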

An important secondary question is what should be done about the salt in the supply water. One possibility would be to remove the salt, because salt depresses the freezing point of water, making it more difficult to form ice, and could even result in ice sheet perforation. An alternative strategy would be to retain the salt, which would strengthen down-welling currents when the ice melts. The best way to choose between these strategies would simply be to test the ice formation methodology without any salt removal and closely observe how the rate of secondary ice formation changes with the current temperature and time of year. If the formation rate is not sufficient, then the salt will need to be removed.

If water cannot be used due to energy requirements, the other major option for changing the ocean surface albedo in an environmentally neutral way is to cover the water surface with bubbles. One of the chief advantages of this second option is that bubbles require little energy to create, so the operational costs of such a system are low.10,11 Bubbles increase ocean surface albedo by providing voids that backscatter light, increasing the reflected solar flux.10 The reflective behavior of bubbles can be modeled like that of aerosol water drops, because light backscattering depends on cross-section rather than mass or volume, and the spherical voids in the water column have the same refractive index characteristics.10 Note that ocean surface albedo varies with the angle of solar incidence. Common values are less than 0.05 at noon, below 0.1 at a 65-degree solar zenith angle, and a maximum, ranging from 0.2 to 0.5, at a solar zenith angle of 84 degrees.12-15 Based on this comparison, the principal formula governing brightening is:

DeltaF = DeltaA * Io * So * (1 - Cf) * Tu * Td

where DeltaF = change in brightening (reflected flux); DeltaA = change in albedo of the water surface; Io = solar irradiance; So = cosine of the solar zenith angle; Cf = fraction of cloud cover; Tu = upwelling transmissivity; Td = down-welling transmissivity.10
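
As a quick sanity check of the formula’s magnitudes, the sketch below evaluates it for one set of inputs. The albedo change, zenith angle, cloud fraction, and transmissivities are illustrative assumptions; only the general albedo ranges come from the sources cited above:

import math

def brightening(delta_albedo, irradiance, zenith_deg, cloud_fraction, t_up, t_down):
    """Change in reflected solar flux, DeltaF (W/m^2), per the formula above."""
    return (delta_albedo * irradiance * math.cos(math.radians(zenith_deg))
            * (1 - cloud_fraction) * t_up * t_down)

# Illustrative inputs: raising surface albedo from 0.06 to 0.16 with bubbles,
# a 70-degree solar zenith angle, half cloud cover, and transmissivities of
# 0.8 in each direction, with Io ~ 1361 W/m^2.
print(round(brightening(0.10, 1361.0, 70.0, 0.5, 0.8, 0.8), 1), "W/m^2")  # ~14.9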

Experiments have already demonstrated the creation of hydrosols from the expansion of air-saturated water moving through vortex nozzles, which apply the appropriate level of shearing force to create a swirling jet of water.11 Also, by using an artificial two-phase flow, smaller microbubbles can be created, which can even result in interfacial films through ambient fluid pressure reduction.12 Microbubbles can possibly form these films because they typically last longer than visible whitecap bubbles, which rise and burst in seconds. Note that whitecaps are froth created by breaking waves and can increase ocean albedo to as much as 0.22 from the common 0.05-0.1 values.16

While whitecaps from waves and wakes do provide increased surface albedo, the effect is ephemeral. Microbubble lifespan can be influenced by local surfactant concentration, and fortunately the Arctic has limited natural surfactant concentration, granting more control over the process of creating these bubbles (fewer outside factors that could unduly influence bubble lifespan). For example, if the bubbles are created through technological means, additional elements like a silane surfactant can be added to the reactant water, which could add hours to the natural lifespan.17 Bubble lifespan is probably the most important characteristic of this form of ocean albedo increase from both an economic and an efficiency standpoint. However, while most surfactants and other agents like glycerin are typically not environmentally detrimental, the massive amounts required for increasing bubble longevity may make their use economically and environmentally unsustainable.

Another method for creating microbubbles comes from biomedical engineering, where microfluidic procedures and sonication are used to enhance surfactant monolayers that stabilize microbubble formation.18 However, there are two common concerns about this method. First, it is used primarily in the laboratory, largely for diagnostic and therapeutic applications, not in the field; therefore there may be questions about the transition, especially given the dramatic increase in production scale that Arctic use would require. Second, while sonication increases stabilization time, it limits control of the microbubble size distribution, which could limit the total reflectiveness of the bubbles.19,20

An expanded and newer laboratory technique, electrohydrodynamic atomization, generates droplets of liquid and applies coaxial microbubbling to facilitate control over microbubble size. Unfortunately, one concern with this technique is that, as mentioned above, the ideal bubble size is in microns, but the technique is currently only able to create single-digit-millimeter-sized bubbles.18 However, the increased size may be offset by the increased stability of the bubble (less overall reflection, but longer residence time). Comparison testing will be required to make the appropriate judgment.

The final method for increasing ice formation involves devising a piece of technology that can absorb excess heat from the Arctic Ocean. At first thought such an idea seems unlikely due to the size of the Arctic Ocean and its environmental inputs. However, it may not be as far-fetched as it seems. The key to making such a strategy viable is efficiency and scale within the utilized technology.

Scale is achieved through a design small enough to be produced at reasonable cost and reasonable speed. Efficiency is typically achieved by producing a device that is self-cycling and thereby operates autonomously. If human involvement is required beyond “pushing the start button,” then efficiency is significantly compromised. Take that efficiency loss in a single unit and multiply it by the number of units required for scale, and the result can be devastating in terms of both cost and viability.

If the objective is to withdraw heat from the ocean, the most important element of the device is the agent used to accomplish this task. Water has one of the highest heat capacities of any common substance, which is why it is used for cooling purposes in power plants; thus removing its stored heat could prove difficult. Fortunately there is promising research supporting the idea of incorporating zeolite as the heat-absorbing material. Zeolite is a mineral made up of SiO2 groups, various AlO2 groups, and alkali ions, and its crystalline structure is capable of adsorbing gaseous molecules, including water vapor. When zeolite adsorbs a gas it retains heat due to the adsorption enthalpy.21 In addition, because zeolite is commonly produced synthetically for use in molecular sieves and washing detergents, it is cheap (50-75 cents/kg) and environmentally neutral.21

A good example of how zeolite is used to take up heat is the adsorption refrigerator. Adsorption refrigerators consist of two connected but independent vessels, the evaporator and the adsorber. The evaporator vessel acts as a quasi-vacuum containing only a liquid, usually water, and its vapor. When the valve connecting the two vessels is opened, the water vapor moves into the adsorption vessel and is adsorbed by the zeolite, reducing the vapor pressure. The drop in pressure drives liquid water in the evaporator to evaporate, drawing heat from its surroundings. Eventually the zeolite becomes saturated, ceasing the heat transfer between the zeolite and the water. In the refrigerator model, at a later time the zeolite is strongly heated, driving off the adsorbed water, which is condensed and returned to the evaporator vessel.
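
To get a feel for the scale such a device implies, here is a rough sizing sketch. The adsorption capacity and latent heat figures are ballpark assumed values for a zeolite/water pair, not numbers taken from the cited experiments:

CAPACITY = 0.25       # kg of water a kg of dry zeolite can adsorb (assumed)
LATENT_HEAT = 2.45e6  # J drawn from the surroundings per kg of water evaporated

def cooling_per_kg_zeolite():
    """Heat (J) withdrawn from the ocean per kg of dry zeolite per cycle.
    Each kg of water evaporated removes its latent heat; the zeolite's
    capacity caps how much water can be cycled before saturation."""
    return CAPACITY * LATENT_HEAT

def zeolite_per_cycle(heat_joules):
    """Dry zeolite (kg) saturated while taking up the given amount of heat."""
    return heat_joules / cooling_per_kg_zeolite()

# Example: absorbing one hour of an 8 MW thermal load (the scale estimated
# in the heat exchanger sketch earlier) saturates ~47 tonnes of zeolite.
print(round(zeolite_per_cycle(8.0e6 * 3600) / 1000), "t per cycle")

A per-cycle figure in the tens of tonnes is one way to see why the shuttling and regeneration logistics described next dominate the design.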

However, the secondary functionality of the above refrigerator design, zeolite recovery through heating, is not applicable in an oceanic environment. The water and resultant heat must be released from the zeolite so it can be reused, but this release produces excess heat, and, as with the heat exchanger in the first strategy, there is no good place on the open ocean to store that heat without releasing it to the environment. One strategy for a small movable device is that when its zeolite becomes “full,” the device returns, via a small battery-powered motor, to a “mother” ship of sorts where the zeolite heat-release process can be conducted. After its zeolite is restored to the rest state, the device returns to the Arctic to withdraw more heat. After sufficient time the “mother” ship will itself be “full” of heat and would return to a land base, most likely Iceland given its geothermal infrastructure as an energy source and heat well, to properly off-load the heat stores. Granted, the trips between the Arctic and these stop-over points will place some limits on overall efficiency, but they are necessary to manage the heat problem.

In the end, the positive feedback in the warming-albedo reduction relationship is a legitimate threat to carbon mitigation and remediation strategies as a whole. Therefore, society needs to appreciate the time discrepancies associated with restoring colder temperatures to the Arctic Ocean in an effort to preserve Arctic ice, especially during the summer. A technology-based solution will be required. Three possible strategies have been presented above in general detail to attempt to break this warming-albedo reduction relationship. One advantage of all of these strategies is that they can be explored experimentally with little overall detriment due to their ephemeral nature; basically, if the results are not as anticipated, the experiments can be stopped with little environmental or economic damage. Overall, something needs to be done about the increased rate of warming in the Arctic and the dramatically increased rate of ice loss if global carbon mitigation strategies are going to be fully effective at reducing the detrimental effects of global warming.

Citations –

1. Perovich, D, and Richter-Menge, A. “Loss of sea ice in the Arctic.” Annu. Rev. Mar. Sci. 2009. 1:417–441.

2. Rothrock, D, Percival, D, and Wensnahan, M. “The decline in Arctic sea-ice thickness: Separating the spatial, annual, and interannual variability in a quarter century of submarine data.” J. Geophys. Res. 2008. 113:C05003.

3. Kwok, R. “Observational assessment of Arctic Ocean sea ice motion, export, and thickness in CMIP3 climate simulations.” J. Geophys. Res. 2011. 116:C00D05.

4. Bjork, G, Stranne, C, and Borenas, K. “The sensitivity of the Arctic Ocean sea ice thickness and its dependence on the surface albedo parameterization.” Journal of Climate. 2013. 26:1355-1370.

5. Shaw, R, Durant, A, and Mi, Y. “Heterogeneous surface crystallization observed in undercooled water.” Journal of Physical Chemistry B Letters. 2005. 109:9865-9868.

6. Vali, G. In Nucleation and Atmospheric Aerosols; Kulmala, M., Wagner, P., Eds.; Pergamon: New York, 1996.

7. Pruppacher, H, and Klett, J. Microphysics of Clouds and Precipitation, 2nd ed.; Kluwer Academic Pub.: Norwell, MA, 1997. Chapters 7 and 9.

8. Djikaev, Y, et al. “Thermodynamic conditions for the surface-stimulated crystallization of atmospheric droplets.” J. Phys. Chem. A. 2002. 106:10247. doi:10.1021/jp021044s.

9. Tabazadeh, A, Djikaev, Y, and Reiss, H. “Surface crystallization of supercooled water in clouds.” PNAS. 2002. 99(25):15873-15878.

10. Seitz, F. “On the theory of the bubble chamber.” Physics of Fluids. 1958. 1: 2-10.

11. Seitz, F. “Bright Water: hydrosols, water conservation and climate change.” 2010.

12. Evans, J.R.G, et al. “Can oceanic foams limit global warming?” Clim. Res. 2010. 42:155-160.

13. Davies, J. “Albedo measurements over sub-arctic surfaces.” McGill Sub-Arctic Res Pap. 1962. 13:61–68.

14. Jin, Z, et al. “A parameterization of ocean surface albedo.” Geophys Res Letters. 2004. 31:L22301.

15. Payne, R. “Albedo of the sea surface.” J Atmos Sci. 1972. 29:959–970.

16. Moore, K, Voss, K, and Gordon, H. “Spectral reflectance of whitecaps: Their contribution to water-leaving radiance.” J. Geophys. Res. 2000. 105:6493-6499

17. Johnson, B, and Cooke, R. “Generation of Stabilized Microbubbles in Seawater.” Science. 1981. 213:209-211

18. Farook, U, Stride, E, and Edirisinghe, J. “Preparation of suspensions of phospholipid-coated microbubbles by coaxial electrohydrodynamic atomization.” J.R. Soc. Interface. 2009. 6:271-277.

19. Wang, W, Moser, C, and Wheatley, M. “Langmuir trough study of surfactant mixtures used in the production of a new ultrasound contrast agent consisting of stabilized microbubbles.” J. Phys. Chem. 1996. 100:13815-13821.

20. Borden, M, et al. “Surface phase behaviour and microstructure of lipid/PEG emulsifier monolayer-coated microbubbles.” Colloids Surf. B: Biointerfaces. 2004. 35:209-223.

21. Kreussler, S, and Bolz, D. “Experiments on solar adsorption refrigeration using zeolite and water.”

Tuesday, April 8, 2014

Unions and College Athletes – What Happens Next

On March 26, 2014, the Chicago regional office of the National Labor Relations Board (NLRB) ruled that the football players of Northwestern University are employees of the university, not simply student-athletes, and thus have the ability to form a union and receive the general protections afforded to all employees under federal law. While numerous hurdles remain before college athletes officially have the long-term ability to join a union, this post will not deal with whether the ruling is legally valid and will survive NCAA appeal, or with the methodology behind the formation and operation of future union(s); it will instead ask what steps a union should take to enrich the lives of college athletes.

The chief reason college athletes desire to unionize is that they currently have no effective power to participate in the decisions and operations of NCAA governance on any level. For workers, one of the major advantages of a union is that it coordinates focus and awareness across and between participating parties. This focus is critical to creating scale power, because workers in any industry have little power when only able to act on their own or in small groups. Unfortunately for college athletes, the scale power critical for maximizing bargaining ability under this ruling is limited to private universities in regions whose NLRB offices rule similarly to the Chicago office; public universities are governed by existing state law, so there will be other obstacles to unionization at those universities, especially given that 24 states have active right-to-work legislation restricting unionization. However, ignoring this concern for a moment, what would college athletes require of universities with the new power to form a union?

The most public complaint/driving force cited by Northwestern athletes is a concern regarding medical coverage. In 2005 the NCAA mandated that athletes be covered by health insurance in some form, with limited restrictions on the provider (basically, the insurance could come from the university, be individually purchased, come from the athlete’s parents, etc.). In addition, the NCAA operates a “catastrophic injury” insurance policy through Mutual of Omaha for injured athletes whose medical costs from a single injury event exceed $90,000 (or $75,000 for universities that participate in the NCAA Group Basic Accident Medical Program).

While many universities provide medical insurance to athletes as part of the scholarship, the chief problems with this structure are a lack of legal requirement (most do it out of a sense of social responsibility), a lack of transparency, and a lack of uniformity, as different universities carry different types of insurance coverage. Most athletes receive proper medical attention when injured, but these three problems increase the probability of athletes entering a state of “medical limbo” with regard to their treatment. Not surprisingly, these are the “horror” stories that major media periodically latch onto; however, these types of stories are not unique to athletes but afflict non-athletes as well, thus they are not a problem inherent to the college system.

Clearly the current system of medical coverage has holes, but holes that are easily repaired, especially in the face of new legal protections. Note that for football players it is difficult, despite the “certainty” of concussion proponents, to directly link participation in football to brain damage that occurs decades later. It is reasonable to suggest an increased probability of future brain damage from playing football, but to attribute any element of damage exclusively to playing football is incredibly difficult. Therefore, while it makes logical sense to extend medical coverage for college athletes beyond their playing days, this extension should have a valid time limit. There are two strategies for negotiation.

The first strategy would be a simple flat time period applied to all athletes, extending beyond the individual’s playing career. A good time period appears to be five years, which is also what the NFL uses. Suppose an athlete stops playing sports for University A on March 28, 2015; under such a system this individual would be covered by the university’s healthcare program until March 28, 2020, regardless of whether they are still a student. This strategy appears fair because it gives all athletes sufficient time to recover from major physical and short-term mental injuries acquired while playing sports for a particular university.

However, some may view a flat period as inappropriate because it treats all athletes as equal regardless of how long they actually participated in the given sport. A second strategy, therefore, would tie the extension directly to the length of time a sport was played. For example, one could create a system where an athlete is covered by the university’s healthcare program for an additional two years after the playing career concludes for each year the individual played. So if an individual played ice hockey for two years and stopped playing on April 3, 2015, under such a system this individual would be covered until April 3, 2019. This system operates on the mindset that the longer a person plays, the higher the probability of acquiring a longer-term injury, and thus the longer that individual should have extended health coverage.

While the exact details of such a system would have to be developed through negotiation between the union and each particular university, or possibly the NCAA directly, it stands to reason that this healthcare coverage would be secondary coverage: it would fill any gaps in the principal coverage the individual receives from an employer. If the individual does not receive health insurance from their job, then this university-affiliated coverage would apply. However, the time period on this coverage would run concurrently with any employer insurance. Basically, if an individual stopped playing on June 30, 2015, got a job that provided health insurance on July 15, 2015, and was laid off on April 17, 2018, then under a five-year fixed program their coverage with the university would still end on June 30, 2020, despite their not using that coverage for almost 3 years due to the coverage provided by the job.
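
To make the two strategies concrete, here is a small sketch of the coverage end-date arithmetic. The function name and defaults are hypothetical; the five-year flat period and the two-years-per-year-played bonus come from the examples above:

from datetime import date

def coverage_end(stop_date, years_played, strategy="flat",
                 flat_years=5, per_year_bonus=2):
    """End date of post-career university health coverage.
    'flat' extends coverage a fixed number of years; 'per_year' extends it
    per_year_bonus years for each year played. Coverage runs concurrently
    with any employer insurance, so the end date never shifts.
    (Leap-day stop dates are ignored for simplicity.)"""
    extra = flat_years if strategy == "flat" else per_year_bonus * years_played
    return stop_date.replace(year=stop_date.year + extra)

# The examples from the text:
print(coverage_end(date(2015, 3, 28), 4))                      # 2020-03-28
print(coverage_end(date(2015, 4, 3), 2, strategy="per_year"))  # 2019-04-03
print(coverage_end(date(2015, 6, 30), 3))                      # 2020-06-30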

Obviously the university should cover an athlete in some way up to the point where the NCAA catastrophic policy takes over, and this university coverage policy must be transparent to the point where potential recruits can actually see what is and is not covered. Additionally, there should be a minimum level of coverage mandated by the NCAA to ensure appropriate medical treatment. One could argue that this legal mandate is addressed by the Affordable Care Act (ACA), and while true, the ACA may not be permanent given the zeal of certain members of Congress to repeal it, hence the need for a separate NCAA mandate. Finally, another element that could be negotiated is healthcare substitution: suppose an athlete wants more coverage than the university is willing to offer; the university could provide an additional healthcare stipend, equal in monetary value to the standard university healthcare coverage, to help the athlete pay for the more desired policy.

Staying in the medical area, could a union actually change the number of games in a particular sport and/or when those games are played? Over the last few decades the number of games scheduled in various college sports has increased significantly due to increased travel options and, most notably, the expanded incentive to play these games (television money). Clearly the probability of injury increases and the probability of academic success decreases as the number of games an individual plays increases; therefore, could a union attempt to reduce the number of games its particular sport plays? While the idea is an interesting one, success would be difficult simply because of the money involved with playing each game in these high-value college sports.

The principal mission of colleges is to provide an advanced level of education that prepares individuals to become productive members of society. Unfortunately, that mission and being an athlete have come into some level of conflict in recent years with the added workload attributed to participating in college athletics. Due to extensive practice, travel, and game schedules, the available academic options of athletes at a number of universities have been compromised. In some situations athletes have been confronted with the choice of majoring in subject x or playing sport y because of the inability to schedule and/or attend the required classes.

One of the chief elements driving this conflict is that despite a 1991 decree by the NCAA limiting required countable athletically related activities (CARA) to four hours per day and twenty hours per week, almost all institutions have worked around those restrictions by allowing coaches to organize “voluntary” practices. Of course, the secret that is not a secret is that these “voluntary” practices are not really voluntary; at least not for non-star players, who, if they do not attend, typically find themselves with reduced playing time. It is through these “voluntary” practices and workouts, along with travel time, that the NLRB could cite an average workload for football players at Northwestern of 40-50 hours per week despite the 1991 limitation. This designation by the NLRB is somewhat controversial because some argue that participation in additional practices behooves the athlete by enhancing playing ability, similar to non-athletes like musicians and actors, and thus these practices should not count against the 20-hour CARA limit. However, the controversy stems from the team-organized nature of these activities versus athletes simply putting in the work lifting weights, conditioning, etc. on their own.

In addition to this extended weekly workload, the length of time over a calendar year that athletes must invest is significant. For football players, the regular season begins around Labor Day (typically on the preceding Thursday) and, depending on the conference, ends on the second Saturday or Sunday in December, with bowl games starting anywhere from two to six weeks later. During the off-season, football players begin preparing for the coming season through an extensive conditioning program that involves multiple practices per week, typically starting early in the morning. In general, for most sports the conflict with educational opportunities breaks down as follows: during the regular season afternoon classes are off-limits because of practice and game priorities, and during the off-season a selection of morning classes are off-limits because of practice and conditioning priorities. How is an individual supposed to pursue both academic and athletic dreams under these conflicts?

A union could address this conflict by using expanded legal protections for those who wish to treat “voluntary” practices as exactly that: voluntary. Any change in the playing status of an individual who abides only by the required practice hours would force the authority structure (typically the head coach) to explain the demotion, which would become significantly more difficult with a union behind the scenes protecting players. Beyond practice hours, unions could also address the “big brother” type system that most universities create to “help” athletes manage their time, governing the types of classes taken, where one sits, how much study hall is attended, personal travel arrangements, where the athlete lives, acquisition of money from family members, etc. Additionally, a union could organize “vacation” time for athletes to use during the off-season for recuperation. Finally, more flexibility could be added to the off-season practice system, allowing athletes to attend either a morning or an afternoon conditioning session and thereby granting greater class selection for their education.

Another popular idea for unions would be to establish new policy governing athletic scholarships. Skipping the period when athletic scholarships were controversial due to their non-academic and possibly non-amateur nature, the first “generation” of athletic scholarships covered four years and had sufficient certainty in that it was rather difficult to cancel the scholarship even if an athlete struggled with injury. Even when these four-year scholarships fell out of favor, early in the one-year renewal system a university scholarship committee, not athletic directors or coaches, made decisions regarding renewal. Unfortunately, due to Proposition 39 in 1973, both four-year scholarships and scholarship committees became rare, replaced by single-year scholarships renewed year-by-year by the head coach. While Proposition 39 was rescinded in 2011, allowing universities to offer multi-year scholarships once again, most universities have retained the one-year renewal model.

It may be too much, and perhaps inappropriate, to attempt to return to the four-year guaranteed athletic scholarship, but a union could ask for increased scholarship allowances for injured players as well as a return to scholarship committees, removing a significant element of the power head coaches use to “encourage” athletes to devote more time to athletics. In addition, extending scholarships after the conclusion of a playing career, based on total time played, could serve as a valuable tool for the acquisition of a degree. One version of this idea: for every year an individual plays for a university team, that individual would receive an additional half year of scholarship, so playing for four years would yield an additional two years on a specialized scholarship not related directly to the athletic program. Clearly, before any such scholarship idea is administered it would have to be funded separately from other scholarships, because it would not be appropriate to shift a scholarship from a financial-need student to an athlete.

The ability to transfer between universities without eligibility penalty would also be a point of interest for negotiation. Currently the transfer rules are rather restrictive toward athletes, and their biggest problem is a lack of uniformity: there are too many rules that vary with type of school, conference, and sport. The chief component of almost all transfer rules, especially for transfers between major (4-year) programs, is that the athlete has to sit out at least one year and take a full class load for both semesters (not summer) to establish academic “residence” before he/she is able to play.

In addition, these rules have been viewed as rather hypocritical in that coaches routinely breach their contracts to leave for another “better” university job while athletes do not have the same freedom. Realistically, it would make more practical sense for an athlete to be allowed to transfer, retaining all remaining eligibility, at any point during the off-season, with the ability to play immediately pursuant to their existing academic eligibility. The university the athlete is departing should have no ability to prevent the transfer through legal means. However, consistent with the current prohibition, it would not be appropriate to allow athletes to transfer during their playing season.

Of course, the elephant in the room regarding the potential new employee status of college athletes is whether they should be paid in financial capital that is not simply earmarked for educational expenses. This blog has addressed this issue before in the following post [http://www.bastionofreason.blogspot.com/2011/03/paying-college-athletes.html], and a vast majority of that argument still holds up regardless of whether college athletes are regarded as employees or students. However, there is an interesting angle within the gap between amateur and professional status.

One could argue that it is still possible for college athletes to be regarded legally as employees and retain their amateur status, although the importance of this distinction is somewhat foggy. Maintenance of amateur status could be achieved by requesting a form of stipend to cover college-based expenses outside the scope of the scholarship. Most scholarships cover tuition, room and board, and direct educational materials like books and software, but do not cover common “everyday” expenses like personal travel, non-team-associated food, and other miscellaneous costs. The stipend should fill this gap, with the exact amount negotiated for general uniformity across all universities plus an effective cost-of-living adjustment based on where the university is located.

A secondary advantage for the NCAA as an organization in providing this stipend is that it could offer protection against anti-trust litigation. Some argue that capping scholarships at the cost of attendance constitutes an unlawful restraint on commercial activity. While this argument is suspect, because the NCAA is not a monopoly nor is it required for future employment in the NBA, there does exist the possibility that a court could rule against the NCAA on this issue. However, agreeing to stipend restrictions through a collective bargaining process should offer sufficient non-statutory labor exemption protection from anti-trust litigation, mitigating one avenue for players to sue in an attempt to acquire a form of revenue sharing.

While revenue sharing is unlikely and a stipend is uncertain, college athlete unions could negotiate a payment structure for athletes when the university or third parties make additional money from direct use of their likeness or name. The one significant drawback to this possibility is that this very issue is currently moving through the courts via the Ed O’Bannon trial and could come to a conclusion before the union issue is resolved. However, if the union issue is resolved before the Ed O’Bannon case, then both sides may favor negotiating a settlement structure on this issue.

Unfortunately, lost in the controversy of the NLRB ruling is that despite the ability to form a union, college athletes at private universities may not have sufficient power to make any real changes. The chief problem is scarcity. The difference in skill level between the top 50% and the top 10% of college athletes is small, and with customer loyalty at the college level firmly behind the university rather than the athletes who play for it, the power of a strike to enforce demands is limited. Universities have only a limited number of scholarships, and there would be more than enough individuals of similar talent willing to play by the rules of the current system to fill in for any striking athletes. In fact, a university would more than likely just have to sweep through the intramural ranks to replace a vast majority of the initial scholarship talent.

These “replacement” athletes would not produce any significant loss of revenue for the university, because most of the money from college football and basketball comes from television contracts with the affiliated conferences; as long as the university fields a team, no matter how bad, it will receive a vast majority of its planned revenue. The real power of a strike would be the negative precedent created by striking athletes and its negative influence on future recruitment, potentially hurting the bottom line of the university through the continuation of a poor-quality product that could eventually lead to dismissal from the conference and loss of the television contracts. However, would the first group of striking athletes be willing to act as sacrificial lambs to accomplish this goal? If they transfer to another university the power is lost, and they would more than likely not receive a renewal of their athletic scholarships in the aftermath.

Therefore, the real power of the NLRB ruling may actually be the basic legal protections that come with recognizing student-athletes as employees. Overall, while most of the above changes should be made solely because they allow athletes to genuinely be student-athletes and because they are morally right, a new college athlete union structure may have to pick its battles if it wants to produce change beyond the basic protections of the law.