Colonizing Mars will be a significant endeavor with many moving parts and critical decisions to make. One of the most important decisions is how to design an appropriate food supply strategy for the colonists, as Martian environmental conditions differ significantly from Earth's. This difference demands a clear and transparent strategy to ensure the safety and productivity of future colonists. Fortunately, food production is predictable and routine enough that competing options can be compared and contrasted directly.
The first step in understanding the dietary requirements of Martian colonists is establishing the minimum requirements for survival on Earth. The typical energy recommendation for a sedentary individual of approximately 70 kg is about 2,000 calories per day, a figure familiar to most people because it is the basis for the daily values used by the FDA on nutrition labels. Some argue that more active individuals will require roughly double that, at 4,000 to 4,500 calories. Others note that astronauts on the Apollo missions consumed an average of 2,793 calories per day, but those missions were extremely short (under two weeks).
A more apt reference comes from Biosphere 2, where participants consumed 2,216 calories per day, yet even at that level they lost an average of 8.8 kg over the 2-year experiment. It stands to reason that Martian colonists will be more active than Biosphere 2 participants due to the frequent extra-vehicular activities (EVAs) required to expand the initial habitat and carry out scientific exploration. There is also little information on how nutritional needs and absorption capacity change in a low-gravity environment, especially with regard to gut bacteria.
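To put these figures in context, here is a minimal sketch using the Mifflin-St Jeor equation (a standard basal metabolic rate estimate) with common activity multipliers. The 70 kg, 175 cm, 35-year-old colonist is hypothetical, and none of this accounts for low-gravity effects:

```python
def mifflin_st_jeor_bmr(weight_kg, height_cm, age, male=True):
    """Basal metabolic rate (kcal/day) via the Mifflin-St Jeor equation."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if male else -161)

# Common activity multipliers applied to BMR to estimate total daily needs.
ACTIVITY = {"sedentary": 1.2, "moderate": 1.55, "very active": 1.9}

# Hypothetical colonist: 70 kg, 175 cm, 35 years old.
bmr = mifflin_st_jeor_bmr(70, 175, 35)
for level, factor in ACTIVITY.items():
    print(f"{level:>11}: {bmr * factor:,.0f} kcal/day")
# Roughly 1,950 / 2,520 / 3,090 kcal/day -- consistent with the ~2,000 kcal
# sedentary baseline above, with heavy EVA work pushing needs higher still.
```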
Another consideration is that these calories need to include the 9 essential amino acids for healthy adults: phenylalanine, valine, threonine, tryptophan, methionine, leucine, isoleucine, lysine, and histidine. One study of a minimal survival diet included 10 different foods: soybean, peanut, wheat, rice, potato, carrot, chard, cabbage, lettuce and tomato, with recommendations for additional nutrients from sugar beets, broccoli, various berries, onions and corn.1 Unfortunately it is unlikely that such a wide array of foods will be available on a Mars colonization mission beyond the food that initially travels with the colonists. In addition, early in the expedition colonists will have to eat additional food brought from Earth to compensate for the lack of sufficient crop growth on Mars.
However, the weight and cost of carrying a large amount of food with the colonists could be crippling. A general estimate can be made from MRE (Meal, Ready-to-Eat) data. Each MRE contains about 1,200 calories,2 so a colonist would consume at least two MREs per day. The average weight of an MRE is roughly 635 grams, or slightly under 1.4 pounds,2 putting the daily food weight per colonist at about 2.8 pounds. With generic launch costs of 8,000 to 10,000 dollars per kilogram (roughly 3,636 to 4,545 dollars per pound), that works out to roughly 3.7 to 4.6 million dollars per colonist per year in food launch costs alone. Some argue this price will fall thanks to SpaceX, but such estimates rarely account for scale: there is a big difference between $2,000 per pound when launching 2,000 pounds and $2,000 per pound when launching 200,000 pounds, and any headline figure can be quoted depending on how much money a company is willing to lose on a launch. Unfortunately cost is not the only factor limiting how much food colonists can bring.
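The arithmetic behind this estimate can be checked directly; a quick sketch reproducing the figures quoted above (small discrepancies come from pound/kilogram rounding):

```python
# Reproduce the MRE launch-cost estimate quoted above.
MRE_WEIGHT_KG = 0.635                 # ~1.4 lb per MRE
MRES_PER_DAY = 2                      # "at least two" per colonist per day
LAUNCH_COST_PER_KG = (8_000, 10_000)  # generic $/kg launch estimate

daily_kg = MRE_WEIGHT_KG * MRES_PER_DAY  # 1.27 kg (~2.8 lb) per day
for cost_kg in LAUNCH_COST_PER_KG:
    annual = daily_kg * cost_kg * 365
    print(f"${cost_kg:,}/kg -> ${annual:,.0f} per colonist per year")
# $8,000/kg -> $3,708,400; $10,000/kg -> $4,635,500 -- matching the
# ~$3.7M-$4.6M range quoted above.
```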
Some argue that the nutrients provided by some of these foods can be substituted through vitamin supplements, but there are lingering questions about nutrient absorption when vitamins are the principal source of nutrition. Another, more minor, concern involves the shelf life of freeze-dried and MRE-type packaging, which limits the useful life of initially sent food to roughly 2 years. As stated above this concern should not be significant, because greater-than-average food consumption is expected given activity levels and the initial lack of grown food. Finally, some raise the recurring pseudo-concern of unappetizing food in space, a product of the specific cooking and harvesting techniques required in reduced-gravity environments. This concern is largely meaningless: given the choice between eating something boring, repetitive and unappetizing or dying, any sane individual will select the first option.
Based on the anticipated workload and a difficult living environment (pressurized habitats and bulky pressurized spacesuits), all settlers on Mars will require additional calories beyond average consumption levels. While freeze-dried food shipments could be delivered periodically from Earth, the costs estimated above should prohibit executing this strategy indefinitely. The reality is that some form of food production methodology needs to be created for Martian colonists.
Growing food on Mars will clearly be difficult: the lack of quality soil, rainfall and consistent sunlight will force all growth to occur indoors, in a pressurized environment under artificial light, using a hydroponic or aeroponic infrastructure. The advantages of using soil over nutrient baths are numerous, including but not limited to: 1) soil plays a significant role in air purification; 2) it acts as a central, low-energy recycling and composting system for various types of waste; 3) the difficulty of re-supplying nutrient solutions far from Earth could limit the lifespan of a hydroponic or aeroponic system; 4) soil offers increased gaseous aeration and, absent toxic agents, reduced water leaching due to the gravity difference.
Clearly, incorporating soil somehow would be a large boon to the colonization process. Some hold very optimistic notions that Martian soil can be rehabilitated to the point where it supports food growth, and some initial experiments suggest it is possible to grow food in simulated Martian soil.3 However, this research has its limitations: the simulant soil was free of contaminants, and the experiments lacked the pressure and gravitational differences inherent to Mars, so treating these results as representative of cultivation on Mars is irresponsible. A rehabilitation process will take years, if not decades, and more than likely will not start until after colonists have made landfall.
The problems with this rehabilitation process are as follows: 1) high concentrations of detrimental agents including various salts, oxides and toxins, especially chlorine and aluminum; 2) impurities that heavily reduce water uptake efficiency, which, given the scarcity of available water on Mars, would dramatically reduce yields; 3) a theoretical inability to support the continuous microorganism growth essential for soil health; 4) a lack of important secondary nutrients that foster plant growth, like boron and molybdenum; 5) regolith pH that varies from place to place, as on Earth, but more radically: pH will be very low where jarosite is abundant and very high where NaHCO3 and Na2CO3 are abundant, and neutralizing these highly acidic or basic regions would require large amounts of CaCO3/olivine deposits or peat moss respectively; 6) a direct lack of principal nutritional agents, most notably nitrogen and phosphorus. Some argue that nitrogen can be supplied through weathering, a process that will take far too long, or through nitrogen fixation by various microorganisms, a process that is questionable given existing soil conditions and the lack of phosphorus. Phosphorus appears available only through fertilizers and also requires leaching CaSO4 deposits to avoid phosphorus binding before plant absorption. Therefore, it is unreasonable to assume outdoor food growth for the first few decades.
Some have argued that even if the Martian soil cannot be utilized, the Martian atmosphere could be, due to its high CO2 percentage. Approximately 95% of the Martian atmosphere is CO2, and because of that high fraction the CO2 partial pressure at the surface actually exceeds Earth's; the real problem is that the total atmospheric pressure, around 0.6% of Earth's, is far below what plants can tolerate, so free flow of Martian air into a greenhouse cannot produce a net benefit in plant growth. Even if the pressure were adequate, frequent dust storms depositing additional regolith would cause significant problems for a free-airflow greenhouse, and filtering these particles would be incredibly difficult due to their very small size. So it stands to reason that all food growth in a Martian colony for the first few decades will require complete isolation from native Martian conditions.
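A back-of-the-envelope comparison of the two atmospheres, using commonly cited mean surface values, supports the pressure argument above:

```python
# Compare CO2 partial pressures at the surface (commonly cited mean values).
MARS_SURFACE_PA = 610       # ~0.6% of Earth's sea-level pressure
MARS_CO2_FRACTION = 0.95
EARTH_SURFACE_PA = 101_325
EARTH_CO2_PPM = 400         # ~0.04% by volume

mars_co2 = MARS_SURFACE_PA * MARS_CO2_FRACTION            # ~580 Pa
earth_co2 = EARTH_SURFACE_PA * EARTH_CO2_PPM / 1_000_000  # ~41 Pa
print(f"Mars CO2 partial pressure:  {mars_co2:.0f} Pa")
print(f"Earth CO2 partial pressure: {earth_co2:.0f} Pa")
print(f"Ratio (Mars/Earth): {mars_co2 / earth_co2:.1f}x")
# Mars has roughly 14x Earth's CO2 partial pressure; the showstopper is the
# ~610 Pa total pressure, far below what plants can tolerate.
```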
With the lack of viable soil, the most popular strategies for growing food on Mars forego soil altogether in favor of hydroponics. Hydroponics eliminates the soil issue, but raises its own concerns regarding water use and nutrient supply. Even with high rates of recycling, water scarcity will be an issue on Mars, and growing food hydroponically will place further stress on that scarcity. While some hydroponic proponents report that hydroponics actually saves water, those assertions come from comparing hydroponics with flood irrigation in traditional fields rather than with drip irrigation; compared against drip irrigation, hydroponics uses slightly more water. Also, although soil is not used, a special nutrient mixture is required, and it may be difficult to mass-synthesize this mixture on Mars once the initial supply is consumed without some feedstock that must either be produced on Mars or sent from Earth.
Another option is aeroponics, which attempts to optimize plant growth by spraying a pressurized, nutrient-doped water mist over the plant's entire exposed root system. One of the chief reasons aeroponics succeeds is that it does not require soil, which can introduce growth inefficiencies through poor drainage or a lack of porosity that limits root aeration. NASA-sponsored work has even suggested that aeroponic food production using an ultrasonic misting technique can match conventional yields at 45% greater growth rates while using 99% less water and 50% fewer nutrients. However, this conclusion must be tempered by the fact that the comparison is more than likely (it is not specified) against crops raised with flood irrigation and fertilizer saturation, two common yet incredibly inefficient agricultural techniques; the actual benefits of aeroponics over more responsible farming are more muted.
The most significant detriment to aeroponics under normal conditions is a higher probability of pathogenic plant death due to root exposure, but this concern is somewhat mitigated by the naturally aseptic environment on Mars, which limits the absolute probability of exposure. Additional sanitary measures can be added to an aeroponic system to limit contamination from colonists. A secondary problem may be synthesizing additional nutrient compounds for the mist, for traditional farming develops nutrients from organic compounds and bacteria.
Significant research by NASA and NASA-sponsored outside researchers since the early 1990s has produced several effective water-droplet nebulizer technologies and a low-mass polymer aeroponic apparatus.4 Inflatable growth chambers have also been developed for growing plants in space. That said, some argue that a dedicated growing area is unnecessary in a Martian habitat because aeroponic structures could be incorporated throughout the habitat, making more efficient use of available space. While some view aeroponics as the future of food growth in space, no serious long-term aeroponic experiments have been conducted there, so most of the supposed benefits remain theoretical. Note also the lack of experimentation with such systems on Earth: none of the numerous "Martian simulation" experiments have extensively utilized aeroponics in an isolated environment to support food production. If aeroponics is a valid option for providing food on Mars, why have these simulation experiments failed to incorporate such a testable strategy?
Random deployment of aeroponic systems throughout the habitat also seems inefficient due to conflicting lighting requirements. Regardless of growth medium, plants benefit from exposure to specific wavelengths rather than standard white light. Monochromatic red and blue light have demonstrated positive effects on plant growth, and some positive results have been recorded for green, with effectiveness typically ordering red, then blue, then green.5 Therefore, all potential crops should be exposed to a red or blue light source, preferably from LEDs. However, consistent exposure to red or blue light during waking hours could have a detrimental effect on the crew. Due to this lighting conflict, as well as potential sanitation issues, it is advisable to localize food growth to isolated areas of the habitat, or to a future purpose-built habitat completely separate from the principal one.
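As a toy illustration of the scheduling conflict, a minimal sketch that counts the overlap between a hypothetical 16-hour red/blue photoperiod and hypothetical crew waking hours:

```python
def hours(start, end):
    """Hours of the day in [start, end), wrapping past midnight."""
    return {h % 24 for h in range(start, end if end > start else end + 24)}

crew_awake = hours(7, 23)        # hypothetical crew waking period
crop_photoperiod = hours(5, 21)  # hypothetical 16 h red/blue light cycle

overlap = crew_awake & crop_photoperiod
print(f"{len(overlap)} h/day of red/blue light during crew waking hours")
# 14 of 16 crew waking hours overlap the colored photoperiod -- if growth
# racks are spread through shared living space, the crew cannot escape it,
# which is one argument for the isolated growing area suggested above.
```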
One final consideration when deciding between hydroponics and aeroponics is yield versus available space. A properly designed aeroponic system can maximize space utilization within a habitat module by using walls and ceilings, whereas a hydroponic unit must compete for floor space that could be used for storage, manufacturing, sleep, leisure, etc. One way to alleviate this space problem would be to send two habitation modules to Mars, one acting as the living unit and one as a farming unit devoted to hydroponics. While the costs of such a plan would clearly be significant due to weight, success would allow special oxygen/CO2 customization of the farming unit, reducing the complexity of isolating farming and living functions within a single module. The farming unit could also be constructed on Mars using in situ resources to avoid weight-based travel complications.
Turning to the food itself: while it would be ideal to grow a wide selection of fruits, vegetables, nuts, etc. to boost morale through variety, for the first group of colonists the lack of viable Martian soil makes space the limiting factor, with water close behind. Therefore, it is important to identify foods that give the best "bang for the vitamin buck" with regard to growth space. As mentioned earlier, most food grown on site will require hydroponics or aeroponics, so growth method combined with space considerations will make it difficult to grow vining plants like tomatoes, cucumbers, peas and grapes. Large-surface-area or high-volume crops like corn, squash, melon and zucchini would also be ill advised. Due to the additional energy requirements of colonists, especially those actively exploring or building on Mars, a large source of complex carbohydrates should be grown; quality candidates include cassava, soybeans, sweet potatoes and lentils.
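One way to frame the "bang for the buck" question is energy yield per unit of growing area per day. The sketch below uses placeholder yields and cycle times, not measured hydroponic data; real figures would have to come from the kind of experiments advocated at the end of this piece:

```python
from dataclasses import dataclass

@dataclass
class Crop:
    name: str
    kcal_per_kg: float      # energy density of the edible portion
    kg_per_m2_cycle: float  # hypothetical yield per growth cycle
    cycle_days: int         # hypothetical seed-to-harvest time

    def kcal_per_m2_day(self):
        return self.kcal_per_kg * self.kg_per_m2_cycle / self.cycle_days

# Placeholder values for illustration only -- actual hydroponic/aeroponic
# yields on Mars would have to be measured experimentally.
candidates = [
    Crop("cassava", 1600, 4.0, 300),
    Crop("sweet potato", 860, 3.0, 120),
    Crop("lentil", 1160, 0.4, 100),
    Crop("broccoli", 340, 2.0, 80),
]
for c in sorted(candidates, key=Crop.kcal_per_m2_day, reverse=True):
    print(f"{c.name:>12}: {c.kcal_per_m2_day():.1f} kcal/m2/day")
```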
Of the possible carbohydrate options, the cassava root is attractive. One of its principal advantages is that it is significantly drought tolerant and grows well in sub-optimal soils, clear benefits in a water-uncertain environment like Mars, where any water savings helps and a non-optimal nutrient mix could become the norm. There are two types of cassava, sweet and bitter; while bitter is preferred on Earth for its enhanced pest deterrence, the lack of those pests on Mars would make sweet the better choice for a more appetizing meal. Since cassava is grown to harvest the root, the leaves can be pruned early in the growth cycle to limit space use; however, if insects are also being cultivated, the leaves can serve as a secondary food source. The roots are good sources of calcium and phosphorus, critical elements for bone structure, as well as vitamin C.
In contrast to cassava, sweet potatoes are more finicky, requiring abundant light, warm temperatures (70-80 degrees F) and significantly more water. Most sweet potato varieties have some vining characteristics, which could create space issues, but bush-type varieties exist and should be used instead. Because of near-immediate consumption, sweet potatoes grown on Mars would not need curing, eliminating that processing step. Sweet potatoes provide significant amounts of fiber, beta-carotene, calcium, phosphorus and vitamin A. Overall it seems reasonable that the choice would come down to cassava versus sweet potato, with cassava offering a broader nutrient profile and sweet potato offering better flavor and higher concentrations of certain nutrients like vitamin A.
Lentils are an edible pulse of the legume family, widely grown throughout the world for their high protein and general nutritional content. Lentils contain the essential amino acids phenylalanine, valine, threonine, tryptophan, leucine, isoleucine, lysine and histidine, lacking only methionine, though some report that sprouted lentils contain methionine.6 In addition to this large essential amino acid complement, lentils provide significant amounts of fiber, folate, iron and vitamin B1. However, their preparation is more complicated than most foods, requiring long soaking in warm water to reduce phytate and trypsin inhibitor content. This additional use of water, beyond simple rinsing, may give pause to relying on lentils in the initial stages of a Mars mission.
Another quality option beyond the starchier ones above is broccoli. Broccoli is high in fiber, vitamin C, vitamin B2, pantothenic acid (B5), vitamin B6, folate (B9), manganese and phosphorus, along with numerous alleged anti-cancer and immune-regulatory compounds like selenium and diindolylmethane. A secondary advantage, beyond the high nutrient value, is that broccoli is resilient, grows quickly and is harvested easily. The one possible concern is that its leaves can become large, but they can be pruned. Currently there is little reason to exclude broccoli from the food options for Martian colonists.
Soybeans are commonly considered a quality choice for Martian food because they are a source of complete protein (a food containing significant amounts of all essential amino acids). However, there are some concerns. First, like lentils, soybeans must be cooked with "wet" heat to destroy trypsin inhibitors, which takes time and additional water. Second, modern cultivars typically reach a mature height of 3 to 3.5 feet, which could create space concerns depending on where the crop is planted, especially in hydroponic setups; if soybeans were grown, pruning would more than likely be required.
Keeping with the theme of green vegetables, spinach is another quality option. It is rich in lutein (good for the eyes), vitamins A, C, E, K, B2 and B6, magnesium, manganese, folate, betaine, iron, calcium and phosphorus, and it is a quality source of folic acid, which has been in rather short supply among the other candidates mentioned so far. The inclusion of peanuts is also an interesting possibility. Peanuts are high in fiber, folate, niacin (B3), phosphorus, vitamin E and magnesium, along with far more protein than the fruit and vegetable candidates provide. Some argue that growing peanuts hydroponically is difficult because of the burrowing flower stem; however, peanut pegs have successfully buried themselves in nutrient media and formed viable peanuts. Under normal conditions, then, there is nothing to be concerned about; whether Martian gravity changes that is unknown.
A brief note regarding genetically engineered crops: there are two schools of thought on including them. Proponents argue that it is advantageous to genetically engineer all of the seeds colonists bring to Mars for drought resistance, additional vitamin synthesis (e.g. vitamin A in golden rice) and maximum photosynthetic efficiency; since hydroponics allows each plant to be semi-isolated, the possibility of cross-contamination is restricted if something goes wrong. Opponents argue that this isolation is rudimentary, and that a genetic failure would put the colonists at severe risk, leaving them entirely dependent on food from Earth. Logically it makes sense for colonists to avoid homogeneity by carrying a variety of seed types, some engineered and some not, and to plant accordingly.
This combination of plant products does not, however, completely meet all nutritional requirements: it is low in sodium and lacks nutrients of animal origin such as vitamin B12 and cholesterol, a common shortfall of plant-based diets. Sodium can be supplied in mineral form. If plant-based protein sources are judged insufficient, additional protein will have to be acquired elsewhere. Raising large animals like cows and chickens is unreasonable given the resource demands, so insects and fish are the appropriate animal food sources in a space agro-ecosystem, given the limited area available for rearing and the need to use other resources efficiently.
Muscular atrophy in a reduced-gravity environment is a persistent problem. Skeletal muscles principally involved in maintaining posture are the most negatively affected by reduced gravity because they evolved to balance a body under Earth's gravitational acceleration of 9.8 m/s^2. Slow-twitch muscle fibers appear more susceptible to the change in gravitational force than fast-twitch fibers.7,8 This difference in degradation is troublesome because slow-twitch fibers are associated not only with posture but also with muscular endurance. In addition to muscle atrophy there is a serious drop-off (>50%) in protein synthesis rates and a significant loss of calcium balance.9-11 Whether this calcium loss stems from direct losses or from indirect absorption losses (e.g. a lack of vitamin D) is unknown. Therefore, to improve the odds of limiting muscle loss, colonists will require a constant supply of protein.
One key advantage of insects is that they can be fed substances inedible to humans that are byproducts of other processes. Two of the most promising candidates are the silkworm (Bombyx mori) and common termites, which survive on mulberry leaves and on cellulose or lignin respectively. The silkworm is the better choice: it cannot escape its rearing room to become a nuisance, it produces a useful byproduct in its silk cocoon, and colonists can consume part of its principal food source (the berries of the mulberry plant). Termites appeal mainly to those who plan to incorporate wood into colony construction, a strategy that does not appear effective in versatility or overall usefulness. Given the silkworm's obvious advantages as both a protein source and a secondary material source, all insect rearing should focus on silkworms.
Additional protein can come from aquaculture of suitable densities of small fish. Ideal water quality cannot be expected, so the selected fish must be able to survive periods of high toxicity or salinity. The fish must also have a small maximum size to avoid resource over-consumption through overcrowding; in most situations harvesting would occur often enough that overcrowding should not be an issue, but it pays to be careful. With these conditions in mind, the two best candidates appear to be loach and tilapia, due to their ability to withstand poor water quality, high salt concentrations and limited water availability.
Another option for a more advanced colony is an aquaponic system, in which plants grow with their roots immersed in the nutrient-rich effluent water of an aquaculture. The plants filter out ammonia and other toxic metabolites that could damage the aquatic life, and the water is then returned to the aquaculture pool. There are many types of aquaponic systems, but deep-water raft seems best suited for Mars due to its simplicity, low power requirements and greater flexibility in staggering germination, since different plants grow at different rates.
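A crude daily nitrogen budget shows how the fish and plant sides of such a loop must be matched in size. The coefficients below are rough Earth-based aquaculture rules of thumb and the feed rate is hypothetical:

```python
# Crude daily ammonia-nitrogen budget for a deep-water raft aquaponic loop.
# Coefficients are rough Earth-based rules of thumb, not Mars-validated data.
FEED_KG_PER_DAY = 0.5         # hypothetical daily fish feed
TAN_PER_KG_FEED = 0.03        # ~30 g total ammonia nitrogen per kg of feed
UPTAKE_G_N_PER_M2_DAY = 0.8   # rough plant nitrogen uptake per m2 of raft

tan_produced_g = FEED_KG_PER_DAY * TAN_PER_KG_FEED * 1000
raft_area_needed = tan_produced_g / UPTAKE_G_N_PER_M2_DAY
print(f"TAN produced: {tan_produced_g:.0f} g/day")
print(f"Raft area to absorb it: {raft_area_needed:.1f} m2")
# ~15 g/day of ammonia nitrogen would need roughly 19 m2 of planted raft;
# undersizing the plant side lets toxic ammonia accumulate in the fish tank.
```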
Some also argue that including algae, either hydroponically or aquaponically, would be a boon to food production. One of the most powerful reasons to include algae is that it can form a closed ecological cycle: add algae to an environment with water, CO2 and energy (a light source), and such a system can theoretically keep a person supplied with food and oxygen for as long as it is maintained.
Some regard Spirulina (a cyanobacterium often called blue-green algae) as an ideal health food and hope its positive traits carry over as a food for Martian colonists. Its inherent advantages are that it is easy to digest due to a lack of cellulose, contains a large complement of vitamins (vitamin C excepted) and eight of the nine essential amino acids, and has a high protein content by weight (55-65%). However, there are drawbacks: it readily absorbs environmental contaminants like radionuclides and heavy metals, cultures can be contaminated with toxins such as anatoxin, and its high nucleic acid content can lead to gout if more than roughly 50 grams are consumed in a day. It also has an unappetizing green-slime texture and taste. While that last negative should not matter in a survival situation, psychologically there is a high probability that eating Spirulina day after day will have a negative effect.
Apart from preparing an appropriate growing area and selecting crops, a strategy is needed to manage organic waste from both humans and plant matter. Unfortunately the lack of available oxygen on Mars significantly limits the options, reducing the effectiveness of traditional composting and making it difficult to select as a viable strategy. Some argue for the use of Geobacter, an anaerobically respiring bacterial genus that can oxidize organic substances using iron oxides and can even generate electricity as a byproduct. However, while iron oxides are available on Mars, their extraction requires work, human or machine, which adds another task to colonization.
Others have argued that hyper-thermophilic aerobic bacteria may be the best option for eliminating organic waste in an 80-100 degree C environment.12 Essentially the colonists would run a small autoclave seeded with these bacteria, decomposing organic matter and eliminating harmful organisms residing in the waste. The waste heat from the autoclave could also be released into the living environment to briefly reduce electricity demand, or used to distill water. The problem with this strategy is its oxygen requirement: for a long time oxygen will be in short supply on Mars, so diverting it to waste removal may not be prudent. Overall the best strategy appears to be using Geobacter as the principal means of waste elimination.
In the end it is important for Mars simulation experiments on Earth to study how the best initial food choices would grow under similar conditions, gravity changes aside. Unfortunately, current food provisioning in these simulation experiments is too generous to be representative. While there will be some initial variety from the food transported with the colonists (albeit mostly or entirely dehydrated or freeze-dried due to the travel time between Earth and Mars), that initial supply will be consumed within 1-2 years, and less hardy, locally grown options will have to be relied upon for a significant period afterwards. This misrepresentation reduces the probability of collecting accurate information on how biological functions change over time and how colonists would adjust to consuming significantly fewer calories.
The next Mars simulation study should bring only a small amount of food and focus on successfully growing broccoli, peanuts, sweet potatoes, soybeans and spinach in Mars-like conditions using hydroponic and aeroponic systems. The information from such an experiment is far more important to a successful colonization mission than simple isolation/psychological experiments, because those selected for Mars will be able to handle the psychological aspects of colonization; they will not be able to handle starving to death.
--
Citations –
1. Hender, Matthew. “Colonization: a permanent habitat for the colonization of Mars.” 2010. http://digital.library.adelaide.edu.au/dspace/handle/2440/61315
2. Wikipedia Entry – Meal, Ready-to-Eat (MRE)
3. Wieten, Jesse. “Dutch researcher says Earth food plants able to grow on Mars” Mars Daily. Jan 21, 2014. http://www.marsdaily.com/reports/Dutch_researcher_says_Earth_food_plants_able_to_grow_on_Mars_999.html
4. Clawson, James Sr. Aeroponics.com. January 1, 2012. http://www.aeroponics.com/aero43.htm
5. Kim, H, et al. “Green-light supplement for enhanced lettuce growth under red and blue-light emitting diodes.” HortScience. 2004. 39(7). 1617-1622.
6. Wikipedia Entry – Lentil
7. Narici, M, and de Boer, M. “Disuse of the musculo-skeletal system in space and on earth.” Eur J Appl Physiol. 2011. 111(3):403-20.
8. Fitts, R, Riley, D, and Widrick, J. “Functional and structural adaptations of skeletal muscle to microgravity.” J Exp Biol. 2001. 204(18):3201-8.
9. Schollmeyer, J. “Role of Ca2+ and Ca2+-activated protease in myoblast fusion.” Exp Cell Res. 1986. 162(2):411-22.
10. Barnoy, S, Glaser, T, and Kosower, N. “Calpain and calpastatin in myoblast differentiation and fusion: effects of inhibitors.” Biochim Biophys Acta. 1997. 1358(2):181-8.
11. Haddad, F, et al. “Atrophy responses to muscle inactivity. I. Cellular markers of protein deficits.” J Appl Physiol. 2003. 95(2):781-90.
12. Kanazawa, S, et al. “Space agriculture for habitation on Mars with hyper-thermophilic aerobic composting bacteria.” Space Agriculture Task Force.
The Psychology of Lying and Capitalism
Supporters of capitalism tend to espouse its greatness in large part by intertwining it with the morality and fairness of meritocracy. Frequent are the statements that the rich are rich because they deserve it and the poor are poor because they deserve it. Unfortunately these supporters forget that a meritocracy does not function in a vacuum: one of the most critical elements of a genuine meritocracy is the accurate and transparent communication of information. In short, actors in a society must be honest and truthful with one another, otherwise natural skill is muted by misinformation and the liars and cheaters prosper.
How can an individual expect to make the best possible decisions, not only for that individual but for all parties involved, when certain pieces of information are purposely omitted or falsely represented? Effective cooperation also depends on information being as accurate as the circumstances allow; otherwise all parties will lie out of the belief that everyone else is lying to improve their own position and probability of gain. All of this mistrust and false information means the optimal solution is rarely identified and implemented. Sadly, honesty in modern times is significantly lacking, which damages the notion that capitalism has some meaningful quality of fairness rooted in merit. So to redeem capitalism one must ask why people lie in the first place, and then address those causes to reduce the probability that they do.
The overall rationale for lying is somewhat mysterious. While there may initially appear to be rational reasons for lying, on examination the rationality falls apart in all but one situation. The first reason someone lies is to impress someone. Typically the need to impress derives from a lack of self-confidence in one's own abilities and status, for the confident care little about negative opinions not rooted in fact. Lying about one's accomplishments to gain someone's respect is also irrational because, cliché as it sounds, respect gained through deceit has not genuinely been gained; respect must be earned, and genuine respect cannot be acquired through deception. Unfortunately too many people have come to mistake this fraudulent version of respect for the genuine article.
Similarly, lying to attain someone's admiration is equally irrational because, idealistic as it sounds, no individual's admiration is worth the honor lost in lying to attain it. If one lives according to one's own beliefs and ethics, there is no reason to lie about the way one lives. Unfortunately, in modern times relying on honor and integrity to drive behavior seems foolish, especially with the amounts of money and power typically at stake. These are what drive most people to misrepresent themselves, so the best deterrent is punishment that not only removes those opportunities but also makes future opportunities for money and power harder to acquire. Society is too lenient on those who lie intentionally and too willing to offer unconditional second, third and fourth chances. The logic behind appropriate responses to dissuade lying is discussed later.
The second reason someone lies is to evade responsibility for specific actions. The motivation stems from two causes. First, no one likes to be punished, so the facts are misrepresented to evade punishment. Second, some believe that taking responsibility for an action with an unfavorable result diminishes their reputation in the eyes of those who influence their personal or professional prospects, reducing future opportunities for advancement. Immaturity and a lack of personal pride are significant causes of such behavior. However, there is another element of how individuals judge success that needs to be addressed, because it is relevant to this motivation for lying.
Most individuals incorrectly judge the success of an action solely by its outcome rather than by the thought and methodology behind its creation and execution. Sporting events demonstrate this reasoning frequently. Suppose a football game is in the mid-fourth quarter with Team A leading Team B 10 to 6, but Team B has the ball on its own 42-yard line on fourth down with 6 yards to go. In one instance the coach elects to go for it with a fake punt and picks up the first down. What is the typical response of the television commentators? “What a gutsy call by Coach Stan. He knew his team needed a spark and that Team A would be unprepared. That is just trusting your team to go get you the first down….” In a parallel world, assume the exact same conditions, individuals and play, except one linebacker makes an assignment mistake, fills the A gap instead of the B gap, makes the tackle and prevents the conversion. Now what do the commentators say? “What a stupid call by Coach Stan. There is plenty of time left in the quarter and your defense is playing exceptionally well; the smart play is to punt, pinning Team A deep in their zone. That gives your defense an excellent chance to hold, netting you better field position and a better opportunity to get that touchdown….”
So the exact same situation and action, differing by a single minute deviation that Coach Stan could not have accounted for, produces two dramatically different judgments from the television commentators, and probably from the casual fan as well. It is not reasonable to characterize Coach Stan's decision to go for it as good or bad based principally on the outcome, yet the result of an action is the primary, if not only, criterion most people use. Society instead needs to be reasonable in evaluating decisions, looking at the methodology and the information available at the time rather than at the result.
A majority of the time the result of a decision can be predicted from the information available at the time and from how the decision-makers interpreted and combined the relevant pieces of that information into a strategy. Therefore, outside the rare occurrence of an unpredictable disturbance, it is significantly more appropriate and advantageous to analyze this process, rather than the result, when judging an action. Understanding the correct and incorrect portions of the analysis establishes precedent for which thought processes typically lead to positive outcomes and which lead to negative ones, so mistakes are not repeated and successful strategies are emphasized.
Changing this habit of evaluating decision-making should reduce both the effectiveness of, and the rationale for, lying to avoid punishment over a bad result. Instead, punishment would be issued for poor decision-making and analysis, the elements completely within an individual's control, so long as information is honestly and transparently available. Also, if methodology rather than result is judged, the evaluation has more trackable parameters, making it harder to get away with lying about one's role in the decision-making process.
Finally, an individual may lie to protect another person, physically or emotionally. Most of these lies are classified as “little white lies” based on their perceived insignificance. From an emotional standpoint, while sparing feelings or boosting an ego may seem like a good idea, it is not, because lying in this situation is both morally and rationally incorrect: it perpetuates a false reality for the person asking and removes the motivation to solve the underlying problem, doing that person a disservice. It is wiser to give an honest opinion with the reasoning behind it, so the other person has the opportunity to rectify whatever produced the negative opinion, or to take genuine confidence from a genuinely positive one.
The only lie that can be regarded as appropriate is one that prevents unjustified severe physical harm to an individual. The most prominent example comes from individuals in Nazi Germany who hid Jewish people: when asked by German authorities whether they were harboring Jews, these individuals lied and answered “no,” thereby saving the lives of those they were hiding. If some honor can be ascribed to the act of lying, this is it. However, the severity of this situation must be respected; it is quite rare, and most people, especially in the developed world, will never experience this type of justification. It is not, for example, appropriate to lie to prevent someone from receiving a citation for marijuana possession.
In addition to being truthful, it is the duty of all individuals to seek the truth. Seeking the truth is not as dramatic as uncovering the meaning of life or the origins of the universe; it simply involves accumulating as much relevant information as possible and using it to determine what actually happened, and what should happen if and when certain decisions are made.
One of the problems with lying is that it appears inherent in the human condition. Even without instruction, young children lie to conceal actions they believe are wrong or out of line with the desires of authority figures, and as they age they become even more sophisticated liars. While this behavior is troubling, some believe it can be rectified through simple storytelling. Researchers in Toronto used the moral messages of three different stories, Pinocchio, The Boy Who Cried Wolf, and the George Washington cherry tree myth, to attempt to modify the behavior of children ages three to seven who heard one of the stories before completing a task in which cheating was made easy and rewarding.1 The study concluded that messages about the negative consequences of lying failed to modify behavior regardless of severity; however, the positive praise young Washington receives after admitting his transgression did lead to a higher probability that children would admit to cheating.
The researchers concluded that the Washington story emphasized the virtues of honesty and the idea that telling the truth leads to positive outcomes. This belief was further supported by the absence of any behavioral change when the Washington story's outcome was altered from positive to negative, mirroring the Wolf and Pinocchio stories. However, there is a glaring concern with this study relative to genuine morality: the researchers did not examine how children would respond to a story in which an individual is praised for truthfully admitting a transgression but also punished for the transgression itself; this is the outcome that occurs in reality.
In the Washington cherry tree story, for example, Washington is not punished for chopping down the tree. It is not surprising that children resonate with the message that if you do something wrong, all you have to do is admit it: there will be no further punishment and you will receive praise for being honest. It is therefore unclear how children would respond if those beliefs were shattered by being punished for what they admit to doing. Children should be taught that there is virtue in admitting transgressions, but also to expect an appropriate punishment for them. One thing both children and adults understand is fairness.
The Toronto study directly addressed whether stories could morally influence a child's behavior, not whether honest admission of wrongdoing is the best moral strategy. Clearly it would be best if children were instructed not to commit wrongdoing in the first place. That is easy to say and difficult to practice: society contains a multitude of conflicting moral philosophies, and even the supposed “no-brainers” of morality, such as prohibiting slavery and ensuring sufficient access to resources, are not agreed upon.
So what can be done? The problem sounds very difficult, but the solution is simple: society as a whole should stress the importance of honesty in both word and action, and back that importance with significant punishment for those who are not honest. Cliché though it is, the inherent nature of lying is that individuals lie because they realize the truth is detrimental to their standing in some manner. If one's personal moral philosophy is truly superior or logical, there should be no need to lie in its defense. Therefore, those who lie should be regarded as weak individuals who lack confidence in their beliefs, since they must distort those beliefs when interacting with society.
Society must also strengthen its resolve against dishonesty. One example of the current disparity regarding honesty is the ridiculously lenient perjury laws in basically every country other than Australia. In the United States the maximum sentence for perjury can be up to 5 years, but jail sentences rarely exceed 2 years; what, then, stops an individual from simply lying to avoid other criminal charges carrying harsher penalties? The sad fact is nothing, which is why so many individuals guilty of criminal offenses plead not guilty and then lie about their involvement. Even if the lie is discovered after the lying party has won at trial, the penalty is so pathetic, usually amounting to time served, that the consequences are minimal, and because of the 5th Amendment's double jeopardy protections nothing can be done about the not-guilty verdict achieved through deceit. Perjury must carry a much stiffer criminal penalty if honesty is to have any real relevance in the justice system.
Another example, one that occurs all too frequently, is that when someone is revealed to have lied on a resume, the only typical consequence is termination. This punishment does not support the value of honesty: the only consequence is losing a job the fired individual originally believed he/she was unqualified to compete for, which was the motivation for lying on the resume in the first place. When the penalty for lying is nothing more than the expected outcome of having been honest, there is great motivation to lie. Therefore, a company that fires an employee for lying about something significant on a resume should be able to legally recoup all salary and benefits paid over the period of employment.
Overall, modern society has regrettably accepted the philosophy that the ends justify the means, which allows individuals to lie and cheat so long as it produces a favorable result for them. Lying and cheating undermine the alleged meritocratic elements of capitalism, for it is not skill that allows such individuals to achieve greatness but the ability to cheat the system. For capitalism to even approach a meritocracy, society must demand more honest accountability from its citizens and authorities; dishonesty must be punished accordingly rather than lightly scolded and quickly forgotten. Society will never attain anything close to its full potential until it legitimately accepts that “honesty is the best policy.”
--
Citations –
1. Herbert, Wray. “Chopping the Cherry Tree: How Kids Learn Honesty.” Huffington Post. http://www.huffingtonpost.com/wray-herbert/chopping-the-cherry-tree_b_5240579.html
How can an individual expect to make the best possible decisions, not only for that individual, but all parties involved when certain pieces of information are purposely omitted and/or falsely represented? The ability to effectively cooperate with one another is also dependent on all of the information being as accurate as possible when considering the specific circumstances else all parties will lie due to the belief that all other parties are lying to improve their own positions and probability of gain. All of this mistrust and false information will lead to the optimal solution rarely being identified and implemented. Sadly honesty in modern times is significantly lacking, which damages the notion that capitalism has some meaningful quality of fairness based in merit achievement. So to redeem capitalism one must ask the question of why people lie in the first place and then address those causes to reduce the probability that people lie.
The overall rationality for lying is somewhat mysterious. While initially there may be rational reasons for lying, when actually examining those reasons the rationality falls apart in all but one situation. The first reason someone lies is in order to impress someone. Typically the need to impress someone is derived from the lack of self-confidence in one's own abilities and status, for those who are confident care little of the negative opinions of others that are not rooted in fact. Lying about one's accomplishments to impress someone for the purpose of gaining his/her respect is also irrational because as cliché as it sounds if one must lie to gain an individual’s respect that respect has not been genuinely gained, for respect must be earned; to gain genuine respect through deceit is not possible. Unfortunately too many people have come to portray this fraudulent version of respect for the genuine article.
Similar to respect lying to attain someone’s admiration is equally irrational because as idealistic as it sounds the admiration of no individual is worth the honor that is lost when lying to attain it. If one lives life according to his/her own beliefs and ethics that individual has no reason to lie about the way he/she lives life. Unfortunately in modern times talk about relying on honor and integrity to drive behavior seems foolish, especially with the level of money and power typically at stake. These elements are what drive most people to misrepresent themselves to others, thus the best way to stop it is to produce punishment that will not only take these opportunities away, but also make it more difficult to acquire future opportunities for money and power. Society is too lenient on those who lie intentionally and too willing to offer unconditional second, third, fourth, etc. chances. Later the logic behind the appropriate responses to dissuade lying will be discussed.
The second reason someone lies is in order to evade responsibility for specific actions. The motivation to evade responsibility stems from two causes. First, no one likes to be punished, thus misrepresentation of the facts is carried out to evade punishment. Second, some people believe that taking responsibility for an action that does not produce a favorable result diminishes their reputation in the eyes of those with some level of influence in the personal or professional prospects of that individual. This loss of respect may reduce the number of available opportunities for personal or professional advancement. Immaturity and lack of personal pride are significant causes of such behavior. However, there is another element to how individuals judge success that needs to be addressed because it is relevant to this motivation for lying.
Most individuals, incorrectly, judge the success of an action based solely on its outcome instead of the thought and methodology that went into the creation and execution of the action. A primary example of this reasoning is frequently demonstrated in sporting events. Suppose a football game is in the mid-fourth quarter with Team A leading Team B 10 to 6, but Team B has the ball on their own 42-yard line and it is fourth down and 6 to go. In one instance the coach elects to go for it by faking a punt and picks up the first down. What is the typical response of the television commentators? “What a gutsy call by Coach Stan. He knew that his team needed a spark and that Team A would be unprepared. That is just trusting your team to go get you the first down….” In a parallel world assume the exact same conditions, individuals and play except for one linebacker makes an assignment mistake and fills the A gap instead of the B gap resulting in him making the tackle and preventing a successful conversion. Now what do the television commentators say? “What a stupid call by Coach Stan. There is plenty of time left in the quarter and your defense is playing exceptionally well, the smart play is to punt it pinning Team B deep in their zone. That gives your defense an excellent chance to hold netting you better field position and a better opportunity to get that touchdown….”
So the exact same situation and action results in two different outcomes due to a single minute deviation that could not be accounted for by Coach Stan creates two dramatically different judgments by the television commentators and probably the causal fan as well. It is not reasonable to characterize the decision by Coach Smith to go for it as good or bad based principally on the outcome, yet the result of an action is the primary, if not only, criteria most people use. Instead society needs to be reasonable in its evaluation of decisions and look at the methodology and information available that went into making the decision instead of the result.
A majority of the time the result of a decision can be predicted when looking at the available information at the time of the decision and how the decision-makers interpreted and blended the relevant pieces of that information together to form a strategy. Therefore, outside of the rare occurrence when an unpredictable disturbance arises in the decision-making process, it is significantly more appropriate and advantageous to analyze this process instead of the result when judging action. Understanding the correct and the incorrect portions of the analysis can create precedence to what thought-processes typically lead to positive outcomes and what ones typically lead to negative outcomes, so one can learn not to repeat mistakes and emphasize successful strategies.
Changing this perception and habit in evaluating decision-making should reduce both the effectiveness and the rationality behind lying to avoid punishment for a bad result. Instead punishment will be issued for poor decision-making and analysis because these are the elements that are completely controllable by the individual, as long as information is honestly and transparently available. Also if methodology is judged versus result then the evaluation method has more parameters that can be tracked making it more difficult for someone to get away with lying about their role in the decision making process.
Finally, an individual may lie in order to protect another person either physically or emotionally. Most of the time these types of lies are classified as “little white lies” based on their overall perceived significance. From an emotional standpoint while it may seem that sparing feelings or creating an ego boost is a good idea, it is not because lying in this situation is both morally and rationally incorrect. It perpetrates a false reality for the individual asking the question and does not create motivation to solve the underlying problem doing a disservice to that individual. It would be wiser to give an honest opinion to the other individual with an associated reason for such an opinion, so he/she would have the opportunity to rectify any procedure or action which brought about the negative opinion or receive genuine confidence from a genuine positive opinion.
The only lie that can be regarded as appropriate is one that prevents unjustified severe physical harm to an individual. The most prominent example of such a lie could be seen from individuals in Nazi-Germany when hiding Jewish individuals. When asked by German authorities whether or not they were harboring Jews, these individuals lied and answered “no” thereby saving the lives of those Jewish individuals they were harboring. If some honor can be ascribed to the act of lying this would be the reason. However, the severity of this situation must be respected, for it is quite rare and most people, especially those living in the developed world, will never experience this type of justification for a lie. For example it is not appropriate to lie to prevent someone from receiving a citation for marijuana possession.
In addition to being truthful, it is the duty of all individuals to seek the truth. Seeking the truth is not so dramatic as uncovering the meaning of life or the origins of the universe; instead it involves accumulating as much relevant information as possible and using it to determine what actually happened and what should happen when and if certain decisions are made.
One of the problems with lying is that it appears to be inherent in the human condition. Even without instruction young children lie to conceal actions they believe are wrong or out of line with the desires of authority figures. Not surprisingly, as they age children become even more sophisticated liars. While this behavior is troubling, some parties believe it can be rectified through simple storytelling. Researchers in Toronto used the moral messages of three different stories, Pinocchio, The Boy Who Cried Wolf and the George Washington and the Cherry Tree myth, to attempt to modify moral behavior in children ages three to seven who listened to one of these stories before completing a task where cheating was made easy and rewarding.1 The study concluded that the negative consequences associated with lying were unable to modify behavior regardless of their severity; however, the positive praise young Washington receives after admitting to his alleged transgression did lead to a higher probability that children would admit to cheating.
The researchers concluded that the Washington story emphasized the virtues of honesty and that telling the truth leads to positive outcomes and consequences. This belief was further supported by the absence of any change in behavior when the outcome of the Washington story was altered from a positive one to a negative one similar to the Wolf and Pinocchio stories. However, there is a glaring concern with this study relative to genuine morality. The researchers did not examine how children would respond to a story where an individual is praised for telling the truth about committing a negative transgression, but is also punished for the admitted transgression; this is the outcome that happens in reality.
For example, in the Washington-Cherry Tree story Washington is not punished for chopping down the cherry tree. It is not surprising that children would resonate with the message that if you do something wrong, all you have to do is admit to it and there will be no further punishment, only praise for being honest. Therefore, it is unclear how children would respond if these beliefs about honesty were shattered when they are punished for what they admit to doing. Children should instead be taught that there is virtue in admitting transgressions, but also to expect an appropriate punishment for those transgressions. One thing that both children and adults understand is fairness.
The Toronto study directly addressed whether or not a child could be influenced morally by stories and change behavior accordingly, not whether or not honest admission of wrongdoing is the best moral strategy. Clearly it would be best if children were instructed not to commit wrongdoings in the first place. While that is easy to say, it is more difficult to put into practice. There are a multitude of conflicting moral philosophies practiced by individuals in society. Even the supposed “no-brainers” of morality, no slavery and sufficient access to resources, are not agreed upon.
So what can be done? The problem sounds very difficult, but the solution is simple. Society as a whole should stress the importance of honesty in both words and actions and support that importance with significant punishment for those who are not honest. While clichéd, the inherent nature of lying is that individuals do so because they realize the truth is detrimental to their standing in some manner. If one’s personal moral philosophy is regarded as superior and/or logical, there should be no need to lie to defend it. Therefore, those who lie should be regarded as weak individuals who do not have confidence in their beliefs, because they must distort those beliefs when interacting with society.
Society must also strengthen its resolve against dishonesty. One example of the current disparity in society regarding the importance of honesty is the ridiculously lenient perjury laws in basically all countries other than Australia. While in the United States the maximum sentence for perjury can be up to 5 years, jail sentences rarely exceed 2 years, so what stops an individual from simply lying to avoid the numerous other criminal violations that carry harsher penalties? The sad fact is nothing, which is why so many individuals who are guilty of criminal offenses plead not guilty and then lie about their involvement. Even if the lie is later discovered after the lying party has won his/her trial, the penalty is so pathetic, usually ending up as time served, that the consequences are minimal, and because of the 5th Amendment’s protection against double jeopardy nothing can be done about the not-guilty verdict that was achieved through deceit. Therefore, perjury must carry a much stiffer criminal penalty if honesty is going to have any real relevance in the justice system.
Another example, one that occurs too frequently, is that when it is revealed that someone lied on his/her resume, the only typical outcome is termination from that job. This punishment does not support the value of honesty when the only consequence is losing the job, a job the fired individual originally believed he/she was not qualified to compete for, which created the motivation for lying on the resume in the first place. When the penalty for lying is nothing but the expected outcome that honesty would have produced in the first place, there is great motivation to lie. Therefore, a company that fires an employee for lying on his/her resume about something significant should be able to legally recoup all of the salary and benefits paid to that employee over the period he/she worked at that company.
Overall modern society has regrettably accepted the philosophy that the ends justify the means, which allows individuals to lie and cheat as long as doing so produces a favorable result. Unfortunately lying and cheating undermine the alleged meritocratic elements of capitalism, for it is not skill that allows the individual to achieve greatness, but the ability to cheat the system. Therefore, for capitalism to even approach a meritocracy, society must demand more honest accountability from its citizens and authorities. Dishonesty must be punished accordingly rather than lightly scolded and quickly forgotten. Society will never attain anything close to its full potential until it legitimately begins to accept that “honesty is the best policy”.
--
Citations –
1. Herbert, Wray. “Chopping the Cherry Tree: How Kids Learn Honesty.” Huffington Post. http://www.huffingtonpost.com/wray-herbert/chopping-the-cherry-tree_b_5240579.html
Wednesday, April 30, 2014
Changing the way drugs are patented
One of the big problems with drug research is the conflict between producing low-cost drugs that can manage or treat various medical conditions and allowing drug manufacturers to remain in business by ensuring a profit. These conflicting objectives are further stressed in third-world and developing countries that do not have robust middle-class populations able to afford high-priced medication or insurance that covers it. However, on the issue of drug company profits it must also be understood that drug development successes have become more difficult in modern times, and those successes must pay for the research and development of both the successes and the failures.
Originally, fully functional patents (not provisional patents) offered market exclusivity, sans negotiated licenses, for 17 years. However, patents stemming from applications filed on or after June 8, 1995 have an original term of 20 years from the filing date. In addition there are various options for extending the intellectual property protection of a patent based on how long the FDA takes to approve the drug for marketing and sale, or on whether the drug falls into a specific category of treatment.
Drug companies also have other “less genuine” strategies for extending market exclusivity of a drug, most notably “evergreening”. Evergreening typically involves making minute changes in drug formulation, such as switching between chiral forms (left-handed and right-handed isomers), different inactive components or specific hydrate forms. Both the initial time period and evergreening have been widely criticized by generic drug advocates looking to hasten the emergence of lower-cost options in the marketplace. Evergreening is viewed as especially troublesome because it is frequently regarded as gaming the system and a form of patent trolling: producing a weak secondary patent or change to prop up a nearly expired primary patent.
Unfortunately for generic drug advocates it is difficult to expect a significant change in the general length or structure of patent protections without a trade-off, because of the research and development costs associated with drug development. Generic drug manufacturers do not have to absorb the costs associated with drug discovery, which typically involves high-throughput active compound analysis and lengthy clinical trials. While the general costs associated with drug development are incredibly controversial, with some estimates ranging from the upper hundreds of millions of dollars to even billions of dollars and other estimates in the lower hundreds of millions, no rational person disputes that it costs at least tens to hundreds of millions of dollars to produce a new drug that fails in a phase 3 clinical trial, and hundreds of millions of dollars to produce a successful new drug. Therefore, drug manufacturers must have sufficient opportunity to neutralize those costs and other general overhead with revenue from their successes. While the principal creator of a drug can license production to another company, the lower prices that emerge, in combination with the time limit of the patent, greatly limit revenue and overall profitability.
Notwithstanding the fairness element in drug manufacturing profitability, waiting 15+ years for lower-cost versions of various new drugs is too long, especially in a world where bacterial resistance to existing antibiotics and the understanding of the biological mechanisms behind neurological diseases have advanced at a rapid pace. Therefore, a compromise needs to be reached that allows drug companies to recoup their R&D losses and produce sufficient revenue for future research, yet still hastens the arrival of low-cost generics in the marketplace. The best strategy may be to create a mandatory compulsory license in the patent system for drugs that automatically triggers after a shorter market exclusivity period.
For example, instead of a 20-year exclusivity period, what if drug patents provided 5 years of market exclusivity followed by a 10% royalty in perpetuity, derived from the price of all generics based on that patent? This strategy would allow lower-cost generics to enter the marketplace typically 15 years earlier than they do now. While this arrangement works better for those who want poorer individuals, in both developed and developing countries, to have access to pharmaceuticals, does it allow pharmaceutical companies to cover the costs of R&D and continued expansion? This is a difficult question to answer in all situations because there are various moving parts, but a general sense of the idea’s efficacy can be demonstrated through a general example.
Suppose company A produces an effective suppressive treatment, drug A, for condition A. Note that this treatment must be taken continuously over a period of time, i.e. it is not viewed as a “cure”. This characterization is not surprising because a vast majority of commonly consumed pharmaceutical drugs have this feature; one prescription of statins does not permanently reduce cholesterol. Therefore, because drug A has to be taken constantly, multiple prescriptions will be filled over the course of a year; for this example the number of prescriptions per year for drug A will be 4. However, it has also been noted that due to the cost of non-generic drugs certain consumers “split pills”, lengthening the total time required to consume a full prescription. Due to this behavior the example will assume that 3.5 prescriptions cover a single year for drugs under patent. The price of drug A while on patent is 150 dollars per prescription with a manufacturing cost of 20 dollars, yielding a profit of 130 dollars per prescription.
Typically, due to the high prices of on-patent drugs, the marketplace is limited to the more developed world. In this example the marketplace for drug A will be 900 million individuals, and if condition A has an occurrence rate of 2% that creates a potential customer base of 18 million individuals. Over time this customer base will increase due to other individuals developing condition A and current consumers remaining on drug A. This increase will equal 1% of the current customer base (i.e. last year’s customer base). As a new drug, drug A will not acquire its full market share in the first year of its introduction. The market share will be 5% for the first year, increasing to 10% for year 2, 20% for year 3, 40% for year 4 and leveling off at 70% for year 5 and beyond. For the current on-patent method it will be assumed that no other company will be licensed to manufacture drug A and that after the patent expires in 20 years drug A will be pushed out of the marketplace entirely by lower-priced generics.
In the newly suggested patent idea, after 5 years on patent a 10% royalty on all generics would activate. Access to generics would open the entire rest of the world to treatment of condition A by drug A (approximately another 6.1 billion individuals). Since a number of chronic conditions are influenced by excess food consumption and lack of exercise, it stands to reason that the occurrence rate for condition A in developing and third-world countries will be lower. For the purpose of this example the occurrence rate for these new potential consumers will equal 50% of the occurrence rate in the developed world (i.e. 1%). This scenario produces an additional 30.5 million potential consumers for drug A. Generic prices are significantly lower than on-patent prices, thus for this example the average generic price, which after 5 years will apply to all individuals, even those in the developed world, will be 80% lower (i.e. 30 dollars per prescription). Due to the lower price, consumers will not feel the need to “split pills”, so the recommended 4 prescriptions per year will be observed. It is assumed that charitable organizations and NGOs will assist low-income consumers with purchases in poor countries. However, due to continuing income gaps, existing alternative strategies adopted while drug A was under patent, and distribution concerns in various countries where market stability is in question, the global marketplace penetration of generic drug A will be only 50%.
Table 1 below summarizes the major features of each strategy:
Table 1 – Important Initial Condition Elements

Current Patent Method:
Initial Customer Base – 18 million
Years on Patent – 20
New Customers – 1% growth per year
Price – $150 per prescription
Occurrence Rate – 2%

New Patent Method:
Initial Customer Base – 18 million
Years on Patent – 5
New Customers – 30.5 million added after year 5, plus 1% growth per year
Price – $30 per prescription (generic price after year 5)
Occurrence Rate – 1% (new markets)
After analysis of the above scenario the major results are summarized in the below table:
Table 2 – Important Financial Results

Current Method:
Total Revenue after 20 years – $9,952,552,601
Total Revenue after 40 years – $9,952,552,601
Total Consumers per year after 20 years – 4,754,302
Total Consumers per year after 40 years – 0

New Method:
Total Revenue after 20 years – $5,211,049,656
Total Revenue after 40 years – $10,260,542,661
Total Consumers per year after 20 years – 74,872,229
Total Consumers per year after 40 years – 93,272,020
Breakeven Year – 39
Not surprisingly, after 20 years the current method produces more revenue for company A than the proposed method. However, after 40 years the proposed method produces more. After both 20 and 40 years the number of consumers aided by the new method is significantly larger. The most debilitating controllable factor preventing greater yearly revenue under the new method is market penetration. In the above example the 50% market penetration was viewed as conservative; if market penetration were 70% the breakeven year would fall between year 30 and year 31. Of course the nature of this example is heavily influenced by various assumptions, so this result cannot be taken as typical of all situations.
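For readers who want to experiment with these assumptions, below is a minimal Python sketch of the model described above. The exact aggregation used to produce Table 2 is not fully specified here, so the figures the sketch prints are indicative rather than a reproduction of the table; the market-share ramp, prices, margins and penetration values come from the example, while the treatment of the 10% royalty (10% of the $30 generic price per prescription) is an interpretation.

import itertools

def market_share(year):
    # Ramp-up of drug A's market share: 5%, 10%, 20%, 40%, then 70% thereafter
    return [0.05, 0.10, 0.20, 0.40][year - 1] if year <= 4 else 0.70

def current_method(years):
    base, revenue = 18_000_000, 0.0           # developed-world customer base
    for year in range(1, years + 1):
        if year <= 20:                        # 20 years of exclusivity
            revenue += base * market_share(year) * 3.5 * 130   # $130 margin
        base *= 1.01                          # 1% annual growth
    return revenue                            # zero revenue after generics win

def new_method(years):
    base, revenue = 18_000_000, 0.0
    for year in range(1, years + 1):
        if year == 6:
            base += 30_500_000                # generic era opens global market
        if year <= 5:                         # 5 years of exclusivity
            revenue += base * market_share(year) * 3.5 * 130
        else:                                 # 10% royalty on $30 generics,
            revenue += base * 0.50 * 4 * 30 * 0.10   # 50% global penetration
        base *= 1.01
    return revenue

for horizon in (20, 40):
    print(f"{horizon}-year revenue: current ${current_method(horizon):,.0f} "
          f"vs. new ${new_method(horizon):,.0f}")

Adjusting the penetration factor (0.50) or the royalty rate in new_method makes it easy to see how sensitive the breakeven year is to those parameters.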
Clearly the new method is superior for drugs, like antibiotics, that do not command a high initial price point. Another advantage of using this method for antibiotics is that while there is a small probability that human beings will one day no longer suffer from chronic conditions (high cholesterol, high blood pressure, erectile dysfunction, etc.), heavily reducing the need for those types of drugs, it is very unlikely that humans, sans becoming cyborgs, will escape pathogenic infection and the need for antibiotics. Therefore, this new method should work better than the current system for antibiotic research at long-standing companies.
It should be noted that at least two important elements, each with significant economic impact, were excluded from the above analysis due to a lack of specifics and pertinent information. First, the principal pharmaceutical developer, company A, will typically spend millions of dollars on direct-to-consumer advertising and hundreds of millions marketing to physicians. It is rational to expect that a disproportionate amount of these advertising dollars will be spent on the initial launch of the drug to create interest and market share. However, it stands to reason that at least 50% of the total money spent on advertising occurs between patent years 6 and 20. Under the new system this money would not be spent on advertising and could be directed towards other activities like future research.
Second, as discussed above, various pharmaceutical companies devote significant resources, both researcher labor and money, to increasing the length of their patents through evergreening. Under this new system pharmaceutical companies would not have the ability to extend the patent, and thus would not devote financial capital and man-hours to trying. The total value of these resources is unknown, but an appropriate estimate is likely in the millions of dollars in direct capital, plus the unknown opportunity costs of devoting valuable research staff to saving the patent on an expiring drug rather than researching a new one.
Another side benefit of this system should be a reduction of the counterfeit drug market. The principal reason counterfeit drugs are a desirable criminal enterprise is their high per-unit profit margins. However, with only five years of patent protection the longevity of the counterfeit marketplace is significantly eroded, making it financially risky for individuals to build a supply chain to forge these types of pharmaceuticals. Even if individuals do counterfeit these drugs, the overall potential for harm is significantly lessened because of the small patent window before safe generics can enter the market and replace the counterfeits.
However, one of the biggest concerns with this new suggestion is that while it will work for “blockbuster” drugs and should provide a boon to critical new antibiotic research, smaller-marketplace drugs, most notably those for orphan conditions, would be hurt due to their limited volume profitability potential. Both blockbuster drugs and antibiotics work in this new system because of the volume of individuals who will take these drugs over their decades-long lifespans. Drugs for orphan conditions by definition do not have a large volume of potential consumers. This lack of a customer base is why orphan conditions have typically been neglected for so long in general practice. Companies that do attempt to create drugs for these conditions are motivated largely by the ability to corner a market with small volume but large per-unit profit margins. Cutting patent protection for these orphan drugs by 75% would be devastating for their profitability, which would lead companies not to attempt to discover them in the first place. Therefore, if the above suggestion is incorporated into new patent enforcement, a special condition must be made for orphan condition research.
Changing the operational nature of patents cannot be taken lightly, especially in such a volatile market and given the critical role of pharmaceuticals in human longevity. The suggestion above attempts to address the two conflicting forces in the field by allowing drug developers to take advantage of their successes to produce the revenue necessary for additional successes, while also honoring the humanitarian morality that drives the desire to place quality pharmaceuticals in the hands of those in need at affordable prices. The above financial analysis identifies the 5-year exclusivity, lifetime 10% royalty patent idea as a viable alternative to the current patent structure. However, as noted above, an exception clause must be made for drugs being developed for orphan conditions, lest those diseases continue to be forsaken as financial losers. Overall, while some financial elements were only generally covered in the above analysis due to a lack of information and differing situations, it stands to reason from a logical perspective that changing the patent enforcement rules for pharmaceutical drugs could be a win-win for both global consumers and pharmaceutical companies.
Labels: Drugs, economy, Generic Drugs, Globalization, Patents, Research and Development
Tuesday, April 22, 2014
Restoring the Arctic
There are numerous environmental concerns surrounding the progression of human-derived global warming. One of the most pressing is the persistent loss of Arctic ice. Because a vast majority of global warming related heat is absorbed by the ocean, oceanic temperatures have increased regardless of location, with the Arctic receiving the greatest temperature increase due to its lower base temperature. This increase has been significant enough that the ice extent at the summer minimum, which consistently occurs in September, has declined at a net rate of 11% per decade since 1979, with a loss of 1.1 meters of mean ice thickness between 1980 and 2000.1,2 This thinning has produced a general shift in ice type from older multi-year ice to new single-year ice, with an overall replacement of about 40% of the thick, old multi-year ice by single-year ice.3 Coinciding with this empirical evidence, various global and regional climate models have predicted that the situation will only get worse in the future.4
The chief purpose of ice in the Arctic, from a global warming standpoint, is to increase ocean albedo due to its reflective surface versus the darker surface of the water itself. When sunlight strikes the transparent/white surface of ice a vast majority of it is reflected back into the atmosphere. When sunlight strikes the dark blue, sometimes black, surface of Arctic water a vast majority of the light and its associated heat content is absorbed by the ocean rather than reflected back into the atmosphere. On a general level this heat absorption is a positive feedback effect where the more heat absorbed the more ice melts leading to even more heat absorbed, etc. Normally the ocean and its system of currents operate as a heat sink to control surface and atmospheric temperatures; however, this new massive heat absorption reduces sink efficiency allowing more heat to remain in the atmosphere increasing the detrimental effects associated with global warming. A secondary effect is that greater amounts of ice melt will increase global sea level rise in the future placing more coastal and even slightly inland cities at risk as well as negatively affecting Arctic wildlife by eliminating “land” surfaces for hunting and habitation.
With these near-future negative environmental events born from a lack of Arctic ice, it is important to find and execute a methodology that would increase Arctic ice volume and longevity. The most obvious means of increasing Arctic ice would be to eliminate the human-derived excess heat, restoring the typical Arctic Ocean temperatures seen in the 1950s and 1960s and earlier. One means of accomplishing this goal is simply to reverse the actions that led to the heating. While reducing global carbon emissions is an important and critical step in addressing global warming, the realistic timetable for cooling the Arctic through carbon mitigation and then reliance on natural processes is still decades, if not more than a century, away. Based on the rate of melting, a more immediate solution will be required.
Recalling the albedo-heat feedback cycle from above, one method to break that cycle would be to increase the albedo of the ocean. Not surprisingly, it is nearly impossible to change the natural color of the ocean due to its size and natural mixing, thus changing ocean albedo will require human intervention to change the surface albedo of the Arctic Ocean. The easiest method is to mimic nature itself and increase surface ice by enhancing ice formation. Obviously enhancing ice formation will require large amounts of water; fortunately meeting this supply requirement is not a problem, for water can be taken from the ocean itself and re-deposited on existing ice.
One of the principal reasons this strategy works is that ice is a quality thermal insulator, which can increase the speed at which the deposited water freezes. In addition, nucleation may also play a role in this ice formation enhancement: ice-forming nuclei tend to trigger freezing of undercooled water droplets at higher temperatures when in solid contact rather than in liquid immersion.5-7 While the reason for this enhancement is unknown, it is suspected that there are thermodynamically favorable interactions at the air-water interface,8,9 making contact nucleation a manifestation of an enhanced surface nucleation rate.5 Basically, the liquid environment reduces the uniformity of the air-water interface, reducing the efficiency of nucleation. Another important influencing factor may be that nucleation near the surface is greater because of a greater freedom of motion, thus the kinetic rate coefficient is larger at the surface than in the bulk (regardless of whether that bulk is solid or liquid); this difference matters because the nucleation rate depends exponentially on the activation energy.5 Overall, the important point to take home is that water sprayed onto the surface of ice has a higher probability of freezing into new ice than water remaining adjacent to or beneath the ice (all things being equal).
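That exponential dependence means even a modest reduction in the surface activation barrier produces a large increase in nucleation rate. A toy calculation in Python illustrates the point; the barrier values are purely illustrative placeholders, not measured quantities:

import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 271.0                   # slightly undercooled seawater, K (illustrative)

# Hypothetical activation barriers for bulk vs. surface nucleation;
# placeholder values chosen only to show the exponential sensitivity.
barrier_bulk = 1.00e-19     # J
barrier_surface = 0.95e-19  # J (5% lower at the air-water interface)

ratio = math.exp((barrier_bulk - barrier_surface) / (k_B * T))
print(f"surface/bulk nucleation rate ratio: {ratio:.1f}")   # roughly 4x

Even a 5% lower barrier in this sketch yields roughly a fourfold higher nucleation rate, which is why surface contact matters so much.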
However, increasing ice formation will require managing the temperature increases that led to the reduced ice in the first place. There are two chief methods for addressing this temperature question. The first method is to take water from the ocean and run it through a heat exchanger to remove enough heat to produce an appropriate freezing probability. The chief drawbacks to this method are the energy required to operate the heat exchanger and the question of what to do with the heat absorbed from the water. The heat exchanger needs to be operated with an energy source that has a very small carbon footprint, otherwise the CO2 added to the atmosphere through this method will more than likely outweigh the benefits of adding more Arctic ice. In addition, the heat removed from the water must be stored properly, because if it is released to the environment it will enter either the atmosphere or the ocean, and either result would largely mitigate any advantage of increasing Arctic ice.
The second method involves drawing ocean water not from the surface, but from deeper water near the bottom of the thermocline where the average temperature is much lower. The weakness of the first method is its reliance on the heat exchanger and its energy demands. Unfortunately, while the second method eliminates the heat exchanger, it cannot eliminate the need for additional energy, because a pump is required instead. The open question is which method will require more energy. Overall, unless the first method is significantly more energy efficient, the second method should be favored because there is no excess heat to manage. While the power requirements for the pump and the eventual energy consumption are easy to calculate, experimentation will be required to identify the appropriate pumping rate, spray volume and spray angle.
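To illustrate how straightforward that power calculation is: the hydraulic power to lift water from depth is P = ρ·g·H·Q/η. A quick Python sketch, where the lift height, flow rate and pump efficiency are all assumed placeholder values:

rho = 1025.0    # seawater density, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2
H = 100.0       # assumed lift from below the thermocline, m
Q = 0.5         # assumed flow rate, m^3/s
eta = 0.70      # assumed pump efficiency

power_W = rho * g * H * Q / eta
print(f"hydraulic pump power: {power_W / 1000:.0f} kW")   # ~718 kW

With these placeholder numbers a single large spray unit would draw on the order of hundreds of kilowatts, which frames the scale of the low-carbon energy supply the strategy would need.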
An important secondary question is what should be done about the salt in the supply water. One possibility would be removing the salt, because salt decreases the freezing point of water, making it more difficult to form ice, and could even result in ice sheet perforation. An alternative strategy would be retaining the salt, which would strengthen down-welling currents when the ice melts. The best way to choose between these strategies would simply be to test this ice formation methodology without any salt removal and closely observe how the rate of secondary ice formation changes with current temperature and time of year. If the formation rate is not sufficient, then the salt will need to be removed.
If water cannot be used due to energy requirements, the other major option for changing the ocean surface albedo in an environmentally neutral manner is to cover the water surface with bubbles. One of the chief advantages of this second option is that bubbles require little energy to create, thus the operational costs for such a system are low.10,11 Bubbles increase ocean surface albedo by increasing the reflected solar flux, providing voids that backscatter light.10 In addition, the reflective behavior of bubbles can be modeled like that of aerosol water drops, because light backscattering is cross-sectional rather than mass or volume dependent and the spherical voids in the water column have the same refractive index characteristics.10 Note that ocean surface albedo varies with the angle of solar incidence. Common values are less than 0.05 at solar noon, below 0.1 at a 65-degree solar zenith angle, and a maximum, ranging from 0.2 to 0.5, at a solar zenith angle of 84 degrees.12-15 Based on this comparison information, the principal formula governing brightening is:
ΔF = ΔA × Io × So × (1 − Cf) × Tu × Td

where ΔF = change in brightening (reflected flux); ΔA = change in albedo of the water surface; Io = solar irradiance; So = cosine of the solar zenith angle; Cf = fraction of cloud cover; Tu = upwelling transmissivity; Td = downwelling transmissivity.10
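Plugging representative numbers into this formula gives a feel for the attainable brightening; all of the parameter values in this Python sketch are illustrative assumptions, not measurements:

import math

delta_A = 0.1                        # assumed albedo increase from bubbles
Io = 1361.0                          # solar irradiance, W/m^2 (top of atmosphere)
So = math.cos(math.radians(65.0))    # cosine of a 65-degree solar zenith angle
Cf = 0.6                             # assumed Arctic cloud cover fraction
Tu = 0.8                             # assumed upwelling transmissivity
Td = 0.8                             # assumed downwelling transmissivity

delta_F = delta_A * Io * So * (1 - Cf) * Tu * Td
print(f"brightening: {delta_F:.1f} W/m^2")   # roughly 15 W/m^2

Under these assumptions a 0.1 albedo increase reflects on the order of 15 W/m^2 of additional sunlight, a meaningful fraction of the absorbed solar flux at those latitudes.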
Experiments have already demonstrated the creation of hydrosols from the expansion of air-saturated water moving through vortex nozzles, which apply the appropriate level of shearing force to create a swirling jet of water.11 Also, by using an artificial two-phase flow, smaller microbubbles can be created, which can even result in interfacial films through ambient fluid pressure reduction.12 Microbubbles can possibly form these films because they typically last longer than visible whitecap bubbles, which rise and burst in seconds. Note that whitecaps are froth created by breaking waves and can increase ocean albedo up to 0.22 from the common 0.05-0.1 values.16
While whitecaps from waves and wakes do increase surface albedo, the effect is ephemeral. Microbubble lifespan can be influenced by local surfactant concentration, and fortunately the Arctic has a limited natural surfactant concentration, granting more control in the process of creating those bubbles (fewer outside factors that could unduly influence bubble lifespan). For example, if these bubbles are created through technological means, additional elements can be added to the reactant water, like a silane surfactant, that could add hours to the natural lifespan.17 Bubble lifespan is probably the most important characteristic for this form of ocean albedo increase, from both an economic and an efficiency standpoint. However, while most surfactants and other agents like glycerin are typically not environmentally detrimental, the massive amounts required for increasing bubble longevity may make their use economically and environmentally unsustainable.
Another method for creating microbubbles comes from biomedical engineering where microfluidic procedures and sonication are used to enhance surfactant monolayers to stabilize microbubble formation.18 However, there are two common concerns about this method. First, it is used primarily in a laboratory largely for diagnostic and therapeutic applications, not in the field; therefore there may be questions about transition, especially for the dramatic increase in production scale that will be required for Arctic use. Second, while sonication increases stabilizing time, it limits control of microbubble size distribution, which could limit the total reflectiveness of the bubbles.19,20
An expanded and newer laboratory technique, electrohydrodynamic atomization, generates droplets of liquids and applies coaxial microbubbling to facilitate control over microbubble size. Unfortunately, one concern with this technique is that, as mentioned above, the ideal bubble size is in the micron range, but the technique is currently only able to create single-digit millimeter-sized bubbles.18 However, the increased size may be offset by the increased stability of the bubble (less overall reflection, but a longer residence time). Comparison testing will be required to make the appropriate judgment.
The final method for increasing ice formation involves devising a piece of technology that can absorb excess heat from the Arctic Ocean. At first thought such an idea seems unlikely due to the size of the Arctic Ocean and its environmental inputs. However, it may not be as far-fetched as it seems. The key to making such a strategy viable is efficiency and scale within the utilized technology.
Scale is achieved through a design small enough to be produced at reasonable cost and reasonable speed. Efficiency is typically achieved by producing a device that is self-cycling, and thereby capable of autonomous operation. If human involvement is required beyond “pushing the start button”, then efficiency is significantly compromised. Take that efficiency loss in a single unit and multiply it by the number of units required for scale, and the result can be devastating in terms of both cost and viability.
If the objective is to withdraw heat from the ocean, the most important element in the device is the agent used to accomplish this task. Water has one of the highest heat capacities of any common substance, which is why it is used for cooling purposes in power plants, and that same property means removing heat from it could prove difficult. Fortunately there is promising research supporting the incorporation of zeolite as the heat-absorbing material. Zeolite is a mineral made up of SiO2 and various AlO2 groups plus alkali ions, and its crystalline structure is capable of adsorbing gaseous molecules, including water vapor. When zeolite adsorbs a gas it retains heat due to the adsorption enthalpy.21 In addition, because zeolite is commonly produced synthetically for use in molecular sieves and washing detergents, it is cheap (50-75 cents/kg) and environmentally neutral.21
A good example of how zeolite is used in heat absorption is the adsorption refrigerator. Adsorption refrigerators consist of two connected but independent vessels, the evaporator and the adsorber. The evaporator vessel acts as a quasi-vacuum containing only a liquid, usually water, and its vapor. When the valve connecting the two vessels is opened, the water vapor moves into the adsorption vessel and is adsorbed by the zeolite, reducing the vapor pressure. The drop in pressure drives further evaporation of the liquid water, which absorbs heat and produces the cooling effect. Eventually the zeolite becomes saturated, ceasing the heat transfer between the zeolite and the water. In the refrigerator model, the zeolite is later heated, driving off the adsorbed water, which is condensed and returned to the evaporator vessel.
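For a rough sense of scale, here is a back-of-envelope Python estimate of the heat one kilogram of zeolite could retain; the water uptake and adsorption enthalpy figures are assumed round numbers in the commonly reported range, not values taken from the cited work:

uptake = 0.25        # kg of water adsorbed per kg of zeolite (assumed)
h_ads = 3.8e6        # adsorption enthalpy, J per kg of adsorbed water (assumed)

heat_per_kg = uptake * h_ads   # J of heat retained per kg of zeolite
print(f"{heat_per_kg / 1e6:.2f} MJ retained per kg of zeolite")   # ~0.95 MJ

At roughly 1 MJ per kilogram and the 50-75 cents/kg material cost quoted above, the raw sorbent cost per unit of heat stored is modest; the engineering challenge lies in cycling and regenerating the material, as discussed next.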
However, the secondary functionality of the above refrigerator design, zeolite recovery through heating, is not applicable in an oceanic environment. The water and the resultant heat must be released from the zeolite so it can be reused, but this release produces excess heat, recreating the problem of the heat exchanger in the first strategy: there is no good place on the open ocean to store the heat without releasing it to the environment. One strategy to address this issue with a small movable device is for the device, once its zeolite becomes “full”, to return via a small battery-powered motor to a “mother” ship of sorts where the zeolite heat release process can be conducted. After restoring the zeolite to its rest state, the device can return to the Arctic to withdraw more heat. After sufficient time the “mother” ship will itself be “full” of heat and would return to a land base, most likely Iceland due to its geothermal infrastructure serving as both an energy source and a heat well, to properly off-load the heat stores. Granted, this method places some limits on overall efficiency due to the trips between the Arctic and the heat-releasing stopover points, but it is necessary to manage the heat problem.
In the end, the positive feedback associated with the warming-albedo reduction relationship is a legitimate threat to carbon mitigation and remediation strategies as a whole. Therefore, society needs to appreciate the time discrepancies associated with restoring colder temperatures to the Arctic Ocean in an effort to preserve Arctic ice, especially during the summer. A technology-based solution will be required. Three possible strategies have been presented above in general detail to attempt to break this warming-albedo reduction relationship. One advantage of all of these strategies is that they can be experimentally explored with little overall detriment due to their ephemeral nature; if the results are not as anticipated, the experiments can be stopped with little environmental or economic damage. Overall, something needs to be done about the increased rate of warming in the Arctic and the dramatically increased rate of ice loss if global carbon mitigation strategies are going to be fully effective at reducing the detrimental effects of global warming.
Citations –
1. Perovich, D, and Richter-Menge, A. “Loss of sea ice in the Arctic.” Annu. Rev. Mar. Sci. 2009. 1:417–441.
2. Rothrock, D, Percival, D, and Wensnahan, M. “The decline in Arctic sea-ice thickness: Separating the spatial, annual, and interannual variability in a quarter century of submarine data.” J. Geophys. Res. 2008. 113:C05003.
3. Kwok, R. “Observational assessment of Arctic Ocean sea ice motion, export, and thickness in CMIP3 climate simulations.” J. Geophys. Res. 2011. 116:C00D05.
4. Bjork, G, Stranne, C, and Borenas, K. “The sensitivity of the Arctic Ocean sea ice thickness and its dependence on the surface albedo parameterization.” Journal of Climate. 2013. 26:1355-1370.
5. Shaw, R, Durant, A, and Mi, Y. “Heterogeneous surface crystallization observed in undercooled water.” Journal of Physical Chemistry B Letters. 2005. 109:9865-9868.
6. Vali, G. In Nucleation and Atmospheric Aerosols; Kulmala, M., Wagner, P., Eds.; Pergamon: New York, 1996.
7. Pruppacher, H, and Klett, J. Microphysics of Clouds and Precipitation, 2nd ed.; Kluwer Academic Pub.: Norwell, MA, 1997. Chapters 7 and 9.
8. Djikaev, Y, et al. “Thermodynamic conditions for the surface-stimulated crystallization of atmospheric droplets.” J. Phys. Chem. A. 2002. 106:10247. doi:10.1021/jp021044s.
9. Tabazadeh, A, Djikaev, Y, and Reiss, H. “Surface crystallization of supercooled water in clouds.” PNAS. 2002. 99(25):15873-15878.
10. Seitz, F. “On the theory of the bubble chamber.” Physics of Fluids. 1958. 1: 2-10.
11. Seitz, F. “Bright Water: hydrosols, water conservation and climate change.” 2010.
12. Evans, J.R.G, et al. “Can oceanic foams limit global warming?” Clim. Res. 2010. 42:155-160.
13. Davies, J. “Albedo measurements over sub-arctic surfaces.” McGill Sub-Arctic Res Pap. 1962. 13:61–68.
14. Jin, Z, et al. “A parameterization of ocean surface albedo.” Geophys Res Letters. 2004. 31:L22301.
15. Payne, R. “Albedo of the sea surface.” J Atmos Sci. 1972. 29:959–970.
16. Moore, K, Voss, K, and Gordon, H. “Spectral reflectance of whitecaps: Their contribution to water-leaving radiance.” J. Geophys. Res. 2000. 105:6493-6499
17. Johnson, B, and Cooke, R. “Generation of Stabilized Microbubbles in Seawater.” Science. 1981. 213:209-211
18. Farook, U, Stride, E, and Edirisinghe, J. “Preparation of suspensions of phospholipid-coated microbubbles by coaxial electrohydrodynamic atomization.” J.R. Soc. Interface. 2009. 6:271-277.
19. Wang, W, Moser, C, and Wheatley, M. “Langmuir trough study of surfactant mixtures used in the production of a new ultrasound contrast agent consisting of stabilized microbubbles.” J. Phys. Chem. 1996. 100:13815–13821.
20. Borden, M, et Al. “Surface phase behaviour and microstructure of lipid/PEG emulsifier monolayer-coated microbubbles.” Colloids Surf. B: Biointerfaces. 2004. 35:209–223.
21. Kreussler, S, and Bolz, D. “Experiments on solar adsorption refrigeration using zeolite and water.”
The chief role of ice in the Arctic, from a global warming standpoint, is to increase ocean albedo: its reflective surface contrasts with the much darker surface of the water itself. When sunlight strikes the transparent/white surface of ice, a vast majority of it is reflected back into the atmosphere. When sunlight strikes the dark blue, sometimes black, surface of Arctic water, a vast majority of the light and its associated heat content is absorbed by the ocean rather than reflected. This absorption creates a positive feedback: the more heat absorbed, the more ice melts, exposing more dark water that absorbs even more heat. Normally the ocean and its system of currents operate as a heat sink that moderates surface and atmospheric temperatures; however, this added heat absorption reduces sink efficiency, allowing more heat to remain in the atmosphere and amplifying the detrimental effects of global warming. A secondary effect is that greater ice melt will accelerate global sea level rise, placing coastal and even slightly inland cities at risk, and will harm Arctic wildlife by eliminating “land” surfaces for hunting and habitation.
With these near-future negative environmental events born from a lack of Arctic ice, one would reason that it is important to find and execute a methodology that increases Arctic ice volume and longevity. The most obvious means would be to eliminate the human-derived excess heat, restoring the typical Arctic Ocean temperatures seen in the 1950s and 1960s and earlier. One means of accomplishing this goal is simply to reverse the actions that led to the heating. While reducing global carbon emissions is an important and critical step in addressing global warming, the realistic timetable for cooling the Arctic through carbon mitigation and then reliance on natural processes is still decades, if not more than a century, away. Based on the rate of melting, a more immediate solution will be required.
Recalling the albedo-heat feedback cycle from above, one method to break that cycle would be to increase the albedo of the ocean. Not surprisingly, it is nearly impossible to change the natural color of the ocean due to its size and natural mixing; thus changing ocean albedo will require human intervention at the surface of the Arctic Ocean. The easiest method is to mimic nature itself and increase surface ice by enhancing ice formation. Obviously enhancing ice formation will require large amounts of water; fortunately meeting this supply requirement is not a problem, for water can be taken from the ocean itself and re-deposited on existing ice.
One of the principal reasons this strategy works is that ice is a quality thermal insulator, which can increase the speed at which sprayed water freezes. In addition, nucleation may also play a role in this ice formation enhancement: ice-forming nuclei tend to trigger freezing of under-cooled water droplets at higher temperatures when in solid contact versus liquid immersion.5-7 While the reason for this enhancement is unknown, it is suspected that thermodynamically favorable interactions at the air-water interface8,9 lead to contact nucleation as a manifestation of an enhanced surface nucleation rate.5 Basically, the liquid environment reduces the uniformity of the air-water interface, reducing the efficiency of nucleation. Another influencing factor may be that nucleation near the surface is greater because of a greater freedom of motion, so the kinetic rate coefficient is larger at the surface than in the bulk (regardless of whether that bulk is solid or liquid); this matters because the nucleation rate depends exponentially on the activation energy of the phase change.5 Overall the important point to take home is that water sprayed onto the surface of ice has a higher probability of freezing into new ice than water remaining adjacent to or beneath the ice (all things being equal).
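To make that exponential sensitivity concrete, here is a minimal Python sketch of the classical nucleation-rate form J = A * exp(-dG / kT); the prefactor, temperature, and barrier values are purely illustrative assumptions, not numbers from the cited papers:

    import math

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def nucleation_rate(prefactor, barrier_joules, temp_kelvin):
        """Classical nucleation rate: J = A * exp(-dG / kT)."""
        return prefactor * math.exp(-barrier_joules / (K_B * temp_kelvin))

    # Hypothetical numbers chosen only to show the exponential sensitivity.
    A = 1e30                   # kinetic prefactor (arbitrary units)
    T = 268.0                  # an under-cooled droplet at -5 C
    bulk_barrier = 1.0e-19     # assumed bulk activation barrier, J
    surface_barrier = 0.9e-19  # assumed 10% lower barrier at the surface

    ratio = (nucleation_rate(A, surface_barrier, T)
             / nucleation_rate(A, bulk_barrier, T))
    print(f"Surface/bulk rate ratio: {ratio:.1f}x")  # roughly 15x faster

Even a modest 10% reduction in the barrier at the surface multiplies the rate roughly fifteen-fold, which is why surface-sprayed water freezing faster than bulk water is plausible.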
However, increasing ice formation will require managing the temperature increases that have led to the reduced ice in the first place. There are two chief methods for addressing this temperature question. The first is to take water from the ocean and run it through a heat exchanger, removing enough heat to produce an appropriate freezing probability. The chief drawbacks are the energy required to operate the heat exchanger and the question of what to do with the heat absorbed from the water. The heat exchanger needs to be powered by an energy source with a very small carbon footprint; otherwise the CO2 added to the atmosphere by this method will more than likely exceed the benefits of adding more Arctic ice. In addition, the heat removed from the water must be stored properly, because if released to the environment it will enter either the atmosphere or the ocean, and either result would largely negate the advantage of increasing Arctic ice.
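For a sense of the heat exchanger's load, a rough per-kilogram budget can be sketched with standard textbook values for seawater (the intake temperature here is an assumption); the point is that the latent heat of fusion, not the sensible cooling, dominates:

    CP_SEAWATER = 3.99e3   # specific heat of seawater, J/(kg*K)
    L_FUSION = 3.34e5      # latent heat of fusion of water, J/kg
    T_SURFACE = 0.0        # assumed intake temperature, C
    T_FREEZE = -1.8        # approximate freezing point of seawater, C

    sensible = CP_SEAWATER * (T_SURFACE - T_FREEZE)  # cool to freezing point
    latent = L_FUSION                                # then freeze
    total = sensible + latent
    print(f"Heat to remove per kg: {total/1e3:.0f} kJ "
          f"({sensible/total:.0%} sensible, {latent/total:.0%} latent)")

Roughly 340 kJ must be extracted per kilogram frozen, about 98% of it latent heat, so the exchanger's job is dominated by the phase change rather than the temperature drop.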
The second method involves drawing ocean water not from the surface but from deeper water near the bottom of the thermocline, where the average temperature is much lower. The weakness of the first method is its reliance on the heat exchanger and its energy demands. Unfortunately, while the second method eliminates the heat exchanger, it cannot eliminate additional energy usage, because a pump is required instead. The open question is which method will require more energy. Overall, unless the first method is significantly more energy efficient, the second should be favored because there is no excess heat to manage. While the power requirements for the pump and its eventual energy consumption are easy to calculate, experimentation will be needed to identify the appropriate pumping rate, spray volume, and spray angle.
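As a rough sense of the pump's demand, the hydraulic power is P = rho * g * H * Q / efficiency. The sketch below charges the pump for the full static lift, which is a conservative upper bound; a submerged intake pipe mostly works against friction and a small density difference, so real figures could be far lower. The depth, flow rate, and efficiency are assumptions:

    RHO = 1025.0      # seawater density, kg/m^3
    G = 9.81          # gravitational acceleration, m/s^2
    DEPTH = 200.0     # assumed intake depth near the thermocline bottom, m
    FLOW = 1.0        # assumed pumping rate, m^3/s
    EFFICIENCY = 0.7  # assumed pump efficiency

    # Upper-bound hydraulic power for lifting FLOW from DEPTH.
    power = RHO * G * DEPTH * FLOW / EFFICIENCY
    print(f"Pump power: {power/1e3:.0f} kW per m^3/s lifted from {DEPTH:.0f} m")

About 2.9 MW per cubic meter per second under these assumptions, which gives a concrete baseline to compare against the heat exchanger option.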
An important secondary question is what should be done about the salt in the supply water. One possibility is to remove it, because salt depresses the freezing point of water, making it more difficult to form ice, and could even result in ice sheet perforation. An alternative strategy is to retain the salt, which would strengthen down-welling currents when the ice melts. The best way to choose between them is simply to test this ice formation methodology without any salt removal and closely observe how the rate of secondary ice formation changes with the current temperature and time of year. If the formation rate is not sufficient, then the salt will need to be removed.
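For reference, the scale of the depression can be estimated with the common empirical rule of roughly 0.054 C per g/kg of salinity (an approximation, not a value from the cited references):

    SALINITY = 35.0             # typical open-ocean salinity, g/kg
    DEPRESSION_PER_PSU = 0.054  # approximate freezing-point depression, C per (g/kg)

    freeze_point = -DEPRESSION_PER_PSU * SALINITY
    print(f"Seawater freezing point: {freeze_point:.1f} C")  # about -1.9 C

A penalty of roughly two degrees is modest, which is why field testing without desalination first is a reasonable sequencing of the experiments.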
If water cannot be used due to energy requirements, the other major option for changing the ocean surface albedo in an environmentally neutral way is to cover the water surface with bubbles. One of the chief advantages of this second option is that bubbles require little energy to create, so the operational costs of such a system are low.10,11 Bubbles increase ocean surface albedo by increasing the reflected solar flux, providing voids that backscatter light.10 In addition, the reflective behavior of bubbles can be modeled like that of aerosol water drops, because light backscattering depends on cross-section rather than mass or volume, and the spherical voids in the water column have the same refractive index characteristics.10 Note that ocean surface albedo varies with the angle of solar incidence: common values are less than 0.05 near noon, below 0.1 at a 65-degree solar zenith angle, and a maximum of 0.2 to 0.5 at a solar zenith angle of 84 degrees.12-15 Based on this comparison information, the principal formula governing brightening is:
ΔF = ΔA × I0 × S0 × (1 − Cf) × Tu × Td
where ΔF = change in brightening; ΔA = change in albedo of the water surface; I0 = solar irradiance; S0 = cosine of the solar zenith angle; Cf = fraction of cloud cover; Tu = upwelling transmittance; Td = down-welling transmittance.10
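A quick numerical check of this formula (all inputs below are hypothetical Arctic-summer values chosen only to exercise it, not measurements):

    import math

    def brightening(delta_albedo, irradiance, zenith_deg, cloud_fraction,
                    t_up, t_down):
        """Change in reflected flux, Delta-F, from the formula above (ref. 10)."""
        s0 = math.cos(math.radians(zenith_deg))
        return delta_albedo * irradiance * s0 * (1 - cloud_fraction) * t_up * t_down

    dF = brightening(delta_albedo=0.1,      # bubbles raise albedo from ~0.07 to ~0.17
                     irradiance=1361.0,     # solar constant, W/m^2
                     zenith_deg=65.0,       # high-latitude sun
                     cloud_fraction=0.6,    # assumed cloud cover
                     t_up=0.8, t_down=0.8)  # assumed atmospheric transmittances
    print(f"Local brightening: {dF:.0f} W/m^2")

Under these assumptions a 0.1 albedo increase yields on the order of 15 W/m^2 of locally reflected flux, illustrating how cloud cover and sun angle eat into the raw albedo gain.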
Experiments have already demonstrated the creation of hydrosols from the expansion of air-saturated water moving through vortex nozzles, which apply the appropriate level of shearing force to create a swirling jet of water.11 Also, by using an artificial two-phase flow, smaller microbubbles can be created, which can even result in interfacial films through ambient fluid pressure reduction.12 Microbubbles can possibly form these films because they typically last longer than visible whitecap bubbles, which rise and burst in seconds. Note that whitecaps are froth created by breaking waves and can increase ocean albedo up to 0.22 from the common 0.05-0.1 values.16
While whitecaps from waves and wakes do provide increased surface albedo, the effect is ephemeral. Microbubble lifespan can be influenced by local surfactant concentration, and fortunately the Arctic has a limited natural surfactant concentration, granting more control in the bubble creation process (fewer outside factors that could unduly influence bubble lifespan). For example, if these bubbles are created through technological means, additional elements can be added to the reactant water, like a silane surfactant, which could add hours to the natural lifespan.17 Bubble lifespan is probably the most important characteristic of this form of ocean albedo increase, from both an economic and an efficiency standpoint. However, while most surfactants and other agents like glycerin are typically not environmentally detrimental, the massive amounts required for increasing bubble longevity may make their use economically and environmentally unsustainable.
Another method for creating microbubbles comes from biomedical engineering, where microfluidic procedures and sonication are used to enhance surfactant monolayers that stabilize microbubble formation.18 However, there are two common concerns about this method. First, it is used primarily in the laboratory, largely for diagnostic and therapeutic applications, not in the field; therefore there may be questions about the transition, especially the dramatic increase in production scale required for Arctic use. Second, while sonication increases stabilization time, it limits control of the microbubble size distribution, which could limit the total reflectiveness of the bubbles.19,20
An expanded and newer laboratory technique, electrohydrodynamic atomization, generates droplets of liquid and applies coaxial microbubbling to facilitate control over microbubble size. Unfortunately, one concern with this technique is that, as mentioned above, the ideal bubble size is in microns, but the technique is currently only able to create single-digit-millimeter-sized bubbles.18 However, the increased size may be offset by the increased stability of the bubble (less overall reflection, but a longer residence time). Comparison testing will be required to make the appropriate judgment.
The final method for increasing ice formation involves devising a piece of technology that can absorb excess heat from the Arctic Ocean. At first thought such an idea seems unlikely due to the size of the Arctic Ocean and its environmental inputs. However, it may not be as far-fetched as it seems. The key to making such a strategy viable is efficiency and scale within the utilized technology.
Scale is achieved through a design small enough to be produced at reasonable cost and reasonable speed. Efficiency is typically achieved by producing a device that is self-cycling and thereby operates autonomously. If human involvement is required beyond “pushing the start button,” efficiency is significantly compromised. Take that efficiency loss in a single unit and multiply it by the number of units required for scale, and the result can be devastating in terms of both cost and viability.
If the objective is to withdraw heat from the ocean, the most important element of the device is the agent used to accomplish this task. Water has one of the highest heat capacities of any common substance, which is why it is used for cooling purposes in power plants; ironically, that same property makes removing heat from it difficult. Fortunately there is promising research supporting zeolite as the heat-absorbing material. Zeolite is a mineral made up of SiO2 and AlO2 groups plus alkali ions, and its crystalline structure allows it to adsorb gaseous molecules, including water vapor. When zeolite adsorbs a gas it retains heat due to the adsorption enthalpy.21 In addition, because zeolite is commonly produced synthetically for use in molecular sieves and washing detergents, it is cheap (50-75 cents/kg) and environmentally neutral.21
A good example of how zeolite is used in heat absorption is the adsorption refrigerator. Adsorption refrigerators consist of two connected but independent vessels, the evaporator and the adsorber. The evaporator vessel acts as a quasi-vacuum containing only a liquid, usually water, and its vapor. When the valve connecting the two vessels is opened, the water vapor moves into the adsorption vessel and is taken up by the zeolite, reducing the vapor pressure. The drop in vapor pressure drives further evaporation, which cools the remaining liquid water in the evaporator. Eventually the zeolite becomes saturated, halting the heat transfer between the zeolite and the water. In the refrigerator model the zeolite is later superheated, driving off the adsorbed water as vapor, which recondenses and returns to the evaporator vessel.
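A back-of-envelope sizing helps here. The uptake and adsorption-enthalpy figures below are typical textbook-range assumptions, not values from reference 21, but they show roughly how much heat one charge of zeolite can hold and what that capacity costs at the quoted price:

    UPTAKE = 0.2          # assumed water capacity, kg water per kg zeolite
    H_ADSORPTION = 3.6e6  # assumed adsorption enthalpy, J per kg of water
    COST_PER_KG = 0.60    # mid-range of the quoted 50-75 cents/kg

    heat_per_kg = UPTAKE * H_ADSORPTION  # J absorbed per kg of zeolite per charge
    print(f"~{heat_per_kg/1e6:.1f} MJ per kg of zeolite, "
          f"~${COST_PER_KG / (heat_per_kg/1e6):.2f} per MJ of one-pass capacity")

Under these assumptions each kilogram of zeolite soaks up roughly 0.7 MJ before saturating, which is the figure that would drive how often a device must cycle back to offload its heat.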
However, the secondary functionality of the above refrigerator design, zeolite recovery through heating, is not directly applicable in an oceanic environment. The water and resultant heat must be released from the zeolite so it can be reused, but this release produces excess heat, which raises the same problem as the heat exchanger in the first strategy: there is no good place on the open ocean to store the heat without releasing it to the environment. One strategy for a small movable device is that when the zeolite becomes “full,” the device returns, via a small battery-powered motor, to a “mother” ship of sorts where the zeolite heat-release process can be conducted. After the zeolite is restored to its rest state, the device returns to the Arctic to withdraw more heat. After sufficient time the “mother” ship will itself be “full” of heat and would return to a land base, most likely Iceland due to its geothermal infrastructure, to properly off-load the heat stores. Granted, the trips between the Arctic and these heat-releasing stopover points will place some limits on overall efficiency, but they are necessary to manage the heat problem.
In the end, the positive feedback between warming and albedo reduction is a legitimate threat to carbon mitigation and remediation strategies as a whole. Therefore, society needs to appreciate the time lags involved in restoring colder temperatures to the Arctic Ocean in an effort to preserve Arctic ice, especially during the summer. A technology-based solution will be required. Three possible strategies have been presented above, in general detail, to attempt to break this warming-albedo relationship. One advantage of all of these strategies is that they can be explored experimentally with little overall detriment due to their ephemeral nature: if the results are not as anticipated, the experiments can be stopped with little environmental or economic damage. Overall, something needs to be done about the increased rate of warming in the Arctic and the dramatically increased rate of ice loss if global carbon mitigation strategies are to be fully effective at reducing the detrimental effects of global warming.
Citations –
1. Perovich, D., and Richter-Menge, A. “Loss of sea ice in the Arctic.” Annu. Rev. Mar. Sci. 2009. 1:417–441.
2. Rothrock, D., Percival, D., and Wensnahan, M. “The decline in Arctic sea-ice thickness: Separating the spatial, annual, and interannual variability in a quarter century of submarine data.” J. Geophys. Res. 2008. 113:C05003.
3. Kwok, R. “Observational assessment of Arctic Ocean sea ice motion, export, and thickness in CMIP3 climate simulations.” J. Geophys. Res. 2011. 116:C00D05.
4. Bjork, G., Stranne, C., and Borenas, K. “The sensitivity of the Arctic Ocean sea ice thickness and its dependence on the surface albedo parameterization.” Journal of Climate. 2013. 26:1355–1370.
5. Shaw, R., Durant, A., and Mi, Y. “Heterogeneous surface crystallization observed in undercooled water.” Journal of Physical Chemistry B Letters. 2005. 109:9865–9868.
6. Vali, G. In Nucleation and Atmospheric Aerosols; Kulmala, M., Wagner, P., Eds.; Pergamon: New York, 1996.
7. Pruppacher, H., and Klett, J. Microphysics of Clouds and Precipitation, 2nd ed.; Kluwer Academic Pub.: Norwell, MA, 1997. Chapters 7 and 9.
8. Djikaev, Y., et al. “Thermodynamic conditions for the surface-stimulated crystallization of atmospheric droplets.” J. Phys. Chem. A. 2002. 106:10247. doi:10.1021/jp021044s.
9. Tabazadeh, A., Djikaev, Y., and Reiss, H. “Surface crystallization of supercooled water in clouds.” PNAS. 2002. 99(25):15873–15878.
10. Seitz, F. “On the theory of the bubble chamber.” Physics of Fluids. 1958. 1:2–10.
11. Seitz, F. “Bright Water: hydrosols, water conservation and climate change.” 2010.
12. Evans, J.R.G., et al. “Can oceanic foams limit global warming?” Clim. Res. 2010. 42:155–160.
13. Davies, J. “Albedo measurements over sub-arctic surfaces.” McGill Sub-Arctic Res. Pap. 1962. 13:61–68.
14. Jin, Z., et al. “A parameterization of ocean surface albedo.” Geophys. Res. Letters. 2004. 31:L22301.
15. Payne, R. “Albedo of the sea surface.” J. Atmos. Sci. 1972. 29:959–970.
16. Moore, K., Voss, K., and Gordon, H. “Spectral reflectance of whitecaps: Their contribution to water-leaving radiance.” J. Geophys. Res. 2000. 105:6493–6499.
17. Johnson, B., and Cooke, R. “Generation of Stabilized Microbubbles in Seawater.” Science. 1981. 213:209–211.
18. Farook, U., Stride, E., and Edirisinghe, J. “Preparation of suspensions of phospholipid-coated microbubbles by coaxial electrohydrodynamic atomization.” J. R. Soc. Interface. 2009. 6:271–277.
19. Wang, W., Moser, C., and Wheatley, M. “Langmuir trough study of surfactant mixtures used in the production of a new ultrasound contrast agent consisting of stabilized microbubbles.” J. Phys. Chem. 1996. 100:13815–13821.
20. Borden, M., et al. “Surface phase behaviour and microstructure of lipid/PEG emulsifier monolayer-coated microbubbles.” Colloids Surf. B: Biointerfaces. 2004. 35:209–223.
21. Kreussler, S., and Bolz, D. “Experiments on solar adsorption refrigeration using zeolite and water.”
Labels: Arctic, Environment, Geoengineering, global warming
Tuesday, April 8, 2014
Unions and College Athletes – What Happens Next
On March 26, 2014 the Chicago office of the National Labor Relations Board (NLRB) ruled that the football players of Northwestern University are employees of the university, not simply student-athletes; thus they have the ability to form a union and receive the general protections afforded to all employees under federal law. There are numerous hurdles left for college athletes to clear before officially having the long-term ability to join a union. This post will not deal with whether the ruling is legally valid and will survive NCAA appeal, nor with the methodology behind the formation and operation of the future union(s); instead it asks what steps a union should take to enrich the lives of college athletes.
The chief reason college athletes desire to unionize is that they currently have no effective power to participate in the decisions and operations of NCAA governance on any level. For workers, one of the major advantages of a union is that it coordinates focus and awareness across and between participating parties. This focus is critical to creating scale power, because workers in any industry have little power if they can act only on their own or in small groups. Unfortunately for college athletes, the scale power critical for maximizing bargaining ability from this ruling is limited to private universities in regions where NLRB offices rule similarly to the Chicago office; public universities are governed by existing state law, so there will be other obstacles to unionization at those universities, especially because 24 states have active right-to-work legislation restricting unionization. However, ignoring this concern for a moment, what would college athletes require of universities with the new power to form a union?
The most public complaint/driving force cited by Northwestern athletes is a concern regarding medical coverage. In 2005 the NCAA mandated that athletes must be covered by health insurance in some form, with limited restrictions on the provider (basically the insurance could be from the university, individually purchased, from the athlete’s parents, etc.). In addition the NCAA operates a “catastrophic injury” insurance policy through Mutual of Omaha for when an injured athlete has medical costs exceeding $90,000 borne from a single injury event (although the threshold can be $75,000 for universities that participate in the NCAA Group Basic Accident Medical Program).
While many universities provide medical insurance to athletes as part of the scholarship, the chief problems with this structure are a lack of legal requirement (most do it out of a sense of social responsibility), a lack of transparency, and a lack of uniformity, as different universities carry different types of insurance coverage. Most athletes receive proper medical attention when injured, but these three problems raise the probability of athletes entering a state of “medical limbo” with regards to their treatment. Not surprisingly these are the “horror” stories that major media periodically latch onto; however, these types of stories are not unique to athletes but afflict non-athletes as well, thus they are not a problem inherent to the college system.
Clearly the current system of medical coverage has its holes, but they are holes that are easily repaired, especially in the face of new legal protections. Note that for football players it is difficult, despite the “certainty” of concussion proponents, to directly link participating in football to brain damage that occurs decades later. It is reasonable to suggest an increased probability of future brain damage from playing football, but to attribute any particular element of damage exclusively to playing football is incredibly difficult. Therefore, while it makes logical sense to extend medical coverage for college athletes beyond their playing days, this extension should have a valid time limit. There are two strategies for negotiation.
The first strategy is a simple flat time period that applies to all athletes and extends beyond the individual’s playing career. Five years appears to be a good period; it is also the figure used by the NFL. Under such a system, an athlete who stops playing sports for University A on March 28, 2015 would be covered by the university’s healthcare program until March 28, 2020, regardless of whether they are still a student. This strategy appears fair because it gives all athletes sufficient time to recover from major physical and short-term mental injuries acquired while playing for a particular university.
However, some may view a flat rate as inappropriate because it treats all athletes as equal regardless of how long they actually participated in the given sport. A second strategy, therefore, ties the extension directly to the length of time a sport was played. For example, one could create a system where an athlete is covered by the university’s healthcare program for an additional two years, after the playing career concludes, for each year the individual played. So an individual who played ice hockey for two years and stopped playing on April 3, 2015 would be covered until April 3, 2019. This system operates on the mindset that the longer a person plays, the higher the probability of acquiring a longer-term injury, thus the longer the extended health coverage should be.
While the exact details of such a system would have to be developed through negotiation between the union and each particular university, or possibly the NCAA directly, it stands to reason that this healthcare coverage would be secondary coverage, filling any gaps in the primary coverage the individual receives from an employer. If the individual does not receive health insurance from their job, then this university-affiliated coverage would apply. However, the time period on this coverage would run concurrently with any employer insurance. Basically, if an individual stopped playing on June 30, 2015, got a job that provided health insurance on July 15, 2015, and was laid off on April 17, 2018, then under a five-year fixed-time program their coverage with the university would still end on June 30, 2020, despite not using that coverage for almost three years due to the coverage provided by the job.
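For concreteness, here is a toy sketch of the two proposed rules, using this post’s own example dates (the parameters are illustrative, not anything negotiated or official):

    from datetime import date

    def flat_coverage_end(last_played: date, years: int = 5) -> date:
        """Flat extension: a fixed window after the playing career ends."""
        return last_played.replace(year=last_played.year + years)

    def proportional_coverage_end(last_played: date, seasons_played: int,
                                  years_per_season: int = 2) -> date:
        """Proportional extension: extra years per season actually played."""
        return last_played.replace(
            year=last_played.year + seasons_played * years_per_season)

    print(flat_coverage_end(date(2015, 3, 28)))            # 2020-03-28
    print(proportional_coverage_end(date(2015, 4, 3), 2))  # 2019-04-03

Either rule is trivial to administer; the concurrency provision above simply means the end date is computed once from the last playing date and never paused.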
Obviously the university should cover an athlete in some way until the NCAA catastrophic policy takes over, and this university coverage policy must be transparent to the point where potential recruits can actually see what is and is not covered. Additionally, there should be a minimum level of coverage mandated by the NCAA to ensure appropriate medical treatment. One could argue that this mandate is addressed by the Affordable Care Act (ACA), and while true, the ACA may not be permanent given the zeal of certain members of Congress to repeal it, hence the need for a separate NCAA mandate. Finally, another element that could be negotiated is healthcare substitution: suppose an athlete wants more coverage than the university is willing to offer; the university could provide an additional healthcare stipend, equal in monetary value to the standard university healthcare coverage, to help the athlete pay for the more desired policy.
Staying in the medical area, one question is whether a union could actually change the number of games in a particular sport and/or when those games are played. Over the last few decades the number of games scheduled in various college sports has increased significantly due to increased travel options and, most notably, the expanded incentive to play these games (television money). Clearly the probability of injury increases and the probability of academic success decreases as the number of games an individual plays increases; therefore, could a union attempt to reduce the number of games its particular sport plays? While the idea is an interesting one, success would be difficult simply because of the money involved with each game in these high-value college sports.
The principal mission of colleges is to provide an advanced level of education that prepares individuals to become productive members of society. Unfortunately that mission and being an athlete have come into some level of conflict in recent years with the added workload attributed to participating in college athletics. Due to extensive practice, travel, and game schedules, the available academic options of athletes at a number of universities have been compromised. In some situations athletes have been confronted with the choice of majoring in subject x or playing sport y because of the inability to schedule and/or attend the required classes.
One of the chief elements driving this conflict is that, despite a 1991 NCAA decree limiting required countable athletically related activities (CARA) to four hours per day and twenty hours per week, almost all institutions have worked around those restrictions by allowing coaches to organize “voluntary” practices. Of course the secret that is not a secret is that these “voluntary” practices are not really voluntary, at least not for non-star players, who typically find themselves with reduced playing time if they do not attend. It is through these “voluntary” practices and workouts, along with travel time, that the NLRB could cite an average workload for football players at Northwestern of 40-50 hours per week despite the 1991 limitation. This designation by the NLRB is somewhat controversial because some argue that participation in additional practices behooves the athlete by enhancing playing ability, similar to non-athletes like musicians and actors, and thus these practices should not count against the 20-hour CARA limit. However, the controversy stems from the team-organized nature of these activities, versus athletes simply putting in the work lifting weights, conditioning, etc. on their own.
In addition to this extended workload in the average week, the length of time over a calendar year that athletes must invest is significant. For football players, the regular season begins around Labor Day (typically on the preceding Thursday) and, depending on the conference, ends on the second Saturday or Sunday in December, with bowl games starting anywhere from two to six weeks later. During the off-season, football players begin preparing for the coming season through an extensive conditioning program involving multiple practices per week, typically starting early in the morning. In general, for most sports the conflict with educational opportunities breaks down as follows: during the regular season afternoon classes are off-limits because of practice and game priorities, and during the off-season a selection of morning classes are off-limits because of practice and conditioning priorities. How is an individual supposed to pursue both academic and athletic dreams under these conflicts?
A union could address this conflict by using expanded legal protections for those who wish to treat “voluntary” practices as exactly that: voluntary. Any change in the playing status of an individual who abides only by the required practice hours would force the authority structure (typically the head coach) to explain the demotion, which would become significantly more difficult with a union behind the scenes protecting players. In addition to practice hours, unions could also address the “big brother” type system that most universities create to “help” athletes manage their time, covering the types of classes taken, where one sits, how much study hall is attended, personal travel arrangements, where the athlete lives, acquisition of money from family members, etc. Additionally, a union could organize “vacation” time that athletes could use during the off-season for recuperation. Finally, more flexibility could be added to the off-season practice system, allowing athletes to attend either a morning or an afternoon conditioning session and thereby granting greater class selection for their education.
Another popular idea for unions would be to establish new policy governing athletic scholarships. Skipping the period when athletic scholarships were controversial due to their non-academic and possibly non-amateur nature, the first “generation” of athletic scholarships covered four years and had sufficient certainty in that it was rather difficult to cancel a scholarship even if an athlete struggled with injury. Even when these four-year scholarships fell out of favor, early in the one-year renewal system a university scholarship committee, not athletic directors or coaches, made renewal decisions. Unfortunately, due to Proposition 39 in 1973, both four-year scholarships and scholarship committees became rare, replaced by single-year scholarships renewed year-by-year by the head coach. While Proposition 39 was rescinded in 2011, once again allowing universities to offer multi-year scholarships, most universities have retained the one-year renewal model.
It may be too much, and not appropriate, to attempt to return to the four-year guaranteed athletic scholarship, but a union could ask for increased scholarship allowances for injured players as well as a return to scholarship committees, removing a significant element of the power head coaches use to “encourage” athletes to devote more time to athletics. In addition, extending scholarships after the conclusion of a playing career, based on total time of performance, could serve as a valuable tool for the acquisition of a degree. For example, for every year an individual plays for a university team, that individual would receive an additional half-year scholarship; thus playing for four years would yield an additional two years on a specialized scholarship not related directly to the athletic program. Clearly, any such scholarship would have to be administered separately from other scholarships, because it would not be appropriate to shift a scholarship from a financially needy student to an athlete.
The ability to transfer between universities without eligibility penalty would also be a point of interest for negotiation. Currently the transfer rules are rather restrictive towards athletes, and the biggest problem is a lack of uniformity: there are too many rules depending on the type of school, conference, and sport. The chief component of almost all transfer rules, especially for transfers between major (4-year) programs, is that the athlete has to sit out at least one year and take a full class load for both semesters (not summer) to establish academic “residence” before he/she is able to play.
In addition, these rules have been viewed as rather hypocritical in that coaches routinely breach their contracts to leave for another “better” university job while athletes do not have that same freedom. Realistically, it would make more practical sense for an athlete to be allowed to transfer, retaining all remaining eligibility, at any point during the off-season, with the ability to play immediately pursuant to their existing academic eligibility. The university the athlete is departing should have no ability to prevent the transfer through legal means. However, consistent with the current prohibition, it would not be appropriate to allow athletes to transfer during their playing season.
Of course the elephant in the room regarding the potential new employee status of college athletes is whether they should be paid in financial capital that is not simply earmarked for educational expenses. This blog has addressed this issue before in the following post [http://www.bastionofreason.blogspot.com/2011/03/paying-college-athletes.html], and a vast majority of that argument still holds up regardless of whether college athletes are regarded as employees or students. However, there is an interesting angle that exists within the gap between amateur and professional status.
One could argue that it is still possible for college athletes to be regarded legally as employees and retain their amateur status, although the importance of this distinction is somewhat foggy. Amateur status could be maintained by requesting a form of stipend to cover college-based expenses outside the scope of the scholarship. Most scholarships cover tuition, room and board, and direct educational materials like books and software, but do not cover common “everyday” expenses like personal travel, non-team-associated food, and other miscellaneous costs. The stipend should fill this gap, with the exact amount negotiated for general uniformity across all universities plus an effective cost-of-living adjustment based on where the university is located.
A secondary advantage for the NCAA as an organization in providing this stipend is that it could offer protection against anti-trust litigation. Some argue that capping scholarships at the cost of attendance constitutes an unlawful restraint on commercial activity. While this argument is suspect, because the NCAA is neither a monopoly nor a prerequisite for future employment in the NBA, there is a possibility that a court could rule against the NCAA on this issue. However, agreeing to stipend restrictions through a collective bargaining process should offer sufficient non-statutory labor exemption protection from anti-trust litigation, mitigating one avenue for players to sue in an attempt to acquire a form of revenue sharing.
While revenue sharing is unlikely and a stipend is uncertain, college athlete unions could negotiate a payment structure for athletes for when the university or third parties make additional funds from direct usage of their likeness or name. The one significant drawback here is that this very issue is currently moving through the courts via the Ed O’Bannon trial and could conclude before the union issue is resolved. However, if the union issue is resolved before the Ed O’Bannon case, then both sides may be in favor of negotiating a settlement structure on this issue.
Unfortunately, lost in the controversy of the NLRB ruling is that despite the ability to form a union, college athletes at private universities may not have sufficient power to make any real changes. The chief problem is scarcity. The difference in skill level between the top 50% and the top 10% of college athletes is small, and with customer loyalty at the college level firmly behind the university rather than the athletes who play for it, the power of a strike to enforce demands is limited. Universities have only a limited number of scholarships, and there would be more than enough individuals of similar talent, willing to play by the rules of the current system, to fill in for any striking athletes. In fact a university would more than likely just have to sweep through the intramural ranks to replace a vast majority of the initial scholarship talent.
These “replacement” athletes would not produce any significant loss of revenue for the university, because most of the money from college football and basketball comes from television contracts with the affiliated conferences; thus as long as the university fields a team, no matter how bad, it will receive a vast majority of its planned revenue. The real power of a strike would be the negative precedent created by striking athletes and its influence on future recruitment, potentially hurting the university’s bottom line through the continuation of a poor-quality product that could eventually lead to dismissal from the conference and loss of the television contracts. However, would the first group of striking athletes be willing to act as sacrificial lambs to accomplish this goal? If they transfer to another university the power is lost, and they would more than likely not receive a renewal of their athletic scholarships in the aftermath.
Therefore, the real power of the NLRB ruling may actually be the basic legal protections that come with recognizing student-athletes as employees. Overall, while most of the above changes should be made simply because they allow athletes to genuinely be student-athletes and because it is morally right, a new college athlete union structure may have to pick its battles if it wants to produce change beyond the basic protections of the law.