Wednesday, December 15, 2010

Tackling Obesity One Step at a Time

Obesity has steadily become a significant problem in modern society, especially in the United States. Notably, the growth of obesity is clear even without resorting to statistics, which is just as well, because the BMI statistic that typically drives obesity identification is flawed in its inability to differentiate between fat weight and muscle weight. While a number of reasons have been postulated for this rampant increase in overweight individuals, the most disconcerting issue is that obesity rates continue to climb despite the existence of viable solutions. It is rational to conclude that two chief elements lie behind this continued expansion of obesity.

The first element involves the psychological reality that most people shy away from the physical exertion required to stay fit. Sadly for these individuals, biochemistry has not advanced to the point where weight can be selectively and effectively controlled simply by taking a drug. Even common surgical options like lap bands and gastric bypass surgery have not stemmed the problem and have created their own set of complications. The only proven methodology for avoiding obesity is proper diet and exercise; however, these elements, especially the latter, demand effort and time. Note that while there is mounting evidence suggesting gut bacteria are involved in calorie absorption and weight gain, this research is still in its infancy and has not yielded any therapeutic strategy, so it is useless to scapegoat gut bacteria for obesity.

The second element involves proper direction. Unfortunately, capitalism has erected an unnecessary obstacle to preventing obesity in that the social environment has been flooded with strategies for dieting and weight control that run the gamut from eating almost nothing but carbohydrates to eating almost no carbohydrates. Most of these strategies have only a small level of biochemical backing and are instead pushed in an effort to make money for the individuals sponsoring or backing the particular diet methodology. The lack of clarity in most of these plans leads to higher-than-necessary failure rates, which foster greater frustration in those who fail, reducing the probability that they will invest their time, money and psyche in future attempts at weight loss.

Another problem related to this second element is availability. More often than is realized, even if an individual would prefer to eat healthy food, that food is either physically or economically unavailable. One of the most noted issues is that of ‘food deserts’: regions (rural or urban) that lack a variety of food selection, largely because of a shortage of supermarkets or farmers’ markets. While there have been some noteworthy start-up efforts to address this issue, the problem of food deserts seems too large for small entrepreneur-driven ventures to solve without government assistance. If individuals want to get serious about dealing with food deserts, and even domestic hunger, then state governments need to conduct audits to identify where these deserts are and how to divert food items from more plentiful regions.

Returning to the first element, one would inherently think that pride should be a driving factor in warding off obesity, but there is evidence to suggest that a number of individuals who are obese, or even just overweight, do not view their weight as a problem.1 In those periods when they are concerned, the concern tends to manifest as disgust rather than as an affront to pride; this disgust can lead to rash short-term crash dieting instead of long-term behavioral change, which commonly results in long-term failure. However, if weight is not viewed as a problem, any form of self-motivation to address it becomes less likely, including motivation surrounding overall health and physical well-being; such a reality demands a secondary strategy. For most, some form of familial intervention is also unlikely, based on this same psychological premise. With these two options no longer available, a new incentive must be provided to drive the motivation to live a healthier life.

The best form of incentive is typically some form of monetary reward. Unfortunately, indirect or future rewards, largely those that can only be calculated from healthy behavior, do not motivate effectively. The uncertainty of the future forces probabilistic arguments instead of direct yes/no arguments. For example, one can make the rational argument that eating a certain assortment of food may reduce the probability of acquiring cancer by x%, but most people want a more definitive response: ‘if food x is eaten then I will or will not get cancer’.

Another problem is the incompatibility of probability figures with a deterministic reality. Getting cancer is ultimately a yes/no outcome, so one seems to save the same amount regardless of what one’s personal cancer percentage turns out to be. Basically, one does not receive more money for a lower cancer percentage, just a greater probability of receiving any money at all (through savings from not having to treat the cancer); this concept is rather confusing and makes it difficult to see the benefit from a financial perspective. Finally, the fact that these savings accrue over the course of a lifetime and not in an immediate lump sum further reduces their usefulness as an incentive. After all, when does a lottery winner ever take the lifetime annuity option over the lump-sum option? With these incompatibilities with typical human psychology, it is not difficult to understand why people still have difficulty undertaking healthy actions even when the resources to facilitate them are available.

With the ineffectiveness of arguing ‘you should do this because it will reduce your probability of getting cancer, macular degeneration, osteoporosis, etc.’, a more direct incentive is required. Typically most argue that the most effective incentive is cash. Not only is the distribution of cash immediate, but it is also flexible. However, that flexibility is also a problem. Most people would like to assume that individuals would use capital in the manner that most effectively helps their existence, but if such a contention were true, a vast majority of the people who are in debt would not be in debt. The inability to predict what an individual will do with a monetary reward creates inherent complexity in such an incentive program, especially when individuals have a wide variety of resources from which to select. For example, it is easier to predict what a person in Somalia will do with 30 dollars than what a person in the United States will do. Therefore, distributing cash in any incentive program with the single target goal of improving societal physical health through weight loss seems inefficient.

With the elimination of cash as an option, the reward mechanism will most likely take the form of a ‘gift’ card. However, to ensure restricted flexibility, the ‘gift’ card would only be usable at certain retailers. In fact, the best possibility would be to establish a retailer designed specifically for interaction with these ‘gift’ cards. By establishing a specific retailer, the government could control supply and production. For example, if so desired, the merchandise could be limited to items manufactured in the U.S. by U.S. companies. The interaction medium for this retailer must include both an online and an offline component, because not everyone who would take advantage of the program has online access. The offline component could be something as simple as mail order through the U.S. Post Office.

Now that the general incentive agent has been established, the next element is how an individual would acquire this incentive. The overall goal of this program is to stem, and hopefully in time reverse, the growing rate of obesity in the U.S. The hope is that eventually the program would pay for itself by increasing overall societal health and thereby reducing the amount of money spent on healthcare. The spending reduction should appear in both Medicare and private insurance payouts, as well as in increased tax revenues from greater productivity through fewer sick days and other health-related circumstances that lead to missed or unproductive work days. There are two chief elements influencing overall health that can be affected with reasonable certainty and effort: food consumption and exercise. Given the reward element of this program, as well as lingering consistency questions involving the availability of food supplies, using exercise as the defining element makes more sense.

One may argue that exercise in a vacuum is not an appropriate strategy for driving good overall health. On its face this opposition is understandable, but there is value in exercise regardless of food consumption on two levels. First, the obvious benefit is that any amount of exercise can neutralize some of the ill effects of improper eating. Second, the less obvious benefit is biological memory. While still in their infancy, there are theories suggesting that individuals who exercise frequently maintain a higher level of some weight-controlling factor (perhaps fat-burning enzyme activity), even beyond simple muscle mass correlations. On a related side note, it is possible that this theory and the role of gut bacteria are associated: greater exercise increases the efficiency of energy use, which decreases the demand to absorb calories, which in turn selects for gut bacteria that absorb fewer calories. Thus the more exercise an individual performs, the better his/her body is able to control weight even while resting. Finally, the application of exercise is important because something needs to be done to address the overall weight problem.

If exercise is to be the evaluation medium, what method will be used for the evaluation? In the past such a program would commonly demand that a participating individual travel to a specific location, a special gym for example, where activity could be tracked by volunteers. However, computers have eliminated this change-of-venue demand: simple aspects of exercise such as distance traveled, rpm and heart rate can now be measured, tracked and saved wherever the device is being used. Not having to travel should significantly increase the probability of both participation in the program and continuation with it. The requirement of this vital-statistic information demands the use of some form of machine. The machine in question needs to be simple while also involving a methodology that allows for an ample increase in heart rate. In an effort to limit stress on the body, an elliptical machine seems superior to a treadmill.

The elliptical machine used in this program would be specially designed with a microchip that documents the use of the device and would be used to determine rewards. The rules governing the payout need to be transparent and clearly stated. Three salient factors demand clear definition: a fixed, unambiguous incentive price, an age limit and personal identification. The following is one possible example describing the use of the device:

Acquisition of the device would depend on receiving a physical from a participating physician. The reason for this element is that it would be unfair to set a standard baseline on the device without taking into consideration the current physical health of the participant. For example, holding someone who weighs 350 pounds to the same exercise demand as a 185-pound individual is counterproductive. The standard of measurement will use an rpm floor, and because of the potential volume of participants there needs to be a ceiling on how much exercise counts towards the program over a given time period. The ceiling is required to control the total cost of the program as well as to protect individuals from overzealous exercising in an attempt to acquire more monetary rewards, which would be detrimental to overall health. The machine should visually and audibly inform the user when this limit is reached and should reset at 12:00 am each day. One possibility for the limit would be 45 minutes at or above the rpm floor in a 24-hour period. One point of discussion would be whether the individual would have to maintain a speed at or above the rpm floor for an entire 60 seconds, or whether just the average rpm over that 60-second period would have to be at or above the floor, to be given credit for 1 minute of exercise under the incentive program.
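As a rough sketch, the minute-crediting rules described above could look like the following. Everything here is an illustrative assumption rather than a specification: the 45-minute cap, the one-reading-per-second sampling and the choice of the looser, average-rpm interpretation of the 60-second window.

```python
# Hypothetical sketch of the device's minute-crediting logic.
# Assumes one rpm reading per second and the average-rpm rule for
# each 60-second window; all thresholds are invented for illustration.

DAILY_CAP_MIN = 45  # creditable minutes per 24-hour period (assumed)

def credited_minutes(rpm_samples, rpm_floor, minutes_already_credited=0):
    """Credit one minute per 60-sample window whose average rpm meets
    the assigned floor, stopping at the daily ceiling.

    rpm_samples: one rpm reading per second for the current session.
    minutes_already_credited: minutes banked earlier the same day.
    """
    credited = 0
    for start in range(0, len(rpm_samples) - 59, 60):
        window = rpm_samples[start:start + 60]
        if sum(window) / 60 >= rpm_floor:
            credited += 1
        if minutes_already_credited + credited >= DAILY_CAP_MIN:
            break  # here the machine would beep/flash; counter resets at midnight
    return min(credited, DAILY_CAP_MIN - minutes_already_credited)
```

Under the stricter interpretation, the `if` test would instead require every individual sample in the window to meet the floor, which is a one-line change.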

The rpm floor is designed to ensure appropriate benefit from the exercise so that the monetary incentives awarded drive the accomplishment of the overall goal. Basically one should have to actually physically push his/her body for the time spent exercising to count towards the incentive, no ‘dogging it’; otherwise the entire point of the incentive program is meaningless, because it once again resorts to an individual’s pride as the driving factor, and as stated that strategy is clearly not working. Establishing the correct rpm floor is one of the principal reasons a physical is required before an individual can acquire the device. Pursuant to this rpm floor, the device will have a kill-switch after some amount of time (10-12 months), at which point the individual will have to have another physical in order to recalibrate the floor. Just as it would be inefficient to expect a 350-pound individual to meet the rpm requirements assigned to a 185-pound individual, the same goes for an individual who was once 350 pounds continuing to meet that rpm requirement even though he/she now weighs only 290 pounds.

The incentive price should be tied directly to the total minutes exercised at or above the specific assigned rpm floor. An initial rough estimate sets an incentive price of 2 cents per minute. At the 45-minute ceiling this rate would establish a reward of 90 cents a day and $328.50 a year. Some may argue on its face that such an incentive is too small to facilitate meaningful exercise. The counterargument is that the overall level of work required to acquire these funds is so insignificant that the seemingly small value can be viewed as appropriate. For example, because the device is in the participant’s home, there is an improved probability of multi-tasking: one could be exercising while watching television, reading a book, listening to music or even having a conversation. The primary reason for establishing a meager value is concern regarding the sheer volume potential of the program. Suppose 200 million people elect to participate in this program and, as a group, hit 50% of their total potential; such a scenario would result in 32.85 billion dollars in incentives per year.
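The payout figures above follow directly from the assumed rate and cap, and can be checked with a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the payout figures quoted above.
rate_per_min = 0.02   # dollars per credited minute (proposed rate)
daily_cap = 45        # creditable minutes per day (proposed ceiling)

daily_max = rate_per_min * daily_cap   # maximum payout per day
yearly_max = daily_max * 365           # maximum payout per year

participants = 200_000_000             # hypothetical enrollment
utilization = 0.50                     # group hits 50% of total potential
program_cost = participants * yearly_max * utilization

print(f"${daily_max:.2f}/day, ${yearly_max:.2f}/yr, "
      f"${program_cost / 1e9:.2f}B total per year")
# → $0.90/day, $328.50/yr, $32.85B total per year
```

The $32.85 billion figure is why the per-minute price has to stay meager; even small changes to the rate scale into billions at this enrollment level.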

Finally, each individual should have a specific ID code, especially if only one elliptical device is distributed per household. One could argue that there could be unscrupulous behavior regarding this ID code, where a healthier individual could exercise at an easier rpm floor. While true, a ‘safety measure’ could be established whereby any individual who did not noticeably improve after two physicals while in the program would be blacklisted from the program permanently.

Overall, while the ideas presented above still have some specific details to flesh out, it is obvious that something needs to be done about the growing weight problem in the U.S., and individual pride clearly is not enough of a driving factor.

--
1. Powell TM, et al. “Body Size Misperception: A Novel Determinant in the Obesity Epidemic.” Arch Intern Med. 2010;170(18):1695-1697.

Monday, December 6, 2010

A Brief Introduction to Low-Level Ozone

Somewhat hidden among the concern regarding greenhouse gases and global warming is the steady increase in tropospheric ozone concentration. The increase in this low-level ozone (recall that the traditional ozone layer is located much higher, in the stratosphere) comes indirectly from most of the pollutants that are driving global warming: sunlight reacts with hydrocarbons and nitrogen oxides, forming ozone and other products. Since increased concentrations of hydrocarbons, nitrogen oxides and other volatile organic compounds (VOCs) are being released into the atmosphere, ozone concentrations are increasing as well. The funny thing about this change in ozone concentration is the disparity in the reaction it receives. Most people, even those aware of global warming, do not realize and/or care that low-level ozone is increasing and poses a potential danger; however, some who are aware of it tend to completely overreact, believing that the increasing ozone is somehow worse than global warming as a whole.

One important reason ozone poses such a threat, beyond the obvious ‘it is toxic to various life forms’, is that the lifecycle of tropospheric ozone tends to peak during the growing season, due to the requirement of sunlight to drive the formation of new ozone from various pollutants. Although ozone concentrations peak during the growing season, each year’s concentrations are typically higher than the previous year’s. The best way to understand how ozone concentrations are increasing is to visualize an oscillatory curve with a positive slope, a monthly x-axis, a peak for a given cycle during the growing season (May-September) and a trough during the winter months (December-March), similar to the below figure. Note that the below figure is only designed to illustrate the pattern of ozone increase, not to demonstrate any specific concentration change.
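This oscillation-plus-trend pattern can be sketched with a toy model. Every number here is invented purely for illustration; only the shape, a seasonal cosine peaking in the growing season riding on a slowly rising baseline, matters:

```python
# Toy model (not real data) of the ozone pattern described above:
# a seasonal oscillation superimposed on a slowly rising baseline.
import math

def monthly_ozone_index(month_index, baseline=30.0,
                        trend_per_month=0.05, amplitude=10.0):
    """Illustrative ozone index, month_index months after an arbitrary
    January start. The cosine term peaks in July (index 6 within a year)
    and troughs in January; the linear term supplies the year-over-year
    rise. All parameter values are made up.
    """
    seasonal = amplitude * math.cos(2 * math.pi * (month_index % 12 - 6) / 12)
    return baseline + trend_per_month * month_index + seasonal

# Each summer peak sits above the previous summer's peak because of
# the positive trend, even though winter troughs fall between them.
julys = [monthly_ozone_index(year * 12 + 6) for year in range(3)]
```

Plotting `monthly_ozone_index` over several years of month indices reproduces the described curve: wave after wave, each crest slightly higher than the last.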

While it is true that increasing levels of tropospheric ozone damage various crops, the scope of this damage is important to consider. For example, a recent NASA report concluded that almost 2 billion dollars’ worth of damage to this year’s soybean crop could be attributed to tropospheric ozone.1 Following certain crops like soybean and tobacco is relevant because these crops, for currently undetermined reasons, have a greater sensitivity to ozone than crops like corn and sorghum.

On its face 2 billion dollars may seem like a lot of money, and it is, but when one considers that the total value of the U.S. soybean crop over the last few years has been over 27 billion dollars,2 roughly a 7% loss, the severity of the situation loses momentum. In addition, even though ozone pollution has increased over the last decade, soybean crops have also increased in both yield and monetary value. These realities demonstrate the importance of properly assessing the overall damage. Simply put, to worry about ozone pollution over global warming is inappropriate.

With that said, to assume that the damage potential of ozone pollution will remain static or increase only insignificantly over time is also inappropriate. Fortunately for all parties, the simplest way to derail any potentially significant problem with ozone pollution is the same strategy that prevents further environmental damage through global warming: significantly reduce greenhouse gas emissions. Unfortunately, given the current political climate in the United States, it is highly improbable that these reductions will proceed at a timely pace. Some have suggested that including the dangers of ozone and its wide toxic breadth in the discussion of the threat posed by excess carbon emissions would create greater urgency, but such a contention seems far-fetched.

The problem is that adding ozone to the list of problems catalyzed by carbon emissions is similar to telling an individual that he will be decapitated, set on fire, injected with cyanide and pierced through the heart instead of just decapitated, injected with cyanide and pierced through the heart; either way the person is going to die, so adding one additional method is meaningless. Thus, with limited ability to act by reducing the source of the problem, society must once again look towards technology to provide a means to stem the tide until more permanent action can be taken.

Under natural processes most ozone is eliminated in one of two ways. First, the ozone reacts with nitrogen oxides or hydrocarbons like aldehydes in the atmosphere; these reactions typically produce hydroxyl radicals and can lead to the formation of peroxyacyl nitrates. Second, the ozone drops out of the air and is deposited at ground level, typically absorbed by nearby flora or soil. At lower concentrations this absorption is rather harmless because the absorbing structure has sufficient recovery time; at higher concentrations, however, absorption occurs at a higher turnover, reducing the ability of the absorbing structure to recover. A general review regarding the general chemistry of tropospheric ozone can be found here.

Developing a device that would collect and store ozone from the atmosphere for later ground-based neutralization has a theoretically high level of application difficulty. Therefore, the best strategy may be to neutralize the ozone in the air itself. While such a statement may also seem quite difficult, there may be a simple workaround: tapping into a previously detrimental element. The Montreal Protocol was one of the most successful international treaties ever, heavily limiting the use, and eventual release into the upper atmosphere, of chlorofluorocarbons (CFCs) to prevent further degradation of the ozone layer. The reason CFCs were so destructive to the ozone layer is that they act through free radical catalysis: a single chlorine radical liberated from a CFC molecule can dissociate thousands of ozone molecules into oxygen while being continually regenerated. What if a device could be designed to utilize this chemistry in a controlled fashion to facilitate the dissociation of low-atmosphere ozone?

To begin the development of such a device, a few important points need to be considered. First, it would be best if the device operated with, at worst, a trace-emission power supply and, at best, produced no emissions in its operation. The reason for this objective is rather obvious: if the device produces air pollutants through the combustion of fossil fuels, there is a high probability that it will end up producing more ozone than it removes. Second, a further understanding of how CFCs react with ozone must be discussed to ensure that an efficient and effective ozone interaction design is created.

When CFCs react with ozone, the overall chemical process proceeds in three steps. The first reaction strips the CFC molecule of one chlorine atom after the CFC molecule is struck by UV radiation. The second reaction occurs between the newly freed chlorine radical and ozone, forming chlorine monoxide and oxygen. The third reaction regenerates the chlorine radical through the reaction of the chlorine monoxide molecule with free radical oxygen, yielding another chlorine radical and oxygen. This behavior of the chlorine radical as a continually regenerated catalyst of ozone destruction is one of the reasons CFCs were so destructive to the stratospheric ozone layer.
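The three steps can be written out explicitly. Using CFCl3 as a representative CFC (any chlorine-bearing CFC behaves similarly), the standard stratospheric cycle is:

```latex
\begin{align}
\mathrm{CFCl_3} + h\nu &\rightarrow \mathrm{CFCl_2} + \mathrm{Cl\cdot} \\
\mathrm{Cl\cdot} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2} \\
\mathrm{ClO} + \mathrm{O} &\rightarrow \mathrm{Cl\cdot} + \mathrm{O_2}
\end{align}
```

The net result of the second and third steps is O3 + O → 2 O2 with the chlorine radical returned intact, which is why a single chlorine atom can cycle through thousands of ozone molecules.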

Looking at the reaction scheme, the important element in CFC-driven ozone destruction is the chlorine radical. Therefore, actually utilizing CFCs is not necessary; only chlorine radicals are required. Due to the catalytic nature of these radicals relative to ozone molecules, it is important to avoid directly releasing them into the atmosphere, where they could migrate to higher altitudes and damage stratospheric ozone. So the idea is to keep the chlorine in an isolated environment, away from the general atmosphere but available to react with low-atmosphere ozone. One means of accomplishing this goal is to construct a blimp that can move through the low atmosphere and carry a chlorine storage methodology.

A number of storage possibilities exist for the chlorine. For example, one method would involve creating a ‘wind tunnel’ through which air could pass through a portion of the blimp in only one direction. In this ‘wind tunnel’ one could place a number of gas-permeable membranes doped with chlorine radicals, which theoretically could react with ozone in the air stream as it passes through, converting the ozone to oxygen. Testing would have to be done to see whether the formation of the chlorine monoxide intermediate would dislodge it from the membrane, but if the doping were appropriate this concern should be minor.

More exotic methods could involve a spinning-drum design using liquid chlorine flowing down with gravity, but the problem with this particular methodology is that if the device were damaged, liquid chlorine would prove far more detrimental upon contact with anything. Overall, regardless of the design, technological ideas to address increasing low-atmosphere ozone concentrations need to be theorized, and small-scale tests need to be put into the field. This concept will be investigated further on this blog at a future time.

--
1. http://www.spaceref.com/news/viewpr.rss.html?pid=28288

2. http://www.soystats.com/2009/page_11.htm

Friday, December 3, 2010

Taking a Step Back in Education Reform

The education reform movement has existed for decades, and yet when the trend in national test scores of 8th and 12th graders versus their international peers is used as an evaluation tool, the movement has been a failure. Ignoring the irony of using these particular test scores to evaluate the education reform movement, the movement seems unaffected by this reality and continues to plow ahead, blaming any ‘failure’ on bad teachers and inflexible unions that it claims are destroying any reasonable chance to educate the nation’s children. Sadly, the very methodology of the education reform movement is more to blame for its failure than any teachers’ union, yet a sense of groupthink and cognitive dissonance has allowed that flawed methodology to continue unabated. So before any real widespread education reform can occur, the flaws must be removed from the rationales used by education reformers.

The chief flaw facilitating the existence of all the remaining flaws is that ‘problems’ in schools are typically judged on a relative rather than an absolute level. What does this mean? Start with a question: why do most people believe the education system is broken? That belief typically springs from the simple comparison that international students perform better on a series of tests than U.S. students. If the current education system were exactly the same, with the only difference being that U.S. students performed better than students from all other countries, would there be any significant outcry regarding a ‘Crisis in Education’? It would be highly doubtful. Unfortunately this competitive comparison attitude also infects the reform movement.

Most members of the reform movement do not treat specific educational elements as problems because those elements are ineffective on an absolute level, but because they differ from what some foreign country does. Basically, practice x is a problem not because it has been determined to detract from the educational experience, but because country y does not do it and country y scores higher on international tests than the U.S. One may question what is wrong with a strategy that copies elements from successful education environments to replace elements in less successful ones. What is bad about replacing supposedly ineffective educational strategies with test-supported, more efficient ones?

There are some significant concerns with this mindset. The most important is that students are not static interchangeable parts; different methodologies work differently at educating different students. Ironically, this chameleon quality is frequently cited as a trait of quality teachers, yet while the teachers are supposed to be chameleons, most reformers seem to believe that the education structure in which they work should be static and one-size-fits-all. A second problem also relates to the question of similarity: one cannot simply plug in a solution without determining how that change will affect other elements of the education environment.

Therefore, effective school reform must come from examining whether an element works, or even whether it is necessary, for a particular school. Thus, more incentive needs to be placed at the local level to identify these problems in an accurate and objective manner. Some might argue that such an ideal was embedded in ‘Race to the Top’, but the evaluation method applied to ‘Race to the Top’ applications seems to demonstrate that no in-depth ‘problem’-‘solution’ linkage was required. Without significant analysis of problems, generating effective solutions is difficult; instead one is left with generic, broad suggestions that look great on paper but whose ability to actually solve the problem is unknown, because they are not formulated to address a given problem in a specific situation. The educational reform movement is so intent on finding national solutions to the perceived education crisis that it will more than likely continue to make limited progress.

The scapegoating of teachers has eliminated the most important voice from the conversation. To assume that a vast majority of teachers are not committed to, or interested in, ensuring a meaningful and effective education for U.S. children and teens is illogical and just plain wrong. Why, then, are there no mass public surveys asking the nation’s teachers what they feel are the biggest obstacles to providing a high-quality education? This disconnect between those with front-line experience and those who sit in the ‘general’s tent’ far behind the lines is an important reason why most of the proposed solutions conflict with what could actually work.

For example, when asked ‘What is the one thing you could change in your school?’, how many teachers would honestly answer ‘Well, our school is so hopeless we need to nuke it and start over by making it a charter school’? It would be surprising to find any teacher who would give such an answer, and yet such action is one of the primary responses of education reformers. Some critics would argue that of course teachers are not going to make such a suggestion because their number one priority is protecting their jobs; however, such cynicism does not make sense. It stems from the general lack of respect given to the teaching profession. One wonders how teaching went from being a highly respected aspect of U.S. society to garnering almost no respect at all. Anyone who believes that a majority of teachers want to teach in an environment that is not striving to maximize its potential is simply a fool. Such a lack of respect and empathy for what teachers have to do is another reason why a number of educational reforms fail.

Overall, one of the most important aspects of education reform has not been addressed: what those directly responsible for the education of U.S. children and teens, teachers and education administrators, believe the problems to be. Given the lack of involvement of these principal actors in the future evolution of the education system, the gridlock between the so-called reformers and the unions is not surprising. To facilitate true education reform, the next action must involve taking a step back and understanding that those ‘mediocre’ U.S. test scores are averages with a standard deviation: not all schools perform as poorly as the scores indicate, and not all perform as well. After this realization, the path to real education reform can begin with the following steps:

1. Each state should compose a survey for all of its teachers asking for input regarding education reforms; some sample questions should address (but not be limited to) the following issues:

- What do you believe is the best thing that your school does to support education?
- What do you believe is the biggest obstacle to improving the educational environment in your school?
- What do you believe is the most important thing that will change and/or influence the educational environment in the next 5 years?
- What is the general student attitude toward learning, and why does that attitude exist?

2. The federal government should require (as a condition of receiving any federal funding) each state to conduct a complete and objective audit of its educational budget to identify any points of overhead waste or general inefficiency in the distribution of funds. Note that this audit is not meant to judge whether a given program is wasteful, but simply to account for all of the funds, track how they are distributed and determine whether those funds could be distributed more effectively.

3. Each school district should hold a conference to discuss the results of the surveys and develop a plan of action, including a prospective new financial budget for addressing the obstacles identified in the surveys; each district then bundles a monetary request to address these obstacles into two categories, ‘need’ and ‘want’, which will be presented to the state.

While the above points cannot guarantee solutions to the problems with education, this method has a higher probability of producing solutions than the strategy currently being executed. Removing the ‘national’ element from the solution set should allow for greater flexibility and precision in applying solutions at the local level appropriate to the environment. Granted, while focusing solutions at the local level is important, it would be unwise to completely eliminate the federal government from the process. The development of national education standards should remain the prerogative of the federal government. However, although the federal government should decide what U.S. students need to know, it should not dictate the methodology schools use to teach that requisite knowledge.

Wednesday, December 1, 2010

What to do about Coral Reefs

Although coral reefs have experienced periodic moments of distress and bleaching in the past, those events were frequently isolated to a given weather anomaly, most often El Niño. Unfortunately these events have become increasingly common over the past decade, forming a troubling yet predictable trend that can no longer be ignored. The importance of coral and the reefs they form cannot be overstated; the mass death of coral will not simply increase the probability of oceanic biodiversity loss, but rather guarantee that loss, a loss that will be a death knell for the oceans themselves.

It is important to understand the elements of the problem before suggesting a course of action. In large part a coral is a colony of genetically identical polyps. After a significant amount of growth these polyps extend vertical calices that sometimes form a new basal plate; the formation of enough basal plates gives rise to coral reefs. While coral can procreate through either asexual or sexual reproduction, sexual reproduction is typically favored. The release of gametes that characterizes sexual reproduction fosters faster construction of new colonies, whereas asexual reproduction typically strengthens colony foundation and maintenance through coral head expansion.

Coral has the capacity to catch small animals like fish and plankton, but most coral have evolved to form a symbiotic relationship with zooxanthellae algae. This symbiotic relationship is why most coral are found in very shallow water (no deeper than 60 meters, and normally 2-10 meters): sunlight allows the zooxanthellae to undergo photosynthesis, providing food for both the algae and the coral. Also, while the calcium carbonate skeletons would on their own usually give coral a chalk-white color, their resident algae display a wide variety of colors, which gives coral its noteworthy color arrangement. Note that coral can also demonstrate a non-white color scheme on its own based on its protein synthesis pattern. Not all corals share a relationship with algae, but most of these non-algal coral species are not commonly associated with reef formation and are typically found at much greater oceanic depths.

There are many different ways that coral can die, but the two causes most associated with the recent bleaching trend are higher ocean temperatures and an increase in ocean acidity. Higher temperatures place excess thermal stress on the coral, which causes them to expel their algae in an effort to reduce overall stress. While the algae have helpful attributes, they also live within the coral, creating additional stress; thus, when faced with an environmental change that increases stress further, the coral acts to lower stress and increase its chance of survival, and the easiest way to do that is to remove the algae. If the environmental change is only temporary, then once conditions return to ‘normal’ levels the coral typically reacquires the algae. Higher ocean acidity impairs the ability of the coral to form and maintain their calcium carbonate exoskeletons, leading to skeleton deformation and collapse. How coral respond to a reduced ability to form calcium carbonate structures is still unclear beyond the fact that such a situation eventually leads to premature coral death.

Unfortunately it appears that little can be done about increasing ocean acidity beyond dramatic reductions in human-derived carbon emissions. One of the bigger problems with ocean acidity is that it is fairly uniformly distributed relative to ocean temperature (ocean temperature is important because the temperature of a liquid directly determines the solubility of a gas like CO2). Although an idea for a technological stop-gap for ocean acidity has been proposed on this blog here, addressing localized temperature flux may be a more effective strategy in the near-term if the goal is to increase coral lifespan within detrimental environmental conditions.
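The temperature-solubility link mentioned above can be sketched with Henry's law plus a van 't Hoff temperature correction. The constants below (a Henry constant of ~0.034 mol/(L·atm) for CO2 at 25 °C and a temperature coefficient of ~2400 K) are standard literature values; the script is only an illustrative back-of-envelope, not part of any proposed system.

```python
import math

def co2_solubility(temp_c, k_h_298=3.4e-2, van_t_hoff=2400.0):
    """Henry's law constant for CO2 in water at temp_c, in mol/(L*atm),
    adjusted from the 25 C reference value via a van 't Hoff correction."""
    t = temp_c + 273.15
    return k_h_298 * math.exp(van_t_hoff * (1.0 / t - 1.0 / 298.15))

# Warmer water holds less CO2: compare a cool and a warm ocean surface.
cool = co2_solubility(15.0)   # Henry constant at 15 C
warm = co2_solubility(30.0)   # Henry constant at 30 C
print(f"15 C: {cool:.4f}  30 C: {warm:.4f}  ratio: {cool / warm:.2f}")
```

Under these values, water at 15 °C dissolves roughly 50% more CO2 per atmosphere of partial pressure than water at 30 °C, which is why warming surface water tends to shift the ocean from a carbon sink toward a source.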

Local environmental cooling may be possible because of the natural uneven distribution of water temperature with depth. Most of the threatened coral species live near the surface, exposed to higher temperatures than creatures at greater depth. Based on that principle, if a device could be developed to ferry water up from a lower depth and deposit it near the coral, cooling the region around the reef could be possible. Note that because the colder deep water would not mix effectively with the warmer surface water, the overall system could be thought of as similar to a fountain: water is pumped up from a specific depth and released near the surface in close proximity to the coral reef, where conduction briefly lowers the surface water temperature before the denser water descends once again.

One possible design for such a device would be a buoy floating on the surface almost directly above the coral bed. Within the base structure of the buoy would be a pump and associated tubing descending about 800 ft below the surface. The 800 ft depth is selected to collect water from beyond the thermocline, ensuring a significant temperature differential between the collected water and the water surrounding the coral. Water would be pumped up to the buoy and released back into the water near the surface. The release methodology should probably favor sparse droplets instead of a stream to avoid unnecessary damage to the coral from excess water pressure/force. Power could be provided to the pump and any other electrical elements through a solar panel and lithium-ion battery storage.
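One point worth sketching about the pump: because the riser pipe is fully submerged and already full of water, the pump does not have to fight the entire 800 ft of lift; it only has to overcome pipe friction plus the small extra head from the cold water in the pipe being slightly denser than the warm water outside it (the same principle exploited by OTEC cold-water pipes). All numbers below (pipe diameter, flow rate, friction factor, densities, efficiency) are illustrative assumptions, not design values.

```python
import math

G = 9.81                 # gravity, m/s^2
RHO_WARM = 1024.0        # surface seawater density, kg/m^3 (assumed)
RHO_COLD = 1027.0        # deep seawater density, kg/m^3 (assumed)
DEPTH = 244.0            # ~800 ft in metres
DIAM = 0.15              # pipe diameter, m (assumed)
FLOW = 0.01              # flow rate, m^3/s (assumed, ~10 L/s)
FRICTION = 0.02          # Darcy friction factor (assumed, turbulent flow)
EFFICIENCY = 0.5         # overall pump efficiency (assumed)

# Density head: the cold column in the pipe outweighs the warm water
# outside it by (rho_cold - rho_warm)/rho_warm per metre of depth.
density_head = DEPTH * (RHO_COLD - RHO_WARM) / RHO_WARM

# Friction head via Darcy-Weisbach: h_f = f * (L/D) * v^2 / (2g)
area = math.pi * (DIAM / 2) ** 2
velocity = FLOW / area
friction_head = FRICTION * (DEPTH / DIAM) * velocity ** 2 / (2 * G)

total_head = density_head + friction_head          # metres of water
power_w = RHO_WARM * G * total_head * FLOW / EFFICIENCY
print(f"head: {total_head:.2f} m, pump power: {power_w:.0f} W")
```

Under these assumptions the pump draws only a few hundred watts, which is within reach of a modest marine solar panel and battery, though only for a fairly small flow rate.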

The biggest problem with this strategy is that a huge volume of water will need to be displaced to generate any significant cooling. At first glance the required volume of displacement seems insurmountable, but the continued, incessant movement of water over the course of months and years could generate a meaningful change. Remember that the purpose of this device is not to regionally eliminate the temperature increases threatening the coral, but to slow the increase and buy time for humans to reduce carbon emissions, temporarily reversing the ocean from a sink to a source over a few decades and returning the ocean’s CO2 load to recent generational normalcy (the CO2 concentration of the 1700s). A useful attribute of this system is that it is testable, especially in conjunction with the Argo float system, can be isolated to a single environment without damaging other non-related environments and does not require any special systems.
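To put a rough number on that volume problem, a simple conservation-of-energy mixing estimate shows how much deep water would be needed to cool a patch of surface water. Every figure here (temperatures, patch size, mixed-layer depth, per-buoy flow rate) is an illustrative assumption, not a measurement of any real reef.

```python
# How much cold deep water must mix into a warm surface layer to cool it?
T_WARM = 30.0      # surface water temperature, C (assumed)
T_COLD = 10.0      # water temperature below the thermocline, C (assumed)
TARGET_DROP = 1.0  # desired surface cooling, C

# Mixing V_c of cold into V_w of warm gives
#   T_mix = (V_w*T_w + V_c*T_c) / (V_w + V_c)
# (equal densities and heat capacities assumed); solving T_mix = T_w - d
# for the volume ratio V_c/V_w yields d / (T_w - d - T_c).
ratio = TARGET_DROP / (T_WARM - TARGET_DROP - T_COLD)

# Example patch: a 1 km x 1 km reef area with a 5 m deep mixed layer.
patch_volume = 1000.0 * 1000.0 * 5.0          # m^3
cold_needed = ratio * patch_volume            # m^3 of deep water

pump_rate = 0.01                              # m^3/s per buoy (assumed)
days_one_buoy = cold_needed / pump_rate / 86400.0
print(f"cold water needed: {cold_needed:.0f} m^3, "
      f"~{days_one_buoy:.0f} days for one 10 L/s buoy")
```

Even ignoring re-warming, a single 10 L/s buoy would take the better part of a year to cool that one patch by a single degree, which is exactly why the months-to-years persistence argued for above, or a large fleet of buoys, is essential.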

While determining an exact price for a single system is difficult, a reasonable cost estimate is possible. The overall pressure change between the surface of the water and the end of the pipe at 800 ft should be about 352.8 psi, which is not so excessive that a special material will be required. However, it may be necessary to include some form of filter on the receiving end of the tube to prevent certain lifeforms from clogging the tube and/or to avoid killing those lifeforms. As previously mentioned, the release end of the tube would probably have some form of spray attachment to break the water stream up into droplets. The solar/photovoltaic system only has to power a single pump plus any other necessary electronics, so it would probably be smaller than the ‘for home use’ systems currently available. Finally, the battery is probably the most expensive addition to a standard buoy, but with the coming popularization of electric vehicles, lithium-ion battery prices should drop, reducing the overall cost for other devices such as this one.
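The quoted ~353 psi figure can be checked against the hydrostatic formula P = ρgh; the exact value shifts by a few psi depending on the seawater density assumed (1025 kg/m³ is used here as a typical value).

```python
DEPTH_FT = 800.0
METRES_PER_FT = 0.3048
PA_PER_PSI = 6894.757
G = 9.81                      # gravity, m/s^2
RHO_SEAWATER = 1025.0         # typical seawater density, kg/m^3 (assumed)

depth_m = DEPTH_FT * METRES_PER_FT        # ~243.8 m
pressure_pa = RHO_SEAWATER * G * depth_m  # gauge pressure at pipe inlet
pressure_psi = pressure_pa / PA_PER_PSI
print(f"gauge pressure at {DEPTH_FT:.0f} ft: {pressure_psi:.1f} psi")
```

This lands in the mid-350s psi, consistent with the figure in the text, well within what ordinary reinforced hose or HDPE pipe handles.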

In closing, whether or not the above system is effective at reducing local water temperatures enough to increase coral survival time is not the main issue. Although it would be excellent if it were, the overall point is the realization that the human effort to curtail carbon emissions is not progressing fast enough to give any real level of confidence that a vast majority of coral will survive the coming decades. To save the coral it is becoming more probable that humans will have to deploy a non-emission-reduction strategy. While not guaranteed, such a strategy will probably involve some form of technological intervention, so it is important to begin both the research and the testing as soon as possible.