Friday, August 28, 2009

Brief Analysis of Merit Pay

One aspect of correcting what some perceive as a broken education system in the United States is the mass application of a merit pay system for teacher compensation. Proponents believe that a merit pay system will better identify both low-quality teachers who should no longer be in the profession and high-quality teachers who should be better rewarded for their efforts. There is also a belief that changing the basis of teacher compensation from seniority to output will create a more ‘results-driven’ environment, forcing teachers to focus on continually improving outcomes instead of relying on their reputations. These economic factors would then make education a more attractive field for higher-achieving individuals, producing higher-quality teachers and, in turn, an even greater number of higher-achieving students. Merit pay based systems are also viewed as more flexible because administrators can respond more readily to performance changes and labor market shifts.

Merit pay is not a new issue: in the late 19th century a majority of teachers were compensated based on the results of their performance rather than their seniority or skills. However, as the 20th century wore on, the share of public school teachers compensated through merit pay dropped to 48% in 1918, 20% in 1939 and 4% in 1953.1 Due to a recent surge of interest, merit pay compensation has since increased to 5-10%.2 Most believe that the rise of teachers’ unions is the primary reason for the decrease in merit pay compensation over the 20th century.2,3

Overall, merit pay has once again become a hot issue in public schools (technically the issue has never really gone away, but it has been on the back burner until recently) because of faltering test scores and poor student performance, especially in the higher grade levels, when compared against international peers. Would people really care about the state of the education system, even if it were exactly the same, if U.S. students ranked first or second internationally instead of in the low teens? For some, merit pay is viewed as a ‘silver bullet’ for education reform, solving most of the problems they believe plague school and teacher performance. Most proponents cite the successes of merit pay in private industry and private schools in an attempt to verify the superiority claim of merit pay, without realizing that public school is quite a different ‘beast’, one not effectively emulated in either private industry or private schools. In a large number of industries, merit pay rarely influences salary for employees below a certain tier in the corporate hierarchy, the cutoff usually being a supervisory role. So if the same type of internal structure were applied to a school, most teachers would not qualify for merit pay unless they were a department head (most senior teacher) or an administrator.

As previously alluded to, it is difficult to use examples from private industry or private/charter schools to justify the positive changes expected from applying merit pay in a public school environment. In most business environments it is relatively easy to evaluate why a certain project succeeds or fails because the inputs are predictable and controllable. In the classroom, however, a multitude of complexities influence test results: natural student intelligence, student work ethic, instructor style/method, instructor motivation, institutional environment, parental guidance, available resources within a given school, etc. all influence the ability of students to perform, and each carries significant layers of complexity of its own. Most opponents of merit pay attack its validity on these grounds, arguing that it is too difficult to separate the influence of the teacher from these other elements;1 however, the more appropriate question is why focus so much attention on the influence of the teacher in the first place? The role of the teacher is important, but it seems that proponents of merit pay wish to limit administrative and parental responsibility in education.

Another problem with comparing a merit pay system in a private school to one in a public school is that private schools have the ability, if so desired, to remove a student from the population pool for failing to live up to the standards of the school, whether intellectually or behaviorally. In contrast, it takes extraordinary circumstances to remove a student from a public school. Even short of outright removal, the code of conduct governing a private school can be more severe, creating an underlying motivational force to learn because actions disruptive to learning carry both greater certainty and greater severity of consequence.

The ability to pick and choose the characteristics of the student body also creates a more homogeneous population for teachers in private schools than in public schools. These homogeneous populations limit the unique influencing factors that affect the instructional process, whereas the more heterogeneous populations in public schools have no such advantage. Another element present at a higher probability in private schools than in public schools, and one that influences performance, is direct parental involvement aiding motivation in the teaching process. Also, whether due to genuinely inadequate funding (some like to cite average per-student figures for public schools ranging from $8,000 to $15,000 while ignoring the standard deviation) or to misappropriation/incompetence by administrators, public schools do not appear to have the actionable depth of financial resources that private schools have to ensure that the necessary and appropriate tools for learning exist in the classroom. Finally, the smaller populations of private schools not only reduce discipline problems, but also offer teachers more opportunities for one-on-one time with students, which facilitates improved learning. So to simply say ‘Smithville Private School has a merit pay system and look at how successful it is, Smithville Public School should have one too’ is to ignore a plethora of relevant reasons why Smithville Private School would outpace Smithville Public School beyond just having a merit pay program.

Unfortunately, another potential problem with merit pay is collusion and corruption. As seen in the No Child Left Behind (NCLB) program, when the judgment of meritorious service depends solely on peer/superior/student evaluations or standardized test scores, there is motivation to falsify results or adopt an ‘I’ll scratch your back if you scratch mine’ mentality in order to maintain funding, or to acquire more salary/funding by exceeding benchmark goals. That type of corruption is small, but it exists; some or most people (depending on one’s personal optimism regarding the human condition) tend to weaken in the morality department when money is involved. In a merit pay system it would be reasonable to expect some teachers/administrators, especially those that would be negatively affected by the system, to be tempted to ‘cheat’ it in order to acquire a larger salary. Many studies have been conducted highlighting corruption in evaluation systems.4,5,6,7,8

This is not to say that there is more corruption in teaching than in other industries; there is corruption in almost all industries and occupations, and a few bad apples should not create a misconstrued portrait of the teaching profession. However, one of the arguments for a merit pay system is that it creates a fair playing field where individuals are allegedly judged on quality of performance rather than seniority and politics; if the evaluation system is not designed properly, corruption undermines this ‘fairness’. One could argue that under such circumstances the system becomes detrimental for the more scrupulous participants. In addition, focusing too heavily on test scores creates an environment where obtaining a certain average score on a certain standardized test becomes the primary motivation of the class, rather than the proper education and knowledge needed to create a quality, rational citizen; more on that later.

There is also a question about how genuine the motivation factor is in a merit pay system. For example, most merit pay systems within public schools do result in higher overall average salaries, but the difference between the average salary in the average merit pay school and the average salary in the average non-merit pay school is less than 1,000 dollars, or about 2.7% of the annual salary.3 How much of a motivating factor is such a paltry increase in pay? The answer largely depends on the evaluation criteria put forth to determine the positive or negative outcomes of any merit pay program. If the evaluation method is transparent and fair, there is a much higher probability of improved motivation despite the small size of the reward (more money is more money) than if the evaluation method is singular in nature and disingenuous to the educational process.
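As a quick sanity check on those figures, a sub-$1,000 gap amounting to roughly 2.7% of annual salary implies an average salary of about $37,000; a minimal sketch (the gap and percentage come from the text, the implied salary is derived):

```python
# Back-of-the-envelope check on the salary figures cited above.
difference = 1000      # dollars: merit vs. non-merit average salary gap (from the text)
share = 0.027          # that gap as a fraction of annual salary (from the text)
print(f"implied average salary: ${difference / share:,.0f}")   # ~$37,037
```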

So how do the currently more popular evaluation methods fare under the attributes described above? Relying on simplistic student evaluations or grades is irrational, as these surveys and simple measurable factors are difficult to accept as genuinely impartial in a school environment. Student evaluations of instructors at the pre-college level are largely based on overall workload, grading criteria and instructor likeability. The use of such evaluations to judge the overall performance of the instructor should be questioned because students will rarely judge performance based solely on the instruction.

For example, what is the probability that a student earning (remember, students earn their grades, grades are not given) an F in a class is going to give the instructor a positive evaluation citing that the lessons were top-flight, that homework and tests were designed to optimize both learning and the evaluation of knowledge, and that the teacher conducted the class fairly? A favorable evaluation under such circumstances is not very likely. Depending on prior knowledge, potential competition issues or personal feelings, peer- or administrator-based evaluations could also be tainted with bias, offering little objectivity given their inherent subjectivity as justification for a teacher being rewarded or penalized in a merit pay system, although these structured evaluations should prove more valid than student-based evaluations.

Evaluating teaching performance based on student grades is almost as irrational as student evaluations due to how easily grades can be manipulated to achieve a favorable performance review, either indirectly by appraising coursework less harshly than one should or directly by simply changing curves and grades. The variance in student skill and intelligence would also need to be considered in such an analysis. To do so properly would imply suspending the merit pay system for a number of years to generate a ‘grading background curve’ to neutralize such variances, if such a system really wants to evaluate teaching performance. In addition, this ‘background curve’ would have to be created before any mention or application of a merit pay system in order to preserve its statistical authenticity.

One of the many flaws in using grades as an evaluation of teaching performance is illustrated by the real-life example of a high school instructor who handed out a syllabus at the beginning of the term outlining the specific requirements for attaining a given grade in the class. At the end of the term none of the students had met the criteria to pass, so every student in the class received an F. Upon hearing that the entire class had failed, the administrators called in the instructor and said that he could not fail the entire class and would need to change some of the grades. In response to this ultimatum the instructor instead gave each student an A (note the word ‘gave’, as none of the students earned the A), and the administrators who had previously chastised the instructor elected not to comment on these new grades. Clearly in this case and many others, both school administrators and parents refuse to be honest about student performance, and as long as this continues, merit pay based on grade distribution in a class is irrational, foolish and disingenuous. The door needs to swing both ways for honest appraisal of teaching performance; in short, parents need to realize that their children may not have the intellectual capacity to earn As in every subject.

Finally, scores on standardized tests do not appear to be an effective means for evaluating teaching performance. The first and most obvious reason is that judging the quality of 180 days of instruction on a single annual test, taken by a rotating and psychologically non-uniform set of participants, is irrational. A significant problem with standardized tests is that they focus more on fact memorization than on critical problem solving. Memorization by rote is becoming less and less meaningful with the continuous progression of easily stored and sorted information (Wikipedia, etc.) and new technological tools. There is little reason to memorize that the Battle of Hastings took place in 1066 when one can simply find the information in a reference source. A more important question would be ‘how did William the Conqueror defeat Harold Godwinson at the Battle of Hastings?’ because it requires critical reasoning skills, the ability to formulate and test hypotheses, and the ability to apply those skills from theoretical situations to real-world situations. Tying merit pay evaluations to the results of these exams sends a value signal to teachers that the results of these exams are an important element of the curriculum, which will influence teachers to devote more time to teaching the material on the test rather than the ability to deduce answers from available information. In a lot of respects such a shift has already occurred on some level.9,10,11

Another problem with using standardized tests as the measure of teaching proficiency is that individual schools have no direct, or really even indirect, control over the content of the test. Therefore, these tests have a tendency to distort the actual education that students are receiving and the genuine performance of teachers. The somewhat cruel irony is that the United States is the only industrialized nation that places such significant emphasis on standardized tests of this nature, and yet despite this specific focus almost all other competing industrial nations outperform the United States on these very tests. Clearly there must be a better evaluation criterion than standardized tests. Any school that utilizes standardized tests as the sole criterion for evaluation of anything is a failure.

One unfortunate issue is that most proponents of merit pay either do not appear to be aware of these evaluation methodology flaws or do not seem to care about them when pressing for the application of merit pay. Instead of addressing the flaws in the more popular evaluation methodologies, proponents focus on criticizing teacher unions as an impediment to administering merit pay in a wider number of schools. Ironically, such a criticism could be valid if a more reasonable and less flawed evaluation system were proposed. Such a scenario would allow merit pay proponents to differentiate between unions rationally protesting merit pay due to low-quality, inappropriate and non-transparent evaluation methods and unions simply resisting in order to protect the jobs of low-quality teachers. Not surprisingly, most merit pay proponents assume only the latter reason for union opposition. Suffice to say, the key element in the debate regarding the application of a merit pay system is the development of a valid and appropriate evaluation system.

In light of the above criticism it would be prudent to identify the most notable merit pay systems in operation in U.S. public schools: the Professional Compensation System for Teachers (ProComp) in Denver, the Governor’s Educator Excellence Award Programs (GEEAP) in Texas, Special Teachers Are Rewarded (STAR) in Florida and Quality Compensation (Q-Comp) in Minnesota.4 Overall, it is difficult to effectively analyze either GEEAP or STAR because their recent approval, in 2006 and 2007 respectively, provides only a small sample from which to judge the positive and negative aspects of each program; therefore, GEEAP and STAR will not be discussed further. An initial analysis of Q-Comp determined that it appears to have had a positive effect on schools and supporting teachers, but no statistical link was established for these trends.12

The longest-running ‘merit pay’ based program still active in public schools is ProComp, which began as a pilot program in 1999 and was approved by Denver voters for full application in the Denver school system in 2005.4 ProComp focuses on improving teacher performance and pay opportunities through four separate components: enhancement of knowledge and skills, quality professional evaluations, market incentives and improved student growth.4 Unfortunately for proponents of merit pay, ProComp is not a very strong piece of evidentiary support for applying pure merit pay systems nationally, because when the program is broken down, the largest portion of the pay incentive is derived from the knowledge and skills component (43.2%, essentially what certifications and degrees the teacher holds), not the improved student growth component (23.1%). This structure is interesting because most people still view merit pay as tying the majority of the pay incentive to improvement in student performance, whereas that notion is a part of, but not the direct focus of, the most successful ‘merit pay’ system in a public school district.

The fact that ProComp has been labeled a success, and there is no real reason to suggest otherwise, sets a powerful precedent for what perhaps should be the model for new merit pay programs. Such a strategy makes the acquisition of certified skill sets thought to improve teaching performance, not the direct measurement of change in student performance, the driving force behind incentive pay. This system would limit the influence of evaluating student performance while still increasing the probability of improving it, since the newly acquired skills should give teachers better strategies for improving the learning environment. However, such a system would handicap the ability to apply sufficient penalties for poor teaching performance to force out low-quality teachers, something merit pay proponents believe is necessary, because the pay incentives for certification would be higher than any reasonable pay penalties for poor performance. The argument that poor performance could induce termination does not change the status quo, where under a fixed salary system a teacher can already be fired for poor performance; thus the most successful empirically tested system does not appear to have a means of rooting out low-quality teachers that is superior to the current system.

Unfortunately, there are other potential complications with how ProComp distributes pay incentives. The most notable concern comes from the belief that the acquisition of certifications, degrees and higher-level skill sets does little to actually influence the teaching dynamic put forth by a given teacher or to increase student performance.13,14 If these studies are to be believed, then over 40% of the pay incentives put forth by ProComp do little to nothing to increase student achievement, a statistic that could very easily change the view of ProComp from a success to a failure. In addition, if valid, such a reality would reduce the versatility of a merit pay system, placing more stress back on the student performance evaluation methodology, its execution and its honesty.

Regardless of whether or not higher credentials affect teacher performance, any new system for evaluating teacher performance must be transparent in distinguishing why one teacher attained a certain level of standing and another did not. The more subjective the system, the greater the potential for internal conflict and grievances, which do nothing but hurt the educational environment. Of course, such transparency is only essential in a competitive merit pay environment where only so much money is allocated for pay incentives. If money is not the limiting factor, internal conflict should be rare because most teachers, like most employees in general, would be concerned only with their own evaluations.

In addition, any evaluation system must adequately test critical reasoning and problem solving skills, innovation, information communication and the ability to work within a team: skills that actually prepare students to be productive and intelligent citizens. Finally, the execution of any evaluation system for use in a merit pay system must blend naturally into the construct of the learning environment; wasting class time by conducting evaluation after evaluation or unnecessary test after test will more than likely end up hurting the students more than helping them.

One of the trickiest issues when considering a merit pay program is defining the evaluation criteria across grades and subject matter and what scale differences, if any, should exist in such a program. For example, should a merit pay program have the exact same set of generic criteria for each grade and each subject, with teachers obtaining the same bonuses or penalties based on their attainment of those criteria? Is it fair to say teaching English is as hard as teaching Physics? Or are there simply different sets of skills required for each, with any overall difference in difficulty mitigated by those skill sets?

Suppose it is reasonable to suggest that certain subjects are more difficult to teach than others; even if they are, do their teachers deserve more money? If they do, will the additional salary be part of the base salary or will there be a higher ceiling on merit rewards for these individuals? And what do you say, regarding the overall importance of the subject they teach, to those with a lower merit reward ceiling? These are important questions that need to be addressed both in general and in any type of merit program. ProComp deals with this question under its ‘market incentives’ component: an additional $989 is available to teachers who teach hard-to-staff or hard-to-serve subject matter (the $989 appears to be awarded independently, so two awards can go to the same teacher for a hard-to-staff and a hard-to-serve class).

Some argue that merit pay is meaningless as a driver of teacher improvement until a system is established that forces the school itself to improve. Merit pay may create a situation that motivates a teacher to teach better, but if the educational capacity of the school environment itself has a low ceiling, then no matter how good and/or motivated a teacher is, that low ceiling will tend to produce lower expectations and results. For example, if the school does nothing to recognize or value academic achievement, there will be little motivation, regardless of the teacher, for students to be interested in learning. Therefore, a truly effective merit pay system cannot exist without some level of motivation from the school to improve the overall academic environment. Note that it is reasonable to anticipate the effect of raising this ceiling to be saturating rather than uniform: if the increase is from an initially low capacity there will be a roughly linear change in the ability of a teacher to teach a student, but as the capacity increases the change shifts from linear to logarithmic; think of a Michaelis-Menten curve. In essence, the higher the existing capacity, the less positive change in teaching potential is gained from increasing it further.
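For reference, the Michaelis-Menten curve invoked above is a saturating function; mapping school capacity to the substrate term is an illustrative analogy rather than a quantitative model, but the shape being described is:

$$v = \frac{V_{\max}\,[S]}{K_m + [S]}$$

which grows almost linearly, $v \approx (V_{\max}/K_m)\,[S]$, when $[S] \ll K_m$ and flattens toward the ceiling $V_{\max}$ when $[S] \gg K_m$.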

Even if the evaluation portion of a merit pay system were designed properly, the problem of continuous and sustained funding would remain. Taxpayers are notorious for failing to pass school funding levies and bonds. Add to that the fact that anyone who is not a complete cynic regarding the nation’s public school systems would expect to see a majority of teachers, after a couple of years of adjustment, meeting the evaluation benchmarks and thus obtaining at least most of the prescribed pay incentives. Outside of applying a quota-curve system where only a certain number of teachers can occupy a given merit classification region, which would breed competitiveness and completely undermine the honesty and fairness of the evaluation system, such a program would carry significant budgetary expectations each year; therefore, funding for the program would need to be available each and every year. Otherwise it would be similar to telling a student to be proud of an earned 96% in a given class, but that because twenty other students earned higher percentages, the student in question will receive a B: a lack of reward despite fulfilling the required objectives for that reward.
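To make the quota-curve objection concrete, here is a minimal sketch (the scores, benchmark and quota are made-up values) of how a fixed number of funded bonus slots denies rewards to teachers who met the benchmark:

```python
# Hypothetical quota-curve payout: all five teachers clear the
# 90-point benchmark, but only two bonus slots are funded.
scores = {"A": 96, "B": 97, "C": 95, "D": 98, "E": 94}
benchmark, quota = 90, 2

ranked = sorted(scores, key=scores.get, reverse=True)  # best scores first
funded = set(ranked[:quota])                           # only `quota` slots pay out
for teacher, score in scores.items():
    if teacher in funded:
        status = "bonus"
    elif score >= benchmark:
        status = "no bonus, despite meeting the benchmark"
    else:
        status = "below benchmark"
    print(teacher, score, status)
```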

So with millions of dollars needed to fund merit pay programs and public schools already strapped for cash, or at least claiming to be, where will the funding come from? Currently there appear to be only two options: either taxes would have to be raised for citizens in the school’s district, and it is difficult to believe that most communities would accept this increase, or significant corruption/incompetence reform would have to produce the necessary funds, which is unlikely. Add to this the other funding problems for public schools just over the horizon (busing children to school is one of the big ones) and the question of funding becomes even more pertinent.

Overall there are four explanations for poor student performance: first, the teachers are not properly trained or lack the skill to teach effectively; second, students come to school unprepared to learn or do not have significant levels of natural intelligence; third, the school does not have and/or provide adequate resources to facilitate high-quality learning and instruction; fourth, students and/or teachers are not sufficiently motivated. Applying a merit pay system directly affects none of these elements and, depending on the system, indirectly affects only the first and fourth. Therefore, if a given U.S. school is going to compete at a relatively uniform level in both domestic and international environments, all of these elements will need to be addressed, not just one or two.

With all that has been said, if individuals are satisfied with the structure and results provided through ProComp, despite the fact that it is not even close to what most people seem to envision as a merit pay system, then a significant amount of the work involving merit pay has already been accomplished and progressive tweaking is all that will be required. However, if ProComp is not viewed as a long-term viable system or does not accomplish the goals of a merit pay system, then there are four critical questions that must be asked regarding merit pay. First, what elements will make up the components of the pay incentives within the merit pay system (will student performance be the only factor; will certain subjects be handicapped with greater/lesser bonus potential; will credentials matter; etc.)? Second, if student performance is utilized as an evaluation criterion for pay incentives, what elements of performance will comprise the total evaluated performance, and how will school academic incentives aid or detract from teacher evaluation? Third, how will teachers who are struggling under the merit pay system be evaluated, and how much time and progression will be allotted before termination? Fourth, where will the money to fund the merit pay program come from? Until these questions are addressed in an objective, open and honest fashion, further discussion regarding merit pay does not seem useful; instead it would be a sub-optimized waste of time.

--
1. Murnane, R.J., and Cohen, D. “Merit pay and the evaluation problem: Why most merit pay plans fail and few survive.” Harvard Educational Review. 1986. 56(1): 1-17.

2. Figlio, David, and Kenny, Lawrence. “Individual teacher incentives and student performance.” Journal of Public Economics. 2007. 91: 901–914.

3. Goldhaber, Dan, et al. “Why Do So Few Public School Districts Use Merit Pay?” Journal of Education Finance. 2008. 33(3): 262-289.

4. Podgursky, Michael, and Springer, Matthew. “Teacher Performance Pay: A Review.” National Center on Performance Incentives. United States Department of Education’s Institute of Education Sciences. (R305A06034).

5. Figlio, D., and Getzler, L. “Accountability, ability and disability: Gaming the system?” NBER Working Paper 9307. 2002. Cambridge, MA: National Bureau of Economic Research.

6. Cullen, J.B., and Reback, R. “Tinkering toward accolades: School gaming under a performance accountability system.” NBER Working Paper 12286. 2006. Cambridge, MA: National Bureau of Economic Research.

7. Jacob, B. “Testing, accountability, and incentives: The impact of high-stakes testing in Chicago Public Schools.” Journal of Public Economics. 2005. 89(5-6): 761-796.

8. Peabody, Z, and Markley, M. “State May Lower HISD Rating; Almost 3,000 Dropouts Miscounted, Report Says.” Houston Chronicle. 2003. June 14, A1.

9. Goodnough, A. “Answers allegedly supplied in effort to raise test scores.” 1999. New York Times. December 8.

10. Koretz, D., et al. “Perceived Effects of the Kentucky Instructional Results Information System (KIRIS).” 1999. Santa Monica, CA: RAND Corporation.

11. Jacob, B., and Levitt, S. “Rotten apples: An investigation of the prevalence and predictors of teacher cheating.” Quarterly Journal of Economics. 2003. 118(3).

12. “Quality Compensation for Teachers Summative Evaluation.” Hezel Associates, LLC. January 2009.

13. Kane, T.J., Rockoff, J.E., and Staiger, D.O. “Identifying effective teachers in New York City.” Paper presented at NBER Summer Institute. 2005.

14. Rivkin, S., Hanushek, E.A., and Kain, J.F. “Teachers, schools, and academic achievement.” Econometrica. 2005. 73(2): 417-458.

Wednesday, August 26, 2009

Permafrost and Carbon Stores

One of the biggest concerns regarding the consequences of climate change is the mass thawing of Arctic permafrost. Although average global temperatures have increased by 0.6 C over the last 100 years, most of that increase occurring in the last decade,1,2 such a statement can be misleading when considering the problem of permafrost thawing. Initially one would be hard pressed to view an increase of 0.6 C as a significant threat to permafrost stability, but unfortunately average temperatures in Arctic regions are increasing at a much more rapid pace than in the rest of the world, with increases ranging from 3-7 C, and permafrost active layers have already begun to increase in depth.3,4,5

For reference, the active layer of permafrost is generally defined as the seasonally thawing layer overlying permafrost: the active layer thaws during the summer and refreezes during the winter. Measuring how the active layer increases is important because most of the biochemical and hydrological processes that take place in permafrost regions occur within the active layer. In contrast, very little to no biological or chemical activity occurs below the active layer due to the sub-zero temperatures that keep the environment frozen.

Although modeling and empirically measuring permafrost thawing is important, there are two significant problems. The first is the continuing loss of palsas, which are telltale signs of permafrost in the discontinuous zone and sometimes regarded as the only truly reliable evidence of permafrost existence there.6,7 This loss makes measuring the influence of climate change on the discontinuous zone, both in extent and rate of disappearance, more difficult. Fortunately, carbon stores in the discontinuous zone are less significant than those in the continuous zone. The second problem is the same one normal climate modeling suffers from: an incomplete understanding of positive and negative feedback loops within the climate itself. Recent empirical evidence has demonstrated that current models have underestimated the rate of surface sea ice melt, sea level rise and the neutralizing/masking influence of aerosols.8,9 Combine these underestimations with the only recent realization that significant ice loss will accelerate permafrost melt, and there is a high probability that existing models of CO2 release from permafrost underestimate the total consequence.

Most of the empirical evidence tracking permafrost thawing only does so on a regional level. However, because most of the progressive warming in the Arctic appears to be generally uniform, regional analysis can be applied at some level to unanalyzed regions to generate a ballpark understanding of overall thawing. One particular study from Sweden calculates average permafrost thawing at between 0.7 and 1.3 cm/yr;10 unfortunately, this rate has a significant standard deviation, because over the past decade thawing increased to 2 cm/yr, including an 81% loss of total permafrost at discontinuous zone sampling points.10 If one assumes that this rate increases at 5% per year due to continuing global warming (a conservative estimate), then by 2030 roughly an additional 0.78 meters of permafrost will thaw.
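That projection is simple geometric-series arithmetic; a minimal sketch (assuming the 2 cm/yr rate compounds at 5% annually from 2009 through 2030) roughly reproduces the figure:

```python
# Back-of-the-envelope reproduction of the thaw projection above.
rate_cm_per_yr = 2.0            # recent thaw rate from the Swedish study
growth = 1.05                   # assumed 5% annual acceleration
total_cm = 0.0
for year in range(2009, 2031):  # 22 thaw seasons through 2030
    total_cm += rate_cm_per_yr
    rate_cm_per_yr *= growth

print(f"additional thaw by 2030: {total_cm / 100:.2f} m")  # ~0.77 m, close to the quoted 0.78 m
```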

Surface permafrost thawing is a particular concern because most of the organic carbon in permafrost originated from plant photosynthesis and growth within a dynamic active layer over millions of years; as a result, carbon concentrations are higher near the surface of permafrost, and thus the first half meter to full meter of thaw will probably be the most important with regards to the amount of carbon released over a given period of time. Basically, the first ‘layer’ of thawing will be akin to destroying a dam, as a significant amount of CO2 and methane will be released into the atmosphere; then, as more thawing occurs, a more constant but lower amount of CO2 and methane will be continually released.

Mass thawing of permafrost is a significant concern because over its history permafrost has naturally sequestered anywhere from 98 petagrams (108 billion tons) to 1672 petagrams (1842 billion tons)11 of releasable carbon material, or 1/6 to 2 times the total amount of carbon currently in the atmosphere, locked in approximately 14.7 to 20.4 million square km of permafrost.11,12 The disparity in estimates is so significant because some estimates include cryogenic (freeze-thaw) mixing and sediment deposition at greater depths previously unconsidered.11 The larger estimate of 1672 petagrams was determined after the initial estimate of 98-100 petagrams by digging to a greater depth in Arctic permafrost (approximately 3 meters instead of 1 meter).11,13 Estimates of sequestered carbon could also be smaller than reality due to discounting current rates of methane ebullition from Arctic lakes.2 Note that when discussing the influence of permafrost thawing on climate change, Arctic permafrost is far and away the most important; the Southern Hemisphere has permafrost as well, but it typically holds a much lower store of carbon.13

Sadly, these discrepancies are rather meaningless because even if the lowest estimate of 98 petagrams were correct (it is unlikely that it is), it would still mean the release of enough carbon-based material (both methane and CO2) into the atmosphere to create a very high probability of serious and permanent detrimental climate change regardless of what steps humans take in the future. At least, that scenario represents the common belief. Some acknowledge a little more wiggle room, as permafrost thawing may increase the probability of excess small flora growth, mostly shrubs, that could absorb some of the CO2 released from permafrost.14,15 Unfortunately this additional sink capacity is only short-lived, and the total time for conversion from sink to source is unknown, although 5-20 years seems a reasonable range.

Therefore, with the stakes so high, the first line of defense would be to reduce the emission of greenhouse gases into the atmosphere to reduce the probability of initiating the catalytic cascade of permafrost thawing. Unfortunately, based on all available empirical evidence regarding the speed of climate change, the psychological importance attributed to climate change by the general population and the general inaction of the governments of most major emitters, it does not appear that this necessary reduction in emissions will occur in time to prevent major thawing. Therefore, it is important to develop strategies to reduce the influence of the carbon released from its permafrost cage on overall climate change.

There appear to be three stages of intervention for neutralizing the influence of carbon-based gases from thawing permafrost. The first stage involves refreezing the permafrost by changing the local environment to compensate for the average global temperature increase brought on by excessive greenhouse gases in the atmosphere. The second stage involves trapping the methane and CO2 once it escapes the permafrost, but before it can leave the localized environment of its origin. The third stage involves augmenting technologies that would accelerate removal of these carbon-based gases from the atmosphere. Clearly the third strategy is simply air capture, but the problem with depending on air capture is that the potential release from permafrost stores would easily eliminate any economic viability of the strategy (whatever viability may exist now) due to the sheer amount of methane and CO2 available for release. Therefore, the focus should be on the first two stages.

It is difficult to fathom successful execution of either of the first two stages because of the sheer scale of action required. Recall that Arctic permafrost covers 14.7 to 20.4 million square km of the planet’s surface (depending on the exact definition of permafrost utilized), roughly 5.7 to 7.9 million square miles. To propose any type of program to neutralize even 20% of the greenhouse gases that would be released from permafrost over the course of thawing seems insane. However, such a program must occur if humans are to maintain a climate that can carry a capacity of billions of humans. Unfortunately, focus on the first stage does not appear to be a viable strategy because compensating for the increased average global temperature would require technology to provide cooling. Realistically, the immediate concern is that the energy and materials required to pursue such a strategy could be unavailable. Note that in this instance the economics of neutralization are rather meaningless, because averting significant detrimental climate change provides more economic benefit than the cost of almost any strategy to reduce its probability; instead, the sheer amount of material required may simply not be physically available.

For example, there are two main strategies that could be implemented to cool the permafrost environment despite higher average global temperatures. The first option would involve developing some form of gargantuan air conditioner that would function over an area of, say, 1 square mile. Recall the total surface area of permafrost in the Arctic, millions of square miles, and one quickly realizes that it would be borderline impossible to implement a mechanical direct heat-exchange strategy using air conditioners or any other type of apparatus. The second option is rather exotic, involving dispersal of a cooling gas, perhaps something like Freon, onto the surface of the permafrost to reduce temperatures. This option bears a lot of similarity to the sulfur-dioxide atmospheric release geo-engineering option. Unfortunately, not only would this strategy suffer from the same resource problems as the first option, but there would also be potential environmental damage from Freon or other particles being transferred out of the permafrost region by wind gusts or consumed by wildlife. Thus, although possible, neither of these strategies appears probable.

Therefore, with thawing all but guaranteed on some level and any form of air capture extraordinarily unlikely to neutralize the carbon-based gases released from the thawing, it appears that the only available option is to try to reduce the amount of gas released into the atmosphere by absorbing the gas prior to its escape from the local region of ejection. Basically, a large swath of permafrost needs to interact with a molecule or structure that can absorb, or render inert, the methane and CO2 released from the permafrost. A limited number of options fit these criteria, but the most efficient means to accomplish this goal may be to revisit bio-char.

A vast majority of the publicity surrounding bio-char involves its use as a supplementary tool to reduce atmospheric CO2 by disallowing the respiration- or decomposition-based release of CO2 previously absorbed by plant life. However, bio-char has other notable qualities aside from sequestering absorbed CO2: enhancement of crop yield, enrichment of soil and even the possibility of absorbing nitrogen oxides and methane.16,17,18 Although there is not a significant amount of research regarding bio-char/charcoal and its interaction with methane, initial studies have concluded that charcoal (basically what bio-char is) completely suppresses methane emissions from both soybeans and B. humidicola at 20 g/kg of soil.19 Unfortunately, there is a significant caveat to this analysis: the lingering questions of whether plants produce a significant amount of methane in the first place and whether bio-char actually acts as a genuine absorption medium. This lack of information pertaining to the maximum methane absorption capacity of bio-char raises the question of whether bio-char deployment in permafrost environments would make any significant difference in methane neutralization.

For the moment, assume that bio-char does indeed absorb significant quantities of methane, thus making it a viable candidate for limiting carbon-based release from permafrost. The big advantage that bio-char has over other absorption technologies is that the energy and technological requirements to generate large quantities are significantly lower. Bio-char can also be produced at large scale relatively quickly once the necessary infrastructure is established. Another advantage is that even if its ability to absorb methane proves insignificant, its ability to enhance soil quality and increase crop yields would more than likely have some positive influence on the already reported increase in shrub and other flora growth in thawed permafrost regions, increasing their ability to act as a carbon sink.

So how much bio-char would be required to accomplish 100% Arctic permafrost coverage? First, assume that the pyrolysis process utilized to synthesize the bio-char is either 80% or 100% efficient and that slow pyrolysis is used to maximize bio-char production. An important lingering issue is what plants should be cultivated as feedstock for the pyrolysis process. Although it would be convenient to convert large amounts of forest residue to bio-char because it is already available, such a strategy may produce erosion and growth problems; therefore, it may be wiser to develop specific soil plots to grow feedstock for bio-char production. Bio-char synthesis through pyrolysis is known to depend on the lignin content of the particular feedstock,20 so it would be wise to grow high-lignin feedstock to maximize bio-char synthesis rates. However, it must be pointed out that feedstock grown for bio-char production will consume land that would more than likely otherwise be utilized for growing food crops. If not enough land is available for both food production and bio-char production, additional bio-char feedstock plots will have to be shifted to non-ideal soil to compensate for the reduction in available high-quality soil. To that end, if an appropriate amount of land is available, then a mixture of legumes (to maintain soil quality), grain husks and kernels would be appropriate; if land availability is a limiting factor, then a crop like switchgrass plus grain husks would be useful as feedstock.

Empirical evidence has identified average bulk densities for bio-char generated from various sources ranging from 0.30 to 0.43 g/cm3.21 From this range, assume a bulk density of 0.35 g/cm3, as there is evidence to suggest that higher pyrolysis temperatures generate higher-density bio-char but also result in less total bio-char.21 Assume further that the bio-char layer averages 10 cm thick and that pyrolysis has a 42% synthesis rate. Fortunately, this analysis has only the simple goal of generating a realistic estimate of the bio-char required to cover Arctic permafrost, so the permafrost area can be treated as a simple square (yes, this assumption is irrational for an exact figure, but it will produce a meaningful ballpark figure). Using the above information and assumptions, an 80% conversion efficiency would require 6.43125 x 10^17 to 8.925 x 10^17 grams of bio-char, whereas a 100% conversion efficiency would require 5.145 x 10^17 to 7.14 x 10^17 grams. Either case would require over 1 x 10^18 grams of feedstock.
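The arithmetic behind those figures is straightforward; a minimal sketch using the stated assumptions (area range, 10 cm layer, 0.35 g/cm3 density, 42% pyrolysis yield, efficiency scaled as in the text):

```python
# Rough check of the bio-char coverage figures quoted above.
CM2_PER_KM2 = 1e10                  # 1 km^2 = 1e10 cm^2

def biochar_mass_g(area_km2, thickness_cm=10.0, density_g_cm3=0.35):
    """Mass of bio-char needed to blanket `area_km2` to `thickness_cm`."""
    return area_km2 * CM2_PER_KM2 * thickness_cm * density_g_cm3

for area in (14.7e6, 20.4e6):       # low and high permafrost area estimates, km^2
    mass = biochar_mass_g(area)
    print(f"area {area:.3g} km^2:")
    print(f"  bio-char at 100% efficiency: {mass:.4g} g")
    print(f"  bio-char at  80% efficiency: {mass / 0.8:.4g} g")   # scaled as in the text
    print(f"  feedstock at 42% yield:      {mass / 0.42:.4g} g")  # > 1e18 g in both cases
```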

Initially the bio-char demands seem to sink the idea. However, there are two important considerations that would decrease the requisite demand. First, the above mass demands assume coverage of all of the permafrost (more or less, given the square assumption) in the Arctic; to avert climate change due to the release of carbon stores from permafrost, however, not all of this carbon needs to be neutralized. Unfortunately there is no good way of estimating what percentage of coverage is required, largely because there is no evidence indicating whether carbon stores are divided equally within permafrost. Second, the bio-char thickness required for aiding flora growth and absorbing methane may not have to be as large as 10 centimeters, although not a lot of relief should be expected from this consideration.

Overall, it is imperative that action be taken against the potential climate-altering release of CO2 and methane from permafrost, for it is highly likely that thawing will be significant enough that these reserves will play a role in continuing climate change. It is unclear whether bio-char can be a temporary or even permanent solution to the permafrost release concern due to the sheer amount that appears to be required. However, there is enough evidence to suggest that bio-char, where successfully deployed, would have a positive effect in helping limit the total release of CO2 and methane from permafrost stores, thus affording society more time to reduce existing and future emissions and thereby reduce the probability of further permafrost thawing. If the world is going to undertake an extensive bio-char production program, which most environmentalists believe it should, then it may be more beneficial to deposit the bio-char in the Arctic than in a neighboring field.


--
1. Romanovsky, V. E., et al. “Permafrost temperature records: indicators of climate change.” EOS. 2002. 83: 593-594.

2. Walter, Katey, Smith, Laurence, and Chapin III, Stuart. “Methane Bubbling from northern lakes: present and future contributions to the global methane budget.” Phil. Trans. R. Soc. A. 2007. 365: 1657-1676.

3. “Annual Arctic Report Card Shows Stronger Effects of Warming.” October 16, 2008. http://www.noaanews.noaa.gov/stories2008/20081016_arcticreport.html.

4. Lachenbruch, A.H., and Marshall, B. V. “Changing climate: geothermal evidence from permafrost in the Alaskan Arctic.” Science. 1986. 234:689–696.

5. Nelson, F.E. “Geocryology: (Un)frozen in time.” Science. 2003. 299:1673–1675.

6. Lyon, S.W., et al. “Estimation of permafrost thawing rates in a sub-arctic catchment using recession flow analysis.” Hydrol. Earth Syst. Sci. 2009. 13: 595-604.

7. Nihlen, T. “Palsas in Harjedalen, Sweden: 1910 and 1998 compared.” Geogr. Ann. A. 2000. 82(1): 39–44.

8. Hawkins, Richard, et al. “In Case of Emergency.” Climate Safety. 2008. Public Interest Research Centre.

9. Myhre, Gunnar. “Consistency Between Satellite-Derived and Modeled Estimates of the Direct Aerosol Effect.” Science. June 18, 2009. DOI: 10.1126/science.1174461.

10. Akerman, H. J., and Johansson, M. “Thawing permafrost and thicker active layers in sub-arctic Sweden.” Permafrost Periglac. 2008. 19(3): 279–292.

12. Schuur, Edward, et al. “Vulnerability of Permafrost Carbon to Climate Change: Implications for the Global Carbon Cycle.” BioScience. 2008. 58(8): 701-714.

13. Slanina, Sjaak. "Permafrost in the Arctic." Encyclopedia of Earth. October 11, 2007. International Arctic Science Committee.

14. Wagner, D., and Liebner, S. “Global Warming and Carbon Dynamics in Permafrost Soils: Methane Production and Oxidation.” Permafrost Soils. Soil Biology. 2009. 16: 219-236.

15. Schuur, Edward, et al. “The effect of permafrost thaw on old carbon release and net carbon exchange from tundra.” Nature. May 2009. 459: 556-559.

16. Glaser, B., et al. “The Terra Preta phenomenon – A model for sustainable agriculture in the humid tropics.” Naturwissenschaften. 2001. 88: 37-41.

17. Glaser, B., Lehmann, J., and Zech, W. “Ameliorating physical and chemical properties of highly weathered soils in the tropics with charcoal - a review.” Biology and Fertility of Soils. 2002. 35(4): 219-230.

18. Lehmann, J., and Rondon, M. “Bio-char soil management on highly-weathered soils in the humid tropics.” Biological Approaches to Sustainable Soil Systems. 2005. Boca Raton: CRC Press, in press.

19. Rondon, M.A., Ramirez, J. A., Lehmann, J. “Greenhouse Gas Emissions Decrease with Charcoal Additions to Tropical Soils.”

20. Amonette, Jim. “An Introduction to Biochar: Concept, Processes, Properties, and Applications.” Harvesting Clean Energy 9 Special Workshop. Billings, MT. January 25, 2009.

21. Lehmann, Johannes, and Joseph, Stephen. Biochar for Environmental Management: Science and Technology. Earthscan Publications Ltd. March 2009. ISBN-10: 184407658X. pp. 28-29.

Friday, August 14, 2009

Revisiting the Energy Gap - McKinsey Report Update

For the original energy investigation go to:
http://bastionofreason.blogspot.com/2009/07/emission-adherence-in-2020-and-2030.html

With the recent release of the McKinsey and Company report “Unlocking Energy Efficiency in the U.S. Economy”, regarding the total efficiency potential for energy savings and emission reduction in the United States, it is useful to apply information from this report to the previously analyzed issue of electricity shortfalls in meeting the current emission standards set forth by the ACES.

The McKinsey report hypothesizes a maximum savings of 9.1 quadrillion BTUs (2.667 billion MW-h) of total energy if all of the proposed efficiency projects are successfully undertaken and completed.1 These efficiency savings are estimated to reduce annual CO2 emissions by up to 1.1 billion tons (1.1 gigatons).1 Of that total, approximately 40.87% of the savings come from the electricity sector. Most of the remaining savings are somewhat inconsequential from an electricity standpoint because they are derived from sectors that are capped under the ACES; those reductions would therefore occur anyway. The only real advantage in these sectors, but it is a big one, is that increased efficiency involves significantly lower costs than other reduction mechanisms. One could argue that efficiency also has an advantage in speed of reduction (overall emissions are reduced faster through efficiency measures than through other avenues), but this speed would only account for a small elimination of future atmospheric CO2 concentration.
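A quick unit check on those headline numbers (the 40.87% electricity share is taken from the text; the rest is the standard BTU-to-kWh conversion):

```python
# Sanity check on the quoted savings figures.
BTU_PER_KWH = 3412.14                            # standard conversion: 1 kWh = 3412.14 BTU

savings_btu = 9.1e15                             # 9.1 quadrillion BTU (McKinsey maximum)
savings_mwh = savings_btu / BTU_PER_KWH / 1000   # BTU -> kWh -> MWh
print(f"total savings: {savings_mwh / 1e9:.3f} billion MW-h")         # ~2.667

electric_share = 0.4087                          # electricity sector share (from the text)
print(f"electricity savings: {savings_mwh * electric_share / 1e9:.2f} billion MW-h")  # ~1.09
```

Note that the ~1.09 billion MW-h electricity figure matches the down-slope (1,020 million MW-h) plus up-slope (70 million MW-h) savings quoted below.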

The total savings from electricity can be divided into two different categories: those that influence the down-slope of demand and those that influence the up-slope of demand. The down-slope of demand refers to the total reduction in electricity use by existing infrastructure; the term ‘down-slope’ is used because with the application of efficiency the electricity demand curve itself flips, going down instead of up. The up-slope of demand refers to the total reduction in electricity use required by future infrastructure; the term ‘up-slope’ is used because electricity required by future buildings can at best only be reduced to 0 MW-h (if all of the electricity is provided by a self-generated non-emitting source), and it is improbable that all future infrastructure will meet this condition. Therefore, no matter how great the efficiency improvements to new buildings, there will be an increase in electricity demand; efficiency cannot change the direction of the demand slope for new infrastructure, it can only reduce the slope of the increase.

So assuming that all of the efficiency alterations are deployed by 2020 as suggested by the McKinsey report, the result would be a total savings of 1,020,000,000 MW-h on the down-slope of demand and 70,000,000 MW-h on the up-slope of demand. Note that assuming all of the efficiency alterations occur is a very improbable assumption, as pointed out by the McKinsey report itself: the breadth of the improvements covers over 100 million buildings at the private, local, state and federal levels and billions of appliances and electronic devices. Thus, this upper limit proposed by the McKinsey report must be regarded as the best possible, albeit remarkably unlikely, scenario. Its probability of occurrence is so low that it cannot be seriously projected; instead, it can be viewed as a target for savings from all existing infrastructure to be attained at some point in the distant future. For this update, 50% of this best-case scenario was assigned as the 2020 efficiency level and applied to the current model. The point of the update is to identify how this 50% best-case scenario would influence the required electricity growth in trace/zero emission and natural gas providers, instead of basing efficiency savings on projected electricity estimates.

The only major change is the incorporation of the proposed electricity reduction due to efficiency upgrades. The 2007 to 2020 electricity demand curve remained the same as in the previous study, with the predicted up-slope demand from McKinsey subtracted from each scenario. The 2020 to 2030 electricity demand curve was divided into the low, medium and high scenarios estimated by the EIA, as in the previous study, with an additional 12% reduction corresponding to the electricity portion of the total energy reduction. Basically, because only 50% of the maximum efficiency reduction was assumed for existing infrastructure leading up to 2020, the 12% represents further efficiency deployment in the existing infrastructure. All other assumptions and details not directly pertaining to efficiency remain the same as described in the advanced energy gap model posted here:
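As a rough illustration of how those adjustments combine, here is a minimal sketch; the function names and baseline demand values are placeholders, not the model's actual EIA inputs:

```python
# Minimal sketch of the demand adjustments described above; baselines
# passed to these functions are hypothetical, not the model's inputs.
DOWN_SLOPE_MWH = 0.5 * 1.02e9   # 50% of best-case savings, existing buildings
UP_SLOPE_MWH = 0.5 * 7.0e7      # 50% of best-case savings, new buildings

def adjusted_demand_2020(baseline_mwh):
    """2007-2020 demand after the assumed 50% efficiency deployment."""
    return baseline_mwh - DOWN_SLOPE_MWH - UP_SLOPE_MWH

def adjusted_demand_2030(eia_scenario_mwh):
    """2020-2030 EIA scenario demand after the further 12% reduction."""
    return eia_scenario_mwh * (1 - 0.12)
```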

The results from the 2007 to 2020 analysis are shown below.



Obviously there are significant reductions in both natural gas growth rates and rates of coal loss, due to the reduction in electricity demand brought on by the efficiency increases, versus most of the results from the percentage efficiency investigation. In fact, the reduction in electricity demand is so great when considering a low-expectation demand increase and 06-07 renewable growth rates that no natural gas increase is required to bridge the gap created by the loss of coal. Instead, the coal loss comes straight from the reduction in electricity demand rather than from the direct need to adhere to the emission cap. This result may initially be surprising: even if the entire reduction portfolio described in the McKinsey report were executed, the 2020 emission cap would not be attained, so how can executing only half of that portfolio meet the cap? Recall that emission reduction is not solely attributed to the realm of electricity; reductions will also come from other capped industrial sectors and the transportation sector. Also, the coal values are slightly smaller than the reduction in electricity demand alone would imply, because there is a natural transfer from coal to less emission-intensive electricity providers that exists independent of electricity demand.

The results from the 2030 analysis are shown below.


The reduction in the rate of natural gas growth across all of the possible scenarios is typically lower than the rate calculated from all efficiency percentages in the previous investigation, with the exception of 100% efficiency. The reason for this result seems tied more to the rate of efficiency application than to the total amount applied. That is, the results seen here are typically better than the results seen in the previous percentage investigation not because of the overall electricity savings, but because a large majority of the electricity savings due to efficiency is attained by 2020, whereas in the previous investigation the efficiency deployment was linear instead of front-loaded. Also the required wind growth rates are still considerably large, which is troubling because of the continuing dependency of wind deployment on government subsidies to drive growth. The required wind growth rates are higher in this analysis vs. the 100% efficiency case of the previous investigation simply because the 100% case realized a larger reduction in electricity demand.

One may suggest that the wind growth rate in a given scenario could be reduced by increasing the natural gas growth rate, taking advantage of the reduced electricity demand. However, this is not plausible: recall that in the model coal-derived electricity production falls to 0 by 2030, thus there is no coal left to neutralize the increase in emissions generated by additional natural gas. Overall, by 2030, when not using excessive amounts of offsets, the emission cap becomes the limiting factor determining electricity production, not electricity demand.

Using the previously predicted renewable growth rates and transportation emission reductions in conjunction with the efficiency deployment of this investigation, the results for the 2020 and 2030 analysis are:


Clearly the results demonstrate that these efficiency savings exceed the anticipated efficiency savings of the previous investigation, with lower natural gas growth and decline rates. This result simply reinforces the obvious point that the more efficiency projects are incorporated into existing infrastructure, the less capital will be required to expand natural gas infrastructure, freeing that capital to be diverted to trace/zero emission electricity providers.

Unfortunately, the sad state of affairs is that even though only half of the total prospective savings projected by the McKinsey report was applied in this study, that level of deployment is still relatively improbable. The question comes down to: why does it appear so difficult to do something society knows how to do and would be rational to do?

Residential –

Aside from obvious informational issues (how to go about increasing efficiency in the first place), there are two main obstacles to increasing energy efficiency in the existing residential sector. First, the payback rate is rather slow, especially for those that do not use a lot of energy. The payback rate depends on the total amount of energy used, but the costs associated with applying the new efficiency measures are relatively fixed. Therefore, increasing energy efficiency is not very attractive to those that do not use a lot of energy because a 500-5,000 dollar investment may take over a decade before breaking even and may not return more than 5,000-10,000 dollars over the lifetime of the house. The investment also depends on remaining within the improved residence for a significant period of time to recoup the investment; the expected length of residency vs. the payback period is a significant problem. Although it is good for the planet, as a means to make money the slow rate of return reduces incentive.

The above obstacle is best illustrated in the following example. Suppose Person A offered Person B either 1,000 dollars right now (the investment for increasing the energy efficiency of Person B’s home) or 150 dollars per year over 10 years (the savings from the increased energy efficiency of Person B’s home); which offer would Person B accept more often? If psychological behavior surrounding lottery winnings (a very similar situation) reveals anything, Person B would select the first option an overwhelming amount of the time. The problem is that although the second offer yields more money, the time required for its allocation makes it seem smaller. Also the 1,000 dollars is concentrated, which allows an individual more versatility in how it is spent, whereas the 150 dollars per year has limited options.
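To put numbers on the intuition, here is a short worked version of the example using the standard present-value formula; the 5% discount rate is an assumption for illustration:

```python
# Worked version of the example above: $1,000 today vs. $150/yr for 10 years.
def present_value_of_annuity(payment, rate, years):
    """Discounted value today of a fixed annual payment."""
    return payment * (1 - (1 + rate) ** -years) / rate

lump_sum = 1000.0
annuity_pv = present_value_of_annuity(150.0, rate=0.05, years=10)
print(f"${annuity_pv:.0f} vs. ${lump_sum:.0f}")  # ~$1158 vs. $1000
```

At a 5% discount rate the stream of savings is worth only about 1,158 dollars today despite a 1,500 dollar nominal total, and at roughly an 8% personal discount rate the two offers break even; anyone who discounts the future more steeply will rationally take the lump sum, consistent with the lottery behavior noted above.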

Second, the overall ability or incentive to make efficiency changes is an obstacle. This obstacle can be divided into two parts. First, for the wealthy, the prospect of saving money through efficiency changes typically does not seem worth the investment or the aggravation involved in installing the new infrastructure. Rather, it is easier to pay the extra 300-1,500 dollars per year in energy costs than to go through the hassle of buying new appliances, installing new insulation and making other home improvements. Second, for the less wealthy, the prospect of saving money through efficiency changes may not be viable because of an inability to afford the fixed price of making the change. Unfortunately, the slow rate of return also hurts lower income households because it limits the ability to make piecemeal changes, using the money saved from one improvement to fund a second efficiency improvement and so on. Another concern for lower income households is the aforementioned total profitability: it is reasonable to conclude that most lower income households do not use a lot of energy because they cannot afford to do so, both due to a limited allocation of energy funds and the lack of funds to create three-television, two-computer, cappuccino-maker homes, which would demand more energy. Thus, with lower energy use not only is the rate of return slowed, but so is the total amount of money that efficiency investments will yield.

Although this low energy use may not be so cut-and-dry, because it is plausible that some lower income households unwittingly use more energy than some higher income households due to inefficiencies in heavily outdated appliances and other electrical items kept out of cost constraints. Overall, in the long term energy efficiency is definitely viable, but there must be considerable personal motivation to pull the trigger; basically one must care about the environment over any financial incentives. Unfortunately, it is likely that those with such a mindset and the proper information regarding how to apply these efficiency measures have already done so, limiting the total viability of future changes. Therefore, unless the government steps in and directly or indirectly funds efficiency programs at a greater level of both capital and awareness, it is highly unlikely that a significant amount of efficiency savings will come from the existing residential sector. It is currently unlikely that any other methodology to drive efficiency incentive will work, despite efficiency improvements actually being cost negative.

Commercial –

It is highly probable that the greatest level of success in applying increased energy efficiency will come from the commercial sector. The two biggest reasons for this anticipated success are, first, a greater anticipated rate of return due to sheer energy use and, second, fewer total unique units that have to be improved. The first reason is important because a faster rate of return not only provides a greater incentive to initiate the improvement, but also allows for a greater ability to work from a piecemeal methodology, thus reducing the initial capital expenditure required for an increase in efficiency. However, due to area constraints, it is highly probable that initial costs would also be higher for commercial infrastructure; for example, instead of 1,000 dollars now or 150 per year for 10 years, the proposal would be 1,200 dollars now or 210 per year for 10 years. The second reason also relates to the higher energy use in that to save x MWh one may have to apply efficiency improvements to 26 homes vs. 1 commercial building [based on the total divergence of efficiency changes that are available].1
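Using the hypothetical figures above, a quick simple-payback comparison shows why the commercial case is more attractive despite the higher upfront cost:

```python
# Simple-payback comparison for the two hypothetical offers in the text:
# residential ($1,000 cost, $150/yr savings) vs. commercial ($1,200, $210/yr).
offers = {"residential": (1000, 150), "commercial": (1200, 210)}
for sector, (cost, annual_savings) in offers.items():
    print(f"{sector}: payback in {cost / annual_savings:.1f} years")
# residential: 6.7 years; commercial: 5.7 years despite the higher upfront cost
```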

The future of the residential and commercial sectors is a different beast entirely, largely because of the new federal guidelines in the ACES. If passed as is, the ACES would set national standards for both future residential and future commercial buildings, which would eliminate the incentive problem, for the efficiency improvements would be incorporated before sale. The prescribed efficiency codes are documented in Section 201 of the ACES.

The ACES proposes initial baseline standards corresponding to the efficiency requirements in the 2004 ASHRAE Standard 90.1 and the 2006 International Energy Conservation Code (IECC) for the commercial and residential sectors respectively. Although some seem to unrealistically believe that all the DOE has to do is snap its fingers and new policy will be both enforced and executed, this is hardly the truth. Clearly there will be some delay both between the date of discussion and agreement and between the date of agreement and enforcement. For example, a 30% reduction from the baseline is supposed to be the target set immediately after the passage of the ACES. However, enforcement will not begin immediately, despite the target being law immediately; instead it will take anywhere from 1 to 2.5 years before one can expect 100% of new buildings to abide by the new target code. The reason for this delay is that under subsection c (State Adoption of Energy Efficiency Building Codes), states could drag their feet for up to a year before enforcing the standards put forth on a given target date under Section 201. Also there are questions regarding enforcement issues on a national level and how they transfer to this one-year state grace period (is it consecutive or concurrent?). Thus, it is easy to overestimate the amount of energy saved from new buildings under these guidelines. [Note that this issue does not pertain to the estimates made by the McKinsey report referenced above because they do not appear to include policies put forth by the ACES in their analysis.]

The biggest problem stemming from improving the energy efficiency of new buildings is that these improvements typically increase the capital costs associated with constructing the buildings, forcing the builders to increase the selling price. Price gouging triggered by legally mandated inclusion could account for an additional unanticipated increase in price. For example, suppose the new regulations demanded an additional 50 square feet of insulation be installed in all new homes beyond currently existing standards to meet the new energy reduction targets. It is too idealistic to believe that insulation manufacturers and providers will not gleefully raise their prices in response to the greater required demand. These prospective price increases will then make most affordable housing less affordable. Therefore, the issue of increasing new home and other commercial building prices due to improved energy efficiency infrastructure will have to be addressed.

Overall this new addendum to the previous energy study illustrates the benefits of increasing efficiency deployment, the pertinent obstacles to deploying a significant efficiency program to achieve those benefits, and the fact that despite the benefits of efficiency, trace/zero emission technologies will still require a significant amount of growth to meet future energy demands. The biggest issue in aiding efficiency savings involves the development of an incentive-type program that does not involve the government directly footing the bill. The problem with the government footing the bill is that, due to the economic downturn, the national debt is already set to spike, and further handouts for things individuals should already be doing are unacceptable. Ideally a price signal involving an increase in electricity price would serve as the proper motivating factor; however, Congress appears determined to limit any significant change in price signal in the short term. Therefore, the best option would most likely be specifically targeted, very low interest governmental loans given for the purchase of efficiency improvements. Hopefully individuals and corporations can push forward in the pursuit of higher efficiency goals, reducing the already daunting future requirements for electricity and energy generation under a future emission cap.
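As a sketch of how such a loan could pencil out (all figures hypothetical, not drawn from any proposed program), compare the standard amortized annual payment against assumed energy savings:

```python
# Hedged sketch of the targeted low-interest loan idea. If the annual loan
# payment stays below the annual energy savings, the borrower is cash-flow
# positive from year one, removing the upfront-cost obstacle.
def annual_loan_payment(principal, rate, years):
    """Standard amortized payment for a fixed-rate loan."""
    return principal * rate / (1 - (1 + rate) ** -years)

payment = annual_loan_payment(principal=5000, rate=0.02, years=10)
savings = 600  # assumed annual energy savings from the funded improvements
print(f"payment ${payment:.0f}/yr vs. savings ${savings}/yr")  # ~$557 vs. $600
```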

--------------------------------
1. "Unlocking Energy Efficiency in the U.S. Economy." McKinsey and Company. July 2009.
http://www.mckinsey.com/clientservice/electricpowernaturalgas/US_energy_efficiency/

Friday, August 7, 2009

Evolution of Agriculture in Africa

As previously discussed in past blog posts there is a significant probability that food shortages will affect a number of countries in the near future, especially those in Africa. Unfortunately these shortages can only be offset through aid from other countries for a limited time. Therefore, it is important to identify strategies that can be implemented in these countries to develop a more productive agricultural infrastructure and lessen dependency on others. Although the countries that need help are not limited to those in Africa, the focus of this post will be on Africa.

The conclusion that agriculture in Africa needs to be improved is not surprising, for humanitarians and scholars have considered the question of how to increase the productivity of African agriculture for decades. First things first: Africa has the highest percentage of its population engaged in agriculture and the second-largest area of cultivable land of all seven continents.1 Therefore, it is unreasonable to suggest that Africa lacks the inherent capacity to produce self-sustaining yields. Unfortunately Sub-Saharan African countries have grain yields per hectare significantly lower than the averages in both Northern African countries and more developed countries,1 largely attributed to missing the Green Revolution. To combat these low yields, two options have emerged from theory and the literature as the top candidates for facilitating a green revolution in Africa. Both options have unique strengths and weaknesses, have backing from significant powerbrokers and are currently being tested in isolated environments throughout Africa.

To review, the Green Revolution involved the utilization of high yield variety seed, genetically selected to possess larger product/fruit (greater mass/yield per hectare), faster growth and day-night neutrality. To achieve this selection, the seeds were cultivated to respond to fertilizers, converting higher levels of exposure to nitrogen and phosphate into additional mass. The faster growth in these seeds was predicated on increased rates of photosynthesis to support the higher mass. However, the increased rate of photosynthesis also required additional water, for sole reliance on rain would regionalize the increased growth rates, wasting a significant amount of growth potential; thus irrigation levels were increased. In short, the Green Revolution involved new seed, higher yields, new irrigation strategies and lots of fertilizer.

The first option for improving African agriculture is the strategy put forth by the Alliance for a Green Revolution in Africa (AGRA) and the Millennium Project, which seeks to initiate a green revolution in Africa through means similar to previous green revolutions: introducing higher quality seed and fertilizer to African farmers in order to provide the same technological and chemical advantages utilized by farmers in more developed nations. In addition to these advances, AGRA also focuses on encouraging farmers to form cooperatives amongst themselves to increase their buying power, as well as to develop a cohesive strategy for land management to maintain soil quality. The original large-scale test for the AGRA is the Sauri Cluster, where newly provided access to seed and fertilizer led to the revival of previously resource-depleted soils as well as a 50% expansion of cultivated land, all resulting in a 300% increase in maize production.2

The Sauri Cluster was selected as a representative of the general African condition, in order to develop a general model to identify points of success and failure for future reference, although it could be argued that the Sauri Cluster has a higher than average soil quality. The Sauri Cluster is located in western Kenya consisting of 11 villages and covering an area of approximately 8 square km.2 It was determined that this region had a strong community system, but did not have the economic infrastructure to provide basic services for necessary growth. In fact it was estimated that 60-70% of the 55,000 individuals living in this region live on less than $1 per day.2 Agriculture is the primary source of employment and capital acquisition in the region, yet a significant number of individuals are still hungry.

In addition to the large jump in maize yield, the AGRA has also had initial success establishing a baseline of credit for farmers through micro-loans to previously identified high-risk loan candidates. The release of this capital has allowed farmers not only to purchase higher quality seed, but also to invest in their own farms to improve production. Also the AGRA is creating logistical avenues to document and measure successes vs. failures in policy and what can be done to reinforce the positive outcomes and rectify the detrimental ones. This new documentation and recording strategy is important when considering the sheer size of the required changes in the African agriculture structure across a number of countries.2

There are those that have criticized the AGRA strategy on the basis that it is incorrect to assume inadequacy in the less chemical- and labor-intensive agricultural methodology which has been practiced by African farmers for centuries. Farmers in Africa used to generate enough food for themselves in the past,1 so instead of ‘fixing’ things through the insertion of what can be regarded as ‘modern-day’ techniques, focus should be on addressing what changed in food production vs. consumption and what to do about it. Unfortunately such reasoning is flawed because it assumes that African farmers have had the opportunity in the past to incorporate new seed, fertilizer and other ‘modern-day’ techniques/technology and declined to use these tools because they were not advantageous, whereas the reality is these farmers never had the economic clout to entertain these options in the first place.

Another criticism of the AGRA strategy is that the introduction of fertilizer is only a short-term solution, in that it may increase yields, but will reduce soil quality in the process, creating a system of agriculture dependent on fertilizer with the same vulnerability the developed world faces when it comes to price spikes from increasing natural gas and oil prices. Most of those individuals with these concerns seem only to have read the sound bites of the AGRA strategy, not the official strategy itself. Introducing fertilizer and advanced seed is not all that the AGRA proposes to generate an agricultural revolution in Africa. Additional elements for soil improvement involve improved water harvesting and management, planting nitrogen-fixing crops and the use of manure to complement fertilizer, as well as management groups to track and document the use and quality of soils throughout Africa.2

However, despite other avenues for improvement of African agriculture, incorporation of advanced seed and fertilizer is still a significant component of the AGRA policy. Pertaining to the concern that fertilizer will amplify the price shock from fluctuating natural gas and oil prices, at the present time it is difficult to conclude that African farmers will have fertilizer-per-hectare ratios akin to the developed world.3 These lower ratios will reduce the amplitude of any price spikes seen in African nations vs. the developed world relative to fertilizer price. This is not to say that food price will not be influenced by fertilizer price, just that the influence will not be as excessive (at least until the market is developed enough that it can adjust to price spikes). Also, as discussed in a previous post, an increase in fertilizer and transport price due to increased oil prices was only one of many factors contributing to increasing food prices in the developed world. Overall, the claim that fertilizer should not be introduced solely because it will not be a permanent fixture in African agriculture is unreasonable. However, the sword is dual-edged: is introducing fertilizer in the first place a reasonable strategy if not enough of it will be incorporated to significantly increase yields?

A second concern with the introduction of fertilizer has been the question of fertilizer dependency. The core question is whether large quantities of fertilizer used over a number of years in the same plot eliminate the natural ability of the soil to support crop growth without fertilizer. Fertilizer dependency is a tricky issue because it is generally unclear that such a phenomenon exists, although some studies claim there is a correlation between fertilizer use and soil depletion and/or complications.4,5 The problem is that proving correlation is difficult, and even if true, little work has been done generating any type of fertilizer-to-soil-quality ratio. For instance, if a correlation did exist between fertilizer use and depletion of soil quality, it is likely that less fertilizer use would result in slower soil depletion; however, is there a time frame and amount of fertilizer use that delineates the point where a plot is dependent on fertilizer vs. retaining the capacity for recovery? Also, is the relationship linear, or is there a threshold below which fertilizer use has zero detrimental influence on soil quality? Although it is highly probable that these questions have answers, the answers are undetermined.

Identifying this ‘dependency’ factor is important because if there is no dependency factor, fertilizer can be used to significantly increase yields over the time it is available and then phased out of African agriculture when the price outweighs its yield benefits. Thus, farmers can increase yields and become more profitable utilizing fertilizer and then revert back to non-fertilizer based agriculture when profitability is no longer viable, without any significant detriment. If fertilizer dependency is legitimate and the required time frame and quantity would more than likely be met, then incorporation of fertilizer could be inappropriate due to long-term detriments outweighing short-term benefits.

Another avenue of agricultural improvement sought by the AGRA is expansion of individual crop profitability on a crop per crop basis. Not only do farmers need the tools to increase yield, but also better access to markets because an increased yield loses a significant amount of value if the market is not large or convenient enough to absorb the additional product. Therefore, the AGRA also aims to reduce transaction costs by establishing more rural marketplaces, commodity exchanges, produce alternatives and milling and processing operations.2

The policy and partnership program of the AGRA focuses on strengthening the ability of governments at a national level to establish policies to benefit farmers in effort to facilitate the necessary changes, both on an institutional level and technological level, to catalyze a green revolution. These national changes will also allow a given African nation a better opportunity to participate in global level negotiations with other nations. Based on the history of agriculture and its market structure in Africa, this strategy very well could be the most important element to increasing agriculture productivity and economic prosperity.

The second option for improving African agriculture is to continue not using fertilizer or advanced seed and instead introduce more thorough organic farming techniques. The reason this strategy is so appealing is that, because the Green Revolution skipped most of Africa, most of its farming remains low input/low output, thus its farming system is already de facto organic. In general, organic farming favors specific land management over technology and biological processes over chemical processes.6 For example, organic techniques used in Africa include returning nitrogen to the soil via nitrogen fixation from legumes, planting spring onions to ward off insects and altering surrounding ground layers to reduce erosion.

Organic farming proponents believe that the better option for increasing the profitability of African agriculture lies in reducing costs instead of increasing the volume of product sold. One of the main cost-cutting methods in organic farming is bypassing fertilizer in favor of more labor-intensive methods that not only generate conditions for profit and yield increases, but also offer temporary employment to fellow Africans. Lower labor costs largely explain why organic farming that is expensive in the developed world is comparatively cheap in the developing world.

It seems rational to conclude that most of any increase in yields derived from organic farming involves faster turnover of available and viable farmland. In the past African agriculture relied on natural replenishment: food was grown on a given plot until the soil began to lose a significant amount of nutrients (2-4 years), then a quasi slash-and-burn was performed on the land and it was abandoned while the planter moved on to a new plot. After approximately 5-15 years (depending on various conditions) a secondary forest region will usually grow over the slash-and-burned region and the soil gradually regains its fertility, making the land viable for crop growth once again.7

Unfortunately this method of cultivation typically results in soil that is rarely of the quality of the original soil. Also, the natural method of regeneration is imprecise, requiring that one err on the side of caution (waiting longer than perhaps necessary) before renewing cultivation efforts. The waiting is necessary, for if cultivation begins too early, soil fertility and crop yield decrease, reducing the cultivation lifespan to 1, maybe 2 years. If crop growth is pushed beyond this lifespan, the soil will become exhausted, increasing the probability of weed infestation (typically Imperata cylindrica) and basically spoiling the ability of the plot to rejuvenate naturally.7 Therefore, the idea behind introducing more genuine organic techniques is to reduce or even eliminate the downtime for these plots, thus increasing yield by increasing crop turnover. Small additional yield increases could also come from the soil in general being more fertile.

However, there seems to be a suggestion from some supporters that utilizing organic techniques in Africa will generate yields that rival those generated from significant inclusion of fertilizer. Such claims are problematic because there is a wide variety of studies that both support and contradict this statement. In smaller studies organic farms do well versus fertilizer-driven farms on a ratio basis (lb/hectare) and total output (the sum of all positive output, not just crops).8,9,10,11 However, there are few complete studies that focus on scaling up organic plots to rival the absolute yield capacity of fertilizer-driven farms. Overall, organic plots will generate 20-25% lower yields than fertilizer-driven plots (implying roughly 25-33% more land to match absolute output), but will require less energy and will better maintain soil fertility and biodiversity.12

Another issue that needs to be addressed regarding the future evolution of agriculture is water availability. A vast majority of African agriculture is rain-fed, due either to lack of irrigation availability or lack of irrigation infrastructure; however, in the future such dependency on rain looks troublesome, as progressing climate change can be viewed as the most pertinent and viable threat to agricultural evolution, demanding the development of new strategies for water access. For example, the IPCC predicts that by 2020 in some countries in Africa, yields from rain-fed sources could be reduced by up to 50%, which will severely compromise access to domestic food supplies.13 Most of this crop shortage will occur in Northern and Southern Africa due to significant decreases in overall rainfall; although not hit as hard, Central Africa will also see an overall reduction in total rainfall.13 Unfortunately viable water synthesis alternatives do not appear to be available for large parts of Africa. For example, the most popular method of potable water generation is desalinization, but Africa does not have the energy or financial resources to initiate a large-scale desalinization program similar to those in the Middle East. Although there are initial ideas for generating a large-scale solar energy production network in the Sahara, it is unlikely that anything substantial will be up and running in time to be meaningful for a desalinization strategy. Therefore, a cheaper strategy needs to be developed to avoid serious water and agricultural shortfalls in the future, or application of either the AGRA method or the advanced organic method to improve agriculture will be ineffective.

Although it may not be important at this time, because the improvement of African agriculture is largely driven by the priority of feeding Africa itself, one of the factors that supporters neglect to discuss when considering the effects of organic farming is crop certification for export. In order for a farm classified as organic to export to the world market it must be certified. The certification is organized under an official participatory guarantee system, which is operated by a group of farmers linked to a specific exporter. Clearly the farm must be ‘up to snuff’ because the reputation of the exporter is on the line with each export.

There are two types of certified organic farming in Africa. The first are large conglomerates or agribusinesses (yes there are some in Africa) like SEKEM, which can implement internal control systems such as inspection, marketing activities and certification on their own.6 The second are small groups, maybe in a collective maybe not. It is much more difficult for these small groups to be certified because of lack of organization and lack of resources. Therefore, farms that fall into this second group typically need support through food or development aid programs like Export Promotion of Organic Products from Africa (EPOPA).6

Other than certification, the second big problem most African nations will have exporting is that currently there is not a large market for intracontinental exportation; thus it is reasonable to anticipate that the biggest recipient of African exports will be Europe, most notably the European Union. Such a reality is a problem for most nations due to the high transportation costs required to cover the distance and the lack of existing infrastructure to reduce those costs. Thus, to maximize export potential most sub-Saharan countries would need to focus on low volume, high price items.

A renewed focus on exporting may be more detrimental than beneficial to African agriculture if mistakes of the past are repeated, where an exportation focus became an obstacle to the development of African agriculture. A significant reason that African agriculture lags behind the systems in North America, South America, Europe and Asia can be attributed to the attitude of colonial powers when most of Africa was originally broken up into colonies answering largely to European powers. Under the control of European powers most land use was geared towards livestock or high value crop production to provide an economic driver for maintaining the colony.14,15 Little attention was paid to the local population and the development of a thriving local market. Rationally, the best land was selected to grow these high value crops, which would later be exported back for sale on the European market because the local market did not have the ability to absorb significant quantities of said crops. The remaining land was utilized for local crop growth with little regard for land quality.

This strategy resembled the old ‘chicken vs. egg’ problem: the local markets were not developed, so little attention was paid to producing crops that could be sold locally, but because few to no crops were grown for sale locally, little attention was paid to developing a large local market that could absorb those crops. Realistically one can somewhat understand the attitude of the colonial powers in that running a colony costs significant capital, thus the colony had to produce economically viable resources, and growing high value, low volume cash crops was the easiest way to accomplish that economic goal. The lack of development of local markets led to a lack of purchasing power for the local population, which unfortunately forced food and other agricultural products away from areas where they were most in demand to areas where the demand was considerably lower, but the market could afford them.14,15

Even though colonial powers no longer rule over the various African countries, it can be argued that their earlier policies put African agriculture in a significant hole that it has yet to climb out of. Although it would be different now, in that any capital generated from exportation would go to the African countries, not their colonial masters, some would argue that any focus on exportation would come at some expense to local markets and would simply delay agricultural development in Africa. One could understand this position even now, as the World Trade Organization (WTO), International Monetary Fund (IMF) and structural adjustment programs (SAPs) have frequently favored developed nations over developing nations in the subject of global trade. A number of policies from these organizations have given considerable power to markets over governments (policies that put Africa at a disadvantage due to underdeveloped markets and little market power).16,17 Stripping power from governments in these developing nations is sometimes viewed as necessary before opening the door to trading on the global market, but loss of this power also limits the protections that can be applied to domestic consumers, one of the main explanations for price spikes in these countries despite limited influence from most of the usual price-driving factors. Therefore, one of the primary ways to both reduce the volatility of food price and further aid the advancement of African agriculture, regardless of which direct agricultural strategy is applied, may be to establish better governmental control policies outside of the influence of the WTO, IMF and World Bank.

Overall, improving export potential would be a worthwhile goal for many African nations if significant quantities of agriculture-related items could be exported. However, at the moment, without specific quota aid it is difficult to conclude that African nations will eventually be able to effectively compete with larger developed nations on the world market. Realistically these nations will probably never be able to compete on par in grain exportation with countries like the United States, Canada and Russia due to technological and topographical limitations, thus the advantages of pursuing exportation become murkier.

Improving the agricultural infrastructure in Africa is not isolated to just a cultivation methodology. Fertilizer is not the only technological advantage that the developed world has over the developing world in the field of agriculture. Another option that has received a significant amount of attention would be to supply African farmers with genetically engineered crops (largely Bt and HRCs) that resist specific types of pests, reducing costs and the use of pesticides, moving beyond ‘simple’ high yield seed. However, executing such a strategy would be difficult. In order to adequately distribute genetically engineered seeds throughout Africa, the seeds would have to be provided at no cost or be heavily subsidized to adhere to the intellectual property rights held by companies like Monsanto, which develop and sell the seeds in question.

The question of genetically engineered crops has been explored before, and critical elements of cost would have to be decided before progressing. Corporations are concerned about mass production of genetically engineered plants without profit safeguards because such crops can be harvested for their seed, and that seed can be recycled into the next growing season, removing the need to purchase new seed from the corporation. Clearly corporations that spend the money on research and development to produce these new products deserve to be compensated and not have the profitability of their product compromised. However, there are questions of morality in that access to these advances should not be limited only to the rich. So before an idea involving these seeds en masse can go forward, a solution will need to be reached to properly compensate these corporations.

Even if approval could be arranged for the mass deployment of genetically engineered seed in Africa, there are a number of opponents that believe such a strategy would be flawed. Numerous studies have expressed various concerns about the administration of genetically engineered seed on its own and in co-existence strategies with non-engineered seed. The most pressing concerns are unintentional and possibly detrimental spread of genes from engineered crops to non-engineered crops, reduction in fitness of non-target organisms through transgenic trait hybridization, rapid resistance generation in target insects and disruption of natural pest control measures.17,18,19,20,21 Gene transfer is a significant concern by itself, as in developing countries there is the belief that wild relatives can be more sexually compatible with crops, making unintentional transfer of various resistances more probable and potentially creating ‘super’ weeds.17

There are also concerns with the yield potential of the seeds themselves, as problems with stem splitting and boll drop can arise. Also, combining transgenic seed with fertilizer could increase the rate of soil depletion due to the accumulation of transgenic toxins (such as Bt toxin) in the soil, which would further reduce soil fertility.17

The real issue with genetically engineered crops may be the underlying ‘one pest-one gene’ approach. Unfortunately such a strategy limits the ability to initiate an alternative strategy when pest targets develop resistance to the genetically induced deterrence. When greater selection pressure is applied to a population through a single element, it is reasonable to assume that there will be a more rapid and significant evolutionary response. This rapid response is why some favor the principle of ‘integrated pest management’, which avoids utilizing single responses to a given condition (a single pesticide for a single insect population, etc.) and instead employs multiple pest and cultivation control mechanisms.17 Under such a strategy, even if a certain group develops resistance to a transgenic crop or pesticide, there is a higher probability that the group is wiped out by another means before passing those resistance genes on to the next generation. To combat rapid resistance development in the United States, the EPA mandates that farmers designate a certain percentage of their plot for non-Bt crop varieties (the effect of such refuges is sketched below). However, whether or not a similar policy will be incorporated in developing countries alongside genetically engineered crops is unclear.22,23
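To illustrate why a refuge slows resistance, here is a minimal single-locus selection sketch; the kill rate, starting allele frequency and haploid approximation are all assumptions for illustration, not parameters drawn from the cited studies:

```python
# Minimal single-locus sketch (hypothetical parameters) of why refuges slow
# resistance evolution: susceptible pests survive in the refuge, diluting the
# selective advantage of the resistance allele.

def generations_to_resistance(refuge_fraction, kill_rate=0.9,
                              p0=0.001, threshold=0.5):
    """Generations until the resistance allele frequency exceeds threshold."""
    p = p0
    generations = 0
    # Haploid approximation: resistant pests always survive; susceptible
    # pests survive in the refuge, or escape the toxin at rate (1 - kill_rate).
    w_resistant = 1.0
    w_susceptible = refuge_fraction + (1 - refuge_fraction) * (1 - kill_rate)
    while p < threshold:
        mean_fitness = p * w_resistant + (1 - p) * w_susceptible
        p = p * w_resistant / mean_fitness
        generations += 1
    return generations

if __name__ == "__main__":
    for refuge in (0.0, 0.2, 0.5):
        print(f"refuge {refuge:.0%}: ~{generations_to_resistance(refuge)} generations")
```

With these illustrative numbers, no refuge lets resistance dominate in roughly 3 generations, a 20% refuge stretches that to about 6, and a 50% refuge to about 11, which captures the qualitative logic behind the EPA mandate.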

Another means to increase the efficiency of agricultural operations in the developing world would be to augment those operations with modern harvesting technology. Initially one may be skeptical regarding the application and expansion of modern harvesting technology, but there is a significant advantage. One of the biggest concerns in African agriculture is not an increasing population that becomes harder and harder to feed, but a decreasing population that is unable to produce significant quantities of food through lack of available labor. For example, it is typical that millions of young African men and women migrate from their home villages/cities to acquire work, leaving a smaller and less able workforce to tend the fields.1 Any potential lack of local labor can be neutralized through the application of modern harvesting technology.

Unfortunately, similar to genetically engineered seeds, it would be impractical for companies like John Deere and Caterpillar to simply give the necessary heavy machinery to these farmers. Clearly the company would want to sell it to the farmer; the problem is that the farmer more than likely could not afford it. The United States government gives millions to billions of dollars in the form of food aid annually. Technically there is nothing stopping a temporary shift in policy converting some of that food aid into grants for heavy machinery and proper training in its use. In fact, it would be possible to simply create a government sponsored production chain which purchases John Deere/Caterpillar equipment from U.S. manufacturers and then gives it to African farmers through a government run charity. Politically such a policy shift may create an unusual battle between the farm lobby and the manufacturing lobby, as food aid is typically purchased, not simply given away.

While the above procurement method seems viable, distribution of this equipment may be a thorny issue. One possibility involves the use of a lottery-type system where farmers would be assigned an ‘x’-digit number (however many numerals are necessary) and a select quantity of numbers would be drawn randomly to determine who receives the equipment.
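Such a lottery is trivial to implement transparently; the sketch below (all counts hypothetical) draws winners without replacement from assigned entry numbers, and seeding the draw makes it reproducible for auditing:

```python
# Hypothetical sketch of the lottery-style distribution described above:
# every registered farmer gets a unique entry number, and winners are drawn
# at random without replacement. All names and counts are illustrative.
import random

def run_equipment_lottery(num_farmers, units_available, seed=None):
    """Randomly select which registered farmers receive equipment."""
    rng = random.Random(seed)  # seeded so the draw can be re-run and audited
    entries = list(range(1, num_farmers + 1))  # assigned entry numbers
    winners = rng.sample(entries, k=min(units_available, num_farmers))
    return sorted(winners)

if __name__ == "__main__":
    # e.g., 5,000 registered farmers, 120 tractors to allocate
    print(run_equipment_lottery(num_farmers=5000, units_available=120, seed=42))
```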

Another question regarding the above program is whether it would create domestic strife. Largely the question stems from whether the local or federal government of a specific country involved in the program should be directly or indirectly involved. Circumventing the government avoids issues of potential corruption that could restrict or disallow the acquisition of the equipment by the farmers under trumped-up circumstances, resulting in a distribution of those items to those that do not need them (i.e. the rich and the well connected). However, if the government were only an indirect player in the distribution process, what would be the chances of the government seizing the equipment at a later date? What about conflict between farmers that receive equipment and those that do not, especially in the more war-torn states, because clearly not all farmers would be able to receive the necessary equipment? This could be a big problem because the farmers that receive the equipment should outpace those that do not, increasing the probability of a wealth division between the two parties.

If strife and conflict under a randomly assigned distribution system are determined to be highly probable, another strategy for distribution could involve establishing regions of use and assigning certain operators that would utilize the equipment on farms within the assigned regions. Basically, no one would own the equipment; instead it would be used by farmers within a given region as community property. This strategy would probably succeed in poor communities with few resources because there would be little conflict, as all parties would accept equal access, and even if they wanted the equipment for themselves, they would not have the resources to take it from others in the region.

Overall, both of the main strategies for developing African agriculture have flaws that seem difficult to rectify. AGRA and its supporters have the problem of fertilizer availability. In the future fertilizer prices will continue to increase, driven by increasing natural gas and oil prices as well as potential phosphorus shortages, thus there is little reason to believe that fertilizer prices will decline in the future. Although inconsistent access to fertilizer does not destroy the success parameters of the AGRA, the problem is much more serious if fertilizer dependency is legitimate, for then execution of widespread fertilizer use could be devastating in the long term. If fertilizer dependency is not legitimate, then the strategy proposed by AGRA seems rather effective, for as previously mentioned, once fertilizer becomes too expensive farmers can simply convert back to an organic agriculture style utilizing some of the more advanced strategies proposed by the organic strategy supporters. Proponents of more thorough organic techniques have the problem that it is still highly debatable that such a strategy will afford Africa the ability to provide a significant portion of its food on a domestic level. As previously mentioned, there are serious questions regarding the scale-up potential of organic farming; lacking the ability to scale up reduces the potential of both the domestic and foreign markets.

Therefore, the debate between the two above strategies regarding increasing the viability and productivity of the African agricultural system boils down to two questions. The first question: is fertilizer dependency a significant concern? The answer seems rather complex because of the issue of a fertilizer dependency threshold. One could argue that an objective party should conduct field test studies on small isolated plots of land that simulate African soil structure and, after a considerable period of time, wean the soil off of fertilizer (various portions of the area receiving various concentrations of fertilizer) and measure the change, if any, in yield versus a control plot. In fact, some of the Sauri region in the AGRA could be used for this analysis due to the introduction of ample quantities of fertilizer a few years ago.2 However, such a study would take a considerable period of time.

The second question: what is the projected lifespan of fertilizer use in African agriculture? Basically, how long will most African farmers be able to afford and use sufficient quantities of fertilizer? This question is important because it relates back to another question: what alternative strategies could be undertaken with the money that would be used to provide the initial supplies of fertilizer to various African farmers? If the lifespan of fertilizer use is ‘long’ then it is highly likely that the economic benefit derived from the administration of fertilizer will overshadow many of the alternatives that could be executed in lieu of fertilizer use. If the lifespan of fertilizer use is ‘short’ then it is highly likely that the economic benefit derived from the administration of fertilizer will fall short of these possible alternatives.

The answer to the above question will largely depend on estimations regarding remaining oil, natural gas and phosphorus supplies. Another influencing factor would be whether or not a new catalytic process for fertilizer synthesis is developed in the future. A new catalytic step would more than likely reduce the required energy, thus reducing the influence of oil and natural gas prices on fertilizer synthesis.

Either way, the long-term solution to African food security may rest behind door number 3. Although stocks have been depleted by efforts to create more cultivatable land, Africa still has considerable wood-based resources (excluding Northern Africa due to a certain topographical feature). Wood estimates range from 65-70 billion tons from twig to trunk.7 Such an ample supply of wood stocks seems to almost scream for a strategy involving bio-char synthesis and sequestration in soil. Instead of utilizing slash-and-burn techniques, farmers would instead collect a percentage of any biomass refuse and use it in a pyrolysis process to generate bio-char that could be sequestered in the soil.

There is little question that bio-char has the potential to produce a positive effect when it is integrated into soil; it increases the quality of that soil, enhancing the growth rates and yields of any future crops by aiding in the supply and retention of nutrients.24,25,26 However, the question is whether or not the incorporation of bio-char into the wide variety of African soils will have the same positive effects seen in the Amazon and in test plots in North America.27,28 The test plots in Canada run by the Dynamotive Energy Systems Corporation reported, after a year-long study, an increase of 6-17% in yield as well as greater root depth and plant density.28

In addition to bio-char, the moringa tree (the most common variety being Moringa oleifera) also has the potential to increase yield. The moringa tree has proven to have abundant quantities of basic nutrients (vitamins A and C, calcium, potassium, iron, etc.), thus it is an excellent food crop. However, the green matter (most notably in the leaves) can be extracted in ethanol to create a solution containing growth enhancing properties (cytokinin hormones)29 that can be applied to crops. Crops exposed to these moringa-derived hormones experience accelerated growth, increased pest resistance, a reduced rate of decay, larger fruit, deeper roots and yields greater by 20-35%.29

Overall, despite the large amount of attention being paid to the fertilizer or no fertilizer debate, there are clearly a number of elements that can be incorporated into a successful cultivation improvement strategy. First, based on the above initial studies it would prove valuable to begin employing a system that generates a sufficient amount of bio-char and moringa plant matter as growth enhancement elements. These systems can be created through focal points of land (more than likely government owned) which would be responsible for housing the pyrolysis plant(s) that synthesize the bio-char, as well as growing the moringa trees and producing the resultant green matter/ethanol solution. A percentage of plants and other biomass material would be taken from fields (only a percentage, because complete removal of all refuse could be detrimental to soil quality and reduce erosion protection) and funneled to these pyrolysis plants, where the resultant bio-char could be sold at the same local market as food crops from the local fields that provide the biomass. The moringa trees could be grown near the pyrolysis plants (on the government owned land) or on local farms.

Second, clearly a new strategy regarding the influx of water to crops needs to be established. With climate change threatening to reduce the level of rainfall in Africa, a large number of crops will not have sufficient water, which will collapse the entire structure regardless of whether fertilizer is used or which organic technique is applied. One possibility to reduce (but not eliminate) the water burden would be to look into establishing a network of atmospheric water generators in the Sahara desert and other remote regions of Africa.

Third, new government-based protections need to be established under periods of emergency to avoid severe price spikes. Note that it would be difficult to grant governments any sweeping powers to control prices due to potential trade interference, but the ability to neutralize price spikes that would lead to civil unrest and/or greater starvation should be granted. In addition to greater government protections, the AGRA idea to form farmer-based cooperatives should be pursued so farmers can better work with each other to maximize profit and crop growth while generating a greater ability to negotiate on the domestic and global stage.

Fourth, improving transportation infrastructure is important because African agriculture is typically divided into smaller, more numerous farm plots, not the larger plots of the developed world, thus it would be difficult to establish meaningful marketplaces in all of these regions that would have the potential to grow. Therefore, establishing better transportation routes to ferry agricultural products to a more centralized marketplace should foster better economic growth. Incorporating rail instead of road should involve less overall cost (due to future maintenance) as well as be more economically accessible to a wide variety of socioeconomic classes.

Overall, despite the good initial start in the past decade, there is still much left to do in order to encourage the evolution of Africa’s agricultural system. One of the most important things that must be respected when applying a method is the future of the system. It would be irresponsible to sacrifice the future of African agriculture in an effort to fill holes in the immediate short-term. Short-term thought is partially responsible for the problems in African agriculture to begin with, thus short-term thought cannot be used to generate a viable solution.


==
1. Murphy, Sophia, and McAfee, Kathleen. “U.S. Food Aid: Time to Get It Right.” Minneapolis: The Institute for Agriculture and Trade Policy. July 2005.

2. Millennium Promise. http://www.millenniumpromise.org/site/PageServer?pagename=mv_sauri

3. Vitousek, P.M., et al. “Nutrient Imbalances in Agricultural Development.” Science. June 2009. 324(5934): pp. 1519-1520.

4. Mancus, Philip. “Nitrogen fertilizer dependency and its contradictions: A theoretical exploration of social-ecological metabolism.” Rural Sociology. 2007. 72(2): pp. 269-280.

5. Khan, S., et al. “The Myth of Nitrogen Fertilization for Soil Carbon Sequestration.” Journal of Environ. Qual. 2007. 36: 1821-1832.

6. Parrott, Nicholas, et al. “Organic Farming in Africa.” The World of Organic Agriculture 2006. http://www.orgprints.org/5161.

7. Low, Pak Sum. Climate change and Africa. ISBN-13: 9780521836340. pp 116-119.

8. Food and Agriculture Organization of the United Nations. “Organic agriculture, environment and food security.” 2002. Environment and Natural Resources Series No. 4. Rome.

9. National Research Council. “Alternative agriculture.” 1994. Washington, DC: National Academy Press.

10. Rosset, P. “The multiple functions and benefits of small farm agriculture in the context of global trade negotiations.” 1999. Food First Policy Brief No. 4. Oakland, CA: Institute for Food and Development Policy.

11. Altieri, Miguel. “The Myth of Coexistence: Why Transgenic Crops Are Not Compatible With Agroecologically Based Systems of Production.” Bulletin of Science, Technology & Society, Vol. 25, No. 4, August 2005, 361-371.

12. Mader, P., et al. “Soil fertility and biodiversity in organic farming.” Science. 2002. 296: 1694-1697.

13. “Climate Change 2007: Synthesis Report.” Intergovernmental Panel on Climate Change. 2007.

14. Lappé, F. M., et al. World hunger: Twelve myths. (2nd ed.). New York: Grove Press/Earthscan. 1998.

15. Ross, E. B. The Malthus factor: Poverty, politics and population in capitalist development. London: Zed. 1998.

16. Bello, W., Cunningham, S., & Rau, B. Dark victory: The United States and global poverty. (2nd ed.). London: Pluto and Food First Books. 1999.

17. Rosset, Peter. “Transgenic Crops to Address Third World Hunger? A Critical Analysis.” Bulletin of Science, Technology & Society, Vol. 25, No. 4, August 2005, 306-313.

18. Kendall, H.W., et al. “Bioengineering of crops. Report of the World Bank Panel on Transgenic Crops.” 1997. Washington, DC: World Bank.

19. Rissler, J., & Mellon, M. “The ecological risks of engineered crops.” 1996. Cambridge, MA: MIT Press.

20. Snow, A., et al. “A Bt transgene reduces herbivory and enhances fecundity in wild sunflower.” BioScience. 2003. 13: 279-286.

21. Losey, J.E., et al. “Transgenic pollen harms monarch larvae.” Nature. 1999. 399: 214.

22. Altieri, M. A., and Rosset, P. “Strengthening the case for why biotechnology will not help the developing world: Response to McGloughlin.” AgBioForum. 1999. 2: 226-236.

23. Altieri, M. A., and Rosset, P. “Ten reasons why biotechnology will not ensure food security, protect the environment and reduce poverty in the developing world.” AgBioForum. 1999. 2: 155-162.

24. Glaser, B., et al. “The Terra Preta phenomenon – A model for sustainable agriculture in the humid tropics.” Naturwissenschaften. 2001. 88: 37-41.

25. Glaser, B., Lehmann, J., and Zech, W. “Ameliorating physical and chemical properties of highly weathered soils in the tropics with charcoal – a review.” Biology and Fertility of Soils. 2008. 35: 4.

26. Lehmann, J., and Rondon, M. “Bio-char soil management on highly-weathered soils in the humid tropics.” Biological Approaches to Sustainable Soil Systems. 2005. Boca Raton: CRC Press, in press.

27. Lehmann, J., Gaunt, J., and Rondon, M. “Bio-char sequestration in terrestrial ecosystems – a review.” Mitigation and Adaptation Strategies for Global Change. 2006. 11: 403-427.

28. http://www.dynamotive.com/2009/05/12/blueleaf-inc-and-dynamotive-announce-biochar-test-results-cquest-biochar-enriched-plots-exhibit-overall-higher-crop-yield/

29. Foidl, Nikolaus, et al. “The Potential of Moringa Oleifera for Agricultural and Industrial Uses.” October 20th - November 2nd 2001. Dar Es Salaam.