Friday, September 25, 2009

Energy Efficiency Individual Deployment

Energy efficiency has been a rather hot topic in the progressive environmentalist movement. The common claims are all true: individuals need to do more to save energy, increasing energy efficiency is one of the cheapest means of reducing carbon emissions, and improved efficiency is necessary to limit the detrimental influences of climate change. However, despite all of the calls for energy efficiency and the plethora of statistics used to support its further deployment, most of the progressive websites that tout energy efficiency rarely discuss the means by which one can actually begin the process of improving residential, or even commercial (for small-business owners), energy efficiency. It does little good to go on and on about the greatness that is energy efficiency if one does not provide direction and assistance (be it direct or indirect) to visitors regarding how to go about improving it.

There are two elements that need to be addressed when focusing on energy efficiency: what changes can be made to improve efficiency, and how can those changes be paid for? With that information individuals can devise a cost/benefit analysis that maximizes energy savings relative to upfront cost. The links below, provided by a representative from the Office of Energy Efficiency and Renewable Energy (EERE) Information Center, should help individuals interested in improving energy efficiency address both of the aforementioned elements.
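
As a rough illustration of the kind of cost/benefit analysis involved, the sketch below computes a simple payback period for a hypothetical efficiency upgrade; the upgrade cost, annual savings and tax-credit rate are made-up placeholder values rather than figures from any of the programs linked below.

```python
# Simple payback estimate for a hypothetical efficiency upgrade.
# All numbers are illustrative placeholders, not actual program figures.

def simple_payback(upfront_cost, annual_savings, incentive_fraction=0.0):
    """Years to recoup an efficiency investment after incentives."""
    net_cost = upfront_cost * (1.0 - incentive_fraction)
    return net_cost / annual_savings

# Example: $1,500 of attic insulation saving $300/year on utility bills,
# with and without a hypothetical 30% tax credit.
print(simple_payback(1500, 300))        # 5.0 years
print(simple_payback(1500, 300, 0.30))  # 3.5 years
```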

The Tax Incentives Assistance Project (TIAP)
Consumer Incentives
http://energytaxincentives.org/consumers/

ENERGY STAR
Federal Tax Credits for Energy Efficiency
http://www.energystar.gov/index.cfm?c=products.pr_tax_credits

ENERGY STAR
Frequently Asked Questions
Tax Credits/Rebates/Financing/Grants
http://energystar.custhelp.com/cgi-bin/energystar.cfg/php/enduser/std_alp.php?p_sid=39uq*9Ij&p_lva=&p_li=&p_accessibility=0&p_redirect=&p_page=1&p_cv=&p_pv=1.312&p_prods=312&p_cats=&p_hidden_prods=&prod_lvl1=312&p_search_text=&srch_btn_submit=%C2%A0%C2%A0%C2%A0GO%C2%A0%C2%A0%C2%A0&p_new_search=1

ENERGY STAR
Special Offers and Rebates from ENERGY STAR Partners
https://www.energystar.gov/index.cfm?fuseaction=rebate.rebate_locator

Database of State Incentives for Renewables & Efficiency (DSIRE) -
DSIRE is a comprehensive source of information on state, local, utility, and federal incentives that promote renewable energy and energy efficiency.
http://www.dsireusa.org/

U.S. Department of Energy
Office of Energy Efficiency and Renewable Energy
Energy Savers
Various topics on the issue of making your home more energy efficient.
http://www.energysavers.gov/

Office of Energy Efficiency and Renewable Energy
Energy Savers Blog
http://www.eereblogs.energy.gov/energysavers/

Office of Energy Efficiency and Renewable Energy
Energy Savers
Tips on Saving Energy & Money at Home
http://www.eere.energy.gov/consumer/tips/

ENERGY STAR
http://www.energystar.gov

Office of Energy Efficiency and Renewable Energy
State Energy Program
State Energy Office Contacts
http://www.eere.energy.gov/state_energy_program/seo_contacts.cfm

Monday, September 21, 2009

Ocean Acidity: The Danger and the Remediation

One of the more immediate problems with the rapid increase in atmospheric CO2 concentration due to human activities is the sudden shift in ocean acidity. The natural dynamic equilibrium exchange of carbon between the atmosphere and the ocean has existed for eons. For the vast majority of that time there was little disruption in that exchange; although pH levels have oscillated between 7.3 and 8.2, such oscillation occurred over millions of years at a slow and steady pace.1 However, the excess CO2 released into the atmosphere over the last 200 years, largely due to burning fossil fuels and deforestation, has accelerated oceanic uptake of atmospheric CO2 as the ocean works to maintain the carbon concentration equilibrium. This additional uptake, over a much shorter time frame than in the past, has created concern regarding the ability of oceanic flora and fauna to adapt and survive.

Recall that when CO2 dissolves in water it forms carbonic acid, eventually leading to a reactionary increase in ocean acidity (additional hydrogen ions are contributed by the breakdown of carbonic acid). In fact CO2 absorption has reduced surface pH (increased acidity) by approximately 0.1 over the last two centuries, after over 100 million years of slowly decreasing acidity.1,2,3 As oceanic acidity increases, carbonate concentrations fall, a result tied to the increase in CO2 concentration, and calcium carbonate becomes thermodynamically less stable, increasing the metabolic cost to organisms of constructing calcium carbonate-based infrastructure (shells and skeletons).

In fact the Southern Ocean near Antarctica is already experiencing acidification beyond that seen anywhere else in the world, and this increase is having a negative influence on the ability of G. bulloides to build its shell.4 Similar declines in calcification rates have also been seen in the Arabian Sea for other calcium carbonate shell builders.5 This negative influence limits the ability of the ocean to expand CO2 uptake from the atmosphere without increasing acidity, due to the reduction in carbonate burial in sediments. In addition, this infrastructure instability disrupts a variety of different and important food chains.

There are some who believe nature will be able to adapt to these acidity changes because the Arctic and Southern Ocean regions host life that is more accustomed to higher acidity conditions, but this mindset comes off as rather naïve optimism. The most popular example supporting this belief is the analogy that a 5 degree average temperature increase in Phoenix will not faze its residents as much as a 5 degree average temperature increase in Siberia (or some other cold region). However, such an example glosses over the fact that pH exists on a log scale, so even small changes are significant to a given life form; this is why most life, without special ‘millions of years in the making’ adaptation, can only exist within a very small range of pH. It does not matter whether one lives in Phoenix or Siberia; dealing with consistent 120 degree temperatures that arise suddenly is a burden that will have a significant influence on livelihood. In fact, over 65 million years ago ocean acidification was linked to mass extinctions of calcareous marine organisms,6 and it would be foolish to assume that a more rapid acidity increase would fail to replicate those extinctions in due time.
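
To make the log-scale point concrete, the short sketch below converts pH values into hydrogen ion concentrations; the specific pH values are illustrative round numbers.

```python
# Illustration of why a "small" pH change matters: pH is a log scale,
# so a drop of 0.1 units is a ~26% increase in hydrogen ion concentration.
# The pH values below are illustrative round numbers.

def hydrogen_ion_concentration(ph):
    """Molar [H+] implied by a given pH."""
    return 10.0 ** (-ph)

before, after = 8.2, 8.1  # roughly pre-industrial vs. present-day surface ocean
increase = hydrogen_ion_concentration(after) / hydrogen_ion_concentration(before) - 1.0
print(f"[H+] rises by {increase:.1%} when pH falls from {before} to {after}")
# -> [H+] rises by 25.9% when pH falls from 8.2 to 8.1
```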

Another influencing factor when considering ocean acidity is ocean temperature. It is basic chemistry that gas solubility in a liquid decreases as temperature increases: higher temperature means more available kinetic energy, which increases molecular movement. Greater molecular movement increases the probability of bond breaking, which reduces the ability of the gas to remain in solution. There is no argument that global temperatures both on land and in the ocean are increasing; therefore, as these temperatures go up it is likely that the overall capacity of the ocean to absorb excess CO2 from the atmosphere will decrease, eventually causing the ocean to release CO2 back into the atmosphere.
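
A rough way to quantify the temperature effect is Henry's law with a van 't Hoff correction; the constants below are approximate literature values for CO2 in fresh water, so treat the output as a sketch of the trend rather than a model of seawater carbonate chemistry.

```python
import math

# Approximate Henry's law solubility of CO2 in water vs. temperature,
# using a van 't Hoff correction. Constants are rough literature values:
#   kH(25 C) ~ 0.034 mol/(L*atm), d(ln kH)/d(1/T) ~ 2400 K for CO2.
KH_25C = 0.034          # mol/(L*atm) at 298.15 K (approximate)
VANT_HOFF_CO2 = 2400.0  # K (approximate)

def kh_co2(temp_c):
    """Henry's law constant for CO2 at temp_c (deg C), in mol/(L*atm)."""
    t = temp_c + 273.15
    return KH_25C * math.exp(VANT_HOFF_CO2 * (1.0 / t - 1.0 / 298.15))

for temp in (15, 20, 25, 30):
    print(f"{temp} C: kH ~ {kh_co2(temp):.4f} mol/(L*atm)")
# Each ~5 C of warming cuts CO2 solubility by roughly 13%.
```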

Releasing CO2 into the atmosphere until a new equilibrium is achieved may slightly increase ocean pH (lower ocean acidity), but the problem is that the magnitude or timing of such a reaction is completely unknown. The ocean cannot be viewed so simply as a giant beaker of water sitting on a bench in a laboratory, so applying any simplistic CO2 solubility curve to determine when any switch from sink to source for the ocean may occur is naïve. Regardless of when the ocean begins to naturally decrease in acidity due to decreased CO2 solubility, it is reasonable to believe that ocean acidity levels will not naturally drop below their present level. Therefore, despite the potential for CO2 release at some point in the future, the issue of ocean acidity still needs to be addressed in the near future.

The rate of calcium carbonate precipitation is an important element in determining the sink capacity of the ocean and the total expected acidity change, because calcium carbonate has a tendency to be removed through gravitational settling.1 This removal matters because, even though the total pool of dissolved inorganic carbon (DIC) decreases, the remaining carbon shifts its balance in favor of free CO2 (aq), increasing the partial pressure of CO2 in the ocean.1 The reason for the shift is the loss of CO32-, which drives the aqueous carbonate equilibrium reaction [CO2 (aq) + CO32- + H2O ↔ 2HCO3-] to the left to compensate.1

However, dissolution of calcium carbonate frees more carbonate ions, resulting in an opposite shift that reduces the oceanic concentration of CO2 and enhances atmospheric CO2 acquisition. Basically, precipitation of carbonate reduces CO2 uptake from the atmosphere whereas dissolution of carbonate increases CO2 uptake from the atmosphere. Remember that because both the carbonic acid system and the calcium carbonate are in equilibrium, the loss of a reacting species, typically CO32-, will drive the system according to Le Chatelier’s principle.
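
The direction of those shifts can be checked with a simple reaction quotient calculation; the concentrations in the sketch below are round numbers loosely based on surface seawater and are only meant to show the direction of the response, not to reproduce real carbonate chemistry.

```python
# Direction of the carbonate equilibrium shift via the reaction quotient
#   CO2(aq) + CO3^2- + H2O <-> 2 HCO3^-,  Q = [HCO3-]^2 / ([CO2][CO3^2-])
# Concentrations (mol/kg) are illustrative round numbers for surface seawater.

def reaction_quotient(co2, co3, hco3):
    return hco3 ** 2 / (co2 * co3)

co2, co3, hco3 = 1.0e-5, 2.0e-4, 1.8e-3
q_initial = reaction_quotient(co2, co3, hco3)

# Precipitation removes carbonate: Q rises above K, so the reaction runs
# leftward, regenerating CO2(aq) and raising its partial pressure.
q_after_precipitation = reaction_quotient(co2, co3 * 0.5, hco3)

# Dissolution adds carbonate: Q falls below K, so the reaction runs
# rightward, consuming CO2(aq) and enhancing uptake from the atmosphere.
q_after_dissolution = reaction_quotient(co2, co3 * 2.0, hco3)

print(q_initial, q_after_precipitation, q_after_dissolution)
```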

However, a second factor in this process of CO2 exchange must be considered: the interaction and association between particulate organic carbon and calcium carbonate concentration shifts.7,8 A decrease in calcium carbonate reduces the rate and effectiveness of moving particulate organic carbon to deeper waters, thus weakening the biological pump portion of the oceanic CO2 absorption mechanism.1 This reduces the total CO2 sink capacity contributed by biological denizens of the ocean like phytoplankton.9,10 So an impasse exists: does decreasing the concentration of calcium carbonate increase or decrease oceanic sink capacity? Currently there is no good answer to that question.

Regardless of the correct answer, it cannot be debated that the ocean is becoming more acidic due to an increased rate of uptake of atmospheric CO2; the only thing up for debate is the rate of acidity change. Also, it is important to note that any geo-engineering strategy to ward off atmospheric global warming that does not result in the removal of CO2 from the atmosphere will have no ability to reduce the rate of acidification. In fact such geo-engineering methods may actually increase ocean acidity by delaying any CO2 release from the ocean due to temperature increases.

Unfortunately the atmospheric-oceanic exchange is not the only contribution to increased ocean acidity. Increasing surface temperatures have destabilized methane hydrate stored in sediments beneath the seabed throughout various portions of the ocean. Due to the accelerated warming in the Arctic, most of the new methane hydrate destabilization is originating in the Arctic and Southern Oceans and areas along the continental shelf.11 The good news/bad news aspect of this destabilization is that most of the methane is absorbed/dissolved in an upper layer of the ocean before it is able to fully escape into the atmosphere, thus only a small percentage of this released methane will immediately influence global warming. Unfortunately it does not stay as methane for long in the ocean, as methanotrophs convert it into CO2, not only further increasing ocean acidity but also increasing the concentration of CO2 in the ocean, which hastens the point at which the ocean becomes a source of atmospheric CO2 instead of a sink. However, unlike the solubility-driven release scenario, the acidity will not go down, because for all of the CO2 released more methane will be converted to CO2 to take its place. The final side detriment of this methane hydrate release is that in converting the methane to CO2 the methanotrophs consume oxygen, which creates a high probability of hypoxic or anoxic conditions within the localized region of ocean.
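
The oxygen demand behind that last point follows from the net stoichiometry of aerobic methane oxidation (CH4 + 2O2 → CO2 + 2H2O); the sketch below ignores carbon assimilated into methanotroph biomass, so it is an upper-bound illustration rather than a measured value.

```python
# Net aerobic methane oxidation by methanotrophs: CH4 + 2 O2 -> CO2 + 2 H2O.
# Ignores carbon assimilated into biomass, so this is an upper-bound sketch.
M_CH4, M_O2, M_CO2 = 16.04, 32.00, 44.01  # g/mol

def oxidize_methane(tonnes_ch4):
    mol_ch4 = tonnes_ch4 * 1e6 / M_CH4
    o2_consumed = 2 * mol_ch4 * M_O2 / 1e6    # tonnes of O2 drawn from the water column
    co2_produced = mol_ch4 * M_CO2 / 1e6      # tonnes of CO2 added to the water column
    return o2_consumed, co2_produced

o2, co2 = oxidize_methane(1.0)
print(f"1 t CH4 oxidized -> ~{o2:.1f} t O2 consumed, ~{co2:.1f} t CO2 produced")
# -> roughly 4 t of O2 consumed and 2.7 t of CO2 produced per tonne of methane
```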

With the continuing increase in acidity and the negative influence it seems to be having on oceanic fauna, a remediation strategy is needed before permanent damage occurs. Unfortunately the best option, significantly reducing the concentration of CO2 released into the atmosphere from human sources, which would eventually reverse the process of ocean CO2 absorption, is decades away, if it happens at all; therefore an alternative stabilizing strategy needs to be considered.

The most straightforward means to reduce ocean acidity would be to speed the removal of unassociated (free) CO2 from the ocean. Reducing free CO2 would in turn reduce the probability of carbonic acid formation and the resultant equilibrium shifts. One of the first ideas that comes to mind would be iron fertilization, but unfortunately iron fertilization does not appear to be as useful as advertised,12,13 especially where it counts in the Southern and Arctic Oceans.

Another idea that has gained some backing in recent years is thermally decomposing limestone into CO2 and calcium oxide and then depositing the calcium oxide into the ocean to facilitate a chemical reaction that sequesters CO2. When dumped into the ocean, the calcium oxide reacts with water to form calcium hydroxide. Finally, the calcium hydroxide reacts with free dissolved CO2 in the ocean, creating calcium bicarbonate. The three primary chemical reactions governing this strategy are shown below.

CaCO3 + heat → CaO + CO2
CaO + H2O → Ca(OH)2
Ca(OH)2 + 2CO2 → Ca(HCO3)2

As can be seen from the reactions, backers feel such a system is carbon negative because, while 1 mole of CO2 is generated for each mole of calcium oxide, the resultant reaction of calcium hydroxide with dissolved oceanic CO2 removes two moles of CO2 per mole of calcium oxide deposited into the ocean. The process in its purest form generates a net reduction of 1 mole of CO2 per mole of processed limestone. In addition, the removal of CO2 from the ocean will increase the ability of the ocean to act as a carbon sink, pulling in more CO2 from the atmosphere, while the alkalinity of the calcium hydroxide will increase ocean pH, further reversing the increase in ocean acidity.
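
Converting that mole accounting into tonnages makes the later energy and cost discussion easier to follow; the sketch below uses standard molar masses and treats the reaction efficiency (2 moles ideal, 1.6-1.8 moles more realistic, as discussed below) as an assumed parameter.

```python
# Mass balance for the limestone (CaCO3 -> CaO -> Ca(OH)2) ocean strategy.
# Molar masses in g/mol; efficiency = moles of oceanic CO2 reacted per mole
# of Ca(OH)2 (2.0 is the ideal case; 1.6-1.8 is the more realistic range).
M_CACO3, M_CO2 = 100.09, 44.01

def co2_balance_per_tonne_limestone(efficiency=2.0):
    mol_limestone = 1e6 / M_CACO3                        # ~10,000 mol per tonne
    released = mol_limestone * M_CO2 / 1e6               # tonnes CO2 from calcination
    removed = efficiency * mol_limestone * M_CO2 / 1e6   # tonnes CO2 pulled from the ocean
    return released, removed, removed - released

for eff in (2.0, 1.8, 1.6):
    released, removed, net = co2_balance_per_tonne_limestone(eff)
    print(f"eff {eff}: releases {released:.2f} t, removes {removed:.2f} t, net {net:+.2f} t")
# Ideal case: ~0.44 t released vs. ~0.88 t removed, i.e. ~0.44 t net per tonne of limestone.
```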

Unfortunately there are some concerns that significantly reduce the viability of this option. First, the reaction rate between calcium hydroxide and CO2 is contingent on many factors, as with most chemical reactions, most notably pH, pressure and temperature. Also, CO2 within the ocean is still rather dilute, which further lowers the probability of reaction. Realistically it is logical to anticipate some inefficiency or non-reaction from the total amount of calcium hydroxide. Therefore, an estimate of 1.6 to 1.8 moles of CO2 reacted per mole of calcium hydroxide in the ocean seems more realistic.

Although some of the benefit is lost, so far so good, as the process still removes more CO2 than it generates, right? Not necessarily. Second, most of the proponents of this strategy play down the fact that the limestone needs to be processed, and that requires significant amounts of heat energy (800-900 °C). It takes approximately 2.67 GJ (about 742 kWh) to calcine 1 ton of limestone.

Looking at general power sources that could provide that energy: coal typically produces about 1 ton of CO2 per 1,000 kWh of electricity, so calcining 1 ton of limestone with coal power would emit roughly 0.74 tons of CO2 on top of the roughly 0.44 tons released by the calcination itself, exceeding the roughly 0.79 tons that the resulting calcium hydroxide could be expected to pull from the ocean, so coal is out. Oil cannot be used because of its continually dwindling supply, and its emission profile (approximately 1,300 kWh per ton of CO2, or roughly 0.57 tons of CO2 per ton of limestone calcined) is not much better. Natural gas is a bit better, at 2,000 to 2,500 kWh of electricity per ton of CO2 emitted depending on the efficiency of the combustion; however, using natural gas would still generate 0.30 to 0.37 tons of CO2 per ton of limestone, which would take a large bite out of the overall absorption. Overall, it does not seem to matter whether the energy source is stranded or not, because the resultant CO2 emissions would place an unacceptable burden on the overall removal ability of the process. Therefore, it appears that a zero carbon emission energy source will have to be used to generate the calcium oxide from limestone if the strategy is to make economic and environmental sense.

The additional CO2 produced aside, another problem is the sheer cost of the electricity to run the process. For example, in the United States, using an average of 11 cents per kWh, conversion of 1 ton of limestone would cost roughly $82. Expanding that cost to the removal of 1 net ton of CO2 from the ocean, assuming a zero carbon emission source is used to generate the power, zero transportation emissions (highly unlikely), a high efficiency reaction of 1.8 moles of CO2 per mole of calcium hydroxide, and accounting for the molar masses involved (a ton of limestone yields roughly 0.79 tons of oceanic CO2 removal against 0.44 tons released in calcination), it would currently cost on the order of $230 to remove 1 net ton of CO2 from the ocean, just for the energy required for the limestone conversion alone. Assuming that 100% of the CO2 produced during the limestone conversion were captured and sequestered, the cost would drop to roughly $103 per net ton of CO2, though that figure is not absolute because of the cost uncertainty associated with the capture and sequestration processes. Finally, in order to generate a reasonably pure CO2 product stream during the limestone reaction, either co-firing of the limestone, fuel/electricity source and oxygen or separation of the heating and calcining steps will have to take place, increasing the probability of greater cost.
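
The sketch below reproduces that arithmetic; the electricity price, calcination energy and 1.8 mole reaction efficiency are the assumptions stated above, so the outputs are rough planning numbers rather than engineering estimates.

```python
# Rough cost of oceanic CO2 removal via limestone calcination, using the
# assumptions from the text: 2.67 GJ/ton calcination energy, $0.11/kWh
# electricity, 1.8 mol oceanic CO2 reacted per mol Ca(OH)2, zero-carbon power.
M_CACO3, M_CO2 = 100.09, 44.01
GJ_PER_TON, USD_PER_KWH, EFFICIENCY = 2.67, 0.11, 1.8

kwh_per_ton = GJ_PER_TON * 1e9 / 3.6e6              # ~742 kWh per ton of limestone
cost_per_ton_limestone = kwh_per_ton * USD_PER_KWH  # ~$82

mol_limestone = 1e6 / M_CACO3
removed = EFFICIENCY * mol_limestone * M_CO2 / 1e6  # ~0.79 t pulled from the ocean
released = mol_limestone * M_CO2 / 1e6              # ~0.44 t from calcination

net_if_vented = removed - released                  # ~0.35 t
net_if_captured = removed                           # ~0.79 t (process CO2 sequestered)

print(f"energy cost: ${cost_per_ton_limestone:.2f} per ton of limestone")
print(f"venting process CO2:   ${cost_per_ton_limestone / net_if_vented:.0f} per net ton removed")
print(f"capturing process CO2: ${cost_per_ton_limestone / net_if_captured:.0f} per net ton removed")
# -> roughly $82/ton limestone, ~$230/net ton (vented), ~$103/net ton (captured)
```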

Overall, the initial calculations project the limestone strategy to not be economically attractive. However, it is clearly worth paying some price to avoid severe and detrimental climate change, which would cost much more in the long run. Unfortunately cost may not be the only problem with the limestone strategy. Currently there has been very little practical discussion of exactly how the calcium oxide would be distributed throughout the ocean. This lack of discussion is a problem because it is very unrealistic to expect a widespread distribution strategy to be successful, first because the ocean is rather huge. Second, the transport emissions associated with widespread distribution would almost certainly switch the process from slightly carbon negative to definitely carbon positive, and that does not include the associated transportation costs. Remember, there are no viable zero emission planes, zero emission ships would probably be too inefficient in transport time, and the cost to create infrastructure for new zero emission trains would be backbreaking.

Based on these two problems it appears that the best option is a localized release. Initially this may seem beneficial because of the large limestone deposits located at the Nullarbor Plain, Australia. However, localized distribution also has a significant problem: rate of deposit. If too much calcium oxide is deposited into the ocean over too short a timeframe and/or area, then it is highly possible that a pH shift in the opposite direction would occur in that localized region, significantly reducing the biodiversity of the deposit region. If too little calcium oxide is deposited into the ocean, then the overall CO2 removal process would be far too slow to make any real difference in averting climate change or reducing ocean acidity, and the entire process itself could be viewed as a waste of time and money. Therefore, a proper deposit rate over a given localized area would need to be determined, a determination that appears to be difficult to make in the lab. Also, even if a proper balance were determined, another lingering problem is that although the ocean does mix, calcium oxide deposited over a small localized region would probably react faster than ocean mixing and atmospheric uptake could keep pace with, which would further reduce the efficiency of the reaction and the prospect of it being carbon negative. Overall, it appears that the total amount of limestone, energy and cost required for the above strategy provide too great a set of obstacles to make this an effective strategy for remediation of atmospheric or oceanic CO2 at the current time.

With more natural methods lacking efficiency, perhaps a more technological strategy to facilitate CO2 removal is necessary. One advantage of deploying such a solution is that it does not need to be scaled up to reduce the acidity of the entire ocean. Due to the non-uniformity of oceanic flora and fauna, technologies can be designed and applied to higher density and biologically and/or commercially important regions to reduce localized levels of acidity and increase survivability. Note that it is important that any technology eliminate the acidity at the localized region, not simply divert it elsewhere, further increasing the acidity levels at non-targeted regions of the ocean; due to ocean mixing such a strategy would not achieve the desired goal, even in the short term. At a later time such a device will be proposed here at the Bastion of Reason that will hopefully provide a means to reduce ocean acidity.

--
1. Ridgwell, Andy, and Zeebe, Richard. “The role of the global carbonate cycle in the regulation and evolution of the Earth system.” Earth and Planetary Science Letters. 2005. 234: 299– 315.

2. Caldeira, K, and Wickett, M. “Anthropogenic carbon and ocean pH.” Nature. 2003. 425: 365.

3. Keeling, C, and Whorf, T. “Atmospheric CO2 records from sites in the SIO air sampling network, Trends: A Compendium of Data on Global Change.” Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tenn., USA, 2004. (http://cdiac.esd.ornl.gov/trends/co2/sio-mlo.htm).

4. Moy, Andrew, et al. “Reduced calcification in modern Southern Ocean planktonic foraminifera.” Nature Geoscience. 2009. 2: 276 – 280.

5. de Moel, H, et al. “Planktic foraminiferal shell thinning in the Arabian Sea due to anthropogenic ocean acidification?” Biogeosciences Discussions. 2009. 6(1): 1811-1835.

6. Hood, Maria, et al. “Ocean Acidification: A Summary for Policymakers from the Second Symposium on the Ocean in a High-CO2 World.” Intergovernmental Oceanographic Commission of UNESCO.

7. Armstrong, R, et al. “A new, mechanistic model for organic carbon fluxes in the ocean: based on the quantitative association of POC with ballast minerals.” Deep-Sea Res. 2002. Part II 49: 219–236.

8. Klaas, C, Archer, D. “Association of sinking organic matter with various types of mineral ballast in the deep sea: implications for the rain ratio.” Glob. Biogeochem Cycles. 2002. 16(4): 1116.

9. Ridgwell, Andy. “An end to the ‘rain ratio’ reign?” Geochem. Geophys. Geosyst. 2003. 4(6): 1051.

10. Barker, S, et al. “The Future of the Carbon Cycle: Review, Calcification response, Ballast and Feedback on Atmospheric CO2.” Philos. Trans. R. Soc. A. 2003. 361: 1977.

11. Natural Environment Research Council. http://www.noc.soton.ac.uk/nocs/news.php?action=display_news&idx=628

12. “Lohafex project provides new insights on plankton ecology: Only small amounts of atmospheric carbon dioxide fixed.” International Polar Year. March 23, 2009.

13. Black, Richard. “Setback for climate technical fix.” BBC News. March 23, 2009.

Wednesday, September 16, 2009

The Reality of Peak Oil

For years now the elephant in the room for the petroleum industry has been the prospect of a level of global oil production inadequate to meet global demand, largely due to a falling rate of oil production, difficulty finding and/or accessing new conventional fields, and a continual increase in oil demand. This production maximum has been dubbed ‘peak oil’. With falling production rates and increasing demand, it is believed that shortly after ‘peak oil’ is reached oil prices will skyrocket, resulting in increased fuel and energy costs and a significant impediment to economic growth. The reality of the situation is that ‘peak oil’ is not just a theory that may or may not be realized, but something that is inevitable; therefore a solution will be required for the future regardless of when ‘peak oil’ is attained.

Before getting into specifics regarding ‘peak oil’, some background on the concept and its influence would prove useful. Although the concept of ‘peak oil’ seems simple enough, attaining a maximum of oil production, be it in a specific country or globally, such a simplistic viewpoint can easily lead to misidentifying the actual situation. For the last 150 years oil production has steadily increased worldwide, largely due to new technologies making exploration and extraction more economical and an unyielding demand for oil guaranteeing a viable market, which drives investment. Although demand for oil continues to increase, the problem of supply has become more of a concern for producers. Recall that oil is not a short-term renewable resource, in that the creation of new crude oil-based supplies would require millions of years, something that is clearly not tenable. Due to the limited supply and escalating demand, most commentators have proposed that oil production will peak at some point in the future, if it has not already. However, those prognostications have proven to be the first significant problem when preparing for ‘peak oil’, in that the predictions are all over the map.

M. King Hubbert, one of the first oil prognosticators and developer of the appropriately named Hubbert Curve detailing the lifecycle of oil production for a given field, believed in 1974 that peak oil would be met sometime in the mid to late 1990s.1 Others, like Sadad Al Husseini, a former head of production for Saudi Aramco (the world’s largest oil company), the Energy Information Administration (EIA), The Association for the Study of Peak Oil and Gas and the Energy Watch Group (EWG), all believe that ‘peak oil’ was reached during the last 3-5 years.2,3,4 In late 2008 the Industry Taskforce on Peak Oil and Energy Security (ITPOES) identified peak oil as occurring around 2013.5 The International Energy Agency (IEA) believes that peak oil will occur sometime between 2020 and 2030,6 while others still, like Abdullah S. Jum’ah, President of Saudi Aramco, believe peak oil is over a century away.7 Overall it is difficult to get a straight answer when talking to various oil executives because each one seems to have a different estimation for ‘peak oil’.

The first issue that should be addressed regarding ‘peak oil’ is that some use global demand as a factor when classifying a time point for ‘peak oil’, a strategy that does not appear to be reasonable, because the nature of ‘peak oil’ itself should not have anything to do with demand, only with when oil production will reach a maximum. Tracking global demand is important because of its relation to oil prices, but based on current production information there will probably be a short grace period between when ‘peak oil’ is officially attained and a corresponding response from oil prices based on supply alone (when factoring out other influences like speculation).

The reason for such a wide range of predictions can be largely attributed to the variety of different assumptions used to calculate future production levels. One of the first important points of contention when calculating a ‘peak oil’ date is the classification of available oil reserves. Typically oil reserves are classified in one of three categories based on a confidence level corresponding to the accuracy of the estimated amount: proven, probable and possible. Proven reserves have a confidence level of 90-95%; probable reserves have a confidence level of 40-60%; possible reserves have a confidence level of 5-10%. Some estimates take into account only proven reserves, while others extrapolate that new technology or higher oil prices will raise the probability of extraction from probable and/or possible reserves.
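
One crude way to see how these classification choices move an estimate is to weight each category by a recovery probability; the reserve volumes and midpoint probabilities in the sketch below are illustrative placeholders, not a formal industry reserve-accounting method.

```python
# Illustrative effect of reserve classification on an estimate. Volumes are
# made-up placeholders; probabilities are midpoints of the confidence ranges
# in the text (proven 90-95%, probable 40-60%, possible 5-10%). This is a
# back-of-the-envelope weighting, not a formal reserve accounting standard.
reserves_bbl = {"proven": 100e9, "probable": 60e9, "possible": 40e9}
recovery_probability = {"proven": 0.925, "probable": 0.50, "possible": 0.075}

proven_only = reserves_bbl["proven"]
probability_weighted = sum(vol * recovery_probability[cat]
                           for cat, vol in reserves_bbl.items())

print(f"proven only:          {proven_only / 1e9:.0f} billion barrels")
print(f"probability weighted: {probability_weighted / 1e9:.0f} billion barrels")
# A prognosticator using the second figure will place 'peak oil' later than
# one using the first, even when starting from identical field data.
```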

The biggest problem stemming from this classification system is that a large majority of oil wells are not independently audited, which leaves estimates of the total and remaining reserves of a given field to the country or private company operating the well. This single-source, non-objective information is subject to frequent speculation regarding whether a given country is over-estimating or under-estimating its reserves and whether any error in estimation is intentional.8,9 Claims of over-estimation are frequently made against OPEC countries (Algeria, Angola, Ecuador, Iran, Iraq, Kuwait, Libya, Nigeria, Qatar, Saudi Arabia, the United Arab Emirates and Venezuela) because within the OPEC operational and production structure, production rates are related to total available reserves. Basically, the more oil an OPEC country has, or claims to have as some critics point out, the more oil it can produce and sell on the open market. Others claim that OPEC actually under-estimates available reserves in an effort to convince the world that oil is a more valuable commodity which demands a higher market price.9

Under-estimation claims seem to have the potential for both positive and negative motivating factors on a psychological level. As stated earlier, if the global community believed that oil was scarcer than it actually is, then it would accept lower production rates and higher prices based on the simple premise of supply and demand. However, the belief in lower-than-actual oil supplies could also instill the drive to hasten the development and deployment of alternatives to oil to avoid economic calamity in the future when oil is no longer available. The development of these alternatives would result in much lower prices for oil, and the omitted supply would then sell for a much lower value than if a genuine reserve number had been reported. If such a result occurred it would likely lead to an overall net loss for oil producers in an under-estimation scenario vs. a correct-estimation scenario. Interestingly enough, although this scenario seems probable in theory, in practice most individuals do not seem to buy into it due to the snail’s pace at which oil alternatives are being developed. Therefore, there may be no psychological disadvantage to purposely under-estimating existing oil reserves.

With regards to over-estimating reserves, it is true that production potential based on over-estimated reserves would be higher; however, if most of OPEC over-estimated their reserves with the intent to produce more, too much oil would flood the marketplace, significantly lowering the price and causing OPEC, as it has done many times in the past, to cut production until the oil price rebounds. Clearly all of the OPEC countries know that such a strategy would be implemented in the face of over-production; therefore, it does not appear that over-estimation would prove to be a useful tactic unless only a small number of OPEC countries did so. Under such a scenario it would be the OPEC countries with the smaller genuine reserves that would be more likely to over-estimate, but such an over-estimation would only create a small error relative to genuine global reserves, thus inconsequential for skewing global estimates. So from a logical perspective it does not appear that over-estimation makes much sense for OPEC countries, while under-estimation makes some sense; realistically, current estimates are probably fairly accurate based on current extraction and seismic measurement methodology. Of course that last statement assumes that OPEC countries behave rationally, which is unclear.

Another element that will influence the time point of ‘peak oil’ is where capital investment in new oil exploration and extraction is directed. The IEA states that most investment capital is used for exploration and development of high-cost reserves because of access limitations on cheaper resources due to government policy.6 Most of these limitations come in the form of environmental concerns or nationalistic concerns in that the government wants a larger stake in the oil profits than any international bidder is willing to give.

One of the concerns with investment in future oil production may, ironically, be the projected pace of oil alternative research and deployment. The faster oil alternatives are injected into the marketplace, the lower the oil price would drop due to a shifting demand curve, despite dwindling supply. With a dropping oil price, an increase in supply through investment in exploration and new wells would further decrease the price, reducing the profitability prospects of those new wells and making them less attractive for investment. However, at the moment large-scale deployment of alternatives does not appear to be a concern. Oil alternatives are largely vested in either food-derived bio-fuels or non-food-derived bio-fuels. Taking just the United States into account, the estimated production of food-derived bio-fuels for 2010 is 12.5 billion gallons a year, or about 4% of the EIA-anticipated oil demand in 2010.10,11 Estimated production of non-food-derived bio-fuels for 2010 is 39 million gallons a year, or about 0.0127%.12 So next year bio-fuels could, under ideal circumstances, account for roughly 4% of the total oil consumption in the United States; clearly there is a lot of work left to do, and that assumes those estimates, which are on the high side, are met.
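
Those percentages can be reproduced with a quick back-of-the-envelope calculation; the assumed US demand figure below (roughly 20 million barrels per day for 2010) is an approximation rather than a quoted EIA number.

```python
# Rough check of the bio-fuel shares quoted above. The assumed US oil demand
# (~20 million barrels/day for 2010) is an approximation, not a quoted figure.
GALLONS_PER_BARREL = 42
us_demand_gal_per_year = 20e6 * GALLONS_PER_BARREL * 365   # ~3.07e11 gallons

food_biofuel_gal = 12.5e9    # estimated 2010 food-derived production
nonfood_biofuel_gal = 39e6   # estimated 2010 non-food-derived production

print(f"food-derived share:     {food_biofuel_gal / us_demand_gal_per_year:.2%}")
print(f"non-food-derived share: {nonfood_biofuel_gal / us_demand_gal_per_year:.4%}")
# -> roughly 4.1% and 0.013%, in line with the figures cited above
```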

One of the best methods for removing investment barriers may be the formation of partnerships between national and international oil companies which would facilitate better profit sharing between the labor and capital investment made by the international company and the natural resources provided by the national company. Such partnerships may be necessary because the national companies that control most of the remaining oil do not have the capital, extraction technology or the personnel required to significantly increase production. In large respect national companies will have to learn that getting 45-50% of something is better than getting 100% of nothing.

A third concern with estimation is that source information from the oil companies themselves does not follow a uniform reserve categorization. For example, British Petroleum (BP) includes crude oil, condensate and natural gas liquids in its production databases, whereas the IEA uses all of those categories as well as bio-fuels. Fortunately this concern only becomes a problem when a prognostication is not specific about which estimates are being used to determine production levels when predicting a time point for ‘peak oil’.

One of the biggest reasons most prognosticators believe ‘peak oil’ will occur in the next five years is the claim that peak production levels have already been reached in non-OPEC countries, which provide 55-60% of all global oil.6 If over half of the global supply has peaked, then gains in production made by those that can still make gains will be required to outpace the declines from the countries that have already peaked in order to ward off a global supply peak. If one believes that a Hubbert Curve is a reasonably accurate way to track the lifespan of an oil field, then it is unlikely that OPEC can make up for these losses without discovering a large untapped reservoir.

That said, there are two issues that early ‘peak oil’ proponents are not considering. First, it is highly likely that non-OPEC production in 2008 was abnormally skewed downwards by three separate events. The global economic recession sent oil prices tumbling in mid-2008, which made producing oil much less attractive, thereby reducing production. Also, production in the Gulf of Mexico was diminished due to higher than normal hurricane activity. Finally, significant gas leaks slowed large amounts of oil production in Azerbaijan. Taking these factors into account, it would be reasonable to anticipate overall non-OPEC oil production increases in late 2009 into 2010. However, the rebound in production will be short-lived because the above factors have nothing to do with discovering new reserves to tap; they simply represent short-term blips in production.

The second factor is the mystery box known as Iraq. In early 2001 geological surveys calculated the total reserve capacity of crude oil in Iraq at 115 billion barrels.13,14 However, due to the government in power at the time and the limited seismic technology available (2-D instead of 3-D), there was reason to believe that this estimate was significantly lower than what is actually available. In addition, recent events in Iraq, most notably a forced unilateral regime-change-driven conflict, have reduced production rate and capacity over the last seven years. New estimates have increased the potential crude oil reserves by a little over 200%, to 350 billion barrels.15 The reason for such a dramatic turnabout is that none of the surveys from 2001 and earlier focused any real attention on the vast deserts of western Iraq, where most of these new reserves are thought to be located.14,15

If this new estimate is correct, then projecting future oil demand based on the EIA low oil price and high oil price scenarios,11,16 Iraq by itself could supply enough oil to feed global demand for an additional 10 years under low oil prices and 12 years under high oil prices, delaying ‘peak oil’ beyond most of the existing projections. The biggest issue with tapping this reserve is the unfortunate reality that current Iraqi production capacity is pathetic. Huge levels of investment (billions of dollars a year), along with favorable investment terms for international oil companies and government cooperation between Shiite, Sunni and Kurdish factions, will be required if any reasonable amount of these new reserves is to be relevant in the context of ‘peak oil’.
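
A simple years-of-supply calculation shows where figures like these come from; the demand values in the sketch below are rough averages implied by the two EIA price scenarios, not quoted projections, and the same arithmetic applies to the unconventional reserve totals discussed later in this post.

```python
# Years of global supply from a reserve, given an assumed average demand.
# Demand values are rough averages implied by the EIA low/high oil price
# scenarios, not quoted projections; the reserve figure is the 350 billion
# barrel Iraq estimate discussed above.
def years_of_supply(reserve_bbl, demand_bbl_per_day):
    return reserve_bbl / (demand_bbl_per_day * 365)

iraq_reserve = 350e9
for label, demand in (("low oil price (higher demand)", 96e6),
                      ("high oil price (lower demand)", 80e6)):
    print(f"{label}: ~{years_of_supply(iraq_reserve, demand):.0f} years")
# -> roughly 10 and 12 years, matching the figures cited above
```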

With the unpredictability of future new reserves, especially in Iraq, and the wide-ranging assumptions used by prognosticators, it is difficult to identify a range for when production will reach an apex. However, even reaching an apex is not necessarily the end of the story, especially for the nature of future oil prices, for there is another element that must be considered. Recall that ‘peak oil’ focuses on determining when oil production reaches a maximum, but almost all estimates regarding that time point do not include unconventional sources of oil. Technically unconventional oil or liquids include oil shale, oil sands-based synthetic crudes, coal-to-liquids, gas-to-liquids, extra heavy oil and bitumen.17 Bio-fuels can also be classified as an unconventional liquid, but bio-fuels will be addressed separately.

Significant quantities of unconventional oil are available for extraction, but are not significantly tapped because it is not economical to do so. Basically, it costs more to extract the product and convert it into usable oil than can be made selling it on the market. Although the numbers tend to fluctuate, a consistent price of 75-95 dollars per barrel is needed for initial production, with estimates after sufficient scale-up requiring 40-50 dollars per barrel for profitability, although there is a significant question about the validity of the scale-up estimate because of uncertainty.18,19,20

The two principal categories of unconventional oil, with regard to supply, are extra heavy oil and oil shale, thus the focus will be on these two resources. Extraction of extra heavy oil deposits is difficult because they possess a density at or exceeding that of water, hence the name, which makes them extremely difficult to produce, transport and refine via conventional methods, as conventional crude oil has a density lower than water. Also, concentrations of extra heavy oil are frequently contaminated with sulfur, nickel, vanadium and other metals. Removing these impurities from the production stream significantly increases costs.

Oil shale is an organic-rich, fine-grained sedimentary rock containing large amounts of kerogen. Kerogen is defined as a solid mixture of organic chemical compounds that are insoluble in organic solvents.22 Due to its chemical components, kerogen can be converted into oil through a thermal process like pyrolysis or hydrogenation.18,23 The minimum temperature for extracting oil from kerogen is about 250 °C, but at that temperature the process takes months, so the thermal process frequently uses temperatures of 480-520 °C, which has been described as providing the maximum conversion rate.18

Extraction and processing of the kerogen normally occurs above ground (ex situ/displaced) through oil shale mining and transfer of the kerogen to a processing plant where the appropriate thermal process is applied. However, new technologies have allowed for on-site underground processing (in situ/in place), for example hydraulic fracturing, followed by extraction of the resultant oil product through a standard well. Ex situ processing is rather straightforward with regards to economic costs and environmental damage, as it is similar to mining for coal, with the addition of the thermal process at the end expelling additional CO2 into the atmosphere.

In situ processing typically involves heating the oil shale while it is still underground, either through injection of a hot fluid or by using a planar heating source and allowing thermal conduction and convection to carry the heat to the appropriate locations.24 Most in situ technologies are still in the experimental stages because of the on-again, off-again interest in unconventional sources (driven primarily by the rise and fall of crude oil prices over the last three decades). The chief advantage of in situ methods is that there is a higher probability of extracting a greater share of the resource, because delivering the heating element is more cost-effective at reaching greater depths in unconventional oil deposits. Also, in situ methods can extract deposits of lower grade than ex situ methods.24

The biggest problem with extraction, regardless of technique, is the excessive levels of energy that are required and where that energy is going to come from. Trace to zero emission sources are not universally viable because they are site sensitive and/or intermittent. In the long term these options could become viable, but not at the present time. Natural gas, which is used in conventional oil extraction, is beginning to run short itself, at least conventional sources of natural gas. Similar to oil, unconventional sources of natural gas are available for processing, but it does not make sufficient economic sense to use these reserves. Extraction of expensive unconventional natural gas to power the extraction of expensive unconventional sources of oil typically would result in wasted money and unnecessary environmental damage. Therefore, extraction of these reserves may depend on construction of a nuclear plant near the point of extraction. Fortunately such a strategy is possible because a vast majority of the discovered unconventional oil in the world is concentrated over a small number of locations, so it would not require the construction of hundreds of nuclear plants.

Unfortunately, concerns about economic viability are not the only caveat surrounding the extraction of oil shale. Currently the available extraction technology for these reserves is heavily detrimental to the environment. In addition to the generic damage generated by mining, ex situ extraction could result in additional acid drainage due to rapid oxidation of previously buried materials and excess metal contamination of water supplies.24 Ground water and soil contamination is the chief, and a legitimate, concern for in situ extraction.24 Also, the fact that some in situ techniques are more effective when the ground water level is lowered below the extraction site could increase the probability of surface damage, as flora would require longer roots to access the water. Of course some argue that the extraction site will be significantly unfavorable for flora and fauna for a long time regardless, thus the water alteration is just beating a dead horse. Finally, both extraction techniques produce more greenhouse gas emissions than conventional oil extraction. Overall, if one is willing to sacrifice the immediate environment around the extraction zone, in situ extraction is superior to ex situ in almost every way once the economies of scale improve a bit.

Despite all of these processing difficulties, the estimated amount of unconventional oil in just the Athabasca Oil Sands (Alberta, Canada) and the Orinoco extra heavy oil deposit (Venezuela) alone is approximately 1.638 trillion barrels, which is 51.7 times the current yearly global oil consumption rate.11,16 Therefore, if all conventional oil production were to stop tomorrow, the supply from just these two regions, if tapped, would still be able to meet the expected future demand for oil for over 37 years in the EIA high oil price scenario (highly likely if unconventional oil is being extracted) and 47 years in the EIA low oil price scenario. That additional time does not include the reserves in the Green River Formation in the Rocky Mountains, which are also sizable. In short, it appears that a more appropriate designation for ‘peak oil’ would be the point at which the more economically viable (conventional) oil production reaches a maximum.

The question of extraction also presents a catch-22: if unconventional sources of oil are not tapped and alternatives are not available, then not only will the price of oil be high, but more importantly there will be demand that is not met, which will be severely detrimental to the global economy (people unable to get to work, fewer plastics and other hydrocarbon-based products produced, lower food yields for farmers, etc.). However, if unconventional oil is tapped, the price of oil will still be high (required to justify the higher capital costs of its extraction regardless of the overall existing oil supply) and the environmental damage, both from the extraction of the oil and its eventual release into the atmosphere, will be significant. Basically the options are global depression or accelerated global warming and a lower quality environment. Overall, it appears that if the status quo remains, society will have no real choice but to tap into unconventional sources of oil to avoid a severe global depression, due to the significant role that oil plays in the global economy. Also, it is reasonable to suggest that any environmental damage from the extraction of these unconventional reserves could actually be less than the environmental damage brought on by a global depression, due to the unavailability of capital to fund remediation, research and development, and deployment efforts that would limit and/or reverse existing and future environmental damage. Of course that is only valid if that capital is directed toward environmental remediation programs.

Therefore, the future of oil use would be aided by one of two responses: a more economical and environmentally safe means of extracting unconventional oil sources, or the introduction of an oil alternative that can be mass-produced. Unfortunately the first option appears unlikely, for oil producers have already devoted decades and millions of dollars in research to developing an economical means to extract oil shale and its ‘friends’ with little success. New methods have been produced, but nothing groundbreaking. With the prospects of developing efficient and effective unconventional oil extraction technologies lacking, focus must be placed on developing alternatives that can be scaled up quickly and cheaply and conservation methods that can be applied to limit oil use.

The chief conservation method involves the mass deployment of pure electric and plug-in hybrid vehicles. If the electrical grid used to supply the power to these vehicles consists of a trace to zero emission source then their application would not only significantly reduce oil use, but it would also reduce global emissions of CO2 and other greenhouse gases. The two biggest problems with mass deployment of a primary electrical vehicle fleet are the ‘chicken and egg’ question and the ability of the grid to handle mass charging over short time periods.

The ‘chicken and egg’ question involves the counterbalance between the absolute number of electric vehicles available for purchase and the development of the proper infrastructure to facilitate the smooth operation of an electric vehicle fleet. For instance, what occurs first, the construction of the infrastructure or the mass production of electric vehicles? Suppose the infrastructure is constructed first; such a project would be massive, expensive and potentially worthless if a large number of electric vehicles were not purchased to justify the construction of a national recharging infrastructure. Therefore, planners may wait until a significant number of electric vehicles are purchased before beginning construction on the national or even local infrastructure; however, the lack of a viable infrastructure reduces the mobility of an electric vehicle fleet, reducing its attractiveness to potential buyers and increasing the probability that fewer electric vehicles are sold. This lack of vehicles then tracks back to a reduced rate of return on the construction of an infrastructure to support electric vehicles, an infrastructure that is needed to support increased electric vehicle sales. Overall the ‘chicken and egg’ question is rather silly, because when discussing it most exclude the simple fact that combustion-based vehicles will inevitably become too expensive to operate at some point in the future due to the eventual lack of oil. Construction of an electric vehicle support infrastructure carries limited risk because of this eventual expense reality for combustion-based vehicles. So the real question is determining at what point in time after constructing the infrastructure it will become profitable.

The second problem is more significant, because ideally the more electric vehicles are manufactured and purchased the better, due to increased oil savings. However, because humans are generally diurnal, most electric vehicles will be charged during the night: millions of vehicles demanding power from a portion of the electrical grid all at once, a demand that those who originally designed the grid would never have anticipated. This new stressor creates the very real problem that the grid could be unable to support all processes that demand electricity, creating rolling blackouts. Therefore, to support an electric car fleet, significant changes will have to be made to the existing grid. Fortunately the requisite changes coincide with other grid changes that need to be made to reduce electricity inefficiency and better accommodate the application of renewable energy to the grid, so the importance of making these changes is magnified, increasing the probability of their occurrence.
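
A rough sense of the scale of that overnight demand can be had from a simple load calculation; the fleet size, per-vehicle charge and charging window below are illustrative assumptions, not projections.

```python
# Rough added grid load from overnight EV charging. Fleet size, energy per
# nightly charge and charging window are illustrative assumptions.
def added_load_gw(num_vehicles, kwh_per_charge, charging_window_hours):
    """Average extra demand (GW) if charging is spread evenly over the window."""
    return num_vehicles * kwh_per_charge / charging_window_hours / 1e6

fleet = 1_000_000
print(f"spread over 8 hours:  {added_load_gw(fleet, 10, 8):.2f} GW")
print(f"crammed into 2 hours: {added_load_gw(fleet, 10, 2):.2f} GW")
# A million vehicles drawing 10 kWh each adds ~1.25 GW if spread across the
# night, but ~5 GW if everyone plugs in right after the evening commute.
```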

Substitution of petroleum in favor of biological-based oils (bio-fuels) is also a favored option for both conserving oil and eventually supplanting it. Unfortunately there are two significant problems plaguing bio-fuels. First, bio-fuels are most easily produced from food-based feedstock, due to the modest energy requirements of fermenting the sugars from starch into ethanol; however, the production of a large enough quantity of bio-fuel seems implausible because the sheer amount of feedstock required to generate production rates of even 10-20 million barrels a day (420 to 840 million gallons, or up to roughly a quarter of current global demand) would heavily tax available food supplies, and mass starvation even in developed countries would be highly probable.

Without the ability to tap into food stocks for feedstock, any future supplies of bio-fuel need to originate from non-food sources. The two most viable candidates for synthesizing this type of bio-fuel are cellulose and algae. Synthesis from cellulosic sources is problematic due to the excessive energy costs associated with breaking the chemical bonds that make up cellulose. However, if the correct enzyme is discovered, then this energy obstacle would become insignificant. Synthesis from algae is thought to be easier than from cellulose, but has its own overhead costs because the production rate is entirely dependent on the total mass of algae within the prospective environment, thus enough bioreactors (i.e. space) need to be set aside for algae growth. Also, if any detrimental condition afflicts the algae, production losses could cascade rather quickly, taking a large chunk out of the alternative oil supply.

The second problem for bio-fuels is that ethanol cannot simply be dumped into a standard combustion engine; instead it either has to be combined with a significant amount of gasoline, usually in a 20-80 (ethanol-gasoline) mixture, or the engine needs to be retrofitted to operate properly on ethanol. So, unless a new synthesis methodology is developed or an expanded retrofit infrastructure is established, the ability of bio-fuels to influence oil use in the transportation sector appears to be limited.

Reduced oil consumption can also be achieved without the introduction of new technologies such as electric or bio-fuel based vehicles. New investment in the development of mass transit options like buses and light rail could reduce both the capital required for the deployment of these new technologies and oil use in the short term and the long term. Unfortunately there does not appear to be a sufficient drive for reinvestment in older mass transit technologies, reducing the probability that such a strategy will play a meaningful role in the reduction of oil consumption.

Although pinpointing the exact time point of ‘peak oil’ and the resultant spike in oil prices due to lack of supply, rather than short-term temporal factors, seems important, it really isn’t. Regardless of whether non-conventional sources are tapped, it is highly probable that oil itself will become less and less economically attractive as time passes, demanding the rapid production and deployment of oil alternatives to avoid a slowdown in the global economy. There appear to be two things that must happen to avoid a significant economic slowdown in the future due to lack of oil: confirmation of and appropriately heavy investment in Iraqi oil development, and devotion of significant capital and effort to aiding the development of alternative oil-based resources.

==
1. Grove, Noel. “Oil, the Dwindling Treasure.” National Geographic. June 1974.

2. Cohen, Dave. “The Perfect Storm.” Association for the Study of Peak Oil and Gas. 2007. http://www.aspousa.com/archives/index.php?option=com_content&task=view&id=243&Itemid=91.

3. “Newsletter”. Ireland: Association for the Study of Peak Oil and Gas. 2008.
http://www.aspo-ireland.org/contentFiles/newsletterPDFs/newsletter89_200805.pdf.

4. Zittel, Werner, and Schindler, Jorg. “Crude Oil: The Supply Outlook.” Energy Watch Group. 2007. EWG-Series 3.
http://www.energywatchgroup.org/fileadmin/global/pdf/EWG_Oilreport_10-2007.pdf.

5. “The Oil Crunch: Securing the UK’s Energy Future.” UK Industry Taskforce on Peak Oil and Energy Security. 2008. http://peakoil.solarcentury.com/

6. “World Energy Outlook 2008 – Executive Summary.” International Energy Agency. 2009. http://www.iea.org/Textbase/npsum/WEO2008SUM.pdf

7. Izunda, Uchenna. “Aramco Chief says World’s Oil Reserves will Last for more than a Century.” Oil and Gas Journal.
http://www.ogj.com/display_article/312081/7/ONART/none/GenIn/1/WEC:-Saudi-Aramco-chief-dismisses-peak-oil-fears/. Retrieved 2009-07-07.

8. Gaurav, Sodhi. “The Myth of OPEC.” Australian Financial Review. 2008. http://www.cis.org.au/executive_highlights/EH2008/eh63608.html.

9. Learsy, Raymond. “OPEC Follies – Breaking Point.” National Review. 2004. http://www.nationalreview.com/comment/learsy200312040900.asp.

10. “U.S. EPA raises biofuel target for 2008 to 7.76 percent.” Mongabay. 2008.
http://news.mongabay.com/bioenergy/2008/02/us-epa-raises-biofuel-target-for-2008.html

11. “International Energy Outlook 2009.” Table D4. World Liquids Consumption by Region, High Oil Price Case, 1990-2030. Energy Information Administration. May 2009.

12. Borrel, Brendan. “Biofuel Fraud Case Could Leave the EPA Running on Fumes.” Scientific American Magazine July 2009. http://www.scientificamerican.com/article.cfm?id=cello-biofuel-fraud-case

13. “Iraq Oil.” Energy Information Administration. Country Analysis Briefs. 2007.

14. Luft, Gal. “How Much Oil Does Iraq Have?” The Brookings Institution. 2003.

15. Verma, Sonia. “Iraq could have largest oil reserves in the world.” The Times. May 20, 2008. http://business.timesonline.co.uk/tol/business/industry_sectors/natural_resources/article3964957.ece

16. “International Energy Outlook 2009.” Table E4. World Liquids Consumption by Region, Low Oil Price Case, 1990-2030. Energy Information Administration. May 2009.

17. “International Energy Outlook 2009.” Energy Information Administration. May 2009. pp 9.

18. “A study on the EU oil shale industry viewed in the light of the Estonian experience. A report by EASAC to the Committee on Industry, Research and Energy of the European Parliament.” European Academies Science Advisory Council. May 2007. http://www.easac.org/displaypagedoc.asp?id=78

19. Bartis, James, et, Al. “Oil Shale Development in the United States. Prospects and Policy Issues. Prepared for the National Energy Technology Laboratory of the United States Department of Energy.” The RAND Corporation. 2005.

20. Speckman, Stephen. “Shale Oil – Now?” Desert News. 2006. http://www.shaleoilnow.com/CanDoAt40.pdf

21. Luik, Hans. “Alternative technologies for Oil Shale Liquefaction and Upgrading.” International Oil Shale Symposium. Tallinn University of Technology. 2009.

22. en.wiktionary.org/wiki/kerogen

23. “Secure Fuels from Domestic Resources: The Continuing Evolution of America’s Oil Shale and Tar Sands Industries.” Department of Energy (DOE). 2007.
http://www.nevtahoilsands.com/pdf/Oil-Shale-and-Tar-Sands-Company-Profiles.pdf

24. “Environmental Impacts from Mining.” Office of Surface Mining. 2006.http://www.ott.wrcc.osmre.gov/library/hbmanual/epa530c/chapter3.pdf.

Saturday, September 12, 2009

Saving Newspapers

Like a slow poison, the advent and evolution of the Internet is gradually leading to the demise of the newspaper industry. The industry was slow to respond to the threat; some argue it cannot survive, whereas others argue that extreme measures such as micro-payments need to be taken immediately to ward off a descent into non-profitability and oblivion. As with all problems, it is important to understand how this one arose, how newspapers went from king of the mountain to, possibly, the trash heap, and what, if anything, can be done about it.

The superiority of newspapers throughout the 20th century was predicated on two key factors: access and analysis. Access itself can be attributed to two separate elements. The first relates to the ability to communicate with important people who could have pertinent information on a given subject. For instance, only certain individuals with proper clearance could listen in on the daily White House press secretary briefings, or talk to the star point guard of the local professional or college basketball team, and later communicate that information to the populace in a column. The fact that this information was difficult to acquire as well as valuable to many members of society carved out an important informational niche for newspapers.

Unfortunately the exclusivity of that access has waned in the 21st century with the creation of cheaper communication tools built on the Internet. Now, instead of a beat writer relaying what happened in the last game or how rookie x is looking in practice, athletes are breaking news on team policy and action on Twitter themselves. Instead of only traditional media being allowed to listen in on press briefings, both alternative media like bloggers and non-print media like CNN are now able to acquire White House press clearance and communicate that information over the Internet.

Normally such a loss of access would not be a significant problem if all other things were equal, but they are not. Newspapers are multi-faceted organizations that, under their current design structure, require significant capital to run properly and therefore have to charge for their services, whereas other forms of information dissemination via the Internet have significantly reduced capital costs, if any at all. Those reduced costs allow Internet-based information providers to offer their opinions and reporting for free. When comparing the merits of two reasonably identical items, the one with the lower cost will almost always be preferred. This rationale was used by newspaper corporations at the beginning of the Internet age, when their online material was provided free of charge. It was also presumed that online viewers would eventually migrate from online to offline and purchase subscriptions, on the theory that if the information was good enough individuals would not mind paying for it.

Unfortunately a mass migration from online reading to offline reading did not occur; in fact, one could argue that more readers have moved from offline to online. Most newspapers have been unable to evolve from their role as exclusive providers and distinguish themselves from the other information providers available on the Internet. Another detriment of the rise in online reading is that advertisers still have less confidence in the ability of online advertising to push their products; online advertising thus brings in less money than television or newspaper advertising, leaving newspapers with the losing equation of less revenue at similar costs.

With conventional advertising revenues dropping, most analysts believe that newspapers need to increase revenues through micro-payments. Micro-payments simply convert the distribution of online information into something resembling a normal paper: instead of being offered for free, the information requires an individual to pay a fixed fee in some form. These fees can take the form of a fixed price for access over a given period of time, a ‘pay as you go’ program where each article is purchased individually for a flat fee, or a model where free access is limited to select material and more detailed content requires the reader to pay. Some publications, like the New York Times, have already applied the third micro-payment option, as have magazines like Time and Scientific American, where most of the content is free but archived material requires some form of payment.

Whether or not micro-payments will be successful depends on which of two schools of thought is correct. The biggest concern with the widespread adoption of micro-payments is that online consumers have become accustomed to free content, and it is reasonable to expect that they will not respond favorably to having to pay for content that used to be free, leading to complete abandonment of the institutions that apply a micro-payment system. To critics, a micro-payment system is essentially too similar to the previous strategy in which the Internet versions of these newspapers were meant to act as gateways to increase circulation and interest in the offline publication, a strategy that can be regarded as a failure.

Those in favor of micro-payments believe that the online-to-offline strategy failed not because people were unwilling to pay for the offline publication, but because modern society has selected online, not offline, as its primary medium for acquiring news. Micro-payments should have a higher probability of success because the medium remains the same. However, the biggest problem with micro-payments may be that newspapers have yet to distinguish their content from that of other providers; instead, the online editions of newspapers have relied on existing reputations to attract readers. Overall, unless newspapers are able to differentiate their content from that of an organization like CNN.com, it is difficult to anticipate micro-payments being successful; relying on local news can only go so far.

Another aspect of access that allowed newspapers to flourish was that, until the Internet became sufficiently organized, newspapers were the cheapest means of advertising for the average individual. For most of the 20th century classified ads provided a significant portion of newspaper revenue; for smaller papers these ads made up a majority of it. The reason for the success of the classified ad is simple awareness per cost: if someone wanted to purchase or sell an item, advertising in the newspaper was the best way to reach the most people for the smallest amount of money. Unfortunately for newspapers, the development and evolution of online auction sites like eBay and online classifieds like Craigslist have eliminated almost all of the rationale for taking out a classified ad in a newspaper. Why pay a fixed fee for the opportunity for 100,000 people to see your request to purchase Taylor Swift concert tickets when you can post that request in an online environment where not only do many more people see it, but there is no fee and the request is easier to find and successfully negotiate thanks to better targeting?

The second reason for newspaper superiority in the 20th century was analysis. The newspaper provided a medium where individuals could gather in-depth information and facts about a particular subject, versus the television news sound bite. In combination with the access attribute, the newspaper also allowed multiple viewpoints through its letters-to-the-editor section, creating a wide breadth of information exchange. The Internet hijacked this exchange and enhanced it by creating specialized niches while newspapers maintained a general information format. Unfortunately that general format has also been expanded in television coverage on stations like CNN through the incorporation of web videos, Facebook, Twitter and the like, creating a pseudo-discussion forum for viewers with a turnover rate that leaves letters to the editor in the dust.

Returning to the niches: originally these niches were not a significant concern because of their lack of reach, but as the Internet became more pervasive the advantage of, and preference for, these niches became more relevant. For example, it makes more sense to express an opinion regarding tax reform in a medium that is for general consumption and reaches half a million people than in a medium that primarily discusses finance and reaches only 50,000 people. However, once the latter medium gains a larger population it becomes a much more attractive venue for expressing an opinion on tax reform for the purpose of debate.

A second element has also reduced the viability of analysis, and its emergence is the chief reason it will be difficult for newspapers to differentiate themselves from other information options enough to justify micro-payments. Modern culture has seemingly become more accepting of the 60-second sound bite over a 10-minute in-depth breakdown of pros and cons as the means to discuss a given issue. This shift can be explained in one of two ways. First, humans have developed into a worker-bee culture in which one does not have enough time to read a point-counterpoint study on a topic like healthcare. Such a premise is rather hard to believe when one considers how much time people waste in the average day.

Second, humans have become more accepting of a state of mind best described by the old adage that ‘too often people enjoy the comfort of opinion without the discomfort of thought.’ It is difficult for most people to accept being incorrect, and partaking in an in-depth analysis of a given topic is much more likely to expose flaws in reasoning, forcing the uncomfortable psychological position of facing a situation where a long-held belief is not correct. It is much easier to rest easy in the superiority of one’s own beliefs when those beliefs go unchallenged than to deal with the consequences when they are challenged.

This is unfortunate for newspapers, because if a majority of society favors 60-second sound bites, how can newspapers differentiate themselves from other available information sources when most of the niche areas are already occupied? Newspapers seem to be caught in the crossfire of a cultural quandary: how can society evolve to an attitude where being right is more important than thinking one is right, even if that involves abandoning an initial hypothesis?

Of course one could make the valid argument that newspapers have only themselves to blame for their problems, as they were slow to recognize and act on the opportunities and threats presented by the Internet. Because of this delay, the situation can be likened to that of a cancer patient who is told that conventional treatments are no longer viable and only a series of experimental treatments can result in remission, even though those experimental treatments may fail and significantly shorten the patient’s lifespan. Clearly any rational actor undergoes the experimental treatments, because otherwise the only outcome is death. Such is the action that newspapers need to take if they wish to avoid becoming a relic of the past.

So what should newspapers do? First, all newspapers need to apply a micro-payment system to online material in which either a subscription is required to access any material or readers use a ‘pay-as-you-go’ plan. The best scenario is for the publication to offer both options, selected when a reader registers for access and changeable at any time afterward; any change to the payment preference, however, would not take effect until the first of the next month. Under the ‘pay-as-you-go’ option, readers would be entitled to a free preview of the first paragraph of an article along with information about how long the article is and its key discussion points. Such transparency should be viewed as proper, not only because it is right, but also because prospective customers may become agitated after purchasing an article that, thanks to a misleading headline, turned out to be about something else entirely. Billing could be conducted either through a system like PayPal or with a simple check at the end of the month.
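To make the proposal concrete, here is a minimal sketch of how such a two-option system might be modeled; the class names, the $0.25 per-article fee and the billing details are hypothetical placeholders, not anything a particular paper has implemented.

```python
# Hypothetical sketch of the two payment options described above:
# a flat subscription, or a 'pay-as-you-go' plan with a free preview
# of each article's first paragraph, length and key discussion points.
from dataclasses import dataclass

ARTICLE_FEE = 0.25  # assumed flat per-article charge; the real fee is an open question

@dataclass
class Article:
    headline: str
    paragraphs: list
    key_points: list

@dataclass
class Reader:
    plan: str = "subscription"   # or "pay_as_you_go"
    pending_plan: str = ""       # applied on the first of the next month
    monthly_bill: float = 0.0    # settled via PayPal or a mailed check

    def preview(self, article: Article) -> dict:
        """Free preview available before purchase."""
        return {
            "headline": article.headline,
            "first_paragraph": article.paragraphs[0],
            "length_in_paragraphs": len(article.paragraphs),
            "key_points": article.key_points,
        }

    def read(self, article: Article) -> str:
        """Return the full article, charging per article when applicable."""
        if self.plan == "pay_as_you_go":
            self.monthly_bill += ARTICLE_FEE
        return "\n\n".join(article.paragraphs)

    def request_plan_change(self, new_plan: str) -> None:
        """Queue a plan change; it takes effect on the first of the next month."""
        self.pending_plan = new_plan
```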

In addition to providing pay-per-view content, newspapers need to ensure that individuals or groups do not manipulate the system through loopholes, for instance one person registering for content and then posting that content verbatim on a blog so his or her readers can view it for free. Therefore, newspapers need to disable readers’ ability to copy and paste their material. One way to accomplish this is to convert all material to protected PDF files in which the save and copy/paste options are not available (the print option remains usable). Such a system significantly limits the ability of an individual or group to take that information and post it on another site, circumventing the payment policy.
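As a rough illustration of the protected-PDF idea, the sketch below uses the pikepdf library (one possible tool, not necessarily what a newspaper would choose) to save an article with permissions that forbid text extraction while leaving printing enabled; the file names and owner password are placeholders.

```python
# Sketch: encrypt a PDF so copy/paste (text extraction) is disallowed
# while printing remains permitted. Requires the pikepdf package.
import pikepdf

with pikepdf.open("article.pdf") as pdf:          # placeholder input file
    pdf.save(
        "article_protected.pdf",                  # placeholder output file
        encryption=pikepdf.Encryption(
            user="",                              # readers need no password to open
            owner="publisher-secret",             # placeholder owner password
            allow=pikepdf.Permissions(extract=False),  # block copy/paste; other permissions keep their defaults
        ),
    )
```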

Newspapers also need to eliminate free linking to their content, a practice used by many secondary news sites like the Huffington Post, Drudge Report, Newser, etc. It makes little sense to give secondary sites the fruits of the papers’ labor at no cost when both institutions compete in the same field. It can be argued that these secondary news sites provide a publicity service for the primary sites, in that a reader who wants to know more or read the original story will visit the primary site, a visit that might not have occurred without the secondary site. Such an argument is rather weak, because there is little reason to believe that the number of readers directed to the primary sites via the secondary sites exceeds the number of readers who never visit the primary sites at all because they get their information from the leeching secondary news sites.
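One crude way to discourage free linking, sketched below under the assumption that the paper serves its own pages, is to check the HTTP Referer header and route readers arriving from known secondary sites to the payment page instead of the article; the domain list and function are illustrative only.

```python
# Sketch: send readers who arrive via secondary news sites to the
# payment/registration page rather than the free article text.
BLOCKED_REFERRERS = ("huffingtonpost.com", "drudgereport.com", "newser.com")

def resolve_request(referer_header: str, article_page: str, payment_page: str) -> str:
    referer = (referer_header or "").lower()
    if any(domain in referer for domain in BLOCKED_REFERRERS):
        return payment_page    # traffic from an aggregator hits the payment wall
    return article_page        # direct readers (or subscribers) get the article
```

Referer headers can, of course, be stripped or forged, so a check like this only raises the cost of casual link-through traffic rather than blocking it outright.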

A major criticism of both the micro-payment and anti-linking strategies is that the entire newspaper community would need to take these actions jointly; otherwise their application will more than likely fail, hastening the demise of the publications that apply them. The unpredictability of outside parties makes this concern more dangerous than it should be. Initially such a concern seems unwarranted, because the application of micro-payments and anti-linking is essential to the survival of the newspaper industry, and not participating guarantees failure. However, that contention breaks down if even a small number of major players do not participate. For example, if all newspapers adopted the above strategies except for the New York Times, the New York Times could survive because, by maintaining relatively free content, it would theoretically draw more readers, which would then result in more advertising dollars, ending in the demise of all newspapers that adopted the strategies and cementing the survival of the New York Times.

That example basically sums up the real problem for the newspaper industry in general: a lack of industry trust brought on by competition and capitalism. If the industry simply resolved to cooperate in defeating the secondary news sites that produce little to no original content, it could be done with little damage; instead, all sides seem more interested in acquiring whatever advantage they can in the competition against each other. A similar situation is illustrated by the famous Prisoner’s Dilemma. Recall that in the Prisoner’s Dilemma two criminals are accused of a crime and, during the interrogation, each is asked whether he committed it, with each prisoner’s penalty determined by both his own response and the response of the other prisoner. The penalty possibilities are as follows: if both proclaim guilty, each serves 1 year; if both proclaim not guilty, each serves 5 years; and if one proclaims guilty while the other proclaims not guilty, the one who proclaims guilty serves 10 years while the other goes free.

The Prisoner’s Dilemma is the classic example of a Nash equilibrium: the idea that each player takes the action that results in the best outcome for himself, assuming that all other players are acting with the same motive. Given these penalties, it makes the most sense for each prisoner to proclaim not guilty, because if the other prisoner proclaims guilt the first prisoner goes free instead of serving 1 year in jail, and if the other prisoner proclaims not guilty the first prisoner serves 5 years instead of 10. Based on this reasoning, both prisoners will always end up serving 5 years in jail.

However, such a plan of action is collectively self-defeating: in the Prisoner’s Dilemma both prisoners know the consequences of each plea, and if both acted to maximize their joint benefit both would proclaim guilt, guaranteeing only 1 year in jail each. The only thing stopping such a result is that neither prisoner believes the other will act in the best interests of the group rather than himself, even when acting in the best interest of the group also produces the better outcome for each individual (1 year in jail versus 5).
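The reasoning above can be checked mechanically; the short sketch below simply encodes the sentences from the text (1 year each for mutual guilty pleas, 5 years each for mutual not-guilty pleas, and free versus 10 years for a split) and confirms that ‘not guilty’ is each prisoner’s best response no matter what the other does.

```python
# Sentences (in years) for (my plea, other's plea), taken from the scenario above.
YEARS = {
    ("guilty", "guilty"): 1,
    ("guilty", "not guilty"): 10,
    ("not guilty", "guilty"): 0,
    ("not guilty", "not guilty"): 5,
}

def best_response(others_plea: str) -> str:
    """The plea that minimizes my own sentence, holding the other's plea fixed."""
    return min(("guilty", "not guilty"), key=lambda plea: YEARS[(plea, others_plea)])

for others_plea in ("guilty", "not guilty"):
    print(f"If the other pleads {others_plea}, my best response is {best_response(others_plea)}")
# Both prisoners therefore plead not guilty and serve 5 years each,
# even though mutual guilty pleas would cost only 1 year each.
```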

So what would the dilemma look like for the newspaper industry?


Gee, that payoff structure looks remarkably similar to the Prisoner’s Dilemma: adopting the micro-payment and anti-linking strategies plays the role of proclaiming guilt, while keeping content free and poaching readers from the papers that charge plays the role of proclaiming innocence. The only way the newspaper industry survives is if the papers trust each other to put the interests of the industry ahead of the interests of any single paper; otherwise, all of the newspapers will probably go out of business because of declining revenues.

If things work the way they should (famous last words), then the application of micro-payment and anti-linking strategies should draw traffic away from secondary news sites as well as increase revenues. But what about primary news outlets that exist on television and provide online content (CNN, MSN, etc.)? Backed by their television advertising revenue, they can still afford to provide their content for free. The best way, and maybe the only way, to combat the free and reasonable-quality content of these sites is to link the online and offline material so as to remove redundancy and expand the overall content.

Redundancy between online and offline content is a continuing problem that reduces the ability of a newspaper to differentiate itself from its peers and other alternative sites. Logically, it could be argued that redundancy is necessary because not everyone has access to online content, so it is important that those readers have the opportunity to acquire the same information as those online, largely through the physical paper. In fact, because of their experience with physical papers, newspapers normally have material available offline that is not available online. The problem, however, is that none of that information is particularly unique or even important. If newspapers instead designated one medium for in-depth material and the other for sound-bite material, it would create a specific niche for each money-making tool in their arsenal.

For instance, the offline material could carry the generic news as it currently does, with closing remarks suggesting that a reader who wants to understand more about the central point of an article visit a specific companion piece written exclusively for the online version of the same publication. In contrast, the online version would have little in the way of generic news but would offer exclusive analysis and content that significantly expands on what is generically reported for a given topic, or vice versa. Specific commentators could also be confined to a single medium, either offline or online, which could drive greater attention to that medium. Under this strategy, instead of an individual visiting either the offline or online material and declining to visit the other because it is basically the same, each medium would have significantly different content, which would demand visits to both. The one caveat is that readers have to want to go deeper and learn more about a particular topic; otherwise the expanded content in the online medium will be ignored.

Newspapers also need to acquire a new advantage in access. The average blogger may now have access to the same information sources once held exclusively by newspapers, but newspapers still have a resource advantage. Using that advantage, newspapers can change the context of access from exclusivity to depth. Whereas a blogger may be able to talk to a single Senator on the finance committee about the role of the SEC in future Wall Street regulation, a newspaper like the Washington Post has the capacity to convene a roundtable involving a much larger number of Senators on the committee for an honest and objective discussion about future Wall Street regulation. Reporting on the topic in such a way would clearly appeal to those with an interest in what new protections and restrictions are likely when participating in the stock exchange, much more so than watching Jim Cramer play with sound effects.

In large part, newspapers need to drive a new ‘smart’ revolution in which the 60-second sound bite is unacceptable; social questions and the news in general need to be examined from all angles. No more allowing a sound bite like ‘death panel’ to be legitimized without extensive examination of its validity. No more declaring a particular health plan great or poor without knowing its intricacies. No more giving equal time to opposing sides of a given topic when one side is clearly not factually accurate.

Overall, newspapers have the tools to put off their demise, but they must have the intelligence, guts and trust to use them. If newspapers do not evolve and apply the necessary changes, it will be a significant loss for society. With the loss of each legitimate and reasonably objective newspaper, another unique viewpoint is lost, which further diminishes the probability that individuals in society can come to proper conclusions regarding various problems.

Medical Malpractice Insurance Cap

One of the chief reasons for the perpetual increase in healthcare costs is the administration of unnecessary tests to patients whose conditions should be relatively straightforward to diagnose. It has been argued that physicians order these unnecessary tests because they are concerned that a misdiagnosis will later result in a malpractice suit against them. In response to this perceived concern, the Republican leadership and many of its disciples continue to propose limiting the monetary redress an individual can receive from a malpractice suit. By their reasoning, such a cap on malpractice awards would significantly limit the zeal with which trial lawyers pursue unnecessary malpractice lawsuits, lowering the probability that malpractice insurance rates would rise based on improper diagnostic expectations, and thus finally resulting in fewer tests being conducted and lower overall medical costs.

Lost in the sound bites of healthcare reform and tort reform is the question of how many medical malpractice suits are actually filed in a given year and what the average financial weight of those suits is. Approximately 85,000 malpractice suits are filed annually, with financial redress ranging from 500,000 to 1,000,000 dollars.1 There is uncertainty about whether medical malpractice lawsuits have increased medical costs due to the size of the redress awarded.2,3 Finally, in reference to the main point of ‘defensive’ medicine made by those supporting caps on malpractice redress, approximately one-third of all malpractice suits involve a mistaken diagnosis, and roughly 5-10% of total healthcare costs are attributable to ‘defensive’ medicine.4,5
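For a rough sense of scale, the back-of-envelope sketch below multiplies the cited filing rate by the cited award range; it assumes, purely to bound the total, that every filed suit paid out within that range, which the cited sources do not claim.

```python
# Back-of-envelope on the figures cited above (roughly 85,000 suits per year,
# awards of about $500,000 to $1,000,000). Treating every filed suit as if it
# paid in that range is an assumption made only to bound the total exposure.
suits_per_year = 85_000
low_award, high_award = 500_000, 1_000_000

low_total = suits_per_year * low_award     # $42.5 billion
high_total = suits_per_year * high_award   # $85.0 billion
print(f"Implied upper bound on annual redress: ${low_total/1e9:.1f}B to ${high_total/1e9:.1f}B")
```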

The initial portion of the above premise makes sense: a portion of escalating healthcare costs can be attributed to the administration of unnecessary tests meant to rule out the possibility that a patient has contracted a very rare disease. Unfortunately, the proposed solution is shortsighted, irresponsible and does nothing to neutralize the real cause of these excessive tests (more on that later). The problem with placing a cap on monetary redress for medical malpractice is that the proposed legislation is universal; it would apply to all medical malpractice lawsuits even if the physician were completely and utterly negligent. For example, if a physician severed an individual’s vocal cords during a procedure, resulting in the permanent loss of speech, a malpractice cap would still limit the patient to a pre-set amount of redress. It is impractical and unfair to penalize an individual who has genuinely been mistreated by a physician solely to guard against the fear that less scrupulous lawsuits would result in inflated and unjustified monetary awards.

Instead, a more appropriate and flexible solution, if one were really interested in limiting needless medical malpractice lawsuits, would be to evaluate the credibility of a malpractice suit before it could proceed to trial. In essence, create an independent state or federal board that would review the merits of a medical malpractice suit and either find evidence supporting the patient’s argument or find that a unique set of circumstances, not neglect, was responsible for the event triggering the suit. This board would act as a grand jury of sorts for the civil division with regard to medical malpractice suits; it would either allow the case to proceed to trial or ‘no true bill’ the case, which would preclude it from proceeding to trial without new evidence. Unlike the cap proposal, this method treats each situation as unique and judges on a case-by-case basis instead of classifying everything under the same umbrella.

One concern with the above idea is whether using such a review board as a proverbial ‘gatekeeper’ would violate an individual’s 5th or 14th Amendment right to due process (federal/state). If none of the participating board members has any conflict of interest with insurance companies, hospitals or either litigant, and the process is transparent in its procedure and rulings, then it can be reasoned that due process is not violated. Another point in the review board’s favor is that groups have occasionally challenged the constitutionality of the grand jury system in the past and have failed in every instance. With regard to the statute of limitations for filing a medical malpractice suit, filing with the review board would constitute proper filing. A ruling favorable to the plaintiff would eliminate the statute of limitations, allowing the plaintiff to proceed to trial or work toward a settlement at any later time.

The makeup of the board is important for fairness and impartiality; therefore, vetting individuals will be important, as will identifying those with the expertise needed to judge case validity. The two occupations best suited to board participation appear to be physicians (preferably retired) and medical ethicists. The number of board members would be either 7 or 9, depending on budgetary allotments and the availability of vetted participants. All transactions between the board and the reviewed cases would be entirely anonymous: board members would have no information identifying either litigant, the insurance company involved or the hospital involved, just the factual information and depositions for the case, and the litigants would have no information regarding the identities of the board members. These steps are taken to reduce the probability of reprisal or conflicts of interest. Transparency concerns should be neutralized by state or federal supervision ensuring that the board members are qualified and impartial; thus, even though their identities are unknown, there is no reason to suspect foul play.

There may be an initial concern that, if all of the proceedings are kept anonymous, the depositions will not provide enough information for the board to make a proper decision. However, this concern is unwarranted because the purpose of the board is not to reach a conclusion regarding guilt or innocence, but to determine whether enough information exists and the circumstances warrant such a judgment; therefore, direct questioning of potential witnesses is unnecessary.

Overall, the clamor surrounding medical malpractice reform, even using the review board proposed above, is rather beside the point because it does not attack the root of the problem of defensive medicine. First, a significant factor in increasing healthcare costs ties directly to the continuing increase in average life expectancy, something that cannot be helped; live to a greater age and there is a higher probability of experiencing detrimental health afflictions. Second, society in general seems to be focusing less on maintaining overall health, with increasing obesity rates and a more sedentary lifestyle, especially among children, which increases the probability of health-related problems. Third, the era of WebMD has given rise to self-diagnosing patients who lack the expertise to properly identify correlations between symptoms and causes. Instead of realizing that there is a 99% chance that symptom x is caused by common condition y, these patients treat common condition y and rare condition z with equal weight and thus demand expensive tests to verify the correct condition.

This third condition points to an underlying problem in society, the ‘normal expert’: the stance that expertise is only valid when the expert agrees with the opinion of the person questioning the expert. Basically, the patient disregards the physician’s diagnostic expertise, placing his or her own diagnostic ability on an equal level, eliminating trust and leading to demands for expensive tests. Unfortunately there is little the physician can do, because if he or she elects not to put the patient through these tests on the rationale that they are a waste of money, the patient will more than likely leave and visit other physicians until finding one who will perform the tests. Therefore, because performing the tests for patients of such a psychological make-up is a foregone conclusion, the initial physician might as well administer the tests and collect the associated fee.

Overall, addressing the above factors will do much more to derail the increasing costs of healthcare than establishing a detrimental and unfair cap on medical malpractice redress. Even if medical malpractice suits become more problematic in the future, a review board that judges the validity of claims seems far more practical than a universal cap that penalizes legitimate and illegitimate claims equally.



==
1. Hyman, David, and Silver, Charles. “Medical Malpractice Litigation and Tort Reform: It's the Incentives, Stupid.” Vand. L. Rev. May 2006. 59(1085): pp 1089.

2. “Medical Malpractice Insurance: Multiple Factors Have Contributed to Increased Premium Rates.” General Accounting Office (GAO). GAO-03-702. June 2003.

3. Lewis, L, et al. “Faulty Data and False Conclusions: The Myth of Skyrocketing Medical Malpractice Verdicts.” Commonweal Institute. 2004.

4. “The Great Medical Malpractice Hoax: NPDB Data Continue to Show Medical Liability System Produces Rational Outcomes.” Congress Watch. January 2007.

5. Phillips, RL, et al. “Learning from malpractice claims about negligent, adverse events in primary care in the United States.” Qual Saf Health Care. 2004. 13(2): 121–6.