Saturday, December 22, 2012

Man or Puppet

One of the most enduring and controversial philosophical questions is whether or not humans have free will. The importance of this controversy stems from its personal association with ego and the nature of societal evolution. Unfortunately there are times when this question becomes artificially more controversial because the parties are not arguing from a shared definition. Basically, when appropriate boundary conditions are not applied, arguments boil down to comparing apples and oranges, creating confusion and inconsistent conclusions. Therefore, it is important to state in clear, specific language what definition of ‘free will’ is being used when arguing either in its favor or opposition. Note that the terms unconscious and subconscious will be used interchangeably.

Typically there are two legitimate definitions regarding free will: 1) The ability to choose between a set of options, with the potential to develop different outcomes that can shape the future stemming from that choice. 2) Everything in 1) with the additional ability to formulate the set of options from which the choice will be made. Note that these definitions do not include weight or power in the decision-making process. Just because an individual does not have significant influence to directly create a different outcome does not mean that the individual does not have free will.

For the purposes of this discussion ‘free will’ will be acknowledged as the cause of the action both in origin and execution, i.e. the second definition above. Note that it is not appropriate to view free will as “the ability to select an action which leads to the fulfillment of a desire” or “selecting based on existing values and characteristics to achieve a desire.” Such definitions are limiting because they do not conflict with determinism: they never identify whether or not the person was actually in control of the decision-making, thus undercutting the chief issue of debate regarding free will.

What is the argument surrounding this definition? Realistically the question of free will is derived from two aspects of human thought: creation of choice and selection of choice (action). Thus, human thought can be broken down into three different constructs: 1) Using sensory, biological and quantum information an individual can consciously create a list of options regarding a given situation and then make a decision from those options. 2) Subconscious thoughts using sensory, biological and quantum information create a list of options that are then ‘transferred’ to the conscious mind, and a decision is then consciously made from those subconsciously created options. 3) Both options and decisions are significantly influenced by the subconscious mind in such a way that free will is basically non-existent. Some would argue that there is a fourth option in which all choices are created and all actions are taken randomly, but discussion of such an option makes little sense because nothing substantial can come from it due to the inelastic nature of the argument. A side question also exists: supposing option 2 or 3 is correct, can conscious decisions (even at the low influence levels of 3) influence the subconsciously created list?

The chief camp opposed to free will, determinism, is best classified as aligned with option 3. Determinism concludes that all actions are predetermined, with the present dictating the future entirely and necessarily (every occurrence results from prior events); thus free will does not exist because no conscious choices exist, because no alternative options exist: only one path is ever taken. When viewed through the lens of a math problem, free will can be regarded as x + y = z where x is positive and z is greater than 5 (some boundary conditions exist, for one cannot execute any desired action), whereas determinism can be regarded as past and existing conditions demanding that x = 2 and y = 4, thus one will always choose z to equal 6.
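The contrast in the analogy above can be sketched in a few lines of code; the candidate ranges for x and y are arbitrary illustrative values, not part of the original analogy:

```python
# Toy sketch of the x + y = z analogy.
# Free will: many (x, y) pairs satisfy the boundary conditions
# (x > 0 and z > 5), so many futures z are reachable.
# Determinism: prior conditions fix x = 2 and y = 4, so z is always 6.

def free_will_outcomes(x_candidates, y_candidates):
    """All outcomes z permitted by the boundary conditions x > 0 and z > 5."""
    return sorted({x + y for x in x_candidates for y in y_candidates
                   if x > 0 and x + y > 5})

def deterministic_outcome():
    """Past conditions demand x = 2 and y = 4, so z is always 6."""
    x, y = 2, 4
    return x + y

print(free_will_outcomes(range(1, 4), range(3, 6)))  # several possible z values
print(deterministic_outcome())                       # always 6
```

The point of the sketch is only that the free-will picture admits a set of reachable outcomes while the deterministic picture admits exactly one.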

Some determinists even extend the argument further: because physics is reversible, one could argue that all effects create their initial-state causes. However, such a mindset is only technically correct, not practically; spill one billion cups of coffee and the coffee will never ‘magically’ jump back into the cup from the floor, even though it is theoretically possible. For humans, all actions are traced back to biological events in the body and brain through sensory and information processing beyond conscious consideration. Determinists also like to point out that if determinism is false then actions are random, and the randomization of actions also eliminates the existence of free will.

The nature of thought is an important consideration when discussing free will. What makes up a thought? Some determinists argue that free will does not exist in part because a vast majority of the information processed in the brain is unconscious. Therefore, how can free will exist if a majority of the methodology that governs the choices one makes operates outside of consciousness? Support for the above belief is drawn from one of the most famous neurobiological experiments thought to concern free will. In the 1980s Benjamin Libet conducted an experiment to observe the readiness potential in relation to perceived conscious thought. After the conclusion of the experiment it was determined from EEG recordings that the readiness potential occurred, on average, 350 milliseconds before a subject had the conscious inclination to act.1

Based on this information, Libet and his supporters argued that the brain had already decided to act on a subconscious level before consciousness had determined to act; thus it was extrapolated that any action should have this same subconscious component, so no action was based on conscious decision-making, making free will an illusion.1,2 However, Libet did not totally forsake free will, despite what some may think. Due to a 100 to 200-millisecond separation between conscious recognition of an action and the actual physical action, he concluded that humans had a conscious ‘veto’ power with which they could prevent actions, thus maintaining an aspect of free will. Some reinforce this empirical evidence of unconscious processing with the mindset that one cannot know until one knows. Basically, a person cannot know what a thought will be until it occurs, and this mindset is established by unconscious elements along with other physical and mental elements that are beyond the conscious mind.

However, there are interesting side questions to this issue regarding both the readiness potential and the so-called ‘veto’ power. Incorporating how the brain creates thoughts is necessary to judging the validity of free will. It is not appropriate to presume every depolarizing neuron in the brain is akin to a thought. At a neurobiological level, thoughts are created when a large enough depolarized neuronal network has formed throughout various locations in the brain. Therefore, in the nature of information processing, it seems inappropriate to treat all neuronal firing that fails to create a thought as evidence of information processing supporting a deterministic worldview. Thus, the so-called pre-conscious decision-making process associated with the readiness potential may simply be spontaneous brain activity.

One consideration is to suppose that subconscious information processing is not able to produce the necessary depolarization cascade to produce a thought without an associated conscious addition. For example, one may need to apply 10 psi to crush object A, but hand A can only produce 7 psi. No matter how long hand A applies pressure to object A, it will never produce enough pressure to crush it. Therefore, hand B will have to be applied to produce the additional required pressure, just as a conscious thought may be required to depolarize enough neurons in the proper areas of the brain, either through elimination of inhibition or additional depolarization, to produce a given thought. There is a belief that consciousness is dependent on information feedback between different regions of the brain versus an activation hierarchy. If this is the case, one can argue that consciousness produces all thought, because without consciousness there would be no thought, just incomplete information processing, and with no thought there would be no action.

Libet and others have also considered this conscious component (referred to as a ‘trigger’), but he and others preferred the veto ability. Unfortunately there are two major elements of veto theory that raise concern. First, if the action is entirely based on unconscious signaling (outside conscious control), then how could consciousness “know” what to veto? In some contexts consciousness has to almost guess which action to stop. From Libet’s experiment there is a very small window (100-200 milliseconds) in which the conscious inclination regarding the action is developed before the action actually takes place, but it is difficult to reason that consciousness could identify the exact type of action and inhibit it in such a short time frame. Granted, the lack of knowledge of the brain leaves the possibility open, but it does seem unlikely. Another option may be that consciousness has some ‘universal veto’ ability that is able to inhibit any unconsciously provoked action.

Second, there seems to be a conscious inconsistency with a veto power. The veto power is associated with consciousness, but the principal action is unconscious, which implies a sense of randomness associated with existing conditions. However, people do not tend to have perceptions consistently pop into their heads like ‘Don’t wave my left arm’, ‘Don’t stomp my right foot’, ‘Don’t smile’, etc. Would such behavior, in a conscious context, be expected of a conscious veto?

However, despite the ‘trigger’ or ‘veto’, it can be argued that conscious control does not govern all aspects of thought. Two separate types of thought can be expressed. The first type can be classified as stray thoughts, those without rational cohesion. These thoughts spontaneously appear in the consciousness, but are improperly constructed and typically do not make sense; for example, the thought of a purple dragon that blows cotton candy instead of fire appearing in the mind. These types of thoughts can also be associated with typical dream states and epiphanies (like the famous example of the aromatic structure of benzene being thought of as a snake eating its own tail). The second type can be classified as cohesive thoughts. These thoughts are triggered by conscious involvement and make much more sense than stray thoughts. Due to the involvement of the conscious mind, cohesive thoughts involve a better understanding of how a person perceives and responds to the existing surrounding space (self, inanimate objects and animated objects).

Others would counter such a claim by asking where the conscious aspect of thought originates if it does not derive from the unconscious aspect. That is the big question and is currently unanswerable based on current neurobiological knowledge, but there is precedent for the elastic nature of the unconscious/involuntary mind. Zen monks, through a strict meditation regimen, are believed to be able to develop the ability to voluntarily influence previously involuntary biochemical and neuronal processes in the body.3,4

In addition, one point of contention with individuals who claim that either alien hand syndrome or anarchic hand syndrome supports a lack of free will is that these conditions are products of a broken system (a damaged brain). It is not appropriate to draw conclusions about a working and functional system by observing the behavior of a broken one. As to the theorized conscious requirement for movement being disproved by these syndromes: again, the broken system could remove conscious or unconscious inhibitory controls that would originally prevent the movement, controls which would otherwise require conscious intervention to overcome (the primary motor cortex’s relationship with the premotor cortex).5 Eliminate this inhibitory action by breaking the system and conscious intervention is no longer required.

It would be similar to having a sluice controlling water flow through a dam and then punching a hole in the dam. After creating the hole the sluice does not have much of a purpose anymore with regards to controlling the flow of water. There is evidence to suggest that the unconscious behavior is the result of autonomous activity in the primary motor cortex with a lack of input from the premotor cortex, thus offering an explanation for unconscious actions in these situations.5,6

Likewise, it is not appropriate for individuals to claim that involuntary movement resulting from the application of an external electrical stimulus to the brain supports a lack of free will. The brain functions by sending electrical and chemical signals between neurons that culminate in the production of thought and action. Typically the applied electrical stimuli in these experiments are of sufficient magnitude that they create a depolarization cascade large enough that normal operations can do little to stop it. It would be akin to stating that sea walls do not function as designed on any level when a 10 ft. wall fails to stop a 30 ft. tsunami. Also, when the electrical stimulus is applied there is no direct knowledge of whether ‘unconscious’ or ‘conscious’ neurons are chiefly influenced.

Another argument made by determinists is that belief in free will is an emotional reaction by individuals who must be ‘reassured’ that their interactions with others and society make logical and emotional sense, especially in the context of morality. Unfortunately this position is not a useful point of argument because it does nothing to address the question of whether or not free will exists. Whether or not someone believes something exists has no influence on whether it actually exists on an absolute level, although it is understandable why such a mindset would occur naturally in most people.

Others argue that determinism is essential because if one is responsible for actions taken/decisions made in one given situation then one must be responsible for how one is as a person. However, it is argued that this premise cannot be correct because at some point there must have been an origination of the person being that person. Basically, humans cannot create themselves or their mental states ex nihilo. However, this belief seems to be dependent on the assumption that free will exists from birth… what if that is not the case?

Can one claim that an individual has free will upon birth? Based on the first definition provided above the answer could be yes depending on one’s personal viewpoint regarding choice selection. Based on the second definition provided above it is difficult to conclude yes because without having an understanding of self one cannot understand the rationality behind the choice offered, thus one cannot influence those choices with the conscious aspect of the thought. No infant upon birth has an understanding of self, thus no infant can have free will based on the second definition. Until that recognition of self occurs free will cannot exist. Upon recognizing self one creates conscious influence on existing boundary conditions and can exert influence on both choice creation and choice action. Self in this situation could arise from the principle of emergence based on a maturation of neuronal processing.

One confusing aspect of determinism is the attempt to argue that fatalism is not an inevitable response. Determinists like to argue that people confuse fatalism and determinism. The argument is that individuals become skeptical that they can control their desires and motivations, thus they elect to not even try. Another way to state this issue is that those suffering from fatalism believe that their choices have causes, but no resultant effect, thus they have no influence or power. However, the only difference between fatalism and determinism is state of mind. The reality for both determinism and fatalism is the same: one is powerless to create a different future from what is already determined by the existing conditions created by past events. Whether the inevitability of the future is due to causality or not is rather irrelevant.

The problem with attempts to differentiate determinism and fatalism is that they do not make any sense. If determinism is correct then individuals cannot control what happens, so it does not matter whether or not they try to apply effort to influence their actions because those actions occur anyway. Also, even in fatalism an individual’s choices have causes and effects, because the basic concept of determinism is built upon that reality. Suppose John wants to give flowers to Suzie, so he orders flowers from Delivery Company A (Action 1). Delivery Company A assigns the flowers to be delivered by Jason (Action 2). Jason delivers the flowers and Suzie gives John a kiss (Action 3). Even if Jason has no choice in the matter (action 2 must be taken), the completion of action 2 is required for the execution of action 3 at that particular moment, thus action 2 has a cause and an effect. Granted, Jason is still powerless, but his ‘choice’ is required for the world to function, thus Jason makes the choice to deliver the flowers no matter what because that is what the past and current boundary conditions force him to do. Overall, based on existing empirical evidence and logic, it is difficult to conclude that determinism, as a matter of direct influence of choice, is a serious threat to the existence of free will.

However, removing the threat of determinism from choice selection is not the only barrier to supporting free will. For the purposes of this debate only option 1 corresponds to free will. Almost immediately option 2 could create a problem for those believing in free will. A more extreme position on free will is taken by Descartes and Jean-Paul Sartre, claiming that humans have a form of ‘absolute freedom’ where the only restriction on free will is that it must always be free. This position does not appear to be correct based on how information is processed in the brain, because there are unconscious inputs that more than likely will always remain unconscious and thus cannot be influenced by the conscious mind. These unconscious inputs will place inherent boundary conditions on the means by which free will can operate, limiting the number of choices that can be created even if the conscious mind can create the choices. Realistically the only way to argue a position of ‘universal’ freedom for free will is to believe that free will is driven by an element that exists beyond these unconscious processing factors, a soul for instance.

Some liberalists (free will advocates) argue that even though free will is not absolutely free, the fact that people can make choices still means that free will exists. It is similar to saying that something is not true because there is a lack of absolute knowledge. This statement can be divided into two different aspects. First, absolute freedom means the ability to do anything, even if it violates the laws of physics. Clearly it is inappropriate to take this definition of absolute freedom. Second, absolute freedom means the ability to do anything within the boundary conditions created by uncontrollable elements like the laws of physics, chemistry, etc. The concern with this explanation is: can something really be considered free will if the available choices one can choose from are created outside of that freedom?

Some attempt to provide support for an unconscious choice creator by demonstrating the dominant role the unconscious mind plays in behavior. The interesting aspect of this argument is that the unconscious mind does not come into being as a complete, rigid entity that cannot gain new information processing abilities or capacity as one ages. Like the conscious mind, the unconscious mind grows and learns over time, most likely with a heuristic processing methodology. It could be reasoned that this learning method could be influenced by conscious action, either through the expansion of existing ideas into new, previously unconsidered ideas (a spark of inspiration) or through continuous specific decision making invoking greater long-term potentiation (LTP) probability. Countering the determinist mindset of ‘one cannot know until one knows’, LTP can actually create higher probabilities of knowledge prediction. In one respect, through LTP, conscious decision-making could influence unconscious choice creation to the point where it becomes more conscious than unconscious, by ‘rigging’ the heuristic processing of the unconscious part of the mind.

Libertarian free will supporters also attempt to break through determinism by arguing that the indeterminism of quantum mechanics creates sufficient randomness so that past conditions cannot have only a single outcome, and without determinism free will exists by default. Ignoring for the moment that this randomness does not address the potential choice creation of the unconscious mind, determinists also raise the question: if quantum randomness exists, then how can one conclude that this randomness, rather than the individual actor, is not the governing factor in decision making and action? Some even go so far as to suggest that any real randomness would make the whole world independent of any earlier states.

The problem is that the anti-randomness group only seems to focus on the extreme aspects of randomness. This belief is silly because the world does not exist as a string of numerous random elements placed together. Instead, random events can occur, but the extent of those events is controlled by various existing boundary conditions, which are established by past events/actions and existing conditions (including one’s brain function). To John Fiske and others these boundary conditions are what prevent a sane mother from strangling her first-born child. Note that whether one wishes to suggest that these boundary conditions foreclose real randomness is a matter of question. Is real randomness the possibility that everything could happen, or that something (but not necessarily anything) randomly happens? Free will is based on the second notion; for example, mental illness could be viewed as the loss of the conscious second parameter of choice, thus random actions are taken within the boundary conditions available.

Finally, between the determinists and liberalists are the compatibilists, who maintain that determinism is compatible with free will. Sadly, compatibilists are rather pointless players in the real debate regarding free will because they do not seem to actually want to debate its existence. Compatibilists define free will as “the freedom to act according to one’s determined motives without hindrance from other individuals.” Recall that earlier such a definition was viewed as inappropriate because it does not influence the nature of the debate on the existence of free will. The concern with the compatibilist viewpoint is that they appear to only care about free will as a relative concept, not an absolute concept. Take David Hume, who states that the concept of free will spoken of by compatibilists should not be viewed as an actual choice; instead the person will always make the one decision that he/she is required by the universe to make based on the existing conditions. Basically, compatibilists believe that all that matters is that people think they have free will, not whether or not they actually have it, because in absolute terms they don’t, because it is a deterministic world.

Daniel Dennett exemplifies this viewpoint when he states that the only well-defined things are “expectations”. Without total knowledge, individuals have the ability to act differently from what anyone expects, which demonstrates free will. In some contexts this viewpoint amounts to saying that free will exists because humans do not know enough to say that it does not exist, but that does not address the real question of whether free will actually exists. Compatibilists are determinists who, for whatever reason, do not have the capacity to accept the ‘lack of control’ consequences of determinism and instead take a position that is basically a cop-out, refusing to address the real nature of free will and its validity. The only way this position is not a cop-out is to argue that humans will never know, within reason, whether absolute free will exists, and making such a statement is not wise, as the history of predicting the future has demonstrated. For example, in the 1910s one could suggest that no one would ever know what it was like to walk on the moon, and almost everyone asked would wrongly agree.

On a side note regarding the Newcomb paradox: it does not appear to be paradoxical on its face. The paradox appears to derive from inconsistent definitions of the initial conditions and the power of the Predictor. The paradox stems from a perceived conflict between two decision-making strategies derived from two separate philosophical arguments: 1) Past events cannot be affected, so a future action cannot influence a past event; 2) The prediction of the Predictor establishes equivalence between the choice and the content of the opaque box, which is determined by the prediction of the Predictor. Therefore, the choice in the future affects the past prediction.

There are two questions that influence this issue, one directly and one indirectly. The direct question is whether the dominance principle or expected utility hypothesis is the superior choice. The indirect question is whether or not free will exists. Addressing the first question the obvious choice is to select only box B (expected utility). The logic of this choice is demonstrated through mathematical probability payout. If the Predictor is correct selecting A and B will only net $1,000 whereas selecting only B will net $1,000,000.

If the Predictor is incorrect then logic entails that selecting A and B is the best course of action, netting either $1,000 over $0 or $1,001,000 over $1,000,000. However, this second reasoning is flawed because it does not take the probability of Predictor accuracy into account. Instead it only uses standard game theory reasoning at a basic single-action level (similar to the simplest Prisoner’s Dilemma scenario). The key element to this issue is the predictive capacity of the Predictor. If one believes that the Predictor has an accuracy rate exceeding approximately 50.05%, selecting only B makes more money. Normally one could conclude that because there are more possibilities below 50.05% than above one should select A and B, but the paradox presumes the Predictor is very accurate, if not completely accurate, therefore the probability that its accuracy is lower than 50.05% is zero. The problem with invoking the viability of the dominance principle in Newcomb’s paradox is that the dominance principle is flawed on its face because it does not consider scale relative to probability of occurrence.
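The 50.05% threshold follows directly from equating the two expected payouts, assuming the standard payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box when one-boxing is correctly predicted):

```python
# Expected payouts in Newcomb's paradox as a function of Predictor accuracy p.
# One-box (B only): correct prediction -> $1,000,000; incorrect -> $0.
# Two-box (A and B): correct prediction -> $1,000; incorrect -> $1,001,000.

def ev_one_box(p):
    return p * 1_000_000 + (1 - p) * 0

def ev_two_box(p):
    return p * 1_000 + (1 - p) * 1_001_000

# Break-even accuracy: 1,000,000p = 1,000p + 1,001,000(1 - p)
# -> 2,000,000p = 1,001,000 -> p = 0.5005
p_star = 1_001_000 / 2_000_000
print(p_star)                               # 0.5005
print(ev_one_box(0.9) > ev_two_box(0.9))    # True: one-boxing wins at high accuracy
print(ev_two_box(0.4) > ev_one_box(0.4))    # True: two-boxing wins at low accuracy
```

So any believed accuracy above 0.5005 makes selecting only box B the higher expected payout, exactly as argued above.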

The indirect question in the Newcomb paradox, whether free will exists, is not applicable. The reason for this irrelevance is that the issue of free will is entirely defined by the predictive capacity of the Predictor. If the Predictor is flawless then no free will exists, because the Predictor is an agent of determinism which uses past events and current boundary conditions to define the near future. If the Predictor can be incorrect then free will can exist, because there are elements that can influence events that cannot be predicted through deterministic processes. However, the individual stating the paradox determines the predictive capacity of the Predictor when outlining its boundary conditions. Therefore, the issue of free will is completely determined by the bias of the individual introducing the specific parameters of the paradox. To those who argue that free will is defined in the problem even if the Predictor is always right: that “free will” is relative free will, not absolute free will (basically the individual making the decision does not realize that he/she has no choice in the decision).

Largely the question of free will boils down to the expected definition. Based on what is known biologically, it is difficult to argue that determinism eliminates the ability to choose and for those choices to impact the future in such a way that different outcomes would fail to emerge. Most determinists focus on demanding that liberalists bear the burden of proof, when both sides actually have to prove their position, for there is no suitable default position. However, there is an issue with the legitimacy of free will being defined by how choices are created in the first place. It stands to reason that the conscious mind is not exclusively responsible for all of the information processing that allows for the creation of existing choices. The question is: how much conscious thought must go into the creation of the choice(s) for it to be judged as ‘created by consciousness’ rather than unconsciousness? The difficulty of that question makes it difficult to conclude either in the affirmative or the negative about the validity of free will. Overall, for all of the semantics that are used about free will, answering the question of choice creation, not choice action, appears to be the most important question of all.


1. Libet, B, et al. “Time of conscious intention to act in relation to onset of cerebral activities (readiness-potential); the unconscious initiation of a freely voluntary act.” Brain. 1983. 106:623-642.

2. Libet, B. “Unconscious cerebral initiative and the role of conscious will in voluntary action.” The Behavioral and Brain Sciences. 1985. 8:529-566.

3. Dooley, C. “The impact of meditative practices on physiology and neurology: A review of the literature.” Scientia Discipulorum. 2009. 4:35-59.

4. Carruthers, M. “Voluntary control of the involuntary nervous system; Comparison of autogenic training and siddha meditation.” Experimental and Clinical Psychology. 1981. 6:171-181.

5. Assal, F, Schwartz, S, and Vuilleumier, P. “Moving with or without will: Functional neural correlates of alien hand syndrome.” Annals of Neurology. 2007. 62(3): 301–306.

6. Kayser, A, Sun, F, and D'Esposito, M. “A comparison of Granger causality and coherency in fMRI-based analysis of the motor system.” Human Brain Mapping. 2009. 30(11): 3475–3494.

Saturday, December 15, 2012

Fixing a Broken Clock in College Football

One of the common arguments made by those supporting the idea that college athletes should be paid beyond scholarships is the exorbitant salaries of college coaches. For example, 42 head football coaches make at least $1 million per year, with the average salary for all coaches around $1.64 million.1 The problem with this argument is that it ignores all of the problems associated with paying college athletes and how those problems make such a desire nearly impossible; these problems were discussed in a previous blog post here. However, by itself the argument that college football coaches are paid too much money is a valid one. With more and more college students swimming in greater amounts of debt due to tuition increases, scholarship/grant decreases and worsening high-salaried job prospects, it is difficult for defenders of these salaries to continue to hide behind ‘don’t hate the coach, hate the market’ type arguments.

The failure of the market argument is that the market has been inflated and effectively destroyed because it is awash with television money. There is no correction factor because there is no incentive structure. For example, a head football coach at university A in a BCS automatic-qualifying conference can go 4-8, but because of the revenue sharing in the conference from television deals, especially in those conferences with their own television networks, the $1 million paid to that coach is not viewed as a significant loss even if gate receipts drop because of poor play. In fact, some may view the search process involved in finding a new coach as more costly than absorbing the costs associated with a mediocre coach producing mediocre results, especially for a university that does not have a history of success in football.

When identifying an object or system as broken, the natural reaction is to begin devising solutions to the problem(s) in order to repair the system. Unfortunately this reaction has skipped the high-value college athletic environment. Such a reality is sad because the fix is rather simple. Instead of providing large base salaries for coaches, universities should arrange all contracts to operate on commission, with incentives and a small base salary. An example of such an arrangement is shown below.

Instead of paying a coach $3.6 million a year, as the University of South Carolina pays Steve Spurrier, payment over a given year could be as follows:

Base Salary = $40,000
Salary increase per win over unranked team = $5,000
Salary increase per win over ranked team* = $10,000
Salary increase per win over the historical rival university = $15,000
Salary increase upon going to a non-BCS bowl game = $15,000
Salary increase upon winning a conference championship = $25,000
Salary increase upon going to a BCS bowl game = $50,000
Salary increase upon winning a national championship = $250,000

* = the increase is only valid for victories over teams ranked at the end of the year, not when they were played.

Under such a contract, if coach A led a team to a 10-3 record with 3 victories over ranked teams, the rival victory and a non-BCS bowl game appearance, he would be paid $135,000. Some might argue that such a salary is not fair, but anyone who makes such an argument has a distorted sense of importance. In the above scenario coach A makes $135,000, a salary higher than those of a large number of occupations that are more important to the infrastructure of society, including public school teacher, police officer, fire fighter, farmer, lab technician, most engineers, some general practitioners, etc. Therefore, how is awarding coach A such a salary unfair, especially when the workload of a college football coach is less than the workload of all of the above-mentioned professions?
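The schedule above can be sketched in a few lines of code. The function name and the stacking assumption are mine: treating the rival bonus as paid on top of the normal per-win amount is what reproduces the $135,000 figure for coach A (7 unranked wins, 3 ranked wins, the rival bonus and a bowl trip).

```python
# Hypothetical sketch of the commission schedule above (names are illustrative).
# Assumption: the rival bonus stacks on top of the regular per-win amount.
def commission_salary(unranked_wins=0, ranked_wins=0, rival_win=False,
                      non_bcs_bowl=False, conference_title=False,
                      bcs_bowl=False, national_title=False):
    salary = 40_000                      # base salary
    salary += 5_000 * unranked_wins      # wins over unranked teams
    salary += 10_000 * ranked_wins       # wins over end-of-year ranked teams
    salary += 15_000 if rival_win else 0         # rival victory bonus
    salary += 15_000 if non_bcs_bowl else 0      # non-BCS bowl appearance
    salary += 25_000 if conference_title else 0  # conference championship
    salary += 50_000 if bcs_bowl else 0          # BCS bowl appearance
    salary += 250_000 if national_title else 0   # national championship
    return salary

# Coach A: 10-3, three ranked wins, a win over the (unranked) rival, non-BCS bowl
print(commission_salary(unranked_wins=7, ranked_wins=3,
                        rival_win=True, non_bcs_bowl=True))  # 135000
```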

Some could argue that such a system depends too much on luck, not skill: what happens if a team in a given year is devastated by injuries, resulting in numerous losses? While such an outcome is a definite possibility, its influence should be insignificant. A chief aspect of being a coach for a given sport is the ability to design strategies that enhance the strengths of the players while concealing their weaknesses, thus a rash of injuries should not affect a good coach as much as a bad coach. Also, good coaches are able to improve the abilities of weaker players, reducing reliance on recruiting and retaining four- and five-star prep talent. Therefore, a commission system actually differentiates between a good coach and a bad coach in how they are financially rewarded for their job performance, exactly how capital markets should function.

If such a commission system is created, it is important that the conferences themselves or even the NCAA design a strong system of regulations to avoid inflation. For example, a commission system does little to restore market functionality and legitimacy if a coach at university A is awarded $80,000 per win. One possible regulation would be a maximum salary, combining incentives and base, defined as an NCAA-set percentage of total shared conference revenue plus the university-specific ticket gate. Basically, suppose university A was awarded $13 million in revenue sharing from conference television deals, etc. and made $8 million in ticket revenue. If the NCAA defined a 1% salary limit, then the maximum potential salary of the coach at university A would be $210,000. With the ticket-gate inclusion in the above example, larger universities are clearly going to have an advantage in coach recruiting because of the ability to offer higher salaries, but it is difficult to eliminate this advantage in a general market operating system. However, good coaches at smaller universities can increase their salaries by raising the profile of the football program and increasing ticket gates and interest.
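The proposed cap is a one-line calculation; a minimal sketch, using the university A numbers and an illustrative function name:

```python
# Hypothetical NCAA salary cap: a fixed percentage of shared conference
# revenue plus the university's own ticket gate (names are illustrative).
def max_coach_salary(conference_share, ticket_gate, ncaa_pct=0.01):
    return ncaa_pct * (conference_share + ticket_gate)

# University A: $13M conference share + $8M ticket revenue at a 1% limit
print(max_coach_salary(13_000_000, 8_000_000))  # 210000.0
```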

Overall, fixing the salary system in popular college athletics should free up a lot of wasted money that can be diverted to supporting other male and female sports beyond football or basketball, as well as provide additional funds for other scholarships, whether athletic, academic or special-needs. People frequently speak of allowing the market to determine the value of something, but do individuals have the will and/or intelligence to recognize when the market is broken and work to fix it accordingly?



Monday, December 10, 2012

Addressing Pain Before It Becomes Chronic

It is estimated that 50-60% of patients do not receive adequate pain control after surgical or other invasive medical procedures.1,2 Not surprisingly, when this pain is not addressed properly there are significant increases in morbidity as well as increases in short-term and long-term medical costs.1,3-9 Some estimate that approximately 116 million individuals suffer from either acute or chronic pain that is not managed properly.10 While there are various, somewhat arbitrary, time periods assigned to the development of chronic pain, the general definition is pain that persists beyond the expected period of healing for a given injury. There are two major types of chronic pain: nociceptive, which results from nociceptor (pain receptor) activation, and neuropathic, which results from damage to the spinal cord or peripheral sensory neurons.

One of the initial problems with addressing pain management is the method used to evaluate the intensity of pain. While numerous criticisms have been levied against the standard numerical pain reporting system because of its subjective, non-uniform nature (a 5 out of 10 for person A may be much different than a 5 out of 10 for person B), little attention is paid to tracking changes in pain progression. Typically post-operative pain is assessed in one or two visits by a nurse, with assignment of some pain medication. Basically the pain is viewed as a somewhat static condition that will persist at the recorded level for the duration of the day if not treated by medication. Therefore, one way of better managing pain requires more attentive inquiry into how pain is progressing in a patient over the course of a day. Instead of once or twice, inquiries should be made every hour during a normal diurnal time frame to track changes. This method will also assist in improving pain management by creating a more reliable evaluation metric for specific treatments. Some seek to measure pain by looking at how certain metabolites change in the bloodstream with time, but with currently limited knowledge of threshold concentrations it is difficult to judge how effective such a strategy would be.
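As a rough illustration of the hourly-tracking idea (the scores and function name are invented, not from any clinical protocol), recording 0-10 reports through the waking day lets a trend, rather than a single snapshot, drive treatment decisions:

```python
from statistics import mean

# Invented example: hourly 0-10 pain reports over a waking day, so treatment
# decisions can use the trend rather than one or two static readings.
def pain_trend(hourly_scores):
    """Return (mean score, net change from first to last report)."""
    return mean(hourly_scores), hourly_scores[-1] - hourly_scores[0]

scores = [7, 7, 6, 6, 5, 5, 4, 4, 4, 3, 3, 3]  # e.g., 8am through 7pm
print(pain_trend(scores))  # (4.75, -4)
```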

Another problem with pain management is addressing the development of chronic pain. The first question is whether the development of chronic pain stems from improper pain management shortly after surgery or from a traumatic event created by improper/inefficient surgery. Based on existing statistics outlining how much long-term pain the population appears to be suffering from, it is unlikely that improper/inefficient surgery is the cause of a majority of chronic pain development. Therefore, this increase in experienced longer-term pain in the population is more than likely due to improper pain management. Unfortunately the current treatment methodology must be flawed in some way, for these inconsistencies in effectively dealing with pain arise despite an increase in the overall use of opioids to manage pain.11-14

While the use of multi-modal analgesia strategies has created considerable hope for better managing pain, most notably by reducing side effects, this hope has not translated into significantly better long-term outcomes for a majority of people. The inability to attain the potential of multi-modal analgesia is largely due to the large number of variables involved in researching the effectiveness of various technique combinations (dose levels, surgery type, specific genetic factors, analgesic agents, etc.). Basically, the individuality demanded by the development and application process largely eliminates the potential for streamlining and standardizing multi-modal analgesia strategies.

One of the hallmark symptoms of pain is hyperalgesia, an increase in pain sensitivity/perception in response to pain-inducing stimuli. The typical cause of hyperalgesia is amplified and prolonged nociceptive excitation. Pursuant to this development, there is evidence to suggest that current pain management techniques are short-term gain and long-term loss when it comes to addressing pain. One of the most popular choices for managing pain is opioids, especially for acute and chronic cancer pain, but opioid use has also started to expand to chronic non-malignant pain. Opioids operate by binding to one of their seventeen different receptors, but three (mu, kappa and delta) are largely responsible for the pain-reduction ability of most opioids. Interaction with mu opioid receptors (MORs), either directly or through agonists, is the most common pain management pathway. While MORs are expressed on numerous types of neurons, the most important with regard to the propagation of pain appear to be the primary sensory neurons called small-sized (C-fiber) and medium-sized (A-fiber, specifically Adelta) neurons in the dorsal root ganglia (DRGs).14-17

When molecules interact with MORs they induce presynaptic inhibition that prevents N-type calcium channels from opening, blocking the release of neurotransmitters onto superficial dorsal horn neurons.18-20 Additional inhibition occurs through activation of G protein-coupled inwardly rectifying potassium channels on dorsal horn neurons, resulting in hyperpolarization.20,21 Basically these opioid agonists are effective at addressing pain because of this dual neutralization methodology. Note that there is also some evidence for indirect neutralization methods, such as immune cell activation of opioid receptors.22,23

Unfortunately improper activation of MORs can produce counteracting excitatory activity through the up-regulation of pronociceptive pathways,24,25 which leads to hyperalgesic effects. This specific outcome has been labeled opioid-induced hyperalgesia (OIH). OIH is characterized by an increased probability of pain in general, leading to an increased probability of developing chronic pain as well as tolerance to opioids, which decreases the ability to treat that chronic pain. There is concern that OIH routinely develops into chronic pain due to abrupt, inappropriate withdrawal of opioids leading to long-term potentiation (LTP) in the spinal cord. This LTP response is thought to derive from massive activation of NMDA receptor glutamate responses, with potential dependency on spinal cord-based TRPV1-expressing afferents along with substance P and chemokines.26-29 OIH can be either acute or chronic.30,31

Focus is applied to spinal cord and DRG LTP because it can develop due to electrical stimulation of appropriate afferents or noxious stimulation (nerve injury or inflammation).27,31-33 One common place for LTP augmentation is at synapses between nociceptive afferents and neurokinin 1 (NK1) receptor-expressing projection neurons in lamina I.27 These projection neurons are principally responsible for sending pain signals to the brain.31,34-35 In addition, there is similar pharmacology between LTP generation and long-term hyperalgesia.27 Finally, LTP development at synapses between C-fibers and superficial dorsal horn neurons is induced by abrupt withdrawal of opioids.26 This is an important point because medication in general is typically only administered until symptoms subside. Unfortunately in most situations, including opioid treatment, suddenly stopping medication can result in negative biological consequences.

OIH can ‘leak’ over into the spinal cord by promoting the activation and translocation of protein kinase C, nitric oxide and cholecystokinin, and in worst-case scenarios this development can lead to neuronal apoptosis, further increasing pain-reception problems.36-39 In some respects OIH could be viewed as initially nociceptive, eventually progressing into a neuropathic element.

However, all of this information is still indirect because LTP in the spinal cord in relation to pain has not been studied directly.27 The lack of direct testing leaves an open question regarding how long spinal LTP lasts and how fully it influences the development of chronic pain. LTP for a given group of neurons can last for hours, days, months or a lifetime, but indirect evidence suggests LTP in the spinal cord lasts for several days.27 In this light, chronic pain is thought to develop from inhibition of endogenous anti-nociceptive systems or intermittent low-level nociceptive input from peripheral neurons. For example, pain-threshold-reducing LTP is perpetuated to a chronic level through the decreased activity of endogenous anti-nociceptive systems, thus reducing the ‘natural’ abatement and aiding chronic pain development.

One of the tricky elements of addressing OIH is differentiating it from opioid tolerance. When increasing the opioid dosage for treatment of chronic pain, the reason for the increase must be identified as either opioid tolerance or OIH. In situations of tolerance it may be appropriate to increase the opioid concentration depending on the severity of the pain, but in OIH more opioids would result in a greater probability of pain. The most common strategy for treating OIH is to cease opioid treatment and substitute a non-opioid analgesic. Unfortunately non-opioid analgesics are typically not as effective as opioids and have their own side effects, thus reducing the ability to manage pain.

Differences in analgesic treatment ability have led to some rotational methodologies where opioids are used for a time and then replaced by non-opioids before a return to opioids, in an attempt to manage pain while avoiding compounding side effects from either treatment. Obviously the success of weaning a patient off of opioids as a means to treat OIH depends on the rate of OIH progression. Unfortunately it is difficult to assess the rate of advancement of OIH in a given patient. Interestingly enough, however, the future of managing chronic pain may not be developing a new pill or new multi-modal analgesia strategy, but instead developing a strategy where chronic pain does not develop in the first place.

A critical element in the pathway development for OIH is matrix metalloprotease (MMP) concentration. MMPs are a multigene family of tightly regulated zinc-dependent enzymes that maintain homeostasis through their role in tissue degradation and repair.40,41 The two MMPs that appear to play the most prominent roles in pain development are MMP-2 and MMP-9. MMP-9 is frequently released after nerve injury and directs the cleavage of IL-1b.14 Continued cleavage of IL-1b is then governed through a positive feedback mechanism with MMP-2.14,40 There is also suggestion that MMP-9 can interact with NMDA receptors NR1 and NR2B through integrin-beta1 and NO pathways.41 However, MMP-9 influence only seems to occur over a very short time frame (< 24 hrs), for after OIH acquisition the role played by MMP-9 seems to lessen significantly.14

Morphine is one of the most commonly utilized drugs for pain management and is frequently regarded as the standard for comparing the effectiveness of other pain management drugs. Through its interaction with the μ-opioid receptor, morphine chiefly acts in the posterior amygdala, hypothalamus, thalamus, nucleus caudatus and putamen, with some associated action in laminae I and II of the spinal cord. The effects of morphine interaction with its receptor are analgesia and sedation, but can also include physical dependence.

While morphine is a commonly used pain management drug, its action may have a more detrimental long-term effect in that its interaction with opioid receptors leads to induction of rapid MMP-9 up-regulation. The initial up-regulation occurs in the DRG neurons, not in the spinal cord, and activates pro-nociceptive pathways from the DRG, most notably the cleavage of IL-1b.14 The increased concentration of MMP-9 is derived not from mRNA increases, but from translational regulation.14 MMP-9 up-regulation does occur in the spinal cord after sustained morphine exposure and could play a role in opioid-induced withdrawal symptoms.41 In some context this biological response could be the body attempting to counteract the synthetic (non-natural) neutralization of pain, possibly in an effort to ensure that the mind recognizes that the pain is occurring so as to cease the pain-creating activity, ward off its future application or begin/speed the healing process, because pain usually involves some form of injury.

One of the chief aspects of hyperalgesia is the conversion of Adelta fibers from mechanically insensitive (silent) to mechanically sensitive. This process occurs with high probability in two separate circumstances: first, during the surgery itself due to cutting an incision, and second, from MMP-9 up-regulation.42,43 Incision-derived hyperalgesia does not rely on NMDA receptor activation, but instead on its ‘sister’ receptor, a-amino-3-hydroxy-5-methyl-4-isoxazole-propionate (AMPA).30,42 This sensitivity increase applies not simply to pain-invoking stimuli, but also to non-pain-inducing mechanical stimuli due to a reduced mechanical response threshold in Adelta fibers.43 The reduced mechanical response threshold also translates into much larger spontaneous activity (up from 0% to 38% in Adelta afferents and from 0% to 40% in C-fibers).43 This spontaneous activity may play a role in the facilitation of chronic pain through LTP or mechanical sensitization of nociceptors. Inflammation is also thought to reduce this spontaneous firing threshold.44,45 In both scenarios the reduced mechanical response threshold decreases gradually to a new equilibrium instead of all at once. This gradual reduction may play a role in the capriciousness of chronic pain development (different people may have different new equilibriums that are reached at different rates).

Under most circumstances the application of an NMDA antagonist like ketamine can prevent OIH, but such action also reduces the pain-neutralization ability of the administered opioid, and studies looking at the benefit of combining opioids and NMDA antagonists have produced mixed results.46 Combine that with the significant psychotomimetic side effects (sedation, confusion, and lack of coordination) associated with NMDA antagonists, and these agents see only limited clinical use. Part of the problem with using NMDA antagonists to treat pain directly, outside of combination with an opioid, is that NMDA receptors have different molecular organizations due to their three different subtypes, each having multiple isoforms, which results in different binding affinities.47

New strategies for short-circuiting the development of OIH or other chronic pain pathways could work through two different means. First, prevention of IL-1b cleavage, a downstream step in the pain development pathway, should reduce hyperexcitability of sensory neurons, since active IL-1b suppresses potassium channel opening and increases sodium channel opening.48-50 A similar alternative would be to prevent IL-1b binding through use of an IL-1 receptor antagonist. Second, the elimination of MMP-2 or MMP-9 could treat chronic pain, for MMP-2 appears to be a maintenance pain molecule of sorts whereas MMP-9 seems to be a trigger.

Some believe one strategy to prevent the development of neuropathic pain is to utilize loco-regional anaesthesia techniques over general anaesthesia.30 Some of the loco-regional agents hypothesized to be useful are μ-opioid receptor agonists and clonidine, along with antagonists at T-type VGCCs and GABAA receptors.27 At least for major morbidities the data look promising, for the results of several meta-analyses suggest that use of loco-regional analgesia or continuous paravertebral blockade is associated with decreased risk of postoperative pulmonary complications in patients undergoing upper abdominal and thoracic surgical procedures.51,52

The preoperative use of loco-regional analgesia is also associated with a reduction in respiratory complications after major abdominal surgery, although the effect of loco-regional analgesia might not be as prominent as it was previously, partly because the incidence of respiratory complications has progressively decreased in recent years.53 Meta-analyses in patients undergoing high-risk cardiothoracic and vascular procedures suggest that use of preoperative thoracic loco-regional analgesia might decrease pulmonary complications, cardiac dysrhythmias, and overall cardiac complications.54,55 So even if current loco-regional analgesia techniques do not have significant pain-reduction advantages, they have some positive benefits.

However, there may be an even better means to amplify loco-regional anaesthesia: including MMP-2 and/or MMP-9 inhibitors in the anaesthesia prior to surgery. By preventing MMP-2/9 activity during the pain-inducing surgery itself, it may be possible to stop the pain cascade from initiating at any significant level, thus eliminating the need for large amounts of pain control and the potential for the development of OIH. For example, NOV manipulation can inhibit MMP-2 expression in the dorsal horn of the spinal cord (DHSC) and MMP-9 expression in the DRG and the spinal cord.56 Under normal pain conditions NOV is down-regulated in the DRG and DHSC. One means to increase NOV expression is to treat individuals with dexamethasone. However, caution must be taken before utilizing the increase of NOV or a similar agent as a treatment possibility because it has different effects on different cells. There is little information regarding what negative side effects may stem from applying MMP-2/9 inhibitors immediately prior to surgery, so studies must be done to determine their nature and severity. One important consideration is striking the proper balance of inhibition because of the positive role MMP-9 plays in wound healing.57

Overall, pain management continues to be problematic in society. Continued increases in OIH development make it more difficult because, unless strict controls are established, a common means to treat pain can become a catalyst for its further development. Unfortunately patients have a tendency not to be logical and practical when it comes to pain management, for when a person is in pain they tend to do stupid things. It could be a great boon to pain management to develop a strategy that neutralizes chronic pain before it even fully develops, allowing other analgesia elements to be shifted to a secondary role treating more extreme conditions. The pre-surgical inhibition of MMP-2/9 could have the potential to be such a strategy.


1. Chapman, R, et al. “Postoperative pain trajectories in cardiac surgery patients.” Pain Research and Treatment. 2012. Article ID 608359. doi:10.1155/2012/608359

2. Wheeler, M, et al. “Adverse events associated with postoperative opioid analgesia: a systematic review.” Journal of Pain. 2002. 3(3):159–180.

3. Oderda, G, et al. “Opioid-related adverse drug events in surgical hospitalizations: impact on costs and length of stay.” Ann Pharmacother. 2007. 41:400–06.

4. Ballantyne, J, et al. “The comparative effects of postoperative analgesic therapies on pulmonary outcome: cumulative meta-analyses of randomized, controlled trials.” Anesthesia and Analgesia. 1998. 86(3): 598–612.

5. Rodgers, A, et al. “Reduction of postoperative mortality and morbidity with epidural or spinal anaesthesia: results from overview of randomised trials.” The British Medical Journal. 2000. 321(7275):1493–1497.

6. Beattie, W, Badner, N, and Choi, P. “Epidural analgesia reduces postoperative myocardial infarction: a meta-analysis.” Anesthesia and Analgesia. 2001. 93(4):853–858.

7. Holte, K and Kehlet, H. “Effect of postoperative epidural analgesia on surgical outcome.” Minerva Anestesiologica. 2002. 68(4):157–161.

8. Marret, E, Remy, C and Bonnet, F. “Postoperative Pain Forum Group. Meta-analysis of epidural analgesia versus parenteral opioid analgesia after colorectal surgery.” Br J Surg. 2007. 94:665–73.

9. Fischer, H, et al. “A procedure-specific systematic review and consensus recommendations for postoperative analgesia following total knee arthroplasty.” Anaesthesia. 2008. 63:1105–23.

10. Institute of Medicine of the National Academies Report (2011). Relieving Pain in America: A Blueprint for Transforming Prevention, Care Education, and Research. Washington DC: The National Academies Press.

11. Frasco, P, Sprung, J and Trentman, T. “The impact of the joint commission for accreditation of healthcare organizations pain initiative on perioperative opiate consumption and recovery room length of stay.” Anesth Analg. 2005. 100:162–68.

12. Zaslansky, R, et al. “Tracking the effects of policy changes in prescribing analgesics in one emergency department: a 10-year analysis.” Eur J Emerg Med. 2010. 17:56–58.

13. Manchikanti, L, et al. “Therapeutic use, abuse, and non-medical use of opioids: a ten-year perspective.” Pain Physician. 2010. 13:401–35.

14. Liu, Y, et al. “Acute morphine induces matrix metalloproteinase-9 up-regulation in primary sensory neurons to mask opioid-induced analgesia in mice.” Molecular Pain. 2012. 8:19-36.

15. Ji, R, et al. “Expression of mu-, delta-, and kappa-opioid receptor-like immunoreactivities in rat dorsal root ganglia after carrageenan-induced inflammation.” J. Neurosci. 1995. 15:8156-8166.

16. Wang, H, et al. “Coexpression of delta- and mu-opioid receptors in nociceptive sensory neurons.” PNAS. 2010. 107:13117-13122.

17. Lee, C, et al. “Dynamic temporal and spatial regulation of mu opioid receptor expression in primary afferent neurons following spinal nerve injury.” Eur J. Pain. 2011. 15:669-675.

18. Heinke, B, Gingl, E, and Sandkühler, J. “Multiple Targets of mu-Opioid Receptor-Mediated Presynaptic Inhibition at Primary Afferent A{delta}- and C-Fibers.” J. Neurosci. 2011. 31:1313-1322.

19. Kohno, T, et al. “Actions of opioids on excitatory and inhibitory transmission in substantia gelatinosa of adult rat spinal cord.” J. Physiol. 1999. 518(3):803-813.

20. Kohno, T, et al. “Peripheral axonal injury results in reduced mu opioid receptor pre- and post-synaptic action in the spinal cord.” Pain. 2005. 117:77-87.

21. Yoshimura, M, North, R. “Substantia gelatinosa neurones hyperpolarized in vitro by enkephalin.” Nature. 1983. 305:529-530.

22. Mousa, S, et al. “Beta-Endorphin-containing memory-cells and mu-opioid receptors undergo transport to peripheral inflamed tissue.” J. Neuroimmunol. 2001. 115:71-78.

23. Stein, C, et al. “Peripheral mechanisms of pain and analgesia.” Brain Res Rev. 2009. 60:90-113.

24. Angst, M, Clark, J. “Opioid-induced hyperalgesia: a qualitative systematic review.” Anesthesiology. 2006. 104:570-587.

25. Mao, J, Price, D, and Mayer, D. “Mechanisms of hyperalgesia and morphine tolerance: a current view of their possible interactions.” Pain. 1995. 62:259-274.

26. Drdla, R, et al. “Induction of synaptic long-term potentiation after opioid withdrawal.” Science. 2009. 325:207-210.

27. Ruscheweyh, R, et al. “Long-term potentiation in spinal nociceptive pathways as a novel target for pain therapy.” Molecular Pain. 2011. 7:20-57.

28. Chen, Y, Geis, C, and Sommer, C. “Activation of TRPV1 contributes to morphine tolerance: involvement of the mitogen-activated protein kinase signaling pathway.” J. Neurosci. 2008. 28:5836-5845.

29. Ma, W, et al. “Morphine treatment induced calcitonin gene-related peptide and substance P increases in cultured dorsal root ganglion neurons.” Neuroscience. 2000. 99:529-539.

30. Wu, C and Raja, S. “Treatment of acute postoperative pain.” The Lancet. 2011. 377:2215–25.

31. Ikeda, H, et al. “Synaptic amplifier of inflammatory pain in the spinal dorsal horn.” Science. 2006. 312:1659-1662.

32. Zhang, H, et al. “Acute nerve injury induces long-term potentiation of C-fiber evoked field potentials in spinal dorsal horn of intact rat.” Sheng Li Xue Bao. 2004. 56:591-596.

33. Sandkühler, J and Liu, X. “Induction of long-term potentiation at spinal synapses by noxious stimulation or nerve injury.” Eur J Neurosci. 1998. 10:2476-2480.

34. Nichols, M, et Al. “Transmission of chronic nociception by spinal neurons expressing the substance P receptor.” Science. 1999. 286:1558-1561.

35. Mantyh, P, et Al. “Inhibition of hyperalgesia by ablation of lamina I spinal neurons expressing the substance P receptor.” Science. 1997. 278:275-279.

36. Mayer, D, et al. “Cellular mechanisms of neuropathic pain, morphine tolerance, and their interactions.” PNAS. 1999. 96:7731–6.

37. Chen, L and Huang, L. “Sustained potentiation of NMDA receptor-mediated glutamate responses through activation of protein kinase C by mu-opioids.” Neuron. 1991. 7:319–26.

38. Chen, L, and Huang, L. “Protein kinase C reduces Mg2+ block of NMDA-receptor channels as a mechanism of modulation.” Nature. 1992. 356:521–3.

39. Mao, J, Price, D and Mayer, D. “Thermal hyperalgesia in association with the development of morphine tolerance in rats: roles of excitatory amino acid receptors and protein kinase C.” J. Neurosci. 1994. 14:2301–12.

40. Ribeiro, A, et al. “Expression of matrix metalloproteinases, type IV collagen, and interleukin-10 in rabbits treated with morphine after lamellar keratectomy.” Veterinary Ophthalmology. 2012. 15(3):153-163.

41. Liu, W, et al. “Spinal matrix metalloproteinase-9 contributes to physical dependence on morphine in mice.” J. Neurosci. 2010. 30:7613-7623.

42. Zahn, P, Umali, E and Brennan, T. “Intrathecal non-NMDA excitatory amino acid receptor antagonists inhibit pain behaviors in a rat model of postoperative pain.” Pain. 1998. 74:213–23.

43. Pogatzki, E, Gabhart, G and Brennan, T. “Characterization of Adelta- and C-Fibers Innervating the Plantar Rat Hindpaw One Day After an Incision.” J. Neurophysiol. 2002. 87:721-731.

44. Ahlgren, S, White, D and Levine, J. “Increased responsiveness of sensory neurons in the saphenous nerve of the streptozotocin-diabetic rat.” J Neurophysiol. 1992. 68:2077–2085.

45. Kocher, L, et al. “The effect of carrageenan-induced inflammation on the sensitivity of unmyelinated skin nociceptors in the rat.” Pain. 1987. 29:363–373.

46. Van Elstraete, A, et al. “A Single Dose of Intrathecal Morphine in Rats Induces Long-Lasting Hyperalgesia: The Protective Effect of Prior Administration of Ketamine.” Anesth Analg. 2005. 101:1750–6.

47. Paoletti, P and Neyton, J. “NMDA receptor subunits: function and pharmacology.” Curr Opin Pharmacol. 2007. 7(1):39–47.

48. Takeda, M, et al. “Enhanced excitability of nociceptive trigeminal ganglion neurons by satellite glial cytokine following peripheral inflammation.” Pain. 2007. 129:155-166.

49. Binshtok, A, et Al. “Nociceptors are interleukin-1beta sensors.” J. Neurosci. 2008.

50. Takeda, M, et Al. “Activation of interleukin-1beta receptor suppresses the voltage-gated potassium currents in the small-diameter trigeminal ganglion neurons following peripheral inflammation.” Pain. 2008. 139:594-602.

51. Marret, E, et Al. “Meta-analysis of intravenous lidocaine and postoperative recovery after abdominal surgery.” Br J Surg. 2008. 95:1331–38.

52. Hudcova, J, et Al. “Patient controlled opioid analgesia versus conventional opioid analgesia for postoperative pain.” Cochrane Database Syst Rev. 2006. 4:CD003348.

53. Wijeysundera, D, et Al. “Epidural anaesthesia and survival after intermediate-to-high risk non-cardiac surgery: a population-based cohort study.” Lancet. 2008. 372:562–69.

54. Wu, C, et Al. “Effect of postoperative epidural analgesia on morbidity and mortality following surgery in medicare patients.” Reg Anesth Pain Med. 2004. 29:525–33.

55. Liu, S and Wu, C. “Effect of postoperative analgesia on major postoperative complications: a systematic update of the evidence.” Anesth Analg. 2007. 104:689–702.

56. Kular, L, et Al. “NOV/CCN3 attenuates inflammatory pain through regulation of matrix metalloproteinases-2 and –9.” Journal of Neuroinflammation. 2012. 9:36-55.

57. Broadbent, E, et Al. “Psychological stress impairs early wound repair following surgery.” Psychosomatic Medicine. 2003. 65:865-869.

Tuesday, December 4, 2012

Fixing the Carbon Tax

This blog has previously questioned the foresight of those who support a carbon tax given its regressive nature, meaning that it places a disproportionate burden on the poor. Some have argued against this characterization, claiming that because income and carbon emissions are correlated, wealthier individuals will pay more in carbon taxes. While this assertion is correct, it focuses only on the absolute burden of the tax, not the relative burden. The regressive concern stems from the percentage of income that poor individuals spend on energy- and fuel-related activities. The tax increases this percentage, further complicating decisions among food, energy and medical supplies. What matters here is the share of expenditure, not the total amount spent, because the rich, having more disposable income, are far more likely to afford the increase. It does not matter that one has to spend $1,000 more if one has $30,000+ more to spend.

Proponents believe this regressive characteristic can be countered by establishing a dividend system that makes the carbon tax revenue-neutral for the government: all of the money collected from the tax would be returned to the public in some way. Some plans propose relief of payroll or sales taxes, but neither option would be effective. Payroll tax relief is not appropriate because it does not reach individuals without jobs and could negatively influence Social Security funding. Sales tax relief is not appropriate because the chief goal of the carbon tax is to reduce carbon emissions, and one element of reducing emissions is not just converting to trace-emission sources but also improving efficiency and consuming less. Providing sales tax relief could create a psychological conflict similar to the Jevons paradox if a person attempts to 'maximize' his/her tax relief through greater consumption.

Others want to distribute an annual dividend, similar to those issued by certain companies and the State of Alaska, divided equally among all adult citizens with half shares (typically) going to children. For example, the Carbon Tax Center (CTC) provides an illustration of such a system focusing only on gasoline. In the example a $10 per ton of CO2 tax raises an estimated $55 billion of revenue annually, which is then divided evenly among 300 million U.S. citizens for an annual rebate of $183. (Ignore the fact that in this example both adults and children receive the same amount of money, which is probably not suitable.) The CTC then calculates that the average lowest-quintile income individual will spend an extra $80 annually on gas due to the carbon tax, so these individuals net $103 per year from the carbon tax. Despite the limited nature of the example, the CTC and other carbon tax proponents use this mindset as a means to counteract the regressive nature of a carbon tax.
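The arithmetic in the CTC example works out as follows; this is only a back-of-the-envelope check using the figures quoted above:

```python
# Back-of-the-envelope check of the CTC gasoline-only example.
tax_per_ton = 10        # $ per ton of CO2
revenue = 55e9          # estimated annual revenue, $
population = 300e6      # U.S. citizens sharing the dividend equally

dividend = revenue / population        # per-person annual rebate
extra_gas_cost = 80                    # CTC's lowest-quintile gas-cost estimate
net_gain = dividend - extra_gas_cost   # what the average poor individual nets

print(round(dividend), round(net_gain))  # 183 103
```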

Unfortunately those who believe that monetary dividends will effectively mitigate the regressive nature of the tax are not accounting for the changes in behavior that the tax is supposed to motivate. First, note that analyses like the CTC example above are somewhat complicated by the fact that the underlying assumptions are rarely listed in an organized and clear manner. For example, the CTC expects a carbon tax to be applied as far upstream as possible, most likely at source points. Basically the tax should apply when a producer (coal mine, natural gas wellhead, port facility, importer, etc.) transfers ownership of a given quantity of the material to another party (energy utility, oil refiner, owner of a gas pipeline, etc.).

This strategy is appropriate and should be effective for eliminating waste, maximizing efficiency and reducing costs. However, when analyzing how the tax affects individuals, it must be stated how the companies that absorb the tax pass that burden on to consumers. Most analyses appear to assume that an equal burden is passed on. This assumption may not be accurate, for it stands to reason that an equal burden is the minimum a company would try to recoup from the tax. Therefore, costs for downstream consumers may increase more than for the upstream providers where the tax is actually applied. Some may celebrate this outcome because higher costs could reduce carbon emissions even faster, but it would also make a carbon tax more regressive.

The second important element that most, if not all, carbon tax proponents overlook is how the nature of the tax changes with time. The carbon tax will increase over time, which will expand its regressive character in two ways. First, a higher tax will lead upstream providers to raise prices for downstream consumers, placing a greater burden on all consumers, but more so on those who have less overall money to spend (the poor). Second, a higher tax will further motivate those who have the capacity and funds to reduce their consumption of carbon-emitting products, reducing both the transferred tax they absorb and the total amount of carbon taxed, which in turn shrinks the dividend check everyone receives from the government. It stands to reason that rich individuals will be in a better position than the poor to reduce their carbon emissions through the purchase of high-capital-cost trace-emission products like solar panels and electric/hybrid vehicles.

A side note to this second point is that rich people likely have more control over their living arrangements (houses and condos versus apartments) as well. This control affords them a better opportunity to install items like solar panels or solar water heaters, whereas apartment landlords, who do not have to pay utilities, could balk at the costs ($250,000+ per complex) of installing these high-capital-cost trace-emission items for their tenants. Therefore, individuals living in apartments or low-income housing may have no ability to significantly change their carbon consumption habits as time passes. With this handicap poor individuals will struggle even further under the increasing burden of the carbon tax as it rises over time.

Clearly, because numerous carbon tax proponents do not acknowledge this reality as a problem, they have not produced a solution. At the moment there appear to be two possible solutions. First, the dividend derived from the tax can be progressive rather than uniform: poorer individuals would receive a larger percentage of the total dividend and richer individuals a smaller percentage. While this system would give poorer individuals more money, and thus greater opportunity to invest in trace-emission energy and transportation sources, numerous individuals would view it as unfair. Interestingly, the fairness of such a system is much more complicated than most would care to consider.

Second, because the goal of the carbon tax is to reduce emissions, and the conversion from heavy-emitting sources and strategies to lower-emitting ones is basically one-directional, a clause in the carbon tax could lower the tax rate when certain emission reduction targets are reached. This method is possible because of the one-directionality of the conversion: once someone starts powering their business through, say, geothermal energy instead of coal, they almost never switch back to coal, largely because of the financial investment required for the initial switch. Therefore, lowering the tax rate on carbon after certain emission reduction targets are attained is acceptable and will help those individuals who lack the resources and/or capacity to move away from heavier carbon emissions.

Such a carbon tax could operate in the following manner:

- The carbon tax would increase with the passage of time until reaching a particular ceiling [for example, the tax would be $15 per ton of CO2 in the first year, increasing $5 per ton per year for the first 10 years, then $15 per ton per year for the next 10 years, with no increases afterwards (a final value of $215 per ton of CO2 20 years after the institution of the tax)].

- The reason for the small increase transitioning to the large increase is to provide a grace period allowing individuals and businesses to adjust to the tax in a controlled and efficient manner, and then to balloon the tax to 'punish' those who have yet to move from high carbon emission sources to low carbon emission sources.

- However, because not all parties have emission decisions under their control, especially the poor, there must be a means to offset the cost of the time-maximized tax. Because the goal of the carbon tax is to reduce carbon emissions, reducing the tax rate with respect to emissions can provide this offset. To ensure the validity of the carbon tax, no emission-related reductions would take place until after the first 10-year period. After 10 years, a 50% emission reduction relative to 2005 levels would reduce the tax rate by 20%. Afterwards each additional 12.5% emission reduction would correspond to a further 10% tax rate reduction, for a maximum total tax reduction of 60%. Note that these tax reductions correspond to emission reduction ranges, not exact values.

- If the carbon tax succeeds, then under the above system the slope of the tax relative to time will not progressively increase like the currently championed system; instead the tax will rise until reaching a maximum and then decline, at a smaller magnitude than its earlier rise, as emissions fall. Note the graph below.

- As stated above, this system works because once individuals and businesses switch to trace-emission materials and systems they will not switch back when the carbon tax drops, for switching again would simply cost more money; and if emissions rise back up, the tax will increase because the emission goal will no longer be achieved.

- One concern some might raise is that a tax that decreases with decreasing carbon emissions may create an artificial emission reduction floor before zero emissions. This position argues that certain individuals/groups would not give up high carbon emissions because the tax could decrease (versus a conventional system where the tax continues to increase), thus total emission reduction would reach not 100% but roughly 80-90%. However, for this to occur these carbon-consuming groups would need to absorb the burden of the tax throughout its lifecycle, including at its maximum. This strategy makes little sense, for paying the tax will almost certainly cost more than switching to a trace-emission generator (assuming, of course, that these individuals have the capacity and funds to make the switch).
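The schedule sketched in the bullets above can be expressed as a small model. This is a minimal sketch in Python, assuming the year counts, increments, and reduction brackets are exactly the illustrative figures given in the text, not a settled proposal:

```python
# Illustrative model of the proposed carbon tax schedule.

def base_tax(year):
    """Time-based tax ($ per ton of CO2): $15 in year 1, rising $5/year
    for the next 10 years, then $15/year for 10 more, flat at $215."""
    if year <= 1:
        return 15
    if year <= 11:
        return 15 + 5 * (year - 1)
    return min(215, 65 + 15 * (year - 11))

def tax_cut_pct(year, emission_reduction):
    """Emission-based rate cut: none during the first 10 years; a 50%
    reduction from 2005 levels cuts the rate 20%, and each additional
    12.5% reduction cuts it another 10%, capped at 60% total."""
    if year <= 10 or emission_reduction < 50:
        return 0
    return min(60, 20 + 10 * int((emission_reduction - 50) // 12.5))

def effective_tax(year, emission_reduction):
    return base_tax(year) * (1 - tax_cut_pct(year, emission_reduction) / 100)

# At the year-21 ceiling with emissions fully eliminated, the rate
# falls from $215 to $86 per ton.
print(effective_tax(21, 100))
```

Under this model the one-directionality argument shows up directly: the cut depends only on achieved emission reductions, so the rate falls as reductions accumulate but snaps back up if emissions rebound.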

Finally, states need to create organizations that will manage how prices change from principal suppliers to secondary suppliers to consumers, to ensure that principal or secondary suppliers are not price gouging their respective customers and blaming that gouging on the tax. The low level of competition among larger carbon-emitting suppliers could even justify state governments creating a price ceiling relative to the tax to eliminate the possibility of price gouging in the first place. Government may also need to intervene if apartment landlords, rather than properly adjusting to the carbon tax, force their tenants, who have little economic mobility, to absorb it.

Overall it is important to establish a form of carbon emission limitation, either direct (cap and trade) or indirect (carbon tax), to motivate a reduction in carbon emissions. There have been numerous debates on whether a cap-and-trade or a carbon tax system would be superior at reducing emissions, with the carbon tax pulling ahead recently because of its transparency and simplicity. However, carbon tax proponents are behaving a lot like solar/wind proponents in that they are only studying how their solution affects the present, not the future. A critical element of crafting an appropriate solution is not only to address the chief problem but also to understand any potential problems that will arise from the solution itself. Carbon tax proponents must address how poor individuals will be negatively affected by the carbon tax not only in the present but also in the future. The emission reduction adjustment discussed above is only one means of addressing this future problem.