Friday, December 30, 2011

The Flaw of the Flat Tax

Paying both federal and state income taxes can be an exasperating and depressing experience. It is easy to understand this dissatisfaction, especially because individuals do not know the precise destination of their tax dollars, instead watching the money vanish into a black hole riddled with waste and misappropriation. Unfortunately, the black hole is not even the worst part of the tax system, yet few people seem to either recognize the system's real problems or care enough to do something about them.

One of the more popular suggested tax reforms is the elimination of the progressive tax system in favor of a flat tax system in which all taxpayers would pay the same percentage of income. Proponents of the flat tax sing its praises for its simplicity, its perceived fairness, and its supposed ability to collect more revenue for the federal government than the current progressive system by eliminating a vast number of deductions and tax-shelter tricks. However, proponents of the flat tax are either misinformed or trying to create a greater advantage for themselves with little concern for others.

Numerous studies have concluded that flat taxes provide insufficient funds relative to the existing progressive tax system. From a methodological standpoint, though, comparing revenue gains from a flat tax system to the current progressive system is silly, because the current system is so riddled with immorality and waste that almost any reform will give the appearance of a short-term increase in revenue under favorable assumptions. Therefore, to be reasonable, the flat tax would need to be compared against other reforms of the progressive system and against other new systems to gauge the legitimacy of its claimed revenue superiority. Simply comparing one reform against the current system, without comparing it against other possible reforms, diminishes the authenticity of the analysis. Such a mindset is also questionable in that it is irrational to conclude, without careful analysis, that there is only one possible alternative to an existing system or solution.

With regard to fairness, it is easy to see why proponents of a flat tax believe such a system is fairer to all parties than the current progressive system. The common flat tax and tax cut battle cry has frequently been ‘why should those that make more money be penalized?’ under the typically biased mindset that those who make a lot of money work harder than those who make less. Most flat tax supporters appear to believe that the progressive tax system punishes ambition and success, which in turn may discourage individuals from being successful because the more successful they are, the larger the share of their income they lose. Of course any practical analysis instantly characterizes that complaint as complete and utter bull, because it is irrational to believe that an individual will be less motivated to acquire financial resources simply because he/she will have to pay more in taxes. Anyone given a choice between pursuing a career where he/she would make 25,000 dollars a year and pay 4,500 dollars in taxes versus making 100,000 dollars and paying 34,000 dollars would obviously select the latter option. No reasonable person would elect to make only 20,500 dollars a year instead of 66,000 dollars a year either to ‘stick it to the man’ or because paying 34,000 dollars instead of 4,500 dollars to the government is so distressing that netting 45,500 dollars more is immaterial.
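For anyone who wants the arithmetic spelled out, here is a minimal sketch using the hypothetical salaries and tax bills above (illustrative figures only, not an actual bracket calculation):

```python
# A minimal sketch of the hypothetical comparison above.
# The salaries and tax bills are the illustrative figures from the text, not real bracket math.
low_gross, low_tax = 25_000, 4_500
high_gross, high_tax = 100_000, 34_000

low_net = low_gross - low_tax      # 20,500
high_net = high_gross - high_tax   # 66,000

print(f"Lower-paying career nets:  ${low_net:,}")
print(f"Higher-paying career nets: ${high_net:,}")
print(f"Extra take-home pay:       ${high_net - low_net:,}")  # 45,500
```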

Proponents of a flat tax would contend that the above statement misses the point: the very fact that the disparity exists in the tax system at all is what is unfair. If these individuals feel so strongly about rooting out unfairness then they should start promoting a communistic economic system. Capitalism is fraught with inequalities and unfairness, for gone are the days when individuals could consistently achieve financial success simply by studying hard and working hard. There is indeed still the rare case where an individual pulls him/herself out of poverty to make it big, but in the modern capitalistic system who you know and what resources you possess have a much more pronounced effect on your success than any drive to work hard. A majority of the individuals who hold wealth today definitely took advantage of some of the unfair elements of capitalism to amass their fortunes. How funny it is that most individuals complain about unfair elements that are to their disadvantage, yet say nothing about unfair elements that are to their advantage.

Unfairness in capitalism and the role of progressive and flat taxes is best illustrated with an example. Consider two participants running a 400-meter dash, Runner A and Runner B, where Runner A gets to start 100 meters ahead of Runner B (think of this head-start as better connections and greater access to resources, largely due to parental connections). The gun is fired, the participants run the race to the best of their ability and Runner A wins easily. Suppose the timers then elect to remove 5 seconds from Runner B’s time, closing the difference between the two times yet still leaving Runner A with an easy victory. Those that support a flat tax would cry foul about removing 5 seconds from Runner B’s time, but would remain silent about Runner A’s 100-meter head-start. Replacing the progressive tax with a flat tax would simply remove one unfair element that favors the poor, further stacking the deck in favor of the rich and well connected.

A more conspiratorial view could label the flat tax itself as a fiendish negotiating tactic by the super rich. Perhaps one day the masses will realize that the actual disparity in the tax rate between those who make 35,000 dollars a year and those who make 1,000,000 dollars a year is not in their favor, but in the favor of the million-dollar earner, due to government neutering of the IRS and tax loopholes for the rich. Such a realization would result in a demand to close tax loopholes for both individuals and corporations, reestablishing the genuine rates outlined in the progressive system.

With this eventual understanding that some perceive to be inevitable, one could believe that the flat tax is just a preemptive strike to neutralize progressive tax reform. In essence, although the flat tax would increase tax payments from a net effective rate in the mid-teens (what most well-connected wealthy individuals pay) to something like 20%, it would dramatically reduce the probability of reform within the progressive system itself that could raise that rate from the mid-teens to 39% or even higher. So by giving up something like 4-5 percentage points, the biggest beneficiaries of a flat tax save themselves a 20-30 percentage point increase at some point in the future.

Finally, the importance of the rate assigned in the flat tax itself deserves analysis. If the rate is too low, then the government will not be able to recoup the capital required to effectively run programs for the benefit of its citizens, which would lead to the loss of certain programs or increased national debt; either outcome would negatively affect the economy. If the rate is too high, then those in the lower tax brackets of the progressive system will end up paying more and those in the higher brackets may end up paying less, which hardly seems to embody the ‘fairness’ of the system: the less you make, the more you pay. This single observation captures the reality behind the flat tax. In its purest ‘on paper’ form the flat tax is inherently fair, but the current practice of capitalism is not. The insertion of a flat tax into an unfair system of much greater magnitude corrupts any authenticity in the flat tax. So while a flat tax may be fair in isolation, in the current economic system of the United States it is unfair and unwarranted.
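To illustrate that ‘too high a rate’ concern numerically, here is a small sketch comparing a simplified progressive schedule against a single flat rate; the brackets and rates below are assumptions chosen purely for demonstration, not the actual tax code:

```python
# Compare a simplified progressive schedule against a single flat rate.
# The brackets and rates are assumed for illustration, not the actual U.S. tax code.
def progressive_tax(income):
    brackets = [(20_000, 0.10), (60_000, 0.20), (float("inf"), 0.30)]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def flat_tax(income, rate=0.20):
    return income * rate

for income in (25_000, 100_000, 1_000_000):
    print(f"${income:>9,}: progressive ${progressive_tax(income):>9,.0f} "
          f"vs flat ${flat_tax(income):>9,.0f}")
# With these assumed numbers the low earner pays more under the flat rate
# while the high earner pays less -- the 'less you make, the more you pay' effect.
```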

Death Penalty Logic?

Since 1973 the death penalty has regained its previously controversial place in society. This controversy is somewhat confusing when weighing the benefits of the death penalty against its detriments. A critical first question when evaluating the merits of the death penalty involves defining its purpose. One commonly cited goal for the death penalty is that of a deterrent, a means to prevent future actions that would warrant the death of the perpetrator. Although in theory this goal may seem logical and noble, as preventing crime is always preferable to reacting to crime, no hard proof has ever been produced demonstrating a long-term reduction in homicides directly associated with the reemergence of the death penalty. So the goal of using the death penalty as a deterrent does not appear to be working.

If validating the death penalty through its use as a deterrent is not applicable in reality, altering its goal to that of a tool for vengeance seems like the next logical step. The question of whether the death penalty actually serves as a positive psychological tool for the families of the victim is essential to judging the validity of this goal and purpose. Overall it is troubling to think that families actually receive some level of satisfaction when the criminal is put to death rather than sentenced to permanent incarceration, but for the sake of argument consider for a moment that this reaction is indeed the case.

Why is it that the ‘vengeance’ mentality demands the ‘eye for an eye’ treatment in this situation? Although ‘an eye for an eye’ can be regarded as an appropriate reaction, the concern is that some have come to believe that it is the only appropriate response available, which is not correct. Clearly a single action has many different possible reactions of differing severities. It is unrealistic to believe that only one specific response among a multitude of possible responses is appropriate; instead there are a number of different options for redress and justice.

Why do some individuals look upon a lifetime of imprisonment as inappropriate? One of the worst things you can do to an individual is restrict or remove freedom. Although one could argue that the worst thing that can happen to an individual is the loss of his/her life, with suicide rates greater than zero clearly some individuals believe that living in certain circumstances is worse than death. From a psychological perspective, restricting the freedom of an individual for the rest of his/her life appears to be an appropriate response for the vengeance-seeking individual set on inflicting a significant level of suffering on the criminal as redress for the crime. For certain individuals, one could regard life imprisonment as a worse punishment than death.

With regard to what comfort death offers a family over life imprisonment, any significant difference seems rare, as both death and life imprisonment remove the ability of the individual to repeat the action against anyone the family would regard as important, or against society in general. The death itself may not be as tragic an event for the criminal as desired because of acceptance of the inevitable.

Also, the numerous appeals and legal hurdles rightly involved in an execution typically provide delays of multiple years, which could inflict undue psychological damage on those who want to see a particular individual die, a case of deferred justice. In the end, realistically very little is gained when electing to execute an individual over the punishment of life imprisonment for the sake of vengeance alone. Instead, by electing to utilize an unnecessary and potentially savage response over one that is appropriate in its own right, society could lose another shred of its humanity.

If deterrence and vengeance are eliminated as reasonable motivating factors for utilizing the death penalty, what makes the death penalty useful? Despite its flaws, the death penalty is remarkably effective as a bargaining chip for prosecutors when negotiating plea agreements with eligible defendants. The fear provided by the death penalty may not work as an effective deterrent in the prevention of certain crimes, but it is effective in neutralizing the vigor with which a defendant may pursue a trial. Whether or not this is actually a good thing is debatable. In a logical and perfect world one would conclude that only an individual who is actually guilty of a crime would plead guilty, but unfortunately due to fear and psychological trauma that ideal is not always achieved, as individuals not guilty of the crimes they are accused of have pled guilty to lesser offenses or been found guilty by a jury.

So the death penalty may be a great negotiating tool only because of its greatest flaw, its finality. Clearly individuals in the past and more than likely in the present have been killed for crimes in which they have later been exonerated. Therefore, an individual that is not guilty may believe that pleading guilty to avoid the death penalty might be the only way to eventually prove his/her innocence of the accused crime and still have a life to live when this innocence is recognized. The fact that this psychological belief even exists demonstrates a great failure of the criminal justice system.

By stripping the death penalty of a worthy and significant purpose, the proverbial ‘death penalty horse’ has been shot and killed, but just to be thorough it is time to pick up a stick and commence wailing on it. Even if the death penalty did have a worthwhile purpose, there would still exist multiple fronts on which opponents could attack both its philosophical existence (even when excluding moral arguments) and its execution (no pun intended). The primary goal of the prison system in general seems to be rehabilitation, not punishment. The use of the death penalty displaces that goal, for it is impossible for one to become a productive member of society if one is dead. It could be argued that the same failure occurs when an individual is sentenced to life without parole, but such a belief is in error.

Although an individual sentenced to life without parole will never be able to reemerge into society as a changed individual, if the individual is indeed rehabilitated he/she can instruct other individuals within jail and in society on what mistakes to avoid and provide other information and experiences that could better both the prison society and non-prison society. In response to those that believe certain individuals cannot be rehabilitated, the only way to be sure of such a position is to never give those individuals a chance.

Also, when looking at life in prison versus the death penalty from a simple economic standpoint, the death penalty loses out: every state that has studied the economic differences between death and life in prison has concluded that executions cost significantly more money, mostly due to the required appeals that accompany a death sentence to limit the possibility that a not-guilty individual is put to death. Finally, with regard to individuals who are deemed too violent for the general population, solitary confinement is always available.

Overall, opposition to the death penalty can be driven by an underlying concern for the sanctity of human life, but elements of practicality, efficiency and accuracy also play important roles.

Wednesday, December 14, 2011

Progress at Durban?

The two contrasting views of the COP-17 agreement at Durban seem to divide on the perspective of goal advancement. Assume for a moment that the necessary emission reductions and other necessary strategies (leaning more towards various geo-engineering applications) are akin to society collecting 10 fireflies in a jar. Suppose the consequence of not collecting these fireflies within a certain time frame is death.

The group who believes that Durban was a success could argue that Durban captured one firefly: finally the ‘developing’ nations like China and India acknowledge that they will need to cut emissions in concert with the developed nations, not after them. These individuals could also argue that those who view Durban as a failure are being unreasonable, assuming that a vast majority, if not all, of the fireflies could be captured in a single swipe of the jar. The ‘Durban was a success’ crowd believes that the best strategy for capturing all ten fireflies is to capture one firefly at a time, and because Durban seems to have done just that it should be considered a success.

The ‘Durban was a failure’ crowd believes this mindset is foolish. This criticism flows one of two ways. First, society has not collected 10 fireflies yet, and until that happens nothing else should be viewed as a success. While understandable, this mindset is rather counterproductive and unrealistic because no rational person would conclude that such a dramatic shift in human society could occur in a single step over a short time frame.

Second, even if one could argue that Durban was a success based on the U.S., China, India, etc. actually agreeing, despite no binding elements, to reduce emissions within a global carbon scheme, the problem is timing. Assume that both China and the U.S. actually live up to this pledge; a critic could still ask why people should be happy about capturing one firefly, a firefly that has been evading the jar for over ten years since Kyoto. Does society really have the luxury of spending an average of ten years capturing each one of the remaining nine fireflies?

A critic could argue that perhaps it is time for a new strategy to capture these fireflies, one that does not involve aimlessly running around wildly swinging a jar (global climate conferences). Recall that a number of individuals believed that this ‘Durban’ firefly was captured in Bali in 2007 and that the final details from Bali were supposed to be addressed in Copenhagen in 2009; all interested parties know how that turned out.

The most telling point in the above criticism is that with no binding elements in the agreement the proverbial can has simply been kicked down the street until the next conference, something society has been doing for the last decade. Without binding elements that could penalize non-participating or non-complying countries, such emission agreements are equivalent to having a sizable hole in the jar, where whether or not any fireflies remain captive is at the discretion of the firefly, not society. When it is in the interest of the firefly to be free, it is difficult to consider the possibility that it will voluntarily restrict itself for the sake of others.

Friday, November 25, 2011

Revisiting Campaign Finance Reform

The original question in Citizens United v. Federal Election Commission revolved around whether or not the FEC could use the McCain-Feingold Act (a.k.a. the Bipartisan Campaign Reform Act) to prevent groups from distributing political advertisements within 30 or 60 days of a specific type of election. However, while this narrow element was the original nature of the case, the majority expanded the breadth of the ruling to address whether money expenditure in an election could be considered an extension of free speech and whether corporations could use it for the direct purpose of supporting the election or defeat of a given candidate.

The somewhat sad reasoning in Citizens is that Justice Kennedy, in the majority opinion, seems to suggest that there is no way to distinguish between media corporations (which were not restricted by the McCain-Feingold Act) and other, non-media corporations, even though governments and their agencies had been doing just that for years leading up to this case. The real question stemming from Citizens is what is the obligation of the United States to the Constitution when the consequences of possibly not upholding an aspect of it could be disastrous?

One of the chief problems with Citizens is the rationale that money is a form of speech and the First Amendment should protect its use. The underlying problem in the application of such a belief is that there is no inherent limit to the distribution of money. In this regard society tiers the importance of an individual’s speech by how much money he/she has. This scenario creates an unequal weight on speech that has nothing to do with the accuracy of the speech. The point of the First Amendment was to protect all speech because all speech was viewed as equal based on the premise of equal weight within reason. The tiered environment created by money destroys the assumed ‘equal weight’ environment which ‘housed’ the First Amendment. The court in First Nat. Bank of Boston v. Bellotti did not properly appreciate this understanding.

Now one could argue that the ‘influence’ of newspapers and other print media, which received an exception before Citizens, also destroyed this environment. Such an argument is not correct. Newspapers offered readers the option of commenting on inaccuracies or perceived partiality through the ‘letters to the editor’ section, reducing the weight of the paper’s opinion. This option is not available for print inserts, television or radio advertisements, the principal mediums of action for those who ‘demonstrate their speech’ with money.

Based on the entry costs associated with these mediums there is little to no opportunity for the average citizen to counter inaccurate information given by these ‘speechmakers’. In addition, these ‘speechmakers’ can engage in this speech repetitively, tapping into a very large audience. This lack of correction as a means to control weight is important because most of these advertisements are rife with inaccurate and/or misleading information; to those producing them the point is not to win an election fairly, honestly and/or morally, the point is to win by any means necessary.

Another problem is that individuals and media outlets have an inherent ceiling to the influence they can exert in a political environment. Basically, the maximum weight of their argument is reasonably capped. Individuals engaged in direct speech (i.e. the soapbox) are clearly limited by time and resources, so their message(s) rarely carry lasting influence. Newspapers only produce one paper per day, which heavily restricts the content a paper can devote to attacking/praising a given candidate or the total influence it has, as numerous issues would have to devote large percentages of space to a given candidate to generate lasting influence.

Television stations have a greater theoretical ceiling, with the ability to disseminate content 24 hours a day, but face a ‘feasibility’ ceiling in that devoting too much aired content to attacking/praising a given candidate will drive away undecided and ‘independent’ viewers, allowing the station in question to retain only individuals devoted to loving/hating that candidate/policy in a pre-conceived way. The tiered structure of ‘money speech’ has a much larger ceiling, as advertisements of support/ridicule can appear in many different mediums, generating huge levels of exposure (dwarfing those of newspapers and single television stations) with a much lower probability of turning off the individuals the advertisements are meant to influence.

Therefore, these advantages make ‘money speech’ much more valuable than ‘conventional speech’, and the more money one has the more ‘money speech’ one can make. Some try to make the argument that many people can ‘pool’ their money into collective organizations which would represent their interests with more ‘money speech’ than these individuals could muster on their own. Unfortunately, due to the incredible imbalance in the current economic system, the only organizations of this nature that could compete with corporate interests as a potential counterbalance are worker unions.

However, individuals who oppose the existence of these unions, because they do not agree with their political positions, are continuously attacking these institutions in various states in an effort to destroy them. The systematic attempt to eliminate these established ‘common man’ money pools, and the inability of other pools to generate equalizing amounts of money to compete with corporate interests, heavily damages the validity of the pooled-money argument. It is reasonable to suggest that the largest corporations will always have dramatically more ‘money speech’ than common citizens or smaller companies.

Those who argue that the point of Citizens was to liberate the ‘money speech’ of small businesses are either naïve or purposely misleading their audience. Available ‘money speech’ for a small business only matters if that business agrees with the position of a larger business, and if this is the case then there is little point for the small business to contribute, because a vast percentage of the ‘speech’ on that given topic will be made by the larger business, which has more to gain or lose from influencing policy. If the smaller business disagrees with the larger business in a matter of policy, there is no reasonable expectation that the smaller business will be able to utilize the ‘advantage’ of its ‘money speech’ to defeat the opinion of the larger business. In fact, a lack of viable restrictions on ‘money speech’ actually weakens the power of the speech of the small business relative to the large business, regardless of which business is actually right.

Interestingly, the characterization of money as speech changes an intangible element into a tangible one, which actually strengthens the argument for regulating this type of speech. The original argument in Citizens (from the petitioner’s viewpoint) was that it was not fair that their organization was restricted from releasing a political advertisement based on a specific time deadline, a deadline which did not apply to media organizations. The argument was that this deadline was a complete restriction of their organization’s ‘speech’. One could see the potential validity of this argument in that their ‘speech’ was being restricted in its entirety by the deadline. However, while the First Amendment denies a government entity (federal, state or local) the ability to prevent an individual from speaking at all (outside of very specific situations), it does not prevent that same government entity from applying restrictions to certain types of speech.

A similar vein can be seen within the Second Amendment: even if one argues that the right of private citizens not in a government-sponsored militia to bear arms is supported by the Second Amendment, an argument that is nearly impossible to make logically, government can still restrict the types of arms one can own legally. For example, just because the Second Amendment states one can bear arms does not mean that the government has to allow an individual the right to own a nuclear bomb. Thus the right to bear arms is not universally protected in all forms. The same logic can be applied to the First Amendment in the form of ‘money speech’. Based on that precedent the government could place a ceiling on how much money a ‘person’ could spend in a given election cycle (not just donate to a given candidate, but actually spend, be it independently or through some subsidiary).

One could argue that such ceilings were addressed in Buckley v. Valeo, but the reasoning in Buckley is incredibly naïve when addressing the ceilings relative to the improbability of corruption: “[the] absence of pre-arrangement and coordination…alleviates the danger that expenditures will be given as a quid pro quo for improper commitments from the candidate.” Perhaps one could hold on to such illusions in 1976 when the Buckley ruling was made, but with changes in technology as well as the anecdotal evidence accumulated over the last 30 years it is extremely difficult to view such reasoning as valid in 2010 (when Citizens was ruled) or 2011.

Another interesting association between the First Amendment and money can also generate allowable government restriction. The spirit of the First Amendment was designed to protect differing opinions, but not opinions that are deterministically false; there is a reason libel and slander laws exist. Normally deterministically false statements are of little consequence because of the small scale on which they occur; however, within each election cycle, given what is at stake due to how legislators’ decisions influence the well-being of the general public, the importance of deterministically false speech in the election environment, regardless of intent, is significantly magnified. Therefore, it should be the prerogative of a government agency to penalize and restrict individuals or groups making clearly false, ambiguous or misleading ‘money speech’ in an election environment within proper jurisdictions.

Some want to argue that these types of restrictions are not necessary, largely because voters are intelligent actors and money invested in election cycles has only a muted influence on which candidate a voter votes for. This reasoning seems to fall short of viability on two points. First, if such a statement were accurate then why are hundreds of millions of dollars spent in each major federal election cycle? Clearly the individuals/groups spending this money have conducted numerous studies to identify the best and most efficient means to spend the money as ‘speech’. Therefore, it is difficult to accept the reasoning that all of this money and time would be spent on an endeavor that had little to no influence.

Second, the belief that general voters intelligently analyze candidate platforms and logically determine whether those platforms are valid and will be effective at solving problems is naïve. Most voters do not have the time, the experience or the desire to undertake such a task, especially given the general lack of specificity offered by candidates on their platforms (most simply give general stock answers to questions or flat out lie). Thus, without this in-depth analysis most voters rely on media outlets and advertisements to ‘inform’ them regarding political platforms and opinions. Overall, ‘money speech’ clearly plays a significant role in politics with regard to influencing voting trends and habits, and to argue otherwise is simply foolish.

When considering the manner of speech itself, a distinction must be drawn regarding subjectivity. There are two types of elected official: legislative and judicial (note that this categorization is different from the branches of government, of which there are three). These two categories are divided by the roles they play in crafting the law. The legislative category is responsible for creating, debating and passing/failing prospective legislation (the President is also a part of this category) while the judicial category is responsible for determining whether two separate laws contradict, how to resolve that contradiction, and criminal sentencing.

Between these two categories the legislative one has a much greater level of subjectivity relative to how to solve a given problem. The purpose of passing new laws is to solve a problem in society, yet due to imperfect knowledge and boundary conditions the ability to determine whether or not a given solution is successful is not purely determinate. An extremely simple example of this process is finding a solution to x + y = 7. In this situation there are numerous solutions to the problem regardless of methodology.

However, the general openness of the legislative category does not exist for the judicial category. Determining whether a given piece of legislation is constitutional, whether it conflicts with another piece of legislation, whether a defendant is guilty, etc. is much more restrictive due to existing logic and boundary conditions. For example, for this category the problem is no longer simply x + y = 7, but x + y = 7 where x > 3 and y is positive. Basically the level of subjectivity is much smaller and there are fewer viable possibilities for x and y.
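A quick sketch of this toy equation, restricted to small integers purely for illustration, shows how the added constraints shrink the space of acceptable answers:

```python
# Enumerate small-integer solutions to show how constraints shrink the solution space.
# Restricting x and y to integers in 0..7 is an assumption made only for illustration.
legislative = [(x, y) for x in range(8) for y in range(8) if x + y == 7]
judicial = [(x, y) for x, y in legislative if x > 3 and y > 0]

print("x + y = 7 (open-ended, 'legislative'):", legislative)  # 8 pairs
print("x + y = 7, x > 3, y > 0 ('judicial'):", judicial)      # (4, 3), (5, 2), (6, 1)
```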

The general point of speech in the election of an individual who will take a legislative role, regardless of type, is to demonstrate support for a particular idea or group of ideas that the candidate embodies. It can be argued that another rationale is to exhibit support for certain personal characteristics of the candidate, which could allow him/her to better work with other legislators to come to a deal. This situation is different for a judicial election because the limited options eliminate the second rationale for speech support. Judges do not negotiate with other judges in a quid-pro-quo manner similar to politicians. The first rationale is also significantly hindered by the limited number of correct options due to sentencing guidelines, logic and legal precedent. Therefore, speech in support of or opposition to judicial candidates can only effectively be given as a measure of how effective a judge is at upholding the law on an analytical basis.

Unfortunately most of the ‘money speech’ in judicial elections is based around fear and bias largely driven by an attempt to seat like-minded individuals regardless of whether or not the legal opinions of the candidate in question are correct. This lack of respect for the legal system is troubling and actually could allow for the restriction of support/detraction speech in judicial elections due to Brandenburg v. Ohio.

Brandenburg v. Ohio largely addressed the ‘clear and present danger’ exception to the First Amendment, which was first validated in Schenck v. United States. Originally the ‘clear and present danger’ exception was clarified under the ‘bad tendency’ test in Whitney v. California, where if the speech had a tendency to cause sedition or lawlessness it could be constitutionally prohibited. However, Brandenburg created a new standard for the exception through a three-pronged test, which limited its application and restricted the government’s ability to restrict speech. The three elements that make up the test are intent, imminence and likelihood. When individuals devote ‘money speech’ to the defeat of a sitting judge who has not demonstrated malfeasance, it can be argued that such ‘speech’ meets the three elements of the Brandenburg test.

Arguing that a judge who has a reputation for ruling correctly, legally and logically, should be replaced in an election demonstrates intent, in that supporters of the challenger believe that the challenger will rule differently than the sitting judge. However, if the sitting judge has ruled correctly then these supporters are supporting a candidate who will rule incorrectly, a candidate who intends to break the law by improperly evaluating it. Likelihood occurs because judges do not summarily rule on the constitutionality of an issue randomly and spontaneously; typically a petitioner must bring a suit which challenges the standing of a given law. Therefore, if an individual brings a suit and is successful, it stands to reason that the likelihood of that individual acting upon that new ruling is very high.

The only questionable element is imminence, but similar to likelihood it stands to reason that if an individual is bringing a suit against a particular law, then if that law is overturned the petitioner will act upon the ruling as soon as possible (a near-immediate effect). Therefore, it appears possible that the government would be authorized to disallow ‘money speech’ in an election against a standing incumbent judge who has not demonstrated malfeasance.

Note that ‘money speech’ would be targeted in the above example over general speech because of its breadth of contact. The ‘danger’ in a judicial election is an individual taking the bench who will make judgments that are incorrect solely due to personal or professional motivations. Only ‘money speech’ has the ability to influence enough people to elect the judicial candidate who would be inappropriate for the job. There is little reason to suspect that general speech will be able to create a sufficient level of influence. Also note that the ability of the government to restrict ‘money speech’ would only apply to judicial elections with an incumbent, as there is no existing record to judge when two non-incumbents compete.

One exception that could be discussed regarding an incumbent’s re-election is past action within the sentencing range. While judicial rulings on guilt and constitutionality are rather firm, the most subjective aspect of a judge’s role is sentencing. Some individuals may disagree with a judge who assigns penalties on the higher edge of the guidelines (5 years instead of 3 years for a 3-5 year guideline crime) or vice versa. In these situations, if ‘money speech’ can demonstrate specific instances of such behavior through explicit citations, then it would be difficult to restrict ‘money speech’ made in opposition on that premise using the Brandenburg test.

One may try to argue that ‘money speech’ should never be restricted in elections when it is based solely on personal opinion regarding likeability. Basically, ‘money speech’ could be used simply to proclaim to the public that candidate A is a ‘good guy’ and that individual or organization A likes him. The problem with this mindset is that it is very unlikely that an individual or organization would spend thousands to tens of thousands of dollars in ‘money speech’ driven only by a personal liking for the given candidate; there will be an ulterior motive.

Overall, while the argument that an individual or organization cannot be completely barred from participating in the political process through purchasing advertisements may seem logical, the First Amendment also does not guarantee unlimited speech in an environment where all individuals do not have the same opportunity for speech. Therefore, this realization logically, and more than likely legally, gives the government the ability to place a ceiling on the total ability of individuals to ‘speak’ in these types of environments. For example, the government could cap the amount of money that a given individual or corporation could spend in an election cycle at 20,000 dollars. Also, based on the general differences between those who make the law and those who interpret the law and assign punishment, monetary speech restrictions in judicial elections could be even stricter, with such spending possibly even disallowed. While some believe that the ruling in Citizens significantly curtailed the government’s ability to restrict corporate money in political activities, any interpretation that the government is unable to apply monetary caps to corporations or individuals for political activities is unethical and logically wrong.

Wednesday, November 16, 2011

Intestinal Bacteria and Obesity

Some important biochemical interactions and responses to obesity have previously been discussed here.

In recent years explanations for the sudden rise in obesity have ranged from an increasingly unbalanced internal energy budget to environmental pollution. Another accompanying explanation that is gaining support is that the type of bacteria residing in an individual’s intestinal tract matters relative to what foods an individual consumes. There is widespread belief that particular bacteria types drive certain metabolic rates and processes that have a significant effect on weight loss vs. weight retention.

The digestive process can be broken down into three stages after chewing. First, the food enters the stomach and is rendered into chyme by hydrochloric acid. Second, the chyme goes into the small intestine where a vast majority of the nutrient absorption occurs through osmosis, active transport and diffusion to nearby capillaries and eventual transport to the blood stream. Third, the indigestible and unabsorbed material passes through the large intestine where some of the indigestible material is processed (usually fermentation) by appropriate intestinal bacteria, water is reabsorbed and remaining material is packaged for excretion. It is this third element that is of particular interest here.

The human intestinal “metagenome” consists of trillions of microbes that provide enhanced metabolic capabilities by supplying enzymes the host lacks (polysaccharide metabolization), protection against pathogens (indirect mucosal defense and luminal colonization competition), immune system support and aid in gastrointestinal development and maintenance through interaction with epithelial cells.1-5 The two major elements which drive the specific populations of the metagenome in a given individual are genetics and diet. At the moment there is little that can be done regarding genetics, but the influence of diet is prevalent and that influence begins as early as infancy.1 In fact there is reason to believe that this “metagenome” is most influenced within the first few years of life and can have a significant effect on immunity development.6,7

A vast majority of intestinal bacteria belong to one of two phyla: Firmicutes and Bacteroidetes. Among these two phyla the intestinal bacteria with the largest populations are thought to be (in no particular order) the genera Bacteroides (bact.), Clostridium (firm.), Bifidobacterium (bact.), Peptostreptococcus (firm.) and Ruminococcus (firm.), with minor populations of Escherichia (proteo), Lactobacillus (firm), Enterobacter (proteo) and Enterococcus (firm) along with various methanogens.3,8,9 The parentheses identify the phylum for the particular bacteria. It must be emphasized that specifics regarding exact populations are still few and far between relative to the specific genera which make up the Firmicutes and Bacteroidetes phyla, for they contain 250 and 20 genera respectively;10 however, it is thought that Ruminococcus makes up a significant percentage of the Firmicutes phylum. On a side note, Firmicutes bacteria are typically gram-positive (outside a very small few which have pseudo membrane walls) and Bacteroidetes bacteria are typically gram-negative.

Not surprisingly, intestinal bacteria populations are not evenly distributed throughout the digestive system; each specific bacterial group has some environmental niche. Notably, higher bacterial populations are found in the lower portion of the intestinal tract vs. the upper portion. Also, the upper portion has a large percentage of aerobic bacteria vs. the lower portion having a large percentage of anaerobic bacteria, with the terminal ileum as the transition zone.7,11

The principal reason why intestinal bacteria have piqued interest in the obesity ‘epidemic’ originated from an experiment in mice which demonstrated that intestinal bacteria play an important role in energy metabolism and weight changes. The study involved a set of control mice and axenic mice (note that axenic mice are mice without any significant amounts of intestinal bacteria, i.e. germ-free mice). Under normal conditions the axenic mice, controlled for age and background, weighed about 40% less than the control mice. However, after intestinal microflora (from the distal section) derived from the control mice were colonized within the axenic mice, the weight of the axenic mice increased by 60% over a short period of time.12 The inclusion of the microflora is thought to influence weight gain through three mechanisms: increases in intestinal glucose absorption, energy extraction from indigestible foods and concomitant higher glycemia and insulinemia.12,13
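As a rough back-of-the-envelope check of what those percentages imply, assume an arbitrary 30 g control mouse; the baseline weight is an assumption, and only the 40% and 60% figures come from the study as described above:

```python
# Back-of-the-envelope arithmetic for the germ-free mouse result described above.
# The 30 g control weight is an assumed baseline; the percentages come from the text.
control = 30.0                   # g, assumed
axenic = control * (1 - 0.40)    # germ-free mice weighed ~40% less -> 18.0 g
colonized = axenic * (1 + 0.60)  # ~60% gain after colonization -> 28.8 g

print(f"Control:            {control:.1f} g")
print(f"Germ-free (axenic): {axenic:.1f} g")
print(f"After colonization: {colonized:.1f} g (~{colonized / control:.0%} of control)")
```

Under these assumed numbers the colonized mice end up within a few percent of the controls, which is why the result is read as the microflora restoring normal weight gain.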

Changes in the suggested mechanisms from above are thought to occur through influence on the action of two signaling proteins: carbohydrate response element-binding protein (ChREBP) and liver sterol response element-binding protein type-1 (SREBP-1), which in turn influence intestinal fasting-induced adipocyte factor [Fiaf; a.k.a. angiopoietin-like protein 4].14 When Fiaf is expressed it inhibits lipoprotein lipase activity, which increases the probability that fatty acids are released from triacylglycerols; these fatty acids can then be absorbed by muscles and adipose tissues to be used as energy (basically the fatty acids are consumed). If Fiaf is not expressed then lipoprotein lipase activity increases, increasing the probability of more fat synthesis. Germ-free mice seem to avoid obesity due to excess food consumption, commonly called diet-induced obesity, through three independent mechanisms: increased levels of Fiaf, increased levels of adenosine monophosphate-activated protein kinase and reduced food consumption.14

Since the original study, more studies have demonstrated differing intestinal bacteria populations in individuals of various weights. Most studies have developed support for a similar pattern between the obese and the non-obese in that obese mice have a higher population of Firmicutes relative to Bacteroidetes.15-18 However, other studies have demonstrated no changes in these bacterial populations or even the reverse, with Bacteroidetes at higher population than Firmicutes.19,20 Thus the principal question becomes: does one type of bacteria protect against obesity in some way, or is it simply preferentially selected in non-obese individuals while the other type is preferentially selected in obese individuals?

Combine these elements with the fact that the Firmicutes/Bacteroidetes ratio drops when obese individuals lose weight (assuming no dramatic increase in fiber consumption) and the Firmicutes population could be tied to fat, possibly through lipid production and storage. One study did demonstrate specific enzymatic activity in obese individuals associated with gram-positive bacteria (Firmicutes) over gram-negative bacteria (Bacteroidetes).21,22

The problem with fully determining the role of the Firmicutes/Bacteroidetes relationship is the contrasting results. For example, some studies report that the Bacteroidetes population increases from 3% to 15% with a hypocaloric diet in obese individuals while the Firmicutes population does not undergo significant changes.13,19 If this is accurate, it indicates that Firmicutes growth is not augmented by increased calories/fat, but instead Bacteroidetes growth is inhibited by those elements in some way. However, others report a decrease in Firmicutes population with weight loss and a decrease in Bacteroidetes (50% reduction) in obese individuals vs. non-obese.19

The issue with the Firmicutes/Bacteroidetes ratio may not be the change in the ratio, but instead the change in absolute population. For example, in obese individuals what drives the change in the ratio: a decrease in Bacteroidetes population, an increase in Firmicutes population, or do both change in concert with each other? If an increase in the Firmicutes population is the dominating factor then it could be possible that Firmicutes respond to non-insoluble fiber elements. However, if a decrease in the Bacteroidetes population is the dominating factor then it could be possible that Bacteroidetes reduce fat absorption.
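A toy numerical example (all population counts are invented for illustration) shows why the ratio alone cannot distinguish between these scenarios:

```python
# Toy counts showing that the same Firmicutes/Bacteroidetes ratio shift can arise
# from very different absolute population changes. All numbers are invented.
baseline_firm, baseline_bact = 90, 10  # assumed 'obese' starting populations
print(f"Baseline ratio: {baseline_firm / baseline_bact:.1f}")  # 9.0

scenarios = {
    "Bacteroidetes grows, Firmicutes steady":   (90, 30),
    "Firmicutes shrinks, Bacteroidetes steady": (30, 10),
    "Both change together":                     (60, 20),
}
for label, (firm, bact) in scenarios.items():
    print(f"{label}: ratio {firm / bact:.1f}")  # 3.0 in every case
```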

Other results have shown that axenic mice gain more weight when colonized with microbiota from obese mice as opposed to lean mice.15 This result leads to the question of whether Firmicutes are able to extract more energy from a conventional diet than Bacteroidetes, or whether Firmicutes drive greater amounts of fat storage than Bacteroidetes. The second possibility sees support in that decreases in Bifidobacterium in mice fed a high-fat diet also correlated with an increase in lipopolysaccharide (LPS) concentrations.23

The ‘battle’ between Firmicutes and Bacteroidetes begins at birth. The most influential element in early childhood appears to be the duration of time an infant spends consuming breast milk over solid foods and formulas.1 Based on comparisons of Firmicutes and Bacteroidetes populations between infants who consume breast milk and infants who consume formula, infants that consume breast milk longer have lower Firmicutes/Bacteroidetes ratios and seem to have lower probabilities of future obesity1,24-26 (Gillman et al. 2001, Kalies et al. 2005, Mayer-Davis et al. 2006). Examination of different populations of infants in Africa and Europe supported this conclusion of higher Bacteroidetes populations and lower Firmicutes populations in children breastfed longer. The rationale behind the difference between African and European children is that African infants had to be breastfed for additional time due to financial limitations or limited availability of formula.

One of the major reasons for this developmental difference seems to be the population growth of Lactobacilli and Bifidobacteria in breastfed infants vs. formula-fed infants, who fail to develop these two types of bacteria in significant proportions.27-29 The colonization of Bifidobacteria is thought to be especially important in the maturation of the intestinal lining and localized lymphoid tissue, and delayed Bifidobacterial colonization increases the probability of a variety of gastrointestinal and/or allergic conditions.30-32

Originally it was thought that the bacteria present in breast milk were skin contaminants, but recent testing has developed support for the idea that the bacteria are, not surprisingly, derived from the maternal intestine and follow the entero-mammary pathway to the mammary gland.33 Also, no Bifidobacteria have ever been isolated from skin samples from women who have Bifidobacteria in their breast milk.30 The derivation of these bacteria from the mother’s own intestinal system may provide insight into why obese mothers have children that are pre-disposed to becoming obese and why fit mothers have children with resistance against obesity, as the mother’s bacterial populations heavily influence the populations in the infant.

Another important association between intestinal bacteria and obesity is the role of interspecies hydrogen transfer from hydrogen-producing bacteria to hydrogen-consuming methanogens. Non-obese individuals have very small methanogen-based intestinal populations whereas obese individuals have larger populations.10 This population shift has also been associated with genetically homozygous obese mice (ob+/ob+) over heterozygous mice (ob+/ob-) and homozygous non-obese mice (ob-/ob-).15 The association with genetically obese mice over mice that have become obese through food consumption supports the notion that methanogen population influences weight rather than methanogens being selected for based on weight. Basically, the methanogen population expands first, before one gains significant weight. The importance of this relationship is best demonstrated by understanding the biochemical process involved in the formation of fatty acids in the body.

Methanogens like Methanobrevibacter smithii enhance fermentation efficiency by removing excess free hydrogen and formate in the colon. A reduced concentration of hydrogen leads to an increased rate of conversion of insoluble fibers into short-chain fatty acids (SCFAs).10 Propionate, acetate, butyrate and formate are the most common SCFAs formed and absorbed across the intestinal epithelium, providing a significant portion of the energy for intestinal epithelial cells and promoting survival, differentiation and proliferation, ensuring an effective stomach lining.3,10,34 Butyric acid is also utilized by the colonocytes.35 Formate can also be directly used by hydrogenotrophic methanogens, and propionate and lactate can be fermented to acetate and H2.10

The Methanobrevibacter smithii population in non-obese individuals is very small on an absolute level whereas the population in obese individuals is much higher (gastric). This result is supported by a metagenomic study which identified more Archaea gene fragments in ob+/ob+ mice than in leaner heterozygous ob+/ob- or ob-/ob- mice.15 Overall the population of Archaea in the gut, largely attributed to Methanobrevibacter smithii, is tied to obesity with the key factor being availability of free hydrogen. If there is a lot of free hydrogen then there is a higher probability of a large Archaea population; otherwise there is a very low population of Archaea because there is a limited ‘food source’.

Interestingly, anorexic individuals also see an increase in methanogens (Methanobrevibacter) over non-obese healthy individuals.21 This increase in anorexic individuals seems to make sense, as fermentation rates probably increase in an effort to maximize energy extraction from reduced food intake. Increased fermentation rates would increase H2 concentrations, resulting in increased methanogen populations.

Other investigators have looked at how receptor interaction with intestinal microbes influences weight. A promising avenue of research is Toll-like receptor (TLR) 5, a transmembrane protein expressed in the intestinal mucosa that recognizes bacterial flagellin.35 Analysis of TLR5 knockout mice vs. controls demonstrates a 20% greater body mass in the knockouts, a weight gain which corresponds to an increase in visceral fat.35 This additional body mass is thought to occur through greater food consumption (knockout mice consume 10% more food than controls), which seems to lead to greater fat deposit formation. However, despite this increased food consumption there were no significant changes in short-chain fatty acid concentrations between knockouts and controls.35 Also, due to mixed results it is difficult to draw any conclusions regarding differing influences on orexigenic or anorexic hormones between knockouts and controls.35

Elimination of intestinal bacteria through broad-spectrum antibiotic treatment supported the contention that intestinal bacteria and TLR5 have an interactive relationship in controlling an individual’s weight, as germ-free TLR5 knockout mice did not suffer the same weight gain as their non-germ-free TLR5 knockout kin.35 The implantation of the microbiota from a TLR5 knockout mouse into a previously germ-free non-knockout mouse led to the development of a phenotype similar to the TLR5 knockout in the germ-free mouse.35 This result suggests that there are certain bacteria that interact with TLR5, because despite the non-knockouts having the necessary receptors they still developed attributes similar to the knockouts; thus the microbiota of the knockouts do not appear to contain the bacteria required for activation. This lack makes sense because without TLR5 receptors it stands to reason that bacteria which activate TLR5 would be selected against.

Based on the information above it appears that activation of TLR5 somehow reduces weight gain. This result occurs either through interaction between TLR5 and orexigenic and anorexic hormones (which would influence appetite) or through a reduction in fat deposit synthesis from soluble elements. Due to the results from germ-free knockouts and the mixed hormone results, the second possibility seems viable. For example, interaction between Bacteroidetes and TLR5 could lead to the inhibition of lipoprotein lipase activity (possibly through increased expression of Fiaf). This action would result in less fat storage and less overall weight gain.

If the above contention were true, this action of changing Fiaf expression probably has a positive feedback effect in lean individuals and a negative feedback effect in obese individuals. For example, as individuals lose weight the Bacteroidetes population increases, which would lead to more TLR5 activation and less fat storage. However, as individuals gain weight the Bacteroidetes population decreases, which would lead to less TLR5 activation and increase the probability of greater fat storage.

One of the big remaining questions is how the populations of Bacteroidetes and Firmicutes change to influence weight. One possibility is that while both Bacteroidetes and Firmicutes assist in fermentation, perhaps Bacteroidetes are more responsive to complex sugars and other complex carbohydrates and Firmicutes are more responsive to simple sugars. Usually obese individuals consume lots of fat and simple sugars, which are converted more easily to fat. The consumption of these types of foods would select for Firmicutes over Bacteroidetes. When an individual loses weight it typically involves changes in diet, largely a reduction in the amount of simple sugars and fats. This change could lead to a reduction in Firmicutes and, due to less competition from the Firmicutes, a corresponding increase in Bacteroidetes.

Another possibility for the increase in Bacteroidetes is that weight loss (excluding surgical intervention) typically involves a large amount of exercise. This additional exercise would create larger demands for energy consumption, both from currently stored fat and from newly consumed food. Such a change should reduce the amount of fat storage, possibly involving increased expression of TLR5, which could increase the population of Bacteroidetes if Bacteroidetes do indeed activate TLR5.

Overall it certainly appears that Firmicutes and Bacteroidetes play an important role in controlling weight. This influence seems to stem from two different mechanisms: overall food consumption and the extraction of energy from that food, and the probability of fat storage versus fat consumption. While the exact mechanisms have not been discovered, Bacteroidetes appear to favor lean bodies and Firmicutes appear to favor obese bodies. Whether or not there is an evolutionary element is unclear. Breastfeeding also appears to be an important early element in driving either a lean or obese future. Due to the potential feedback between fat content and intestinal bacteria populations like Firmicutes, doping individuals with beneficial bacteria like Bifidobacteria may seem like a good idea, but the best option for weight loss remains the old standby tactics of a high quality diet with insoluble fibers and exercise.

--
Citations:

1. Filippo, C, et al. “Impact of diet in shaping gut microbiota revealed by a comparative study in children from Europe and rural Africa.” PNAS. 2010. 107(33):14691-14696.

2. Backhed, F, et al. “Host-bacterial mutualism in the human intestine.” Science. 2005. 307:1915-1920.

3. Son, G, Kremer, M, and Hines, I. “Contribution of Gut Bacteria to Liver Pathobiology.” Gastroenterology Research and Practice. 2010. doi:10.1155/2010/453563.

4. DiBaise, J, Young, R, and Vanderhoof, J. “Enteric microbial flora, bacterial overgrowth and short bowel syndrome.” Clin Gastroenterol Hepatol. 2006. 4(1):11-20.

5. Gorbach, S. “Probiotics and gastrointestinal health.” Am J Gastroenterol. 2000. 95(1 suppl):S2-S4.

6. Palmer, C, et al. “Development of the human infant intestinal microbiota.” PLoS Biol. 2007. 5(7):e177. doi:10.1371/journal.pbio.0050177.

7. Berg, R. “The indigenous gastrointestinal microflora.” Trends Microbiol. 1996. 4(11):430-435.

8. Guarner, F and Malagelada, J. “Gut flora in health and disease.” Lancet. 2003. 361(9356):512-519.

9. Moore, W and Moore, L. “Intestinal floras of populations that have a high risk of colon cancer.” Applied and Environmental Microbiology. 1995. 61(9):3202-3207.

10. Zhang, H, et al. “Human gut microbiota in obesity and after gastric bypass.” PNAS. 2009. 106(7):2365-2370.

11. Rolf, R. “Interactions among microorganisms of the indigenous intestinal flora and their influence on the host.” Rev Infect Dis. 1984. 6(suppl 1):S73-S79.

12. Backhed, F, et al. “The gut microbiota as an environmental factor that regulates fat storage.” PNAS. 2004. 101(44):15718-23.

13. Cani, P, et al. “Role of gut microflora in the development of obesity and insulin resistance following high-fat diet feeding.” Pathologie Biologie. 2008. 56:305-309.

14. DiBaise, J, et al. “Gut Microbiota and Its Possible Relationship With Obesity.” Mayo Clin Proc. 2008. 83(4):460-469.

15. Turnbaugh, P, et al. “An obesity-associated gut microbiome with increased capacity for energy harvest.” Nature. 2006. 444(7122):1027-31.

16. Ley, R, et al. “Obesity alters gut microbial ecology.” PNAS. 2005. 102:11070-11075.

17. Armougom, F and Raoult, D. “Use of pyrosequencing and DNA barcodes to monitor variations in Firmicutes and Bacteroidetes communities in the gut microbiota of obese humans.” BMC Genomics. 2008. 9:576.

18. Guo, X, et al. “Real-time PCR quantification of the predominant bacterial divisions in the distal gut of Meishan and Landrace pigs.” Anaerobe. 2008. 14:224-228.

19. Ley, R, et al. “Microbial ecology: Human gut microbes associated with obesity.” Nature. 2006. 444:1022-1023.

20. Duncan, S, et al. “Human colonic microbiota associated with diet, obesity, and weight loss.” Int J Obes. 2008.

21. Armougom, F, et al. “Monitoring Bacterial Community of Human Gut Microbiota Reveals an Increase in Lactobacillus in Obese Patients and Methanogens in Anorexic Patients.” PLoS ONE. 2009. 4(9):e7125.

22. Turnbaugh, P, et al. “A core gut microbiome in obese and lean twins.” Nature. 2009. 457:480-484.

23. Cani, P, et al. “Selective increases of bifidobacteria in gut microflora improves high-fat diet-induced diabetes in mice through a mechanism associated with endotoxemia.” Diabetologia. 2007. 50(11):2374-83.

24. Gillman, M, et al. “Risk of overweight among adolescents who were breastfed as infants.” JAMA. 2001. 285:2461-2467.

25. Kalies, H, et al. “The effect of breastfeeding on weight gain in infants: results of a birth cohort study.” Eur J Med Res. 2005. 10:36-42.

26. Mayer-Davis, E, et al. “Breast-feeding and risk for childhood obesity: does maternal diabetes or obesity status matter?” Diabetes Care. 2006. 29:2231-2237.

27. Balmer, S and Wharton, B. “Diet and faecal flora in the newborn: breast milk and infant formula.” Arch Dis Child. 1989. 64:1672-1677.

28. Favier, C, De Vos, W, and Akkermans, A. “Development of bacterial and bifidobacterial communities in feces of newborn babies.” Anaerobe. 2003. 9:219-229.

29. Haarman, M and Knol, J. “Quantitative real-time PCR assays to identify and quantify fecal Bifidobacterium species in infants receiving a prebiotic infant formula.” Appl Environ Microbiol. 2005. 71:2318-2324.

30. Martín, R, et al. “Isolation of Bifidobacteria from Breast Milk and Assessment of the Bifidobacterial Population by PCR-Denaturing Gradient Gel Electrophoresis and Quantitative Real-Time PCR.” Applied and Environmental Microbiology. 2009. 75(4):965-969.

31. Arvola, T, et al. “Rectal bleeding in infancy: clinical, allergological, and microbiological examination.” Pediatrics. 2006. 117:e760-e768.

32. Mah, K, et al. “Distinct pattern of commensal gut microbiota in toddlers with eczema.” Int Arch Allergy Immunol. 2006. 140:157-163.

33. Perez, P, et al. “Bacterial imprinting of the neonatal immune system: lessons from maternal cells?” Pediatrics. 2007. 119:e724-e732.

34. Luciano, L, Hass, R, Busche, R, Engelhardt, W V, and Reale, E. “Withdrawal of butyrate from the colonic mucosa triggers ‘mass apoptosis’ primarily in the G0/G1 phase of the cell cycle.” Cell and Tissue Research. 1996. 286(1):81-92.

35. Cummings, J and Macfarlane, G. “The control and consequences of bacterial fermentation in the human colon.” Journal of Applied Bacteriology. 1991. 70:443-459.

36. Vijay-Kumar, M, et al. “Metabolic Syndrome and Altered Gut Microbiota in Mice Lacking Toll-Like Receptor 5.” Sciencexpress. 2010. doi:10.1126/science.1179721.

Friday, November 11, 2011

A Qualitative Discussion Regarding the Development of an Air Capture Complex

The reality of the situation involving global warming is that both reduction in carbon emissions and carbon emission remediation will be required to significantly reduce detrimental damage to the environment, damage that will influence the future survival rate of humanity. For carbon emission remediation two elements take precedence: effectiveness and speed. Effectiveness is rather self-explanatory; if the process is unable to remove more CO2 from the air than is added over the lifecycle of the process then such a remediation strategy is not worth exploring. Speed is necessary because there is already a dangerous amount of CO2 in the atmosphere, emission mitigation is not proceeding nearly fast enough, and natural sinks cannot remove CO2 at the required rate.

Therefore, although there are other more cost-effective ambient air capture techniques that rely on more natural processes (planting trees or synthesizing bio-char), these processes are significantly slower than technological methods. Speed is important not just on a general level but also on a feedback level: during the process of removing the necessary CO2, the now more acidic ocean could lose a significant amount of its sink capacity as it begins out-gassing previously absorbed CO2 back into the atmosphere due to the change in the concentration gradient, so the process must be fast enough to accommodate any further reduction in sink capacity. The threat of permafrost melt will also increase atmospheric CO2 concentrations, which will need to be managed beyond the mitigation of directly derived human emissions.

Most economists are troubled by the calculated theoretical costs of technology-driven air capture ($400-600 per ton of CO2; no large-scale system has been empirically tested yet), and they should be, but as explained here the options available to the global community to address global warming consequences are quite limited. Realistically, due to the deficiencies of natural sinks, there are only two choices: rapidly deploy both emission reduction programs and direct air capture technology, or prepare to live in a much harsher environment that should reduce life expectancy. The reason for the parameters of the first choice is that natural sinks (land and ocean) will be unable to remove enough CO2 from the atmosphere to avoid significant detrimental environmental effects, even in a scenario of rapid emission reduction, because there is already too much CO2 and other greenhouse gases in the atmosphere.

However, these air capture units must be efficient, otherwise they will lose their speed advantage over augmenting natural sinks. Therefore, there are some important operational issues that must be addressed before deployment. The first major issue is water use. Regardless of the system, the chemical reaction utilized to absorb CO2 from the atmosphere requires water. In most designs the water is supposed to act as a catalyst, but due to the open-air nature of the reaction system a significant percentage of the water (how much depends heavily on overall process design) is lost to the atmosphere as water vapor, making water recovery more difficult.

Another important consideration is that most published costs for air capture are gross costs per ton of CO2 captured because the estimates do not take into consideration what energy source is utilized to power the capture unit. A general background regarding where energy is utilized in most system designs can be found here. If a trace-emission source is utilized (nuclear, geothermal, wind or solar) then the process can reasonably be treated as 90-99% efficient (thus the net cost will be roughly 1.01-1.1 times the gross cost). However, if a fossil fuel source is utilized then the net cost will be higher than the gross cost (largely dependent on the exact fuel mix), most of the time by a factor of at least 1.3-1.5. Not only does the use of a fossil fuel energy source increase per-unit costs and overall long-term costs, it also reduces the overall speed of CO2 removal, making technological air capture less attractive versus natural sources. Therefore, it is important that all air capture units be powered by trace-emission sources.
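As a quick sketch of the gross-versus-net arithmetic above (the re-emission fractions below are illustrative assumptions, not measured values): if the energy powering a capture unit re-emits a fraction f of a ton of CO2 for every ton captured, the cost per net ton removed scales by 1/(1 - f).

```python
# Gross-to-net cost adjustment for an air capture unit.
# The re-emission fractions are illustrative assumptions, not measured values.

def net_cost_per_ton(gross_cost, reemitted_fraction):
    """Cost per net ton removed when the power source re-emits
    `reemitted_fraction` of a ton of CO2 per gross ton captured."""
    return gross_cost / (1.0 - reemitted_fraction)

gross = 500.0  # $/gross ton, mid-range of the $400-600 estimate above
for source, f in [("trace-emission source (e.g. nuclear, geothermal)", 0.05),
                  ("fossil fuel mix", 0.30)]:
    print(f"{source}: ${net_cost_per_ton(gross, f):.0f} per net ton "
          f"({1 / (1 - f):.2f}x the gross cost)")
```

With a 1-10% re-emission fraction the multiplier lands in the 1.01-1.1 range quoted above; at roughly 25-33% it lands in the 1.3-1.5 range.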

The final consideration is developing an endpoint for the captured CO2. A number of air capture developers have dreamt of using the captured CO2 as a marketed product, either to enhance oil recovery, in a methane or hydrocarbon-based fuel, or in commercial industry (soda, etc.). Unfortunately the first two options add CO2 back to the atmosphere, undermining the overall extraction efficiency of these units and making them less desirable relative to expanding natural sources. The commercial option can absorb only a tiny fraction of the CO2 that needs to be captured (gigatons of CO2). The best means to address this glut of CO2 appears to be sequestering it underground.

With all of these additional considerations to take into account it is not wise to simply build these air capture units at random. These units clearly need to be constructed in an orderly and cohesive manner perhaps even in a localized autonomous network. This network needs to contain a water source, a power source and a means of utilizing the captured CO2 in addition to having recycling pathways for all necessary materials used in the selected air capture reactions.

The most important element in such an air capture complex is selecting a power source. Recalling that volumetric speed is one of the principal reasons for pursuing technological air capture, the selected power source will need to be reliable with as little downtime (intermittency) as possible. This requirement limits the viability of wind or solar power, as those energy sources cannot reliably power the proposed complex 24 hours a day, 7 days a week.

Now one could argue that wind or solar would be appropriate with storage, but the general lack of storage options and of an empirical track record hurts the viability of this response. For example, Solar Tres uses molten nitrate salt as a storage medium, but it is difficult to conclude that enough storage could be accumulated on a consistent basis to support a 24-7 operational period. Remember, energy can only be stored if it is in excess, which will not be true most of the time if the solar panels are already providing power to various elements in the complex. Pumped hydro shares the same problem, as well as limiting the location of the complex because of its required topography. Also, the addition of a storage medium would increase the cost associated with capture.
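A rough sizing exercise helps show why intermittency is such a burden; the load and capacity factor below are assumptions chosen only for illustration.

```python
# Back-of-the-envelope sizing for a solar-powered complex that must run 24-7.
# The load and capacity factor are illustrative assumptions only.

load_mw = 50.0                 # assumed continuous power demand of the complex
solar_capacity_factor = 0.25   # assumed annual average output fraction

# Nameplate solar needed just to meet the average energy demand over a year.
nameplate_mw = load_mw / solar_capacity_factor

# Storage needed to ride through a single ~14-hour sunless stretch.
overnight_storage_mwh = load_mw * 14

print(f"Nameplate solar required: {nameplate_mw:.0f} MW")
print(f"Storage to cover one night: {overnight_storage_mwh:.0f} MWh")
```

Even under these generous assumptions the panels must be oversized several times over and paired with hundreds of megawatt-hours of storage, which is exactly the excess-energy problem described above.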

Without being able to rely on solar or wind power due to consistency concerns, viable energy options fall to nuclear power and geothermal. Nuclear power in a conventional plant is a little tricky because the provided power would be too much for the needs of the complex, thus if nuclear power was used in this way it would have to come from an outside plant, brought in by transmission lines. Another nuclear option may be to utilize a small modular unit design for a given complex.

Geothermal power is an attractive option when considering an Enhanced Geothermal Systems (EGS) model, but not a conventional model. A conventional geothermal model is similar to utilizing a pumped hydro storage system in that it limits where the complex can be constructed. An EGS model is also attractive as a secondary means to utilize some of the captured CO2 as a supercritical fluid in the system itself. So it appears that from the perspective of the complex itself the best power sources in decreasing order of effectiveness are EGS > Nuclear > Solar > Wind.

There are two main strategies for developing a water source: desalination or atmospheric capture of water vapor. The advantage of an air capture complex is that both of these strategies can be utilized because the air capture unit itself does not rely on natural wind patterns, but creates its own direct air flow to drive ambient air into the reaction section. The only boundary condition for the unit is tied to the power source utilized. This flexibility advantage is useful because the air capture complex requires a system to provide the initial water and would be heavily aided by a system which recycles water (reduces loss from air absorption).

Therefore, the complex could be constructed near a source of salt water, use desalination to provide the initial water supply, and utilize atmospheric water condensers to limit the amount of water required from desalination after the initial reactant amount. Note that tapping a continuous water source (such as desalination) may not be necessary as long as a sufficient amount of water is made available at the beginning of the process, until the atmospheric collectors can successfully begin the recycling process; it just appears to be an easier overall strategy.

Desalination has always involved two major concerns: the energy required for the process itself and determining an endpoint for the brine. The energy requirement is not a significant issue if using one of the two best-fit energy sources for the complex. Under normal circumstances the brine is a very significant issue with environmental repercussions, as returning it to the sea (standard practice) is thought to have serious detrimental effects in the localized region where it is returned. Other than injecting it back into the source, effective ideas to address the leftover brine are few and far between.

Similar to the absorbed CO2, the total raw amount of salt from the brine that would be generated by such a desalination project is so large that using it in commercial endeavors does not appear viable. Some have proposed ammoniating the brine and using it to increase the volume of CO2 capture. The concern with that strategy is providing the necessary ammonia to react with the brine to create a consistent and worthwhile reactant volume. Another option that has been floated is incorporating the brine into a set of molten salts that would be used in either nuclear power reactors or batteries. However, the viability of such an idea is still questionable.

It is understandable that if the economic impact of developing such a complex were quantitatively calculated it would be high; however, the nature of the complex is that all of these elements will be required in the future based on the current environmental-use path humans have embarked upon, thus the cost is based not on luxury but on necessity. The idea behind such a complex is actually to lower costs by tying many of the air capture units into the same required operational elements, making the technological air capture strategy more economical and freeing money for investment in other environmentally necessary avenues like emission reduction. Overall, while the manifestation of such a complex may not be exactly as described in this blog post, the reality is that such a complex will be needed in one form or another.

The figure below is a crude visual representation of what the complex may look like, without mileage delineations between units, as such spacing needs to be modeled to maximize the efficiency of each operational unit.


Wednesday, November 9, 2011

Third Party Voting in the 2012 Presidential Election

The mindset that individuals should vote for a third party candidate over Obama due to the failings of Obama to live up to his campaign promises, especially with respect to the environment, is foolish. The irrationality of this mindset is demonstrated in two parts.

The first element involves the value of single issue voting. Third party candidacy largely distinguishes itself through a strong differing opinion on a single political issue, be it in the realm of the economy, environment, judiciary, foreign affairs, etc. However, the problem with focusing on a single issue, for both the candidate and the voter, is that rarely is an issue an island unto itself. Therefore, one must analyze the entire policy platform put forth by the third party candidate to see if there are contradictions, which would eliminate the usefulness of the single-issue position. Unfortunately for most supporters third party candidates can rarely be characterized as ‘exactly like that major party candidate except on this one issue’.

Also, due to the emotion and intensity that can surround these single issues and their adherents, most single-issue voters erroneously convince themselves that there are more individuals who think as they do, and who will act on this single issue, than there actually are, warping the perceived probability of success. One reason for this mindset is that most single-issue voters frequent environments that act as echo chambers of sorts for their opinion on a particular issue, catalyzing the belief that more people share the belief in question with a similar level of commitment and passion.

Another aspect of the emotional element of single issue voting is respect, and the lack thereof, between certain groups and a given political party. For example, one psychological current that has grown in the environmental movement is a feeling of neglect, a sense that Democrats ignore environmentalists due to a belief that they will always fall in line and vote Democrat because the alternative (Republican) is worse. The almost comical nature of this element is that some believe that not voting for Obama will ‘show him that I matter’.

Such a belief is foolish because the 2012 election can realistically only play out one of two ways: 1. Obama wins despite reduced support from environmentalists; in this scenario Obama, and perhaps the Democratic Party, could view the power structure of environmentalists with less respect because he still won even with reduced support; 2. Obama loses; so what lesson did Obama learn that can actually be applied in the future? He may lament having neglected environmentalists (from an ego standpoint), but how is that relevant if he is not going to run for President (or for that matter any political office) again, thus the ‘lessoned learned’ cannot be applied because he will no longer be in a position to apply it.

The sad thing is that this neglect that environmentalists feel is self-inflicted as they have yet to produce a viable alternative to give Democrats pause, a real power vacuum threat. The Green Party is a complete joke and realistically has done more to damage the environment than to benefit it (the 2000 election springs to mind). Even if the idea is just to ‘teach the Democratic Party a lesson’ not necessarily Obama, without a viable political party to oppose both the Democrats and Republicans, all individuals who feel that their vote is being ‘taken for granted’ will essentially either continue to cast a ‘taken for granted’ vote or will instead cast a ‘thrown away vote’.

The previous paragraph flows well into the second element, a lack of preparation and organization required for victory. Pertaining to the most viable scenario as stated above, third party candidacy in the 2012 Presidential Election, what candidate is a serious challenger to both the Democratic and Republican nominees? Winning the Presidency demands name recognition, money and logistical support. What third party candidate has these attributes in such capacity to rival the engines of the two major political parties? A secondary influencing factor is the non-democratic nature of a Presidential election where whether or not a vote counts is determined by where you live.

The Electoral College, a relic from a bygone era, complicates voting for POTUS by heavily penalizing those individuals who lack the above three elements. The best example of this detriment was seen in 1992, when Ross Perot garnered 18.9% of the total popular vote yet received 0 electoral votes. Therefore, due to the role of the Electoral College in electing the POTUS, Mr. Perot was no closer to winning the presidency with 18.9% of the popular vote than he would have been with 0.4%. Do those who wish to challenge Obama and the Republican candidate in the 2012 election with a third party candidate actually believe that they could generate a better result than that of 1992 in the current environment? If so, what leads them to draw such a conclusion and does objective analysis destroy the ‘rationality’ behind it?

The simple unassailable fact is that this late in the game there is little reason to believe in the election of an individual not from one of the two major political parties. Some third party voters attempt to make a stand on the grounds of morality. This stance is largely aimed at those who claim to be voting for a particular candidate whom they classify as ‘the lesser of two evils’. Such a classification signifies that the voter is not in agreement with that individual’s platform, but is instead concerned about his/her opponent’s platform. Third party voters believe that these individuals are doing a disservice to themselves and to their country by not treating their limited access to power (typically voting for representatives once every two years) with more respect; basically, these voters should be voting for the candidate they actually want and believe will perform well, not a candidate who may not do the job well but has major party backing. The ‘moral’ third-party voters believe that a vote for the ‘lesser of two evils’ is still a vote for evil.

The problem with the morality argument is it is an abstraction. The mindset that one was not a coward and upheld his/her morals when voting is of little rational comfort if ‘the greater of two evils’ wins the election and begins to systematically lead policy completely away from those morals. Only one of incredible arrogance (or stupidity) would consider such a scenario a victory. In short these third party voters seem to neglect the reality that within this ‘classification’ mindset, evil is going to win the election, thus do you want less evil or more evil?

If third party supporters truly want to make headway they need to start in the non-presidential election cycles. They should nominate candidates for the U.S. House of Representatives and Senate on the argument that both major political parties have failed to advance this country in a positive direction, thus new ideas and perspectives are required to achieve such a goal. The process of electing these individuals will be easier during non-presidential years due to smaller voter turnout as well as less money being spent in the political arena. Then, after these individuals have demonstrated some measure of success over the two years between elections, supporters should use these successes as a means to introduce the advantage of having a POTUS from the given third party of choice.

Overall in the current environment any individual choosing to vote for a third party candidate because he/she does not believe in Obama is acting akin to the unfortunate reality associated with third party candidates, ‘throwing their vote away’; when one’s only power in the U.S. ‘democracy’ comes once every two years, why waste it sending a message that does not have enough volume to be heard?

Friday, October 28, 2011

A Change in the Supreme Court Environment

When the U.S. Constitution was crafted, one of the key components was to ensure that no branch of the government garnered too much power. A neutral judiciary was an essential element of this balance. The original intended purpose of the judiciary was to determine whether the passage and/or enforcement of a specific law violated the Constitution, a role officially ascribed in Marbury v. Madison. In this vein of judicial review, the personal viewpoints of the judiciary were to remain as muted as possible, ruling only on how the law and the Constitution interact. While rooting out personal opinion entirely is unrealistic, because perception and interpretation are influenced by personal opinion, personal opinion should not be the principal driver in determining how a justice rules in a given case.

Unfortunately, in the 21st century neutral and objective interpretation has given way to personal belief and abstraction in the judiciary, especially in the U.S. Supreme Court. Too often the selection of a U.S. Supreme Court justice revolves around political affiliation rather than legal record and qualification. Also, members of Congress affiliated with a political party different from that of the President almost automatically oppose any candidate offered for confirmation to the U.S. Supreme Court; this opposition frequently arises not because they believe the nominated individual is unqualified, but because they disagree with the way the nominee will rule once on the court, as those rulings will be contrary to their own personal and/or political beliefs.

That viewpoint highlights the problem. If the nominated justice would rule properly, then the member of Congress should have no problem with the ruling even if it conflicts with his/her personal beliefs because the Constitution is bigger than any one individual. If the nominated justice would rule improperly, believing his/her viewpoint to be bigger than the Constitution then the individual should never have been nominated in the first place and the President should be condemned for it. Thus the question is why does it appear that U.S. Supreme Court justices are following their personal opinions over the law?

The law is fairly similar to math in its application (heck, almost everything is similar to math, but especially the law) in that there are very few correct solutions/interpretations, with one usually being superior in accuracy to the others. For example, a typical court case can be equated to math by breaking it down into an equation: say X + Y = 7, where X and Y are non-negative integers. Clearly in this situation X and Y only have a limited number of solutions, but there are multiple solutions. However, court cases rarely exist in a vacuum; there is precedent and other legal realities that need to be considered. When properly interpreted, these other elements apply additional required conditions to the equation, such as X > 3.

With the additional condition(s) most other solutions become incorrect and only a few of the possible solutions to the equation remain valid, solutions that typically do not differ significantly from one another. With such a deterministic-type flow it is difficult to rationalize why justices would come to an incorrect solution. Of course one or two may decide differently by interpreting the precedent as X = 3 instead of X > 3. However, 5-4 decisions, especially when the same groups of individuals find themselves on the same side of the issue almost all of the time, challenge this differing-interpretation explanation. How is it that, with all of the possibilities and the different ways of coming to a given solution, the same two groups almost always end up on the same side of a given issue when 5-4 decisions are rendered?
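The analogy is easy to make concrete: enumerate the solutions to the toy ‘case’ and watch how the ‘precedent’ condition prunes them. (This is purely an illustration of the argument above, not a claim about any real case.)

```python
# The court-case-as-equation analogy made concrete.

# All non-negative integer solutions to X + Y = 7 (the case in a vacuum).
solutions = [(x, 7 - x) for x in range(8)]
print(len(solutions), "solutions:", solutions)

# Applying the 'precedent' condition X > 3 prunes most of them away.
constrained = [(x, y) for (x, y) in solutions if x > 3]
print(len(constrained), "remain:", constrained)

# A different reading of precedent (X == 3 rather than X > 3) leaves just one.
alternate = [(x, y) for (x, y) in solutions if x == 3]
print(len(alternate), "remains:", alternate)
```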

That uniformity of frequency is the problem with the modern 5-4 decision. Suppose you have nine justices A, B, C, D, E, F, G, H and I. 5-4 decisions would not be viewed as a significant problem if the variance of determination routinely differed between cases. For example in case 1 justices A, B, C, F and H form the majority decision, in case 2 justices A, E, C, G and I form the majority decision and in case 3 justices B, F, G, H and I form the majority decision. In this scenario the majority decision is formed by a variety of justices, there are no ‘groups’.

However, a problem arises if instead in case 1 justices A, B, C, D and E make the majority decision, in case 2 justices E, F, G, H and I make the majority decision and in case 3 justices A, B, C, D and E make the majority decision, and so on and so forth. This ‘group-think’ mentality creates a dangerous precedent where the same individuals view the law in the same way, which significantly limits the perception that these individuals are actually looking at the law and not relying on other elements of their personalities and belief structures to lead them to conclusions about how to rule on a given case.

The sad state of affairs is that 5-4 decisions in the modern U.S. Supreme Court, due to this group-think mentality, appear to have political beliefs rather than actual legal principle as the driving force. Unfortunately the vitriol of the partisan political climate has torn away the necessary impartiality of the court, tainting its decisions. To neutralize this improper behavior a simple majority is no longer a proper means to determine a legal precedent. Instead, 6 votes, not merely 5, must be in favor of a writ in order to validate it. Under this new proposal, if the court rules 5-4 on a given case it would be as if the Supreme Court never heard the case in the first place: no opinions (majority or minority) would be issued and the ruling of the lower appellate court would stand.

Such a step may seem too extreme, for although the final decision made by a justice can only fall into one of two categories there can be multiple reasons behind the final decision expressed in multiple opinions. In this situation the opinion can be properly viewed as the methodology and the decision as the result where logical and thorough methodology will lead to the correct result and improper and illogical methodology will commonly lead to the incorrect result.

The concern for methodology arises mainly in the rare case where an incorrect methodology leads to the correct result. An incorrect methodology should always be a concern because even if a correct result is attained once, there is the distinct possibility that an incorrect result will be attained for a future case using the same flawed reasoning. Therefore, it really does not matter that x justices may have y differing opinions/interpretations that lead to the same wrong result; what matters is that they are wrong, though understanding the reasoning may help correct it. It is like one person saying 2 + 2 = 7, another saying 2 + 2 = 9 and a third saying 2 + 2 = 19. It does not matter how close someone is to the right answer, or really the logic behind how they got the answer; the only thing that matters is that all three answers are wrong.

There are two possible major lingering problems with making this 5-4 decision nullification change. First, a 5-4 nullifier could give too much power to the appellate courts, especially if the Supreme Court continues to judge based on politics and not the law, but in the end that would be the Supreme Court's fault. Second, should such a change nullify previous 5-4 decisions when applicable? Clearly it would be impossible to impose any retroactive enforcement on time-sensitive rulings like Bush v. Gore from 2000, but what about a case like Kelo v. New London from 2005? One problem stemming from retroactive enforcement is when does one initiate the reversal? One option would be starting in 2000, the time when a number of people believe hyperpartisanship began to significantly influence the court.

Another lesser concern may be that the precedent being used to decide a present-day case was unduly influenced by the personal and political opinions of a past justice, thus contaminating that piece of precedent. If present-day justices must work from previously tainted rulings without the ability to use their own interpretations to correct those rulings, the system will forever be flawed and wrong. This statement is true, but has no merit against the 5-4 policy proposed here, because all of the justices should be able to recognize the error in that precedent during the deliberation of the case where it is relevant.

Overall it appears that regardless of tradition it is now appropriate to begin to think about changing the dynamic of the Supreme Court with respect to narrow 5-4 majority decisions and their justification.

Monday, October 17, 2011

The Correct Contextual Economic Argument for Direct Air Capture

Since the last time DAC was discussed on this blog it has received more attention, including a report issued by the American Physical Society. Unfortunately, yet not surprisingly, most of these discussions have focused on the economics of developing and deploying such a system rather than on technical/feasibility questions. Most would argue that the economic element is critically important in the modern capitalistic world as almost all decisions revolve around economics and affordability. Also, efforts to reduce costs should aid in the technical development of the overall process. While both these statements are true, the problem is that most of the economic analysis is not being conducted from the proper perspective.

For example, most estimates place the cost of removing 1 ton of CO2 from the atmosphere, regardless of specific design, between $450-600. Relate this cost back to the fact that CO2 has traded in the European Trading Scheme rather consistently at $20-30 per ton (meaning it costs a participant $20-30 to emit 1 ton of CO2) and clearly application of current DAC models is too expensive.
Compounding the problem, it is reasonable to conclude that these estimates only cover the cost of capturing 1 gross ton of CO2, because they do not include the CO2 emitted to provide the energy required to run the capture process; thus the true costs are even higher than those stated above. Overall, most DAC proponents agree that this cost is too high and believe that through the addition of carbon taxes and trading programs, and reductions in technical and development costs due to scale-up, costs will drop significantly. Proponents also believe that finding a marketplace for the captured carbon will narrow the cost gap.
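A few lines of arithmetic make the size of the gap explicit; the 10% re-emission fraction is an assumption used only for illustration.

```python
# Rough comparison of DAC cost per net ton against the ETS allowance price.
# The re-emission fraction is an illustrative assumption.

dac_gross_low, dac_gross_high = 450.0, 600.0   # $/gross ton, from the estimates above
ets_low, ets_high = 20.0, 30.0                 # $/ton of emitted CO2 under the ETS

reemitted = 0.10   # assume the energy supply re-emits 10% of each captured ton
net_low = dac_gross_low / (1 - reemitted)
net_high = dac_gross_high / (1 - reemitted)

print(f"DAC: ${net_low:.0f}-{net_high:.0f} per net ton removed")
print(f"ETS: ${ets_low:.0f}-{ets_high:.0f} per ton emitted")
print(f"Gap: roughly {net_low / ets_high:.0f}x to {net_high / ets_low:.0f}x")
```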

This desire for a marketplace is where the economic argument begins to break down relative to the environmental strategy that DAC should be following. The principal rationale behind pursuing DAC is to reduce the amount of CO2 in the atmosphere as fast as possible in order to lessen the detrimental influence of global warming, not to make money or close a cost gap. Unfortunately the two most cited methods for making money from a DAC system contradict this principal rationale. The first means of ‘funding’ DAC is to use the captured CO2 in enhanced oil recovery (EOR). The problem with this strategy is obvious. The point of recovering more oil is to burn it in some industrial process or use it for transportation. Thus there is no significant decrease in atmospheric CO2. Under the most optimistic scenario such a system could be viewed as very slightly CO2 negative, with somewhat unjustifiable costs for that reduction. Realistically such a system wastes energy (the energy used to extract the CO2 and the oil) and further pollutes the environment (recall that refining and burning oil releases other pollutants in addition to CO2).

The second means of ‘funding’ DAC is to use the captured CO2 as a basis for a ‘carbon neutral’ hydrocarbon or biologically-based fuel. Earlier in the thought process the idea was to create a hydrocarbon-based fuel, but that idea was complicated because such a process typically requires high purity streams of CO2 and hydrogen (most hydrogen is currently produced from fossil fuels). Now this idea has evolved into using CO2 as a highly concentrated feedstock for algae and extracting bio-fuel from the algae. However, the problem with this strategy is the same as using the CO2 for EOR: as long as the fuel produced by the algae is being consumed and the end-products released into the atmosphere, it is another closed CO2-neutral system rather than a legitimate CO2-negative system.

Again, as with EOR, due to the other externalities involved (infrastructure and the transportation of inputs and outputs) the overall system would likely add CO2 and other greenhouse gases to the atmosphere. The fuel issue may also be somewhat irrelevant because of the advent of electric vehicles, which could be powered in the future by electricity from trace emission sources (nuclear, solar, geothermal, etc.). If society did not have the technology required to transition away from fossil fuels to electric vehicles for another couple of decades such a carbon neutral fuel idea might make more sense, but that is not the case. Thus, such a closed-loop carbon neutral system seems to have no real benefit and only results in wasted energy and resources.

The waste is especially pertinent to trace emission energy generation if that generation comes from either wind or solar, due to the limited economically available amounts of various rare earths. Basically, if only a certain number of solar panels can be created then there is no reason to waste any by attaching them to a closed-loop system which provides no significant net benefit. The same argument can be made for water, as current DAC designs demand significant water consumption. Other funding ideas are generally small potatoes, like selling the captured CO2 to beverage companies. Clearly there is a market for selling pure CO2 to beverage companies, but not a 7-gigaton-per-year market.

So if there is no viable market for captured CO2 that is also in accordance with the principal reason for establishing a DAC network, then what is the economic argument? The response is to adjust how one looks at the economic issue. The economics of DAC is not profitability, but prevention. For example, does Person A eat broccoli on a regular basis because Person B pays them a sum of money to do so? No, Person A either consumes broccoli because they like it or because it is a healthy food. For the latter rationale there is reason to believe that the consistent consumption of broccoli will result in a reduced probability of various diseases and ailments in the future relative to a person who does not consume broccoli (all other elements being accounted for). Thus, the economic benefit of consuming broccoli is derived from lower future costs associated with healthcare, and perhaps a reduction in lost wages due to less work missed, versus an immediate short-term incentive/reward.

No reasonable person disputes the fact that global warming will increase the probability and severity of future extreme weather events and will also change general weather patterns, causing some environments to receive much more or much less rainfall and thereby increasing the probability of flooding or drought (among other damage, more notably from increases in temperature). After simply looking at the overall economic damage associated with extreme weather events along with floods/droughts in 2010 and 2011 alone, a reasonable person would come to the conclusion that it is important to lessen the probable impacts of global warming as much as possible.

Also, such a reduction would result in the savings of billions of dollars in the short-term (10-20 years from now) and trillions of dollars in the long-term (20-50 years from now). Therefore, similar to the broccoli example, this prevention model is how proponents of DAC should sell their technology instead of trying to make it profitable in the short term. The profitability comes from the money saved in the future by reducing the probability of detrimental outcomes associated with global warming.
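The prevention framing can also be expressed as a simple avoided-cost comparison. Every figure in the sketch below is a placeholder assumption, not an estimate from this post or any study; the point is only the shape of the argument, present-value spending now against present-value damages avoided later.

```python
# Illustrative avoided-cost framing of a DAC program.
# Every figure below is a placeholder assumption, not an estimate from any study.

annual_dac_spend = 50e9   # assumed yearly spending on a DAC program ($)
years = 30                # assumed program horizon
avoided_damage = 3e12     # assumed climate damages avoided by the end of the period ($)
discount_rate = 0.03      # assumed discount rate

# Present value of spending the same amount every year for `years` years.
pv_cost = sum(annual_dac_spend / (1 + discount_rate) ** t for t in range(1, years + 1))

# Treat the avoided damages as if they arrive at the end of the period.
pv_benefit = avoided_damage / (1 + discount_rate) ** years

print(f"Present value of spending:        ${pv_cost / 1e9:.0f}B")
print(f"Present value of avoided damages: ${pv_benefit / 1e9:.0f}B")
print("Worthwhile under these assumptions?", pv_benefit > pv_cost)
```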

Based on this mindset, proponents and designers of DAC systems should not be looking towards venture capitalists to fund the development and deployment of DAC systems, as acquiring the necessary funds (billions of dollars) will be nearly impossible, unless said venture capitalists are young and have large stock holdings in insurance companies. In a just world every major company in the world would have to pay into a ‘carbon remediation and mitigation’ fund as a consequence of their past actions, which are responsible for a significant percentage of the total human-derived CO2 emitted into the atmosphere. Money from this fund could then be used to reduce human-derived emissions as well as fund DAC.

Unfortunately it is quite obvious that the world is not just; therefore, the argument of self-preservation must be applied and governments must bear the burden of development, a difficult reality due to the general financial crisis that currently exists. From a self-preservation standpoint China and India probably should be first in line with funding because both countries are high on the list of ‘going to have their environments significantly and detrimentally changed due to global warming’, costing them trillions in future economic damages.

Overall, the whole point of pursuing DAC is to reduce the probability of detrimental effects from global warming by reducing the amount of CO2 in the atmosphere as fast as possible. The speed requirement limits the argument for planting new trees (although this still should be done). The sheer amount of CO2 that needs to be removed from the atmosphere is tremendous, so much that no CO2 market could be created to absorb anywhere near the amount that needs to be removed. So those looking to justify developing DAC under a ‘money in the pocket now’ economic model will fail unless they abandon the point of pursuing DAC in the first place, which then calls into question why one pursues DAC at all. Therefore, proponents must argue not from the viewpoint of how the captured CO2 will be utilized, but from the viewpoint of how much money will be saved by embarking on the program versus failing to do so.