The Necessity of Carbon Remediation and its Application<br />
Over five years ago I discussed how addressing global warming will require both reducing new human-based releases of carbon dioxide (CO2) into the atmosphere (carbon mitigation) and increasing the rate at which already existing atmospheric CO2 is removed, whether through natural and/or technological means (carbon remediation). This dual requirement is born from the inability of nature to currently manage existing and future CO2 levels in a way that ensures a viable environment for both the existing global human population and any increases seen in the near future. <br />
<br />
For both carbon mitigation and remediation two elements take precedence: effectiveness and speed. Effectiveness is rather self-explanatory; if the applied strategies are unable to reduce new CO2 emissions and remove more CO2 from the air than is added over the life-cycle of the remediation process, then such strategies are not worth exploring. Speed is necessary because there is already a dangerous amount of CO2 in the atmosphere and the rate of carbon mitigation is not proceeding nearly fast enough relative to the capacity of natural sinks to remove CO2. Basically, with each passing year the total concentration of CO2 in the atmosphere is increasing, not decreasing, and based on current mitigation patterns this reality is not going to change in the near future. Note that while both mitigation and remediation are important, the remainder of this discussion will focus on remediation.<br />
<br />
With the idea of speed in mind, while there are more cost-effective (i.e. more economically attractive) remediation strategies available, largely those involving planting trees or synthesizing bio-char, these methods are significantly slower than various technological methods. In addition to the issue of speed, the efficiency of natural methods like planting trees can be called into question because natural sinks may decline in overall CO2 capacity, whether through reduced CO2 absorption by trees, a more acidic ocean beginning to out-gas due to changes in the concentration gradient, or decreased rates of mineral weathering.<br />
<br />
Even if there was no threat of lost absorption capacity from natural sinks, it is difficult to conclude that natural sinks will be able to remove enough CO2 from the atmosphere, even in a scenario of rapid emission reduction due to the already existing concentration, before the occurrence of serious negative environmental outcomes. Therefore, while it may not be a popular notion for some environmentalists and some economists, the simple reality is that technology will have to be at the forefront of removing existing CO2 from the atmosphere leaving nature to play more of an auxiliary role. <br />
<br />
Of the two major strategies for large-scale carbon remediation, direct air capture and ocean fertilization, initial tests with ocean fertilization have not been positive. While the initial theory is solid, in practice the increased phytoplankton concentrations have been unable to demonstrate any real gains in CO2 removal, largely due to increased predation from zooplankton.1 These complications have soured the chief advantage of ocean fertilization, simplicity, leaving direct air capture as the theoretical best strategy for carbon remediation. <br />
<br />
To ensure clarity, the term “Direct Air Capture” is being interpreted as the technological removal of atmospheric CO2 from a non-point source (versus a point source such as a power plant or automobile) by reacting atmospheric CO2 with a sorbent (usually an alkaline NaOH solution). This reaction with the sorbent typically forms sodium carbonate and water. The carbonate then reacts with calcium hydroxide (Ca(OH)2), generating calcite (CaCO3) and reforming the sodium hydroxide. This causticization step transfers a vast majority of the carbonate ions (≈94-95%) from the sodium to the calcium cation. The final step involves thermal decomposition of the calcite in the presence of oxygen, regenerating the previously absorbed CO2 as a concentrated gas, along with hydration of the resulting lime (CaO) to recycle the calcium hydroxide.2,3 Obviously some of the details can differ depending on the type of sorbent utilized and other side elements of the process, but the above description entails the general chemical operation of direct air capture.<br />
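<br />
To make the material turnover concrete, below is a minimal stoichiometric sketch (in Python) of the sorbent loop described above, tracking how much NaOH, Ca(OH)2, and CaCO3 must be cycled per tonne of CO2 absorbed. The 94.5% causticization efficiency is simply an assumed midpoint of the ≈94-95% range cited above; the only other inputs are standard molar masses.<br />

```python
# Minimal stoichiometric sketch of the NaOH / Ca(OH)2 air-capture loop:
#   absorption:      2 NaOH + CO2      -> Na2CO3 + H2O
#   causticization:  Na2CO3 + Ca(OH)2  -> 2 NaOH + CaCO3
#   calcination:     CaCO3             -> CaO + CO2 (released for storage)
#   slaking:         CaO + H2O         -> Ca(OH)2
# The 0.945 causticization efficiency is an assumed midpoint of the cited range.

MOLAR_MASS = {"CO2": 44.01, "NaOH": 40.00, "CaOH2": 74.09, "CaCO3": 100.09}

def reagent_turnover_per_tonne_co2(causticization_eff: float = 0.945) -> dict:
    """Return kg of sorbent/lime cycled for every tonne of CO2 absorbed."""
    mol_co2 = 1_000_000 / MOLAR_MASS["CO2"]         # moles of CO2 in one tonne
    mol_naoh = 2 * mol_co2                          # two NaOH per CO2 absorbed
    mol_transferred = mol_co2 * causticization_eff  # carbonate moved to the Ca loop
    return {
        "NaOH cycled (kg)":    mol_naoh * MOLAR_MASS["NaOH"] / 1000,
        "Ca(OH)2 cycled (kg)": mol_transferred * MOLAR_MASS["CaOH2"] / 1000,
        "CaCO3 calcined (kg)": mol_transferred * MOLAR_MASS["CaCO3"] / 1000,
    }

if __name__ == "__main__":
    for reagent, kg in reagent_turnover_per_tonne_co2().items():
        print(f"{reagent}: {kg:,.0f}")
```
Under these assumptions roughly two tonnes of calcite must be calcined for every tonne of CO2 captured, which is consistent with calcination being a dominant energy demand in this class of processes.<br />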
<br />
Obviously direct air capture is not without its own challenges, mostly due to the incredibly small concentration of CO2 in the atmosphere; while 400+ parts per million (ppm) is very significant from an environmental standpoint, it is clearly not a large amount from a chemical reaction standpoint. This CO2 “deficiency” is largely responsible for the significant costs associated with CO2 removal via direct air capture, which have been estimated at a cost floor of $300 per ton of CO2 (which is optimistic in isolation) to a ceiling of $1200+ per ton of CO2 (which is rather pessimistic).4 However, whatever the actual cost turns out to be, it appears to be one that humanity will have to foot if it wants to maximize its probability of surviving, from a societal standpoint, into the near future. <br />
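<br />
To put those per-tonne figures in rough context, the sketch below converts them into totals for an illustrative drawdown target. The conversion factor of roughly 7.8 gigatonnes of CO2 per atmospheric ppm and the 50 ppm target are assumptions for illustration only, not figures from the cited assessment.<br />

```python
# Back-of-envelope total cost of drawing down atmospheric CO2 at the quoted prices.
# GT_PER_PPM is an assumed approximation (~7.8 Gt CO2 per 1 ppm of atmospheric CO2);
# the 50 ppm drawdown target is purely illustrative.

GT_PER_PPM = 7.8     # gigatonnes of CO2 per ppm (approximate)
DRAWDOWN_PPM = 50    # illustrative target

for cost_per_tonne in (300, 1200):                 # cited floor and ceiling, $/tonne
    tonnes = DRAWDOWN_PPM * GT_PER_PPM * 1e9       # total tonnes of CO2 to remove
    total_cost = tonnes * cost_per_tonne
    print(f"${cost_per_tonne}/t: ~${total_cost / 1e12:,.0f} trillion for {DRAWDOWN_PPM} ppm")
```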
<br />
Beyond the specifics of capturing the CO2 itself, there are three major issues surrounding the proper functioning of direct air capture: power use, water use, and the end destination of the absorbed CO2. Not surprisingly each of these issues must be addressed to optimize the overall process of CO2 removal from the atmosphere and maximize its overall economics. <br />
<br />
The consideration of the power source is important relative to speed and efficiency regarding the total net CO2 captured and removed from the atmosphere. For example, if a trace-emission source is utilized (nuclear, geothermal, wind or solar) then the process can be reasonably estimated as 90-99% efficient (10-100 tons of CO2 will be captured and removed for every 1 ton of CO2 emitted to power the process). With this estimate the net cost per ton will be 1.01-1.1 times the gross cost relative to the power use component. However, if a fossil fuel source is utilized then, largely depending on the exact fuel mix, the process will be 50-70% efficient and the net cost will be about 1.3-1.5 times the gross estimated cost for the power use component. <br />
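<br />
The arithmetic behind those multipliers is straightforward if one assumes “X% efficient” means X tonnes of net removal for every 100 tonnes of gross capture; a short sketch, using the $300/tonne cost floor cited earlier as the gross figure:<br />

```python
# Net-vs-gross cost arithmetic, assuming "X% efficient" means X tonnes of net
# removal per 100 tonnes of gross capture (the remainder re-emitted by the
# power source). The $300/tonne gross figure is the cited cost floor.

def net_cost_multiplier(capture_efficiency: float) -> float:
    """Gross capture cost spread over fewer net tonnes as efficiency falls."""
    if not 0.0 < capture_efficiency <= 1.0:
        raise ValueError("efficiency must be in (0, 1]")
    return 1.0 / capture_efficiency

GROSS_COST = 300.0   # $/tonne

for eff in (0.99, 0.90, 0.70):
    mult = net_cost_multiplier(eff)
    print(f"{eff:.0%} efficient: multiplier {mult:.2f}, net cost ~${GROSS_COST * mult:.0f}/tonne")
```
Under this definition the trace-emission case reproduces the 1.01-1.1 range quoted above; the exact fossil-fuel multiplier depends on how the emissions of the particular fuel mix are accounted for.<br />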
<br />
Obviously due to this significant efficiency disparity the utilization of a trace-emission source for the process is imperative, but which source is most appropriate? Speed is the most important element in the removal process because of the existing and future damage to the environment, something that money really cannot replace, so the process must operate as close to 24 hours a day, 7 days a week as possible. This requirement heavily limits the viability of using wind or solar as the energy medium, thus leaving two principal contenders: geothermal and nuclear. <br />
<br />
Now while one could attempt to argue that wind or solar could work with the appropriate level of storage as backup, such an argument does not sit on solid ground given the existing lack of storage options and the empirical track record of such designs. While small pilot plants exist and have received flashy headlines and hype, the output of these plants is basically irrelevant to any expected energy requirements for air capture. Also recall that energy can only be stored if it is in excess, which will not be true most of the time, for the solar and/or wind elements are already providing energy to the various elements associated with the capture process. Pumped hydro shares the same problem, as well as limiting the location for the process because of its required topography.<br />
<br />
In the past geothermal was thought to be the better choice over nuclear, largely due to the potential nuclear waste issues associated with nuclear power, with enhanced geothermal systems (EGS) being the preferred geothermal methodology. Note that the issue of safety regarding nuclear power has long been a foolish reason to oppose it, for safety issues only arise when the operator (be it government or corporation) is allowed to cut corners and/or does not adhere to proper and standard safety operating procedures.<br />
<br />
Unfortunately, there have been few rigorous studies concerning EGS, especially relative to any expansion of seismic activity pertaining to its application. In short, the EGS process can produce an environment that increases the frequency of low Richter-scale earthquakes (the occurrence of magnitude 2 to 3 quakes appears to increase in probability). However, unlike fracking, which increases both earthquake probability and severity, little is known regarding whether EGS will increase earthquake severity (from 2 or 3 to 4+). This uncertainty, which could have been and should have been studied in earnest years ago, makes it difficult to support going forward with EGS. Thus, nuclear becomes the better choice, with at least a generation 2 design as the standard in order to limit or outright eliminate resultant waste, or one could utilize a small modular reactor design.<br />
<br />
Water utilization is also an important issue, for regardless of the system, the chemical reaction involved in the absorption of CO2 from the atmosphere requires water, commonly as a catalyst. However, despite the general nature of a catalyst (lack of consumption at the conclusion of the reaction), the open-air nature of the reaction system results in a significant percentage of the utilized water being lost to the atmosphere as water vapor, making inherent water recovery within the process itself more difficult. Therefore, there are two important questions involving water use in the process: 1) How will the initial amount of water for beginning the process be procured? 2) How will atmospheric water losses be minimized? <br />
<br />
The best solution for obtaining the required starting water is desalination, which is suitable because direct air capture units can be built almost anywhere, as the natural mixing of the atmosphere maintains relatively constant global CO2 concentrations over the long-term. Regarding the question of how atmospheric water losses will be minimized, there are two potential strategies. First, the use of properly placed atmospheric condensers could recover a significant portion of the lost water and recycle it back into the beginning of the process. Second, depending on the economic and environmental efficiency of the desalination process, there may be no need for any type of recycling, instead drawing all required water from desalination, including that which is lost. <br />
<br />
However, this second method is inherently risky because of the potential detriments associated with desalination and any potential issues involving the hydrological cycle due to the new levels of water evaporation from the direct air capture process. Overall, the better option appears to be to initially provide water via desalination and allow further desalination to fill in any gaps in recycling missed by the water condensers. Fortunately, either option seems valid from an energy standpoint with a nearby nuclear reactor powering the direct air capture devices. <br />
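<br />
As a rough illustration of how condenser recovery changes the desalination burden, the sketch below runs a purely hypothetical water balance; the water throughput, evaporative loss, condenser recovery, and plant capacity figures are placeholder assumptions for illustration, not values from any study.<br />

```python
# Hypothetical daily water balance for an air-capture installation. Every
# numeric input here is an illustrative assumption, not a sourced value.

def daily_desalination_makeup(tonnes_co2_per_day: float,
                              water_per_tonne_co2: float = 5.0,   # t water / t CO2 (assumed)
                              evaporative_loss: float = 0.20,     # fraction lost as vapor (assumed)
                              condenser_recovery: float = 0.60) -> float:  # fraction recaptured (assumed)
    """Tonnes of desalinated makeup water needed per day after recycling."""
    water_cycled = tonnes_co2_per_day * water_per_tonne_co2
    lost_to_atmosphere = water_cycled * evaporative_loss
    unrecovered = lost_to_atmosphere * (1.0 - condenser_recovery)
    return unrecovered

# Example: a hypothetical 1,000 t CO2/day installation -> 400 t/day of makeup water
print(daily_desalination_makeup(tonnes_co2_per_day=1000))
```
The point of the sketch is simply that the better the condensers perform, the more the desalination plant shrinks toward a makeup-water role rather than a primary supply role.<br />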
<br />
The infrastructure to transport water needs to be considered from both an efficiency and an economic standpoint. The two most viable methods for the initial water application would be constructing a piping infrastructure to transport the desalinated water to the direct air capture units or simply using transport vehicles, like large trucks, to move the water to the units. An important element in determining which method is best is the rate of recycling from any atmospheric water collectors near the direct air capture units. The more water recycled, the more attractive a less permanent infrastructure (trucks) appears due to the lower overall capital and even maintenance costs. However, while theory is fine, the overall scale requirements of the operation may demand a more permanent water supply due to the sheer amount of water required regardless of recycling.<br />
<br />
Another important consideration is what to do with desalination byproducts: mostly the removed salt, some of the chemicals used in the desalination process, and the possibility of certain contaminants from pipe and process breakdown (copper, iron, zinc, etc.). At the moment many desalination plants dispose of the brine in the ocean or a closed watercourse through a direct disposal strategy, sometimes reducing the salinity concentration by discharging the brine with wastewater or a cooling stream from a power plant.<br />
<br />
Obviously there is concern about releasing a stream of heavily concentrated brine into the ocean for it can produce both eutrophication and significant pH changes creating problems for the local flora and fauna.5 Other common management strategies include minimization or direct reuse.5 Minimization commonly involves membrane or thermal methods whereas reuse involves recovering salts from the waste brine via crystallization or evaporative cooling and utilizing that salt for other processes or goods.5<br />
<br />
While some are high on the idea of selling salt to offset the operation of a desalination plant, such an idea seems optimistic due to the overall expected scale of the operation. Some have proposed ammoniating the brine and using it to increase the volume of CO2 capture.5 The concern with that strategy is providing the necessary ammonia to react with the brine to create a consistent and worthwhile process. Another option that has been floated is incorporating the brine into a set of molten salts that would be used in either nuclear power reactors or batteries. However, the viability of such an idea is still questionable.<br />
<br />
Desalination is not the only aspect of the process that produces a byproduct. The more important environmental byproduct is obviously the CO2 that is extracted from the atmosphere. The most important question regarding this absorption is what process will be utilized to ensure that the newly captured CO2 is not reintroduced into the atmosphere. Some of the more popular proposed solutions involve using the captured CO2 as an economic product within enhanced oil recovery processes, as a feedstock for producing methane or other hydrocarbon-based fuels for vehicles, or as a marketed product in a commercial industry (soda, etc.). <br />
<br />
Unfortunately, those first two options return some percentage of the captured CO2 back to the atmosphere, which limits the overall efficiency of the CO2 absorption, increasing overall costs and decreasing the speed of net removal. Also, the commercial option will not provide sufficient funds for the operation of the process. While this reality eliminates the idea that commercial product distribution can carry the finances of the process, tapping into commercial processes should still be worthwhile as a means to eliminate a very minor portion of the captured CO2. <br />
<br />
Another method of removing atmospheric carbon gaining in popularity is the use of bio-char. In essence bio-char is black carbon synthesized through pyrolysis of biomass. Bio-char is effective because it is believed to be a very stable means of retaining carbon, sequestering it for hundreds to thousands of years. Routing some of the captured CO2 into one-sided greenhouses and then turning the grown flora into bio-char could be another method of disposing of a portion of the captured CO2. While a possibility, again the scale of absorbed CO2 limits the total value of this process. <br />
<br />
A newer method for potentially removing CO2 is utilizing it in an electrolytic conversion to create molten carbonates and later converting those carbonates into carbon nanofibers and potentially even carbon nanotubes.6 While this process has yet to be scaled to what would be classified as commercial levels, it does demonstrate some level of promise. The versatility and usefulness of carbon nanotubes and nanofibers give them more commercial value than pure CO2 as a commercial product. However, similar to the other potential options listed above, it is difficult to presume that most of the captured CO2 will be eliminated via this process. <br />
<br />
Mineral sequestration via olivine, serpentine or wollastonite has drawn attention as a possible avenue for CO2 “storage”. However, this strategy does not appear economically or practically viable, for natural weathering is too slow and technologically induced weathering, by grinding down these materials to dramatically increase available surface area, is emission-inefficient and costly. So despite some of these more flashy or “economic” choices, overall it is reasonable to suggest that a majority of the captured CO2 will be stored long-term in underground rock formations. <br />
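<br />
The scale problem with the grinding route becomes clearer from the stoichiometry of the most commonly cited mineral, olivine (forsterite, Mg2SiO4), which at its theoretical maximum binds roughly 1.25 tonnes of CO2 per tonne of rock. The sketch below assumes that ideal ratio, which real-world weathering never reaches, and an illustrative one-gigatonne target.<br />

```python
# Theoretical CO2 uptake of olivine (forsterite) weathering:
#   Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg(2+) + 4 HCO3(-) + H4SiO4
# The gigatonne target is illustrative; real uptake is well below the
# stoichiometric maximum and grinding energy is not counted here.

M_FORSTERITE = 140.69   # g/mol, Mg2SiO4
M_CO2 = 44.01           # g/mol

co2_per_tonne_olivine = 4 * M_CO2 / M_FORSTERITE   # ~1.25 t CO2 / t rock (ideal)

TARGET_GT_CO2 = 1.0                                 # illustrative: 1 Gt of CO2
rock_needed_gt = TARGET_GT_CO2 / co2_per_tonne_olivine

print(f"Ideal uptake: {co2_per_tonne_olivine:.2f} t CO2 per t olivine")
print(f"Rock required for {TARGET_GT_CO2:.0f} Gt CO2: ~{rock_needed_gt:.2f} Gt of ground olivine")
```
Even at the ideal ratio, sequestering a single gigatonne of CO2 would mean mining, grinding and distributing on the order of a billion tonnes of rock, which is the core of the cost and emissions objection raised above.<br />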
<br />
With all of these additional considerations to take into account it does not appear wise to simply build these air capture units at random. These units clearly need to be constructed in an orderly and cohesive manner, perhaps even in a localized autonomous network. This network needs to contain a water source, a power source and a means of utilizing the captured CO2 in addition to having recycling pathways for all necessary materials used in the selected air capture reactions.<br />
<br />
Overall it is also important to understand that one should not attempt to portray the type of complex described above, or even direct air capture in general, as some new budding industry that will produce a profit. While certain elements will provide some form of revenue, envisioning a new profitable industry does not appear appropriate at this time. So if profitability is not viable, what is the economic argument for direct air capture? The response is to adjust how one looks at the economic issue. The economics of direct air capture and any resulting complex is not profitability, but prevention and, to some extent, survivability. <br />
<br />
For example, Person A does not eat broccoli on a regular basis because he is paid a sum of money by Person B to do so, but instead consumes broccoli because it is a healthy food and there is reason to believe that the consistent consumption of broccoli will result in a reduced probability of various diseases and ailments in the future relative to a person who does not consume broccoli (all other elements being accounted for). Therefore, the economic benefit for consuming broccoli is derived from lower future costs associated with healthcare and perhaps a reduction in lost wages due to less work missed versus immediate short-term incentive/reward.<br />
<br />
No reasonable person disputes the fact that global warming will increase the probability and severity of future extreme weather events in addition to producing detrimental changes in general climate and weather patterns. These changes will produce significant levels of environmental and economic damage and will eventually threaten the very viability of human society. Therefore, a reasonable person would come to the conclusion that it is important to lessen the detrimental impacts of global warming as much as possible. Such a reduction would also result in the savings of billions of dollars in the short-term (10-20 years from now) and trillions of dollars in the long-term (20-50 years from now). Therefore, similar to the broccoli example, the prevention model is how people should look at direct air capture versus attempting to inappropriately sell it as some form of short-term “money-making” venture. The “profitability” comes from the money saved in the future by reducing the probability of detrimental outcomes associated with global warming.<br />
<br />
With this mindset, how would such projects be funded? It is difficult to see venture capitalists getting involved because most only have a nose for eventual profits and as discussed above, this project will not produce profits in that manner. Ironically the only venture capitalists that might get involved are those who are very young and/or have large stock holdings in insurance companies. In a just world every major corporation in the world would have to pay into some form of “carbon remediation and mitigation” fund as a form of restitution for championing a carbon heavy global economy. Money from this fund would then be used to fund direct air capture in addition to other direct CO2 mitigation projects. One could argue that the funds procured from a carbon tax would also serve this purpose. <br />
<br />
Unfortunately, such a program where corporations foot a lot of the bill is unlikely, for it is difficult to envision most multi-national corporations agreeing to fund it; most companies typically do not do something unless profit is available, which here it is not, or unless government is footing the bill. Therefore, it appears that various world governments will have to foot the bill. With that said, which governments should go first, so to speak? Well, the United States is definitely a candidate as it is responsible for the most cumulative CO2 emissions of any country. China is a very close second, being responsible for the most CO2 over the last few decades in addition to choosing coal and oil to grow its economy without taking into consideration the environmental realities of that choice when nuclear, wind, solar and/or geothermal were also valid, albeit slower, choices. However, in the end such funding would have to be worked out by international treaty, which does not lend much confidence when considering the success of past international environmental treaties.<br />
<br />
In the end, it is understandable that if the economic cost of developing an air capture complex of this sort were quantitatively calculated it would be high; however, all of these elements will be required in the future based on the current path humans have embarked upon with regards to expelling CO2 into the atmosphere, thus the cost is not based on luxury, but necessity. The idea behind such a complex for direct air capture is to lower overall net costs by tying many of the air capture units into the same required operational elements, thus making the direct air capture strategy more economical on an overall scale and saving money for investment in other environmentally necessary avenues like emission reduction. Overall, while the manifestation of such a complex may not be exactly as described in this blog post, the reality is that as it currently stands such a complex will be needed in one form or another.<br />
<br />
<br />
<br />
Citations – <br />
<br />
1. "Lohafex project provides new insights on plankton ecology: Only small amounts of atmospheric carbon dioxide fixed." International Polar Year. March 23, 2009. <br />
<br />
2. Zeman, Frank. “Energy and Material Balance of CO2 Capture from Ambient Air.” Environ. Sci. Technol. 2007. 41(21): 7558-7563.<br />
<br />
3. Perez, E, et Al. “Direct Capture of CO2 from Ambient Air.” Chem. Rev. 2016. 116:11840-11876<br />
<br />
4. American Physical Society. Direct Air Capture of CO2 with Chemicals: A Technology Assessment for the APS Panel on Public Affairs; APS: 2011.<br />
<br />
5. Giwa, A, et Al. “Brine Management Methods: Recent Innovations and Current Status.” Desalination. 2017. 407:1-23.<br />
<br />
6. Ren, J, et Al. “One-Pot Synthesis of Carbon Nanofibers from CO2.” Nano Lett. 2015. 15:6142-6148.<br />
<br />
A Magic Bullet in Pain Relief?<br />
Medicine has advanced on numerous fronts; however, one of the slower areas of improvement involves addressing and managing pain. Significant instances of pain, both in acute and chronic form, afflict hundreds of millions of people worldwide, but most modern treatments struggle to demonstrate meaningful improvement over past treatments. In fact, it is estimated that at least half of surgical patients do not receive effective pain control after their treatments.1,2 Also, addiction to pain medication has become a mounting problem in recent years, making long-term pain management strategies more difficult.<br />
<br />
One potential strategy for managing pain that has gained popularity in recent years is focusing on analgesic targets like the sodium channels Nav1.7, Nav1.8 and Nav1.9. These sodium channels belong to a larger family of voltage-gated sodium channels (Nav1.1-1.9), each of which has specific locations and functional roles in the body. Among the aforementioned three sodium channels, Nav1.7 is viewed as the most important; its function was first identified from conditional knockout studies in Nav1.8-expressing mice after suspicions were raised by a small family that appeared to have significant pain insensitivity via a recessive loss-of-function mutation in Nav1.7.3,4 The resultant study identified Nav1.7 as playing a significant role in inflammatory pain, and the conditional deletion of Nav1.7, not surprisingly, reduced that pain to an almost unregistered symptomatic level.3,5,6<br />
<br />
Nav1.7 and its 1.8 and 1.9 cohorts are present near the synapses of neurons that are commonly thought to be responsible for sending and receiving pain signals. Overall Nav1.7 appears to transmit action potentials via neurotransmitter release through a threshold managed by Nav1.9, which receives input from Nav1.8.7-10 However, it does not appear that Nav1.7 activation is exclusively reliant on Nav1.8 or 1.9.7<br />
<br />
While one means of addressing pain in the past was the utilization of global sodium channel blockers, developing a drug with strong specificity for Nav1.7 is thought to be a principal strategy for more effective pain management: localizing treatment increases selectivity and reduces negative side effects, especially those involving the heart, since Nav1.7 is not expressed there. While not all forms of pain involve Nav1.7, which should surprise no one, a significant number of pain processes appear to incorporate Nav1.7, which has produced the aforementioned enthusiasm for producing a targeted therapy.4,7 <br />
<br />
Of course, since the major discovery associated with Nav1.7 occurred in 2006,4 various drug development programs have been underway to produce an appropriate and effective treatment. Unfortunately, despite the creation of numerous specific stable antagonists, the general results have been disappointing, ranging from non-replicated results to unexpected negative side effects.11 One piece of information from these studies highlights an apparent contradiction: the more selective the antagonist is for Nav1.7, the less effective the pain reduction, whereas less selective molecules like lidocaine are more effective.6<br />
<br />
The major reason behind this result is thought to be a relationship between Nav1.7 and enhanced natural opioid signaling, born from studies involving Nav1.7-null mutant congenital insensitivity to pain (CIP).4 Basically, in null mutants an unknown biological relationship develops that produces a dramatic increase in steady-state opioid concentrations, which is responsible for blocking pain. This belief is supported by the ability of naloxone, an inverse agonist for the µ-opioid receptor (MOR) and antagonist for the κ- and δ-opioid receptors, to frequently reverse the pain insensitivity born from Nav1.7 nulls.7,12 However, oddly enough, while knocking out SCN9A, the gene encoding Nav1.7, produces this enhanced opioid concentration state, simply reducing the activation efficiency of Nav1.7 after development does not seem to produce anywhere near the same enhancement of opioids. Basically there is no proportional response.<br />
<br />
One explanation for this result lies in how the null animal compensates for the loss of Nav1.7 during development. Loss of Nav1.7 expression commonly results in transcriptional up-regulation of Penk, which encodes the precursor of met-enkephalin, but Penk was not up-regulated in Nav1.8 or 1.9 nulls.7,13 This result suggests that the neurotransmitter release associated with Nav1.7 is the critical step. Complete channel block of dorsal root ganglion (DRG) neurons via high concentrations of tetrodotoxin (relevant because a number of the neurons at this location carry Nav1.7 channels) also creates a state of enhanced opioid expression.7 However, without a complete channel block there does not appear to be a significant increase in opioid or enkephalin expression.7 Overall, the increase in opioid concentration within null mice, and probably humans, targets nociceptive input, consistent with the expression of opioid receptors on small nociceptive afferents.7,14<br />
<br />
This result seems to suggest that there is no middle ground in blocking Nav1.7; either the treatment produces a 100% channel block or there is no significant increase in pain insensitivity/pain relief.15,16 This is a problem, for while some agents attempt to improve selectivity by binding to less well conserved regions outside the pore-forming region of the channel, producing inhibitory action independent of the channel’s functional state,6 it is highly unlikely that even these strategies will yield a molecule that creates a 100% selective block without significant negative side effects. This challenge has led researchers to focus on biologics, like venom toxins, over small molecules due to increased rates of selectivity, even incorporating techniques like saturation mutagenesis;17-19 however, at this moment success appears improbable.<br />
<br />
This result regarding full channel block raises two further points. First, the behavior of Nav1.7 suggests that sodium can function as a secondary messenger with respect to the expression of enkephalin through the alteration of Penk mRNA expression levels. Such a belief is supported by the behavior of the ionophore monensin, which results in decreased expression of Penk, whereas blocking the channel up-regulates Penk mRNA.13<br />
<br />
If this is the case, then the importance of Nav1.7 over that of Nav1.8 and 1.9 may be directly attributable to the level of sodium that passes through Nav1.7, which has a greater effect on overall intracellular sodium concentrations versus other sodium channels. For example HEK293 cell lines with permanent expression of Nav1.7 establish a resting intracellular sodium concentration around double the level of control cells.7<br />
<br />
Second, Nav1.7 could produce some level of natural opioid inhibition, or at least a form of negative feedback. This mindset seems to be supported by gain-of-function mutations in Nav1.7 typically producing primary erythromelalgia (PE), which is characterized by episodes of symmetrical burning pain of the feet, lower legs, and even hands and is tied to increased Nav1.7 channel activity.6 However, if this is the case, it raises an interesting question as to why Nav1.7 nulls seem to produce no inherent negatives born from the additional concentrations of opioids, i.e. no addiction or sensitivity. Perhaps in null cases other pathways form to provide a level of opioid feedback inhibition or “saturation” management.<br />
<br />
Based on the above information it does not appear that producing a molecule to interfere with Nav1.7 activity can be effectively used to treat pain because full blockage is seemingly required to produce conditions associated with pain insensitivity and general pain treatment. Also blocking Nav1.7 over long and consistent periods of time may damage other important sensory processes. The reason Nav1.7 demonstrates success in knockouts, both cultured and natural, may be because the knockout mutation forces the body to focus on other pathways to manage the other systems that Nav1.7 would normally interact with if it existed. However, that does not exclude using information pertaining to Nav1.7 activity to identify a better pain management treatment. <br />
<br />
A better strategy may be to expand or mimic concentrations of met-enkephalin, which is directly influenced by Nav1.7 activity. Met-enkephalin is a strong agonist for the δ-opioid receptor, has some influence on the µ-opioid receptor and almost no effect on the κ-opioid receptor.7 However, despite its meaningful opioid influence, met-enkephalin has a low residence time in the body due to rapid metabolism.20 Thus, simply injecting met-enkephalin into a person would serve little purpose in addressing pain because it would have to be done at large doses and too frequently. However, a synthetic enkephalin, [D-Ala2]-Met-enkephalinamide (DALA), has shown some positive attributes in managing pain thanks to its altered rate of metabolism. <br />
<br />
In the end, despite the clear understanding that pain relief can be achieved by blocking a channel like Nav1.7, no compounds have been developed to effectively and easily take advantage of that reality. Due to the requirement of full channel block it is highly unlikely that a treatment involving small molecules will ever be successful, leaving the door open only for modified biologics. However, even with a successful “in lab” molecule, the presence of Nav1.7 in higher concentrations behind the blood-brain barrier may make meaningful treatment difficult without some level of increased blood-brain barrier penetration. Overall, channel-block pain therapy aimed at a specific target like Nav1.7 may need to be supplemented by further focus on the more downstream products associated with channel activation or inactivation, like met-enkephalin, to complement pain relief strategies. <br />
<br />
<br />
--<br />
Citations – <br />
<br />
1. Chapman, R, et Al. “Postoperative pain trajectories in cardiac surgery patients.” Pain Research and Treatment. 2012. Article ID 608359. doi:10.1155/2012/608359<br />
<br />
2. Wheeler, M, et Al. “Adverse events associated with postoperative opioid analgesia: a systematic review.” Journal of Pain. 2002. 3(3):159–180.<br />
<br />
3. Nassar, M, et Al. “Nociceptor-specific gene deletion reveals a major role for Nav1.7 (PN1) in acute and inflammatory pain.” PNAS. 2004. 101(34):12706-11.<br />
<br />
4. Cox, J, et Al. “An SCN9A channelopathy causes congenital inability to experience pain.” Nature. 2006. 444(7121):894-8.<br />
<br />
5. Abrahamsen, B, et Al. “The cell and molecular basis of mechanical, cold, and inflammatory pain.” Science. 2008. 321(5889):702-5.<br />
<br />
6. Emery, E, Paula Luiz, A, and Wood, J. “Nav1.7 and other voltage-gated sodium channels as drug targets for pain relief.” Expert Opinion on Therapeutic Targets. DOI: 10.1517/14728222.2016.1162295<br />
<br />
7. Minett, M, et Al. “Endogenous opioids contribute to insensitivity to pain in humans and mice lacking sodium channel Nav1.7.” Nature Communications. 6:8967. DOI: 10.1038/ncomms9967<br />
<br />
8. Eijkelkamp, N, et Al. “Neurological perspectives on voltage-gated sodium channels.” Brain. 2012. 135:2585–2612.<br />
<br />
9. Akopian, A, et Al. “The tetrodotoxin-resistant sodium channel SNS has a specialized function in pain pathways.” Nat. Neurosci. 1999. 2:541–548.<br />
<br />
10. Baker, M, et Al. “GTP-induced tetrodotoxin-resistant Na+ current regulates excitability in mouse and rat small diameter sensory neurones.” J. Physiol. 2003. 548:373–382.<br />
<br />
11. Lee, J, et Al. “A monoclonal antibody that targets a Nav1.7 channel voltage sensor for pain and itch relief.” Cell. 2014. 157(6):1393-404.<br />
<br />
12. Dehen, H, et Al. “Congenital insensitivity to pain and the "morphine-like” analgesic system.” Pain. 1978. 5(4):351-8.<br />
<br />
13. Popov, S, et Al. “Increases in intracellular sodium activate transcription and gene expression via the salt-inducible kinase 1 network in an atrial myocyte cell line.” Am. J. Physiol. Heart Circ. Physiol. 2012. 303:H57–H65.<br />
<br />
14. Usoskin, D, et Al. “Unbiased classification of sensory neuron types by large-scale single-cell RNA sequencing.” Nat. Neurosci. 2015. 18:145–153.<br />
<br />
15. Minett, M, Eijkelkamp, N, and Wood, J, “Significant determinants of mouse pain behaviour.” PLoS One. 2014. 9(8):e104458.<br />
<br />
16. Minett, M, et Al. “Pain without nociceptors? Nav1.7-independent pain mechanisms.” Cell Rep. 2014. 6(2):301-12.<br />
<br />
17. Shcherbatkok, A, et Al. “Engineering highly potent and selective microproteins against Nav1.7 sodium channel for treatment of pain.” J. Biol. Chem. 10.1074/jbc.M116.725978<br />
<br />
18. Harvey, A. “Toxins and drug discovery.” Toxicon. 2014. 92:193-200<br />
<br />
19. Yang, S, et Al. “Discovery of a selective Nav1.7 inhibitor from centipede venom with analgesic efficacy exceeding morphine in rodent pain models.” PNAS. 2013. 110:17534-17539<br />
<br />
20. Minett, M, et Al. “Distinct Nav1.7-dependent pain sensations require different sets of sensory and sympathetic neurons.” Nature Communications. 2012. 3(4):791-799.<br />
<br />
The Nature of Protesting<br />
As long as opinions exist human beings will engage in protests against those things with which they disagree. Unfortunately for protesters the general rate of success is rather dismal because most protesters have seemingly forgotten the purpose of protesting and its inherent limitations, especially in modern society. How can protesting become a useful tool for establishing change versus simply being a mobile echo chamber of time wasting annoyance and/or criminal behavior? <br />
<br />
The major purpose of protesting is to draw attention to a given issue and to inform either those who have the power to influence change or those who are also affected by the issue but may not be aware of its importance and the need for change. In modern society, especially a republic or democracy, the secondary goal of a protest is to act as a persuasion tool to convince others that the issue of the protest is meaningful and worthy of attention. This attention hopefully will lead to a stronger and more unified front for change on the particular issue, increasing the probability that change occurs. <br />
<br />
One of the chief problems with modern protesting is that it is imbued with too much emotion and not enough logic. It is understandable that there is an emotional element to protesting, for either the acute severity of a singular event or the chronic weight of numerous smaller events typically produces an emotional driver that pushes individuals to take the time and effort to publicly air their grievances. However, this emotional aspect of the event(s) underlying the motivation for the protest has led protesters to make disadvantageous decisions and actions in the process and/or administration of the protest.<br />
<br />
Emotional responses and drivers lead to the illogical conclusion that protesting more frequently is necessary, which relative to the purpose of protesting is commonly detrimental. Basically, protesters protest action/policy “y” at greater frequency than they should because the cause is so emotionally important to them. However, when major protest events occur within close temporal proximity, the impact of those protests on those not already in support of the “cause” is lessened and even potentially damaging to the success of the cause. For example, the group known as “Black Lives Matter” has fallen into this pitfall in its recent activity. <br />
<br />
Part of the problem with multiple protest events over a short period of time is that they portray the organization as disingenuous about actively seeking change, appearing instead to simply seek personal attention or notoriety. Most major protests, especially those that spawn organizations to manage the desired change, focus on a meaningful, yet large-scale issue that requires time, resources and effort to produce change. However, multiple protests over a short period of time lead those who do not immediately agree with the protests to conclude, somewhat correctly, that the protesters are not serious about their so-called desire to produce change because they do not understand the process by which that change will occur, if it occurs at all. This attitude will lead individuals to conclude that the organization, and perhaps even the cause itself, is not worth focusing on, especially in a world where there are already so many other “meaningful” problems. <br />
<br />
Some may counter that protests do not just serve as a means to cast attention on a given issue or even rally like-minded individuals and convince “on the fence” individuals, but also to provide an avenue to a frustrated demographic to vent… so to speak. While this initial argument has some merit, its value is only relevant so long as the protests do not significantly interfere with the lives of others in society, for example by stopping/blocking traffic or reducing the effectiveness of economic activity. One may like to punch the air to vent; however, it is not appropriate to punch air that another person’s face is filling. Using violations of the law as a means to “burn off steam” is clearly inappropriate and heavily limits the credibility of any protest and the individuals and/or organizations responsible for it. Therefore, the argument that mass-scale protests can be used as a means to vent is an invalid one that is simply used as a flimsy excuse. <br />
<br />
Also these types of protests that block traffic and/or generally inconvenience others are rather foolish from a standpoint of cost-benefit. By inconveniencing others, especially numerous times over a short time period, the protesters are significantly increasing the probability of producing more enemies to their cause. This behavior is meaningful because whereas an individual may have remained on the proverbial sidelines for the protester’s fight, now thanks to the slight by the protesters, either directly or indirectly, that individual may work against the motives of the protesters, perhaps simply out of spite alone. Some could counter that “you can’t make an omelet without breaking a few eggs” (i.e. disruption of the status-quo is necessary for change), but there is definitely a difference between intelligent disruption and needless/foolish disruption and most protest organizations seem to not understand the difference limiting the validity of that argument in relation to their activities.<br />
<br />
Overall mass-scale public protesting is only step 1 in the process of producing change by demonstrating that something is a problem and creating a mindset among the populous that the problem must be addressed with haste in the future. However, the real work to change the problem occurs after step 1, for step 1 does not actually achieve any change. Not surprisingly though the steps beyond public protesting are much more difficult both in their initiation and in determining and demonstrating any actual progress towards the goal/change in question. <br />
<br />
Unfortunately these challenges appear to trip up most organizations that materialize in the space of step 1. Either these organizations are not capable of transitioning beyond step 1 or they do not care about the events beyond step 1. This lack of skill, ability, influence, etc. traps most organizations in step 1, for through the act of public protesting these organizations can continue to demonstrate their so-called relevance; public protesting is easy, especially with access to the Internet and the existence of a non-authoritarian government. However, as time goes by these organizations are simply lying to their supporters about their relevance because continued public protests on their own will not produce success towards the change these protests claim to desire. Prominent recent examples of this trap are both Black Lives Matter and Occupy Wall Street. <br />
<br />
Perhaps that is one of the more unfortunate problems with these organizations, the idea that the “leaders” of these organizations realize that the organization is ill-equipped to accomplish the change, yet cannot acknowledge that it is time to disband or evolve the organization under the idea that such action would be regarded as failure by supporters. Recall it is much more difficult to demonstrate success from meetings in a boardroom than holding up traffic on the street. Therefore, these leaders instead aim to maintain their positions and any benefits that come from those positions, by simply continuing to focus on step 1 in an attempt to obfuscate their own lack of ability and competency by turning the attention of their supporters to the “evil” of the so-called opponent. <br />
<br />
While the above position is rather cynical, it is also true that certain organizations function under such a mindset. However, the transition beyond step 1 has also proven difficult for non-self-aggrandizing organizations. Thus, these organizations must focus not only on pointing out the problem(s), but also on proposing detailed and valid solutions to those problems. Unfortunately this is not the case in a vast majority of situations. In a sense the step 1 attitude of most of these organizations can be viewed as similar to Homer Simpson’s campaign slogan in “The Simpsons” when he ran for Springfield sanitation commissioner: “Can’t Someone Else Do It”. Basically the organizations state that they have done the “hard” work of pointing out that the problem exists; now someone else can actually fix the problem, for which the organization will take credit.<br />
<br />
Even when organizations propose solutions, those solutions typically contain a variety of holes, usually regarding details and the probability of application, due to a general lack of information and/or bias. For example, the Urban League proposed a “10-Point Justice Plan” to address the negative relationship between the black populace and law enforcement. Unfortunately this “solution” was heavily lacking in detail, largely with respect to general application. It promoted a lot of “universally applied” ideas merely by citing either one program in one particular city or one un-passed piece of Federal legislation. Also it was rather biased and generally naïve. A number of elements of the “solution” could be viewed merely as quasi-demands rather than genuine attempts to solve the problem.<br />
<br />
However, for all of the problems of the “10-Point Justice Plan”, at least the Urban League produced a starting point in which to produce solutions. Unfortunately the fact that organizations like Black Lives Matter continue to reside in step 1, protest, draws resources and attention away from that starting point, thereby heavily reducing the probability that a long-term solution even materializes in the first place. This type of behavior goes to demonstrate the disconnect between organizations in step 1 and organizations that have moved beyond it, but claim to be “working” towards a solution to the same concern/problem.<br />
<br />
Another concern with most protests is the tone and a lack of awareness regarding the existing problem. For example, the negative relationship between the black populace and police officers is, in the eyes of the black populace, thought to be entirely the fault of the police. Of course this is not correct, for the black populace certainly does not treat the police with the appropriate level of respect and decorum that is expected for the position, which not surprisingly exacerbates problems in the relationship. Part of the problem is that a number of individuals in the black populace fall into the same pitfall they claim the police do: stereotyping all police as racists out to get them, just as they believe police view all blacks as criminals up to no good. Until the black populace acknowledges and corrects this behavior of stereotyping police officers as racists, among other things, the relationship between the black populace and the police will remain strained, for it is not a one-sided problem.<br />
<br />
Furthermore some may believe that protesting works because they look to the past and see the fruits and successes of protests. Unfortunately in the process of looking upon days long gone there is a lack of understanding in how society has evolved. These successful protest movements were able to demonstrate the power of the protesters to effectively influence society due to their integral role in society. For example The Montgomery Bus Boycott was built entirely around the fact that the general economic survival of the bus company was dependent on its black customers. <br />
<br />
Unfortunately for protesters, over the last few decades economic development and technology have significantly altered the way the economy functions. Globalization and the Internet have generally decoupled major businesses from their immediate surroundings and local consumers. Therefore, local protests tend to only impact local businesses, which frequently only damages the local infrastructure, and that can cause more harm overall than what the protesters are protesting against. So while in the past protests could apply more direct pressure, the manner in which society has changed now mitigates a lot of that direct influence and power. In some respects it can be argued that there are just too many people for protests and boycotts to have any significant economic influence. Now such activity is regarded as anything from mere annoyance to outright criminal behavior, neither of which wins allies.<br />
<br />
In a democracy, change demands voting and placing individuals in power who will produce that change. Unfortunately, while step 1 attempts to create the necessary attention to get prospective voters to care about the issue, it does nothing beyond this element. A lack of voting is definitely one of the major reasons why, despite all of the protesting in the world, so little genuine and meaningful change has actually occurred on most issues.<br />
<br />
This voting issue has been largely noted in minority communities with reference to the local governing body, via claims that minority demographic x makes up 72% of the voting-eligible population while the local government is 80% white, and that this is wrong. However, this point is rather devious and inappropriate. It is important to note that it is biased behavior if an individual with demographic characteristic x votes for a candidate solely because he/she shares that demographic characteristic (i.e. a black person votes for a black candidate solely because he/she is black, or a Jewish person votes for a Jewish candidate solely because he/she is Jewish, etc.). <br />
<br />
This demographic point is rather idiotic to make because a democracy is not structured in such a way that government officials should proportionally represent the electorate’s demographics; the point of a democracy is that government officials should pass policies and govern in a manner that is approved by the majority of voters. However, the above statement commonly made by minority “activists”, that certain communities are 72% x yet 80% of government/civil servant positions are held by whites, portrays a racist/biased mindset that x should be represented in more government positions solely because the electorate is some % of x. Therefore, it is important that individuals vote and that they are informed enough to vote for officials who will best represent their interests, regardless of whether those officials share certain characteristics.<br />
<br />
In the end, individuals/organizations who seek to produce change by initiating protests must understand that protesting can only cast attention on a given issue. Gone are the days when protesting alone could produce valid and meaningful solutions. These solutions are produced later through honest, detailed analysis of the problem to produce an appropriate guideline and outline of a solution, and then through hard work and commitment to turning that guideline into a functioning solution. Protesters must be wary, though, of alienating both potential allies and adversaries through excessive protesting, especially the latter. Excessive protesting can definitely spur the passions of potential adversaries to work harder to defeat the protester(s), not necessarily because they passionately disagree with the idea/object of the protest, but because of scorn directed towards the protesters themselves. Overall, protesters must focus on advancing detailed and thorough solutions to the issues they view as problems rather than focusing on simply protesting those problems with no, or only piecemeal and superficial, solutions.<br />
<br />
Does the Future of Polling Require a Trip to the Past?<br />
One of the hotter, somewhat “nerd” topics in politics of late is the rather significant inaccuracy that has been demonstrated in various public polls from numerous credible polling agencies over the last few years. These inaccuracies range from prediction failures in a number of presidential primaries and Senate elections in the United States to parliamentary elections and the British exit from the EU in Europe, not to mention inaccurate polling results in other countries as well. While laymen may not be overly concerned about these inaccuracies, those in the business as well as a number of political scientists are concerned, for they view polls as an important element in understanding how people view the state of their country and how their values can influence its path. So what are the major problems creating this inaccuracy and what can be done to address them? <br />
<br />
One of the fortunate things about this problem in modern polling is that not only are the authorities on the matter aware that there is a problem, but they seem to have a general idea of the causes. For example, two of the biggest trends creating difficulties for producing accurate polling results are: 1) the increased use of cell phones and the resultant decrease in the use of landlines, making it more difficult and expensive to reach people; 2) people are less inclined to actually answer surveys even when they can be reached. These two reasons are rather interesting and almost ironic in a sense. <br />
<br />
The expansion of technology was thought to make polling more convenient and cheaper, yet it seems that the opposite has occurred. The transition from landlines to cell phones has made polling more difficult in multiple respects. First, the general mobility of cell phones creates a problem in that the area code assigned to the cell phone may not match the area code of where the owner now lives. Obviously this is a problem: asking someone who lives in Maryland about a state Senate election in Washington because their phone has a 206 area code will not produce an accurate or meaningful result. <br />
<br />
Second, increased cell phone use has significantly increased the costs associated with polling through the common random means of creating a sample. While dual sampling frames have addressed the problem of finding cell phone users, Federal law reduces general polling efficiency. In the past, automatic dialers were utilized to speed through numbers that were disconnected or not answered, only passing the call to a live interviewer when it was answered. <br />
<br />
However, the FCC has ruled that the 1991 Telephone Consumer Protection Act prohibits calling cell phones through automatic dialers. With call ratios commonly exceeding at least 10 times the desired end result (i.e. for a survey response of 1,000 people at least 10,000 numbers are commonly dialed), having these calls made by live people significantly increases costs compared with auto dialers. Furthermore, all survey participants must be compensated for the call resources (commonly cell phone minutes); in a landline-dominant world any required compensation was much cheaper relative to a cell-phone-dominant world.<br />
<br />
Making matters worse, the transition from landlines to “cell phone only” individuals has followed the typical rapid adoption path of proven technology: in the U.S. the National Health Interview Survey identified only 6% of the public as using only cell phones (no landlines) in 2004, increasing to 48.3% by 2015, with an additional 18% almost never using a landline. So in a sense almost two-thirds (66.3%) of the U.S. population were more than likely not reachable via landline in 2015.1 <br />
<br />
Obviously even if a pollster is able to reach an individual, that is only step one in the process, for the respondent must also be willing to answer the asked questions. Unfortunately for pollsters the general response rates for individuals have collapsed in a continuous trend from about 90% in 1930 to 36% in 1997 to 9% in 2012.2,3 Not surprisingly there is a concern that this lack of success produces an environment where those who do respond do not comprise an accurate representation of the demographic that is pertinent to the poll. While some studies have demonstrated that so far fancy statistical footwork (so to speak) has been able to neutralize these possible holes, most believe that it is only a matter of time before these problems can no longer be marginalized.3<br />
<br />
This dramatic reduction is somewhat ironic, especially in an Internet era; while a number of people are more than content to spill their guts on various social media sites about the intricate details of their lives, including mundane things like pictures of the lunch they’re about to eat, they are less willing to participate in public polling. Some theorize that Americans as a whole are too busy to answer polling questions, but this explanation does nothing but paint most of those Americans as shallow, for it would be easy for most of them to make time if they so desired. <br />
<br />
Another theory is that the digital age has made actual social interaction more awkward (less comfortable); people are easily able to post various types of information on social networks because the interaction is indirect, with a time gap, and typically with somewhat known individuals, online “friends”, whereas polls are direct interaction in real time with a stranger. This theory holds much more water than the “not enough time” theory, but is also more problematic because it demands a significant personality shift away from how society seems to be trending. <br />
<br />
For example cell phones offer a more effective means to screen calls, and a number of individuals are unwilling to answer calls from unknown numbers unless one is expected (like the results from a job interview). This behavior may also explain why older individuals, those born before the digital age, are much more likely to answer pollster questions; they live outside this digital bubble and have not had their personalities influenced by it. <br />
<br />
A third theory is that people before the digital age were more likely to respond to pollsters because of the psychological belief that answering those questions granted validity and even importance to their opinions due to the nature of the medium, especially relative to those who were not polled. However, now in the digital age, where anyone can have a Facebook page or a blog to post their opinion to the world, there is less psychological value in polling as a medium for someone to express their opinions. Tie this to the fact that the information-ubiquitous environment of the Internet has also muddied the waters, so to speak, regarding what information is important and what information is meaningless. Overall it could be effectively argued that most people no longer see an ego boost from participating in polls, therefore little to no value is assigned to that participation, and people are also more socially awkward about participating, further driving down participation probabilities. <br />
<br />
What can be done about these issues? The most obvious suggestion is that, just as polling moved from face-to-face to the telephone thanks to the advancement of technology, polling must once again evolve from telephones to online. While the most obvious suggestion, there are numerous problems with such a strategy. The first and most pressing concern is that Internet polls on meaningful political issues run by reputable companies have similar response rates to telephone polls. However, the level of bias associated with respondents switches from older individuals to younger individuals, for a vast majority of Internet use is performed by younger individuals. Also drawing a statistically random sample through the Internet seems incredibly difficult in general, and without a random sample, bias is almost guaranteed.<br />
<br />
Polling can be conducted on either a probability or non-probability basis. Probability polling involves creating a sample frame, a randomized selection from a population via a certain type of procedure with a specific method of contact and medium for the questions (data collection method). At times this is easy, like using an employee roster at company A to ask about working conditions; other times it is difficult, especially on larger state/national questions, because the sample population is larger and more disorganized, creating problems in devising an appropriate sample frame, both logistically and financially. <br />
<br />
Non-probability samples for polling are drawn simply from a suitable collection of respondents with only small similarity, largely involving a convenience sample (i.e. those who can most easily be recruited to complete the survey). Internet polling is largely based on non-probability sampling. This structure has problems because, without random selection, it is more difficult to statistically project the opinions of those polled to the general population within the typical margin of error. Also there are problems in comparing the survey population against any target population, creating unknown bias. The inherent age and ethnicity bias with online polling also persists. Some services attempt to overcome bias via weighting, pop-up recruitment and statistical modeling. <br />
<br />
Weighting is commonly used when a sample has a small portion of a particular demographic that is not representative of the total target population (i.e. for a national poll only 17% of the respondents are women). With the national population of women hovering around 51%, the preferences of the women in the sample would be “weighted” three times as much. Obviously the most immediate concern with this method is that, with the smaller number of respondents, the weighting system can “conclude” that more extreme/uncommon views are more widely held if such views happen to be present in the survey. Weighting can also lead to herding and other possible statistical manipulation, especially when compared against other similar polls. Overall one of the biggest problems with weighting is that it is rarely reported directly to the public in the polls presented by media outlets. <br />
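<br />
To make the mechanics of this concrete, below is a minimal sketch of how this kind of proportional weighting can be carried out, using the hypothetical 17%/51% split described above. The function names and the tiny response set are invented purely for illustration; real polling operations weight across many demographic dimensions at once.<br />
<br />
<pre>
# Minimal sketch of demographic weighting, assuming the hypothetical case above:
# women are 17% of respondents but roughly 51% of the target population.
# All names and numbers here are illustrative, not from any polling library.

def demographic_weights(sample_shares, population_shares):
    """Return a weight per group equal to population share / sample share."""
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

def weighted_support(responses, weights):
    """Estimate support from (group, supports_candidate) pairs."""
    total = sum(weights[g] for g, _ in responses)
    support = sum(weights[g] for g, s in responses if s)
    return support / total

if __name__ == "__main__":
    weights = demographic_weights(
        sample_shares={"women": 0.17, "men": 0.83},
        population_shares={"women": 0.51, "men": 0.49},
    )
    # Each woman in the sample counts roughly 3x; each man roughly 0.6x.
    print(weights)
    # Tiny illustrative response set: the few women surveyed lean one way,
    # so their weighted views move the estimate far more than their raw count.
    responses = ([("women", True)] * 3 + [("women", False)] * 1 +
                 [("men", True)] * 8 + [("men", False)] * 12)
    print(round(weighted_support(responses, weights), 3))
</pre>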
<br />
Pop-up recruitment attempts to create a more demographically appropriate sample by having advertisements for a particular poll appear across a variety of different websites, where some of those websites are primarily visited by young black men, others by middle-aged white women, others by gay Hispanic men, etc., hoping to pull in enough diversity to find representation from all groups. These pop-ups also attempt to reduce “busy work” for the participants (i.e. filling out personal information forms) by using proxy demographics based on browser visitation histories. While such a strategy is viable, its overall level of consistent and long-term accuracy is questionable. A meaningful problem is that the tools made to smooth out the accuracy of these methods do not appear universally applicable. Another problem is that only more politically engaged individuals bother to take note of pop-up recruitments, and they may have certain characteristics that skew accuracy.<br />
<br />
Finally some organizations like RealClearPolitics.com and FiveThirtyEight.com use poll averaging, including weighting for historical accuracy and for specific characteristics associated with certain demographics, to create election models and “more complete” polls. While some champion these methods as the future, there is the concern that if most polls become Internet based then the feedstock for these aggregate polls will have the same general flaws, the aggregate polls will carry those flaws over, and there will be no meaningful improvement in value or accuracy.<br />
<br />
It is interesting to note that the age bias associated with Internet polling is naturally self-correcting. Similar to how the telephone bias towards wealthier households existed in the 1940s and 50s and then self-corrected as telephones became more widespread, Internet polling will also self-correct, but in a slightly more grisly fashion. The problem in Internet polling is not a lack of availability, but a lack of usage. As older individuals who have little interest in using the Internet die and their age group is replaced by individuals who became familiar with the Internet in their late 20s, age bias should significantly decrease. However, it is unlikely that polling can wait the two-plus decades for this “natural” self-correction, and even then there is no guarantee that the inherent issues with Internet polling will be solved.<br />
<br />
While producing an accurate and meaningful sample size is becoming more difficult and expensive, it certainly is not impossible and various polls have sufficient size and representation. So what could lead to inaccuracies in these polls outside of sampling issues? <br />
<br />
The two most common problems in polling accuracy are the inability to predict how a voter will change his/her mind before actually voting and inaccurate conclusions regarding who will actually vote. Not surprisingly the former is less the fault of the polling organization than the latter. While they can certainly attempt it, it really is not the responsibility of the polling organization to accurately forecast the probability that voter A, who reports a desire to vote for candidate A, will change that desire and vote for candidate B two weeks later. However, polling organizations can do a better job of determining the likelihood of a particular individual voting and weighting that probability into their polling conclusions. <br />
<br />
For example this “probability of voting” factor is another significant problem with Internet polling, for while 95% of all 18-29 year-olds use the Internet, they made up only 13% of the total 2014 electorate. Meanwhile, while only 60% of those 65 and older use the Internet, and a significant percentage of those resort to only utilizing email, individuals 65 and older made up 28 percent of the 2014 electorate.2,4 Therefore, Internet polls completely missed a portion of the electorate and heavily overvalued the opinions of another portion. That is not the only problem; a Pew study suggested that non-probability surveys, i.e. Internet surveys, struggle to represent certain demographics, i.e. Hispanic and Black adults, with results carrying an average estimated bias of 15.1% and 11.3% respectively.2 <br />
<br />
It is important to note that voters reporting a higher probability of voting than they actually follow through on is nothing new. Over the years it has been common for 25% to 40% of those who say they will vote to end up failing to do so.2 To combat this behavior polling organizations attempt to predict voting probability through the creation of a “likely voter” scale. <br />
<br />
One method polling organizations utilize to estimate the likelihood of voting is to review past turnout levels in previous elections, while applying appropriate adjustments regarding voter interest due to the type of candidates, the type of prominent issues, the competitiveness of the races, ease of voting and level of voter mobilization in the polling area.2 These estimates produce a range for a voting probability, a floor and ceiling, which is used to create a cutoff region. <br />
<br />
A pool of possible voters to compare to the voting range is created based on answers to a separate set of questions. For example a recent Pew analysis utilized the following questions to determine voting probability:2 <br />
<br />
- How much thought have you given to the coming November election? Quite a lot, some, only a little, none <br />
- Have you ever voted in your precinct or election district? Yes, no <br />
- Would you say you follow what’s going on in government and public affairs most of the time, some of the time, only now and then, hardly at all? <br />
- How often would you say you vote? Always, nearly always, part of the time, seldom <br />
- How likely are you to vote in the general election this November? Definitely will vote, probably will vote, probably will not vote, definitely will not vote <br />
- In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote? Yes, voted; no <br />
- Please rate your chance of voting in November on a scale of 10 to 1. 0-8, 9, 10<br />
<br />
From these questions statistical models are created that assign a probability of voting to each participant based on their answers and the weighting of each question. Sometimes these models are also used in other concurrent elections or even future elections, but when this occurs one must be careful to ensure the assumptions remain appropriate for accuracy considerations. This modeling method is viewed as more accurate because it incorporates all of the questions instead of focusing on one or two, like the last one: “Please rate your chance of voting in November on a scale of 10 to 1.” Also this method still allows for the incorporation of respondents who answer low on one particular question, for instance those who did not vote in the last election, as possible voters. <br />
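<br />
As a purely illustrative sketch of the kind of model described above, the snippet below scores the answers to the screening questions and pushes the combined score through a logistic function to produce a probability of voting. The answer scores, weights and intercept are invented for the example and are not Pew’s actual model.<br />
<br />
<pre>
# Illustrative likely-voter model: each answer maps to a 0-1 score, and a
# logistic model with made-up coefficients turns the weighted sum into a
# probability of voting. Question names and weights are assumptions.
import math

ANSWER_SCORES = {
    "thought_given":    {"quite a lot": 1.0, "some": 0.6, "only a little": 0.3, "none": 0.0},
    "voted_precinct":   {"yes": 1.0, "no": 0.0},
    "follows_politics": {"most of the time": 1.0, "some of the time": 0.6,
                         "only now and then": 0.3, "hardly at all": 0.0},
    "votes_how_often":  {"always": 1.0, "nearly always": 0.8, "part of the time": 0.4,
                         "seldom": 0.0},
    "plans_to_vote":    {"definitely": 1.0, "probably": 0.7,
                         "probably not": 0.2, "definitely not": 0.0},
}

# Hypothetical per-question weights and intercept for the logistic model.
WEIGHTS = {"thought_given": 1.2, "voted_precinct": 0.8, "follows_politics": 0.6,
           "votes_how_often": 1.5, "plans_to_vote": 2.0}
INTERCEPT = -3.0

def voting_probability(answers):
    """Combine per-question scores into a single probability via a logistic function."""
    z = INTERCEPT + sum(WEIGHTS[q] * ANSWER_SCORES[q][a] for q, a in answers.items())
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    respondent = {"thought_given": "some", "voted_precinct": "no",
                  "follows_politics": "most of the time",
                  "votes_how_often": "nearly always", "plans_to_vote": "probably"}
    print(round(voting_probability(respondent), 2))  # roughly 0.72
</pre>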
<br />
While asking these types of questions is appropriate, polling organizations may hurt themselves because, while there is no single silver-bullet question to determine whether or not person A votes, different organizations use different questions to produce their probability results. This lack of standardization can create inefficiencies; it seems to make more sense for all organizations to use the same questions to determine voting probability in order to better identify which questions are good predictors.<br />
<br />
Past voter history is not only a meaningful factor, it has been demonstrated to be a rather effective means of predicting future turnout.2 However, there is a concern that poll participants may misremember their voting history, especially because voting takes place so rarely and is a rather unmemorable event for most. Therefore, pollsters also attempt to measure voting probability by including voter history from voter registration files, but this method is somewhat inconsistent between polling organizations. The reason for this inconsistency is that most surveys still require random phone dialing or Internet recruitment, and it is difficult to acquire the names and addresses needed to tie the roster back into the voter file due to increased workload or lack of willingness on the part of the respondents. <br />
<br />
Another way that voter registration files could be useful is in eliminating some of the randomness when utilizing the phone to produce a poll roster. For example matching telephone numbers to a voter file can produce information that narrows the number of calls needed to fill a poll roster for a certain demographic. Some organizations have claimed to reduce the number of calls required to fill poll rosters by up to 70% using this type of method.5 Such a method is also thought to reduce problems associated with sampling error.<br />
<br />
Interestingly enough the general response of the polling community to the issues of inaccuracy, smaller sample sizes and increased costs is to depend more on technology, data mining and statistical analysis, which have only demonstrated the ability to “hold off” worse results, but do not appear to offer any direct means of improving the situation. <br />
<br />
However, one wonders why polling organizations do not simply return to their roots in a sense. Instead of resorting to more technology and more statistics, why not simply “go out among the people”? What are the negative issues with the larger organizations producing branch offices of sorts where they can set up polling stations in high-traffic areas to directly engage individuals, instead of calling at awkward times or hoping to get proper sample sizes from various politically motivated Internet users while the rest ignore those pop-ups advertising a poll? <br />
<br />
To facilitate better interaction with possible poll responders, instead of an individual standing in a general location with a survey and clipboard, which can put a number of people immediately on guard and leads some to purposely alter their paths to avoid the clipboard individual, the polling agents should set up a table clearly labeling their intent. Also, to compensate individuals for their time, the polling agents should offer small items in exchange for answered questions: Frisbees, lighters, little Nerf footballs, etc. It would surprise a number of individuals how many people walking down the street on other business would be willing to spend 5-10 minutes answering questions for a free little Nerf football. It would be easy to set up such an environment rather seamlessly at a farmer’s market or in a shopping mall. <br />
<br />
The results could then be reported to a main “data center” for the polling organization and pooled into a single poll relative to a national issue. Such a method should more than likely reduce overall costs while producing more accurate information. Of course this is only one possible means to address the problem without hoping that technology can “magically” fix it. <br />
<br />
In the end the “crisis” in polling might simply be an internal one of little relevance. For example, is polling even important anymore with regards to elections? Suppose candidate A has ideas A, B and C and opposes ideas D, E and F. If polling demonstrates that candidate A’s constituency values ideas A, C and F, doesn’t candidate A look bad changing his position on idea F from con to pro based on that data? The change would be based on public opinion, not an actual change in the facts surrounding idea F. Typically governance by political polling leads to poor governance.<br />
<br />
Another important question is why it is important that the public have polling information available. Are polls only useful for individuals to have a measuring stick of the level of value that the rest of society places on a particular issue or the popularity of a particular candidate? If so, what is the value of John Q. Public having this information? Certainly person A will not change their value system just because a public poll seems to produce a differing opinion. <br />
<br />
The reality of the situation is that for the most part the polling information available to candidates for a particular office is more accurate and advanced than the information given to the public. Also, only those who work for a particular issue or candidate seem to have enough motivation to be influenced by a poll result to work harder for their particular issue. Overall, is media-reported polling just something else for the media to talk about, a time filler? Maybe the real issue with public polling is not how its accuracy can be improved/maintained, but what role it really serves in society. Perhaps changing the nature of polling back from an indirect activity on a computer screen or telephone to a direct face-to-face exchange between people can help answer that more important question.<br />
<br />
<br />
--<br />
<br />
Citations – <br />
<br />
1. Blumberg, S, and Luke, J. “Wireless substitution: early release of estimates from the national health interview survey, July – December 2015.” National Health Interview Survey. May 2016.<br />
<br />
2. Keeter, S., Igielnik, R., and Weisel, R. “Can likely voter models be improved?” Pew Research Center. January 2016.<br />
<br />
3. DeSilver, Drew and Keeter, Scott. “The challenges of polling when fewer people are available to be polled.” Pew Research. July 21, 2015. http://www.pewresearch.org/fact-tank/2015/07/21/the-challenges-of-polling-when-fewer-people-are-available-to-be-polled/<br />
<br />
4. File, T. “Who Votes? Congressional Elections and the American Electorate: 1978–2014.” US Census Bureau. July 2015.<br />
<br />
5. Graff, Garrett. “The polls are all wrong. A startup called Civis is our best hope to fix them.” Wired. June 6, 2016. http://www.wired.com/2016/06/civis-election-polling-clinton-sanders-trump/<br />
<br />
Forming the Battle Plan for Addressing Teaching Reform in the 21st Century<br />
<br />
The notion of education reform is certainly not a new concept, but it seems to accomplish less and less meaningful and appropriate change as the years advance. One of the major reasons various reform movements appear to produce little success is too much focus on specific “pet” methods without critically analyzing their applicability in large-scale environments. Instead of focusing on how to better fire teachers, lauding some trendy non-scalable niche example as the solution and looking to divert money to charter schools that perform no better, or even worse, than their public school competition, reformists should systematically look at the system, identify the flaws and then act to remove those flaws with scale-appropriate solutions. So what are the important elements to advancing education that reformers tend to get wrong?<br />
<br />
An important element that must be addressed in education is facilitating student motivation with career prospects at an early age to ensure appropriate enthusiasm. Unfortunately not all students appreciate and understand the underlying benefits of education, the acquisition of information in general, thus they can reject its importance. If a student does not possess the drive to learn through some form of motivation then any teacher, regardless of overall quality, will struggle to transmit knowledge to that individual. Incorrectly, most reformists believe that it is the sole responsibility of the teacher to nurture and cultivate any motivational potential in a student. The idea that it is the responsibility of teachers to motivate their students is ridiculous due solely, but not limited, to the vast diversity in the psychological make-up of their students. To expect teachers to deploy numerous different strategies to ensure student motivation is asking for something completely unreasonable and untenable.<br />
<br />
Most of the time motivation for learning comes from engaged and caring parents, for it is standard psychology that most children want to receive praise from their parents by acting in a manner that will be received positively. Even for those that do not fit this profile, an educationally engaged parent can use his/her position as parent to command the child to “care” somewhat about education via either carrot or stick type motivators. If the parent is not engaged in the value of education the student needs to find motivation elsewhere, either through competition with other students or through their own desires, but should not expect such a void to be filled by the teacher. Can a teacher fill it? Yes, but it should not be expected. Overall though, none of these motivating factors are relevant if not directed towards a meaningful conclusion. <br />
<br />
Therefore, the entire process of education must be more cooperative between the home environment and the school environment in identifying the passions and interests of students and applying those interests to the education process, largely through demonstrating how even so-called “mundane” topics like math and the various sciences tie into those passions. With this methodology, education becomes an amplifying positive force for that particular passion rather than a negative, detracting and distracting force. In addition, not only will this process provide internal motivational fuel for the student (i.e. “I want to be an astronaut”), but it will also provide a road map of sorts to achieving that passion, for in the past there have been plenty of educationally motivated students who have fallen short because they were ignorant of the prerequisites and other requirements demanded by their passion. <br />
<br />
Achieving this methodology will highlight the importance of guidance counselors, whose role has waned in modern times. Early in a student’s academic career (1st/2nd grade) guidance counselors should be the principal actors in identifying the student’s passions and deducing the best career path for that student to exercise those passions. Every two years there should be some “check-in” period to reassess passions and interests and formulate a new path if needed. This method allows guidance counselors to actually perform their assigned role and no longer burdens teachers with a task outside of their intended role, motivating the student. Now teachers can instead focus on providing an optimized educational environment in which to instruct the students, an actually appropriate expectation, rather than play cheerleader to the individual tastes of their students. <br />
<br />
Proper management of student expectations is also important for increasing the effectiveness of education. Course syllabi must be presented early (day 1 or 2) and be transparent in how grades will be produced, what type of class behavior is expected, what students are expected to learn, the schedule of events and special projects, etc. Setting expectations regarding instruction is also essential, for despite what some critics would like the public to believe, education cannot be exciting and entertaining all the time, or heck, even most of the time. Certainly quality teachers can add certain dynamic elements to lectures to produce a more “inspirational” product, but no one can make teaching something like a literature review for a research paper, necessary to ensure proper background and sourcing, fun. Such a task is one of drudgery that demonstrates the importance of gumption and focus in the educational process. <br />
<br />
Tied to the above point, another important element is to psychologically prepare students to embrace the discomfort of learning. Some argue that learning is not fun and education needs to reflect that, but it can be countered that such an environment has already been attained for a number of students; this is a major problem, for if students regard learning and education as painful and frustrating then they will be less interested in engaging in the process and will look for shortcuts (i.e. cheating) just as easily as if they think learning should always be fun and exciting. <br />
<br />
Instead one must focus on the discomfort of learning in the context that it is frustrating when one does not know something one wants to know, but proper instruction and hard/smart work makes that frustration ephemeral. Basically learning is only “not fun” when no progress is being made. If progress is made (i.e. some knowledge being acquired piece by piece) then learning produces a noticeable sense of accomplishment and pain/frustration is limited and short-term. Therefore, one of the chief strategies in the educational process is to focus on why someone is not making progress and rectify it. This is not to say that education and learning is always effortless, but there is always a purpose to the effort.<br />
<br />
One of the more hotly debated elements of education is the structure of how information is transmitted from the teacher to the students. Many modern “educational reformists” lament and criticize the continued dominance of traditional education involving a teacher lecturing students on a given topic. These individuals frequently cite the advantages of engaging in teamwork-based activities and focusing on the Socratic Method (SM) of teacher-student engagement in lieu of basic lecturing. <br />
<br />
The most significant advantage of the SM is that the interaction between the teacher and the individual through direct question-and-answer sessions increases the probability of understanding due to active learning rather than passive learning. During “traditional” lectures students must rely on self-motivation to ensure dynamic learning rather than hoping for learning through osmosis (in a sense). The SM takes some of the motivation burden off of the student through the direct discussion of the topic with the teacher. <br />
<br />
Unfortunately most “educational reformists” lack classroom experience and seemingly fail to realize that most public schools have large class sizes (25+ students, usually more) that make the administration of the SM rather difficult without utilization of a scattershot strategy (randomly engaging certain individuals, not everyone). A meaningful concern with the SM in large groups is that direct one-on-one engagement can cause other students to lapse in their attention, limiting the effectiveness of the current learning experience. One thing that lectures are not given credit for is that they do provide a meaningful focal point for all students that direct one-on-one discussion can lack. Also too much interaction can lead to time crunches when it comes to instructing on all of the requisite information. <br />
<br />
This misinterpretation of the “universal applicability” of the SM in public institutions largely exists because “reformists” largely focus on viewing the practices of schools with small overall enrollment and class size, typically heavily privately funded charter schools, as the basis for determining “what works in the classroom” and what should be applied in public education. This mindset does nothing but continue to make real and appropriate reform more difficult. Overall, as noted above, the appropriate way to instruct in the modern “educational environment” appears to be the combination of the SM and lecture by periodically and consistently engaging random students in brief 1-2 question sessions that capture the individual’s attention, but do not expend enough time to significantly threaten the loss of attention from the rest of the class. <br />
<br />
The matter of teamwork is a little more interesting because the advantages of teaching to teams are significant. For example working in a team can provide a less stressful environment for certain individuals, eliminating the detriments of working alone that could negatively impact the educational process. It can help interpersonal relationship development by giving individuals experience with working through problems with others in low stress/stakes environments. Also it provides growth and intellectual development by exposing individuals to additional and different viewpoints and interpretations of the lessons from other team members that may help augment understanding of the information. <br />
<br />
However, there are some disadvantages to working in a team. The most pressing issue, one that most either do not want to talk about or are not aware of, is that most of the above advantages depend on motivated students that want to learn and want to actively interact with their fellow classmates. Without this motivation, weaker and/or less enthusiastic students can hide behind stronger students, letting those individuals do the work for the team and not focusing on learning the material themselves. This strategy of “let the smarter kids who care about their grades do the work because they don’t want to fail” has always been a problem in teamwork-related elements of primary and secondary education, especially for large, long-duration projects. <br />
<br />
This behavior is manageable in the scope of small assignments, for while homework and in-class work could be performed in groups, quizzes and tests would still be individualized, forcing students to limit the practice of the strategy because a vast majority of the grade is still based on their own accumulation and practice of course knowledge. However, for large projects this behavior can be significantly detrimental to the team as well as individuals because it is difficult for the teacher to dissect how important each student’s contribution was to the success or failure of the project. <br />
<br />
One means of addressing this problem has been to have students evaluate the performance of their teammates at the conclusion of any big project, but such a method always draws concern of bias between teammates. An alternative option for big projects may be weekly evaluations of performance on a 1-10 scale over 3-4 different categories, with explanation areas for why the numeric score was given. The teacher can keep these evaluations and then use them as a metric of how the dynamic of the team may have changed and as a more accurate assessment of how the students felt the workload was divided, instead of relying on a single evaluation at the end of the project when emotion and tension can influence the product and spotty memory can interfere with accuracy.<br />
<br />
Another concern with teaching teams is that weaker-voiced/low-confidence individuals can have their opinions overshadowed by stronger-voiced individuals, which can lead to a reduction in their already wavering confidence. Handling this problem can be tricky because dominating personalities are not necessarily malicious and teachers cannot proctor each group to ensure all opinions are being heard and given a fair evaluation. There are two direct ways of lessening problems stemming from this type of personality clash. First, the teacher can periodically poll the group when asking for an answer, inquiring how each student views the problem. Fortunately such a strategy does not appear too time consuming because once per class should be enough for shyer students to have their voices heard. Second, allow the students to form their own teams. <br />
<br />
This issue of the origins of team formation creates a third, smaller problem. Clearly allowing students to form their own groups can eliminate a large amount of potential interpersonal conflict within the team; however, allowing students to only associate with what is already familiar mitigates a lot of the advantages born from teams through the ability to work with the unfamiliar and understand different types of thought. Overall a middle solution appears most appropriate; before selecting the teams the teacher asks each student to indicate on a piece of paper the 3 classmates he/she would not like to be associated with in a team and then seeks to accommodate as many of these wishes as possible (a rough sketch of such an assignment procedure is given below). This strategy limits the amount of interpersonal conflict in a team by separating individuals that might have outside conflicts while retaining enough differentiation to ensure value from working in the team. Note it is not the responsibility of the teacher to resolve these conflicts, thus they are best avoided in the classroom.<br />
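<br />
The sketch below is one rough way such an assignment could be mechanized: each student lists up to three classmates to avoid, and teams are filled one student at a time while honoring as many of those requests as possible. It is only an illustration of the idea; a real implementation might score many random shuffles or use a proper constraint solver.<br />
<br />
<pre>
# Rough sketch of the team-assignment idea above. Names and the greedy
# strategy are illustrative assumptions, not a prescribed method.
import random

def form_teams(students, avoid_lists, team_size):
    """Place students into teams, preferring teams containing no one on their
    avoid list (or anyone who listed them)."""
    teams = [[] for _ in range((len(students) + team_size - 1) // team_size)]
    order = students[:]
    random.shuffle(order)
    for s in order:
        def conflicts(team):
            return sum(1 for t in team
                       if t in avoid_lists.get(s, ()) or s in avoid_lists.get(t, ()))
        open_teams = [t for t in teams if len(t) != team_size]  # teams with room
        best = min(open_teams, key=conflicts)  # fewest conflicts for this student
        best.append(s)
    return teams

if __name__ == "__main__":
    roster = ["Ana", "Ben", "Cal", "Dee", "Eli", "Fay", "Gus", "Hal"]
    avoid = {"Ana": ["Ben"], "Cal": ["Dee", "Gus"], "Fay": ["Ana"]}
    print(form_teams(roster, avoid, team_size=4))
</pre>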
<br />
Overall with regards to teaching to teams: when possible, teams should be used during basic instruction, including lectures with a level of interactivity, but tests should be individually based to ensure a strong motivating “carrot” for individual learning. Team interactivity and creation should follow the above suggestions to maximize learning potential and effectiveness. <br />
<br />
Another element that is widely touted as the “wave of the future” with regards to education is not only in-class teamwork, but also large team projects where the team engages in a multi-week, even multi-month, task. Clearly the motivation behind this idea is that learning by doing is one of the best ways to acquire knowledge, especially to practice critical thinking and creativity; in addition such projects can provide a venue to evaluate the depth of that acquired knowledge by applying theoretical concepts in empirical practice. <br />
<br />
Unfortunately, while the sentiment is understandable, a number of supporters of this methodology fail to acknowledge that such projects are very time consuming and expensive from the school’s perspective, thus such an instructional strategy is an almost guaranteed non-starter for most inner-city and rural schools. Also initial project design is important to ensure students stay on task and have organized benchmarks to document progress. This makes the introduction of such a program difficult as well, because to test the theory one must put it into practice, which takes time and resources, and redundant projects may not be valuable depending on the subject matter. <br />
<br />
Proponents will conclude that such projects have succeeded before, citing various group projects involving building robots, devising responses to various natural disasters or culturing different types of cells to determine how they interact with various types of bacteria. While there are certainly a number of success stories regarding this method, the failures are less known because they are not made public, so it is difficult to deduce the effectiveness of such programs. Overall it is reasonable for a high school to explore a single elective class that focuses on the completion of a large-scale project and to introduce smaller two- to three-week-long projects in some other classes, but any expectation that such a methodology will become the norm is foolhardy until the public school system is funded at a much higher level than it currently is.<br />
<br />
The structure of grading is also an interesting issue with regards to the future of education. One of the more prominent discussions over the years has been the amount of homework that should be assigned to students. Before discussing the level or amount of homework it is important to establish the purpose of homework. For the course of this discussion the role of homework will be defined as: a tool to produce a means for a student to genuinely increase the probability of understanding particular concepts in a low stress environment versus proctored on-site examinations. Also for homework to be relevant it must be designed in a way that maximizes its practicality and usefulness. Rarely will reality simply give a person a single equation or thought process that will solve the problem. For example while a common math problem may read: “21 divided by 4 = ???”; this is clearly not how problems are encountered in reality, with 90%+ of the work already done. Instead such a problem should be presented to the students as: <br />
<br />
John and Suzie want to bake some apple pies for their school’s bake sale. John has collected 10 apples from the trees around his house and Suzie has collected 11 apples from the trees around her house. If it takes 4 apples to bake 1 pie how many pies can John and Suzie bake and how many apples will they have left over after all the baking is done?<br />
<br />
From this structure, which is much more akin to reality, a student should create the equation 21 divided by 4 = ???. So step 1 with regards to the homework aspect of knowledge evaluation is to make sure homework problems properly represent real-life experience.<br />
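<br />
(Working the word problem through: 21 divided by 4 equals 5 with a remainder of 1, so John and Suzie can bake 5 pies and will have 1 apple left over; the arithmetic is unchanged, but the student had to construct it from the situation.)<br />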
<br />
Step 2 is to ask how homework should play into the evaluation process. One could inquire about the fairness of homework being a significant portion, or even any portion, of the grade if its central role is that of a low-stress practice tool for understanding the general overarching concepts. What if the student does not need to do the homework to understand the material because the lecture period is enough to achieve understanding? Should that student be, in essence, forced to do the homework when he/she could use that time for other activities, either family-oriented or pleasure-based? For example some students may not have a sufficient amount of time to do homework that is unnecessary (due to already achieving understanding) because of an imperfect family life where they have to take care of younger siblings, work nights to earn extra money to help support the family, etc.<br />
<br />
One point of argument for a high evaluation metric for homework is that it provides another avenue for students who struggle with communicating acquired knowledge in a testing environment. It cannot be disputed that a test in a classroom environment inherently involves more pressure than homework assignments completed in an environment of the student’s choosing. Some students do not have the ability to effectively manage this increased pressure, thus their ability to demonstrate their knowledge suffers accordingly. The principal purpose of the grade for a course is to conveniently measure how well a student acquired knowledge in that course, not how well a student can manage a high-pressure situation. Therefore, a high evaluation metric allows the grades of a student who “does not test well” to more accurately reflect the knowledge acquired within the course.<br />
<br />
Some opponents could argue back that while addressing students that “do not test well” is a positive element for a high evaluation metric, it is more probable that highly evaluated homework conceals poor performance. Students can use homework to bolster overall grades that are detrimentally marred by poor examination results; poor results not due to mishandling stress, but simply due to lack of knowledge. Thus, this evaluation structure misrepresents a student’s knowledge in a particular topic portraying that student as more competent than they otherwise are, a disservice to colleges, future employers and the students themselves. However, this analysis only seems valid if the assigned homework is of substandard quality and/or design. If the homework is properly designed to reflect acquired concepts of the class then using homework grades as a counter measure to examination grades is reasonable.<br />
<br />
It must be remembered that the bounds of time do not only impact students. Teachers, especially those with more dynamic topics like history, find themselves having to impart more and more information over the same fixed time period. Unfortunately the total amount of information that needs to be discussed limits the available amount of instruction time for each specific topic. Without the ability to rigorously cover a particular topic to the point where students have been exposed to it enough, the probability that the students understand the topic decreases. Homework substitutes for this lack of class time to increase learning and retention probabilities. This supplementary aspect of homework hurts those who argue for no/little homework.<br />
<br />
It can be argued that there is a typical perceived knowledge vs. actual knowledge gap for most students. There are a number of instances in school, and life in general, where an individual may think he/she has sufficient knowledge in a given subject, but when actually tested on that topic this individual quickly realizes that he/she does not have as much knowledge as previously thought. Homework provides a means to address this perception/reality gap before it becomes exposed on a test to the greater academic detriment of the student. Overall, is there a strategy that can provide a motivational aspect to doing homework while not burdening those who do not need to take advantage of the practice characteristics of homework? The strategy below seems to be one way to address this issue.<br />
<br />
• Homework is given out on a weekly basis; every Monday an assignment is given out covering all of the scheduled material that will be discussed in class over that same week; the assignment is expected to be turned in at the beginning of class on the following Monday (for example homework assigned on Oct. 13 would be turned in on Oct. 20 at the beginning of class); answers for the previous week’s homework would then be posted or handed out at the end of class on Monday.<br />
<br />
• Homework will count for 0% of the grade. The reason is that homework, as previously discussed, is designed to give the student multiple opportunities to practice learning the given material. Taking a grade from material that is supposed to be practice is not very fair. Therefore, because homework does not count for any percentage of the grade the students do not have to do it or turn it in if they do not want to.<br />
<br />
• Grades will be determined by 4 tests; 3 section tests each worth 25% of the grade and 1 cumulative final worth 25% of the grade. As a partial motivator to do homework, students may retake one of the section tests if they turned in at least 75% of the assigned homework within the corresponding section and demonstrated a legitimate effort to learn from the homework (a simple sketch of this grade calculation follows below).<br />
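<br />
For illustration only, the short sketch below implements this grading scheme, including the homework-based retake rule; the check against the 75% threshold and the choice to keep the better of the two scores on a retaken test are assumptions made for the example.<br />
<br />
<pre>
# Minimal sketch of the grading scheme above: three section tests and a final,
# each worth 25%, with one section-test retake allowed if at least 75% of the
# corresponding homework was turned in. Details are illustrative assumptions.

def course_grade(section_scores, final_score, homework_turned_in, retakes=None):
    """section_scores: three scores (0-100); final_score: 0-100;
    homework_turned_in: fraction of homework submitted per section;
    retakes: optional dict {section_index: retake_score}."""
    scores = list(section_scores)
    if retakes:
        for i, retake_score in retakes.items():
            if homework_turned_in[i] >= 0.75:              # retake eligibility rule
                scores[i] = max(scores[i], retake_score)   # keep the better result
    return 0.25 * sum(scores) + 0.25 * final_score

if __name__ == "__main__":
    # A student who struggled on section 2 but turned in 80% of its homework retakes it.
    print(course_grade([88, 55, 91], final_score=84,
                       homework_turned_in=[0.9, 0.8, 0.5],
                       retakes={1: 78}))  # prints 85.25
</pre>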
<br />
Overall, while the above suggestion is merely that, a suggestion, the above discussion has focused on two important principal issues in the ‘homework’ discussion. First is the tension between motivating homework completion vs. maintaining the practice characteristic of homework designed to enhance learning. Second is the issue of the opportunity cost of doing homework vs. undertaking other activities. The chief element of this issue boils down to the immediacy of the opportunity cost. The time crunch created by homework, which is frequently associated with increased stress, typically develops through two mechanisms. First, most students, especially as they advance in grade, have to deal with multiple subjects demanding multiple solution methodologies. Second, homework frequently functions through daily turnover. While the individual assignments may not account for much, having to sacrifice enough of them due to more important tasks (like the job to help your family) can add up quickly, damaging the overall grade when using a high evaluation metric (commonly suggested for motivational purposes). <br />
<br />
Unfortunately there does not appear to be a single magic bullet to deal with both issues, but expanding the homework turnover scope could certainly help. As suggested above, assigning homework at one particular time to account for the entire week gives the students more flexibility to address the homework. If their time is demanded by a particular activity on a given night, time can be budgeted later in the week to complete homework that would have been missed due to that activity. Another potential advantage to assigning homework in a greater-than-day-by-day quantity is that it may be easier for students to make connections between building-block concepts when doing ‘three days’ worth’ of homework in one sitting instead of doing the work over a three-day period with multiple interruptions. Such a system could also encourage more ambitious students to ‘read ahead’ in an attempt to do the homework before the class lesson addresses the material.<br />
<br />
One question that comes to mind for such a system is how it changes the grading burden on teachers. Under a more expanded turnover system with a firm homework hand-in date teachers may have more homework to grade at once, but by providing a universal answer key after the homework is turned in, the teacher has more flexibility in the time allotted to grade the homework and return it to the student. This increased time flexibility is important, for grading homework is one of the most daunting and potentially frustrating tasks for a teacher, one that is commonly overlooked by most education reformers when considering teacher workload. Also teachers have lives outside of the educational environment, just like students, and may want to devote certain periods of that time to other tasks. <br />
<br />
Another useful change to improve the educational experience would be more cooperation among teachers within a given field of instruction. For example synchronizing the free/prep period for all teachers of the same general subject matter, i.e. all English teachers, would provide opportunities for teachers to converse regarding the instruction of certain subjects within the field. In fact it would be appropriate for teachers to have a weekly meeting during one of these prep periods to maximize problem solving and instruction capacity.<br />
<br />
Obviously one of the most critical elements to improving the educational system is to create an environment where the profession of teaching is respected once again. One aspect of this change would require teachers having more power in the classroom to control improper behavior. One means to accomplish this change is to allow teachers to negatively influence a student’s grade when that student provides a disruptive influence on the learning environment. A good pilot program would be for the teacher to have the authority to deduct up to a maximum of 10% from the grade of an individual for misbehavior at certain predetermined intervals. <br />
<br />
Some might immediately object to such a system using the argument that behavior should have nothing to do with determining the class grade because the grade should be exclusively contingent on demonstration of acquired knowledge through prescribed evaluation metrics like homework, quizzes and tests. While on its face this objection may seem appropriate and fair, the problem is that it views the behavior in a vacuum. Basically it suggests the premise that negative behavior only produces a detriment towards the offending individual, and that if the individual can perform at a certain level on the evaluation metrics without showing respect or paying attention in class then there should be no punishment. However, such logic is clearly incorrect because in the classroom environment a vast majority of negative behavior provides a detrimental element to the overall environment, disrupting the ability of all parties to learn the information. The behavior commonly produces a detriment towards multiple parties even if it is undesired or unwarranted by those parties.<br />
<br />
For those who attempt to retain the purist assumption from above, it is important to acknowledge that tolerance for such negative behavior is typically not allowed in the professional workplace, and if one of the chief elements of education is to prepare an individual for a career on some level, then such behavior should not be allowed in the classroom without consequence either. For example if an individual performs his/her job well, but facilitates such a negative environment that it detracts from the performance of others to the point where the company as a whole suffers, that individual will typically be either told to change their behavior or be fired. Legal barriers prevent students from “being fired” both from the classroom and the education system in general, thus the best secondary option is to affect grades. <br />
<br />
Another possible argument against this strategy is that the individuals who have the highest probabilities of misbehavior are those who care the least about grades and school in general. Therefore, how will this punishment system act as a meaningful deterrent? Well, if the suggestions from above relating to linking various aspects of education to the successful advancement of one’s passion are applied, then a vast majority of individuals should care about their grades to the point where behavior can be reasonably managed through such a punishment. Even for those who do not accept the link between their passions and education, to simply produce no consequence for disruptive behavior is irrational. For example it is widely acknowledged that various people will exceed legal speed limits over the course of their driving career, so with this reality in mind should there be no punishment for violating these laws? Certainly not, for it makes no sense to eliminate a valid and appropriate punishment for the violation of a valid social norm or law. Understand that grade reduction would only be one tool in the toolbox for teachers to address bad behavior.<br />
<br />
Another important issue in improving education in modern society is managing the integration of technology into the classroom environment. This point is certainly not unique; however, most individuals who sing the praises of technology as a “revolutionary” force in education are not teachers. Instead they are business people, entrepreneurs, educational commentators, etc. who only see the positive elements of technology in education, frequently commenting with annoyance that technology is not more widespread. <br />
<br />
Interestingly enough, if these commentators did have teaching experience they would quickly realize that technology has already penetrated almost all classrooms in the form of smartphones. Unfortunately these elements are not positive, but a net negative, producing significant distractions and emboldening those who wish to cheat on quizzes and tests. It is true that technology can provide a significant boon to education, but it can also provide a significant detriment, and it is important that all parties acknowledge this reality. So what can be done to neutralize the detrimental aspects that technology can bring to education?<br />
<br />
The main aspect of this issue is how to manage technological distractions. The best solution is to put instruction into place where there is no legitimate need to utilize the technology and then ban its use for the duration of class time. Now it stands to reason that technophiles would cry foul at this type of strategy, once again citing the importance of technology in the classroom, especially in sparking student interest due to the length of time technology is incorporated into student life outside of the classroom. This objection highlights a problem in the presented arguments from those who support technology in the classroom: the general drive to force the influence of technology into all aspects of the classroom. The simple fact is that most classroom activities do not benefit from the incorporation of individualistic technological action. Yes, teachers can typically instruct more effectively using programs like PowerPoint versus transparent slides or a chalkboard, but students are not significantly benefited by following along with the lecture on their smartphones or laptops. <br />
<br />
In essence there needs to be a dividing point between when students can use technology and when they cannot, and the “cannot” would occur during the lecture portion of the class. Clearly there are very small and specific exceptions to this principle; for example when lecturing about computer programming it would make sense for students, if applicable, to be at computers applying the elements of the lecture to increase familiarity with the operation of the concepts. However, despite the erroneous beliefs of technophiles, most topics do not lend themselves to this type of interaction, thus the utilization of technology by students during the lecture will result in a reduced probability of comprehension, not an increased probability.<br />
<br />
What would possible penalties be for student driven technological distractions? This question leads to two schools of thought relative to the expectation of respect for the instructor. Clearly one can argue that a student that does not pay attention in class, after accounting for outside psychological factors, is not showing proper respect to the instructor. However, if this lack of attention does not create a distracting environment for others (for example the student is doodling in a notebook, but not making enough noise to draw attention to this fact), should such behavior matter? <br />
<br />
The answer to this question boils down to two issues: what is the obligation of the student to demonstrate respect for the teacher, and what is the obligation of the teacher to ensure the student pays attention to the instructed material? The simplest philosophy in this issue is that the student is chastised for the lack of attention and told to correct the behavior, and the lecture will not continue until the student complies. The general goal of this practice is to reestablish the authority of the teacher in the classroom setting and ensure the student receives some benefit from the lecture.<br />
<br />
A more interesting strategy is that if the student is not demonstrating behavior that will actively disrupt class and his/her behavior is on a limited scale (only 1-2 individuals in a class of 30 are not paying attention), then the teacher should not care about the behavior, leaving the student to understand the instructed material himself/herself. If the individual cannot understand the material then he/she should score poorly on the evaluation metric(s) that cover the particular material, which would be the fault of the student. Again it is not part of the teacher’s job to ensure that all students pay attention. If the individual can understand the material without the assistance of the lecture, why should the student be forced to pay attention to the lecture instead of engaging in a non-distracting alternative activity? <br />
<br />
A more interesting question is what the teacher should do when a number of individuals demonstrate a lack of attention, which could be viewed as a lack of respect for the authority of the teacher. As noted above, the teacher has two options: 1) stop lecturing until the class ceases its lack of attention; 2) continue to lecture, placing the individuals who are not paying attention at a possible disadvantage on later evaluation metrics. A traditional and even modern viewpoint of teaching would instantly dismiss the latter option and criticize the teacher for not being able to keep the attention of the students. Of course almost all with this opinion have never taught a day in their life in an educational environment, thus the significance of their opinion is heavily marginalized. The problem with the first option is that a student’s lack of attention is rarely acute and typically habitual, thus correcting the behavior is more difficult than simply telling the student to pay attention. This reality is what makes the second option interesting when combined with the career affinity option discussed earlier. <br />
<br />
One could argue that most habitual and “disrespectful” lack-of-attention behavior can be addressed by applying the above strategy of tying the passions of individuals to the subject matter taught in various classes. Thus, once again after accounting for outside factors, the chief motivation behind a student not paying attention in class would be the internal perception of redundant knowledge. Basically the student already believes that he/she has a grasp of the knowledge presented in the lecture and elects to do something else. <br />
<br />
This perception is not a significant problem because either the student is correct and should be allowed to spend time in the classroom doing something else while not distracting others, which only arrogant teachers would find fault with (all students should pay attention to me, etc.), or the student is incorrect and this perception and the resultant behavior will be corrected after a poor performance on the next evaluation metric. <br />
<br />
The above discussion demonstrates that the important concern is not an individual distracting him/herself, but an individual distracting others. It is at this point where individually utilized technology becomes the problem. All rational people will agree that there is a significant difference in noise generation between an individual doodling in a notebook or working on math homework for next period versus an individual incessantly tapping on keys/screen or periodically making a sound like a laugh in response to a piece of video. Basically the utilization of technology as the element of distraction dramatically increases the probability that others who want to focus on the content of the lecture are also distracted. Therefore, individual technology must be appropriately managed through penalties similar to those discussed above for behavioral infractions.<br />
<br />
Overall the administration of technology in the classroom is the prerogative of the teacher despite complaints from non-teachers. A problem technophiles have with this strategy is the incorrect belief that only technology can make a modern lecture innovative, dynamic and impactful. A quality teacher can give these characteristics to a lecture with just a piece of chalk and a chalkboard, and if these non-teacher commentators had any real experience in education they would have a better understanding of this reality.<br />
<br />
One of the improvements that must be made to develop better teachers is changing the means by which training experience is acquired. Overall there is too much single-experience watching/observing versus actual multi-experience hands-on training. For example, a number of training programs involve a prospective teacher sitting in and observing the behavior, style and actions of a veteran teacher. However, these prospective teachers rarely teach the class while receiving feedback from the veteran teacher, they do little prep-work/grading/discussion, and they do not interact with other veteran teachers either. <br />
<br />
Instead of this old method, new prospective teachers during their “observation” period should act as teaching assistants, doing a significant amount of the grading and preparation work for the veteran instructor and teaching for a set period of time (maybe once per week). Then the prospective teacher should move to another teacher in the same subject to experience a potentially different viewpoint in how to manage a class and/or teach the subject matter. Of course the logistics associated with such a new design would require work.<br />
<br />
Another important change to positively advance teaching is to hold charter schools to actual academic standards or disallow public funding. Some love to make the utopian argument that money does not really matter with regards to improving public education, but such arguments are incorrect and self-serving. It makes no sense that charter schools can receive public funds, but have no accountability to those who provide those funds. Therefore, charter schools must either be removed from public funding or be held accountable to the same standards as public schools.<br />
<br />
Similarly the return of respect to the teaching profession can never be achieved as long as organizations like Teach for America are allowed to continue to undermine the profession by introducing unprepared individuals into it. Teach for America and similar organizations produce negative propaganda regarding teaching under the motto “it’s so easy anyone can do it”, but refuse to accept responsibility for the reality that over half of their “qualified” candidates exit the profession after only two years.<br />
<br />
Similar to the general propaganda spread by Teach for America and other similar organizations, one must abandon the idea that teaching is an occupation undeserving of respect due to its perceived hours of operation. A common refrain in public discourse is that teaching is not difficult because “teachers get the summers off”. What these false criticisms fail to acknowledge is total hours worked versus days worked. Good teachers who care about ensuring a proper learning environment work more hours than average over the course of the week and also work over the summer. Overall quality teachers, those whom the public claims to want in schools, do not fit this “not real work” profile and are negatively impacted by its continued propagation. <br />
<br />
It is appropriate to briefly touch on a couple of indirect methods that could improve the educational experience. First, it makes sense to follow scientific research regarding the way lighting and room color influence performance and behavior. For example, it has been reported that “warm” yellowish white light supports a more relaxing environment that promotes play and probably material engagement, standard school lighting (neutral white) supports quiet contemplative activities like reading, and “cool” bluish white light supports performance during intellectually intensive events like tests.1 Thus equipping classrooms with LED lights that can be switched between these lighting tiers should provide useful advantages to both teachers and students.<br />
<br />
Second, there is sufficient evidence to suggest that early start times in high schools and some middle schools (7:30 and earlier) have a negative educational influence on students.2-4 While this issue has received attention in the past and is still receiving some attention here and there, unfortunately it is not as cut-and-dried as simply starting school 30 minutes later, for there are significant logistical hurdles to the successful administration of a “later school day” policy. <br />
<br />
One of the major problems is how to manage bus transit, for a single fleet of buses tends to service one school district or region. Tiered start times for different schools (high school, middle school/junior high, elementary school) are typically necessary for transit efficiency, allowing this single fleet to manage all schools. Change the start time for high school and the efficiency of bus service collapses unless start times for middle schools and elementary schools are also changed. <br />
<br />
However, changing start times for these schools is not beneficial to younger students because they are already starting later than 8:00 am, and starting even later may be detrimental because of the much later release times (4:00 pm or later). Not surprisingly the solution of “get more buses” is a non-starter because most school districts are already rather cash-strapped due to tax funding dependencies and charter schools taking money from that pie as well. This transit problem and the resultant potential detriment for younger students is exactly what Montgomery County in Maryland experienced when it changed school hours in 2015. <br />
<br />
Another meaningful logistical hurdle involves the administration of after-school extra-curricular activities and how they could disrupt home life due to students arriving home at 5:30 or 6:00 pm, especially during the late fall and winter months when daylight becomes limited. There may also be increased heating and cooling costs for the school, especially cooling for those districts in high temperature regions, because starting later in the day means hotter average school-hour temperatures. This issue is tough because the costs could be prohibitive for some districts and meaningless for others. Of course one significant problem is that studies involving the incorporation of later school hours only seem to focus on health and/or possible changes in academic achievement and do not address the obstacles to applying later school hours, which is rather ridiculous.<br />
<br />
In the end one of the most pressing problems in education is misrepresentation of the overall goal of education. Some reformers seem to think that the most important role for education is to foster a level of knowledge that allows an individual to gain employment in some particular field. While such a role is important, it is not so important that it should displace other important elements of education such as: <br />
<br />
1) Produce citizens that can make rational decisions, which will allow them to make positive contributions to society.<br />
2) Produce citizens that can effectively form solutions to both qualitative and quantitative problems. <br />
3) Produce citizens that can use both spoken and written word to effectively communicate their ideas and feelings to other individuals as well as understand and analyze the validity of the ideas and feelings of others. <br />
4) Produce citizens that do not tolerate individuals who attempt to manipulate or deceive society for their own ends, or who practice and/or preach ignorance or idiocy for the sole purpose of satisfying their own personal beliefs and ends. <br />
<br />
Overall blind devotion to test scores and technology will not help achieve these goals, and without the ability to produce these types of individuals society becomes vulnerable to manipulators and opportunists who would produce net harms. It is the responsibility of education to produce a society that is not only productive, but also able to protect itself from these unscrupulous individuals; thus it is the responsibility of society to ensure an educational environment that accomplishes these goals. Current reformers are not offering solutions that will produce such accomplishment, thus something must change.<br />
<br />
==<br />
Citations – <br />
<br />
1. Suk, H, and Choi, K. “Dynamic lighting system for the learning environment: performance of elementary students.” Optics Express. 2016. 24(10):A907-A916.<br />
<br />
2. Eaton, D, et Al. “Prevalence of insufficient, borderline, and optimal hours of sleep among high school students–United States, 2007.” Journal of Adolescent Health. 2010. 46(4):399-401.<br />
<br />
3. Wahlstrom, K, et Al. “Examining the impact of later high school start times on the health and academic performance of high school students: A multi-site study.” 2014.<br />
<br />
4. Au, R, et Al. “School start times for adolescents.” Pediatrics. 2014. 134(3):642-649.<br />
<br />
Food Labels – Do They Properly Inform Consumers?<br />
The unsurprising and non-controversial role of food labels is to present the ingredients and elements of a food product, both independently and in the context of dietary guidelines, to ensure consumers are informed about what they are purchasing and how “healthy” the product is. In the United States the Nutrition Labeling and Education Act (NLEA) of 1990 required the inclusion of nutrition information on packaged foods, with a few exceptions, and set the standard for how the information should be presented. This legislation was important because before the NLEA nutritional information was only required when producers wanted to make claims about specific nutritional benefits derived from consuming their product. However, the lack of standardization in the presentation of nutritional information made it difficult to contrast and compare products even when the information was available. Thus the famous standardized side panel conveying nutritional information for food products was born. <br />
<br />
Interestingly enough, very little has changed in the presentation and content of this standard U.S. label since the NLEA up until now. Recently the Food and Drug Administration (FDA) released information regarding how this food label would change by 2018; this news was met with cheers from some health circles and jeers from others. Overall the changes are rather uneventful: an increased font size for the calorie count, elimination of calories from fat, more nuanced language for per-serving and per-package identifiers including more empirically based serving sizes, gram amounts in addition to percentages for vitamins (this change seems rather meaningless), Vitamin D and potassium switching required status with Vitamins A and C, and more clarification regarding the % Daily Value footnote. The one supposed “big ticket” change is that labels are now required to break down the sugar content between natural sugars and added sugars. <br />
<br />
The level of usefulness for the consumer in the division between natural and added sugars is questionable because, without specifically breaking down the sugar content into its molecular components (glucose, fructose, maltose, etc.), the total sugar amount is still the only real meaningful piece of information. Knowing how much sugar was added versus how much sugar naturally occurs in the product is rather irrelevant to how the body will process it. Is someone really going to buy product A over product B because it has 28 grams of natural sugar versus 14 grams of natural sugar and 14 grams of added sugar? For some the answer will be yes, although most will not have a good reason why (the only real valid reason would be the contention that added sugars have a higher probability of being simple sugars and negative for health), but for most the answer will be no. <br />
<br />
Some individuals could counter-argue that various groups like the AHA, AAP, WHO and Institute of Medicine have recommended decreasing intake of added sugars with general estimates that only about 10 percent of total daily calories should come from added sugar. While all of this is true, the problem is that differentiation between added sugar and natural sugar is more propagandized than meaningful. Again without differentiating between the specific molecules that make up the sugar content, total sugar is the only metric regarding sugar that actually matters. The propaganda stems from individuals promoting the reduced consumption of processed foods, which are more likely to have added sugars. Certainly it is true that added sugars do nothing positive to the nutritional content of the food, but again total sugar is what matters without specific differentiation. <br />
<br />
The “debate” about added vs. natural sugar aside, in reality this new label certainly falls short of Michelle Obama’s endorsement: “you will no longer need a microscope, a calculator, or a degree in nutrition to figure out whether the food you’re buying is actually good for our kids…” It is difficult to support the accuracy of that statement based on the changes, a sentiment shared by many other individuals who think the food labeling requirements should have been much more substantial. <br />
<br />
For example, applying the labeling itself is only part of the battle because the information on the label is only meaningful when consumers read and understand it. There is some question as to whether the label even needs to change, for studies have demonstrated that while a vast majority of individuals in the U.S. read food labels, this information does little to influence their food choices.1-3 However, in the EU, which has a more intensive and thorough labeling system, food labels do influence consumer choice.4 <br />
<br />
While it is difficult to directly determine whether the critical factor in this difference in behavior is borne from the food labeling methodologies rather than cultural differences between the U.S. and the EU, it is also difficult to dismiss the differing labeling strategies as an influencing factor. The key difference between the two systems is that the U.S. system places a greater burden on the consumer to understand the nutritional context of product A and how it may differ from the nutritional context of product B, versus the more categorized labeling system of the EU in general.<br />
<br />
For example, in the UK front-of-package labeling follows four core principles enumerated by the UK Food Standards Agency (FSA):5,6 <br />
<br />
1) Separate information must be provided on fat, saturated fat, sugars and salt; (this is also a guideline in the U.S. via Facts Up Front, but it is a voluntary program) <br />
<br />
2) A red, amber or green color coding, similar to traffic lights, must be utilized to indicate whether the levels of those elements outlined in the first principle are high, medium or low respectively per 100 g (or ml for liquids) of product content (a sketch of this assignment appears after the list); <br />
<br />
3) Color metrics are established by nutritional criteria set forth by the FSA; <br />
<br />
4) Provide portion ratios relative to the elements outlined in the first principle for color coding;<br />
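<br />
To make the mechanics of principle 2 concrete, below is a minimal sketch of how such a per-100 g color assignment could work. The threshold numbers are illustrative assumptions for this sketch only; the actual cut-offs are the nutritional criteria set by the FSA (principle 3).<br />
<br />
<pre>
# Minimal sketch of a traffic-light (red/amber/green) assignment per 100 g of product.
# The thresholds below are illustrative assumptions, NOT the official FSA criteria.

ILLUSTRATIVE_THRESHOLDS = {
    # nutrient: (green ceiling, amber ceiling) in grams per 100 g
    "fat": (3.0, 17.5),
    "saturated fat": (1.5, 5.0),
    "sugars": (5.0, 22.5),
    "salt": (0.3, 1.5),
}

def traffic_light(nutrient: str, grams_per_100g: float) -> str:
    """Return 'green', 'amber' or 'red' for a nutrient amount per 100 g."""
    green_max, amber_max = ILLUSTRATIVE_THRESHOLDS[nutrient]
    if grams_per_100g <= green_max:
        return "green"
    if grams_per_100g <= amber_max:
        return "amber"
    return "red"

# Example: a breakfast cereal with 22 g of sugars and 1.8 g of salt per 100 g
print(traffic_light("sugars", 22.0))  # amber under these illustrative cut-offs
print(traffic_light("salt", 1.8))     # red
</pre>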
<br />
When this proposal was first made in 2006 there was significant resistance to the traffic light identification system, as numerous food manufacturers and producers questioned its use and instead favored more of a U.S.-style system using percentages of daily recommended values.5 Furthermore, food manufacturers also disagreed with the use of 100 g/ml as a standard, invoking the argument that consumers think more in portions of the consumed product and that it would be difficult for consumers to deduce a gram weight-based portion size. This complaint produced an alternative system using the needlessly large number of portion sizes across various food products, creating a much more difficult comparison environment for consumers; this was ironic because it produced exactly the overly complicated system that food companies had cited as their rationale for arguing against the 100 g/ml standard.<br />
<br />
Regardless of the bumpy road to establishing a universal labeling system and the lack of ideal standardization in the UK (note the conflict between points 2 and 4 above), numerous studies have demonstrated that simple front-of-package signal labeling (like color coding) of the “healthiness” of a food product does influence consumer choice, both by increasing the probability that the customer purchases healthier products and by increasing the probability that food producers create healthier products;7-10 traffic light systems have also proven superior to other systems like single compounded numbers or guiding-star-type systems.8,11 Therefore, creating regulations regarding the nutritional or “health value” of a food for the front of the package is clearly a meaningful step toward increasing the probability of an informed consumer.<br />
<br />
Part of the battle for the front of the package (FOP) is not just to produce a standardized system to convey the healthiness of the product, but to ensure genuine portrayal of the product itself. For example, advertising for some food products tends to mislead consumers into thinking that the product contains a larger quantity of a component than it does in reality. Such is common with fruit juice products, where pictures of fruit draw attention away from the top ingredients commonly being water and high fructose corn syrup. One means that helps support such trickery is that ingredients are only listed in order of percent amount, but the percentages themselves are not given. Actually requiring the percentages may help limit the impact of this type of advertising. <br />
<br />
Ensuring proper labeling design is important because studies have demonstrated that simple, transparent and clear labeling engages subconscious emotional elements in the brain, including the amygdala.12-14 Therefore, the FDA may need to properly regulate front-of-the-box labeling because the side panel may be at a psychological disadvantage to the “health proclamations” that commonly adorn the front of the box in stylized and eye-catching presentations.<br />
<br />
In the past some parties have acknowledged the importance of the front of the box and lamented the U.S. government’s acquiescence of this space to corporations. These parties have proposed taking back the front of the box in such a way as to “inform” consumers of whether or not their choice is a healthy one. For example, one proposal is that the upper right portion of the box should contain the three most prevalent ingredients in the product, the calorie count and the number of total ingredients beyond those first three, all in bold and clear font.15 <br />
<br />
The proponents of such a system believe that it will produce a fast means for consumers to identify healthy food versus unhealthy food that is so easy it is impossible to ignore. However, the problem with such a system is that it can be easily manipulated: producers can fine-tailor their products so that three seemingly healthy ingredients are the three most plentiful ingredients by an incredibly small margin over the “unhealthy” ingredients. <br />
<br />
Also, calorie counts in such a system would have to identify serving size to be placed in proper context, and even then such counts may prove to complicate the issue. Note that the above proposition suggests posting the calories per serving on the front of the package. However, if all similar products do not have standardized serving amounts (all listing 100 grams, for example, vs. ½ cup or 8 to a box) then listing the calories is not a simple strategy for optimizing food choices on the basis of health. Varying serving amounts force the consumer to undertake some general arithmetic. For example, suppose Cereal A has 120 calories per ½ cup (10 servings) and Cereal B has 160 calories per ¾ cup (7 servings); front-of-the-box labeling would imply that Cereal A has fewer calories, but that is not the case in either equivalent serving calories or total calories for the entire box. Therefore, front-of-the-package labeling must be less simplistic unless a standardized serving metric is established. Facts Up Front suffers from this serving difference problem, limiting its effectiveness. <br />
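<br />
To make the arithmetic in the cereal example explicit, below is a quick worked sketch that normalizes both hypothetical cereals to calories per cup and calories per box; the figures are the ones invented in the paragraph above.<br />
<br />
<pre>
# Hypothetical cereals from the paragraph above:
# (calories per serving, cups per serving, servings per box)
cereal_a = (120, 0.5, 10)
cereal_b = (160, 0.75, 7)

def per_cup_and_per_box(calories, cups_per_serving, servings):
    """Normalize a label's per-serving calories to per-cup and per-box totals."""
    return calories / cups_per_serving, calories * servings

a_cup, a_box = per_cup_and_per_box(*cereal_a)
b_cup, b_box = per_cup_and_per_box(*cereal_b)

print(f"Cereal A: {a_cup:.0f} cal/cup, {a_box} cal/box")  # 240 cal/cup, 1200 cal/box
print(f"Cereal B: {b_cup:.0f} cal/cup, {b_box} cal/box")  # 213 cal/cup, 1120 cal/box
# Despite the smaller per-serving number, Cereal A has more calories both per cup and per box.
</pre>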
<br />
The possible problems with the above differing option notwithstanding, it is clear that the EU, including the U.K., has a better labeling system than the U.S. with regards to helping consumers acquire and understand nutritional information. So why did so little change in the “updated” U.S. labeling system? Most would argue, probably correctly, that lobbying by food companies prevents the FDA from going further thanks to interference from Congress. If the FDA had the “freedom” to make any changes, what changes should it make? <br />
<br />
Obviously it is important for there to be some form of comparison information that goes beyond Facts Up Front. A traffic light system certainly holds promise due to its successful application in the UK. However, it is understandable that food companies would balk at such a condition, especially those with significant “red” light products. The proper response to such complaints is two-fold: first, who cares if the food companies have complaints against the proper utilitarian construct of ensuring transparent information. Second, one could attempt to lessen the impact of the traffic light system by framing a primarily “red” food not as something that should never be consumed, otherwise no one would ever eat something like a piece of cheesecake, but instead as a food that should be consumed rarely in the context of good health. Thus, the green, yellow and red lights simply translate into anytime, once-a-day and rare food choices. <br />
<br />
Another interesting idea would be to establish a standardized declaration system for the front of the package involving commonly referenced health terms measured against an empirically derived metric. Basically it is commonplace for food producers to put labeling on the front of a package that states: “high in fiber”, “low sodium”, “x number of essential vitamins and minerals”, etc. This newly proposed system would eliminate the ability of food producers to make such claims and instead replace it with a five or six bullet point checklist in the upper right corner of the package confirming a given “positive health feature”. A check would be earned by meeting a standard floor or ceiling for the given attribute per 100 g of product, where the FDA would establish the standard. For example, the “high fiber” box would be checked if a food contained at least 3 grams of fiber per 100 g of product and not checked otherwise. Five possibilities for such a checklist are shown below, followed by a sketch of how the checks could be computed.<br />
<br />
1) High Fiber; <br />
2) Low Sodium;<br />
3) Whole Grain;<br />
4) Low Sugar; <br />
5) Low Saturated Fat<br />
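<br />
Below is a minimal sketch of how such a checklist could be computed from per-100 g nutrient values. The floor/ceiling numbers are hypothetical placeholders standing in for whatever standards the FDA would actually set; only the 3 g fiber floor comes from the example above.<br />
<br />
<pre>
# Hypothetical per-100 g thresholds; the real floors/ceilings would be set by the FDA.
CHECKLIST_RULES = {
    "High Fiber":        lambda n: n.get("fiber_g", 0) >= 3.0,        # floor from the example above
    "Low Sodium":        lambda n: n.get("sodium_mg", 1e9) <= 120,    # ceiling (placeholder)
    "Whole Grain":       lambda n: n.get("whole_grain_pct", 0) >= 50, # floor (placeholder)
    "Low Sugar":         lambda n: n.get("sugar_g", 1e9) <= 5.0,      # ceiling (placeholder)
    "Low Saturated Fat": lambda n: n.get("sat_fat_g", 1e9) <= 1.5,    # ceiling (placeholder)
}

def checklist(nutrients_per_100g: dict) -> dict:
    """Return which front-of-package boxes a product earns under the hypothetical rules."""
    return {label: rule(nutrients_per_100g) for label, rule in CHECKLIST_RULES.items()}

# Example product (values per 100 g, purely illustrative)
print(checklist({"fiber_g": 10, "sodium_mg": 200, "whole_grain_pct": 80,
                 "sugar_g": 14, "sat_fat_g": 0.5}))
# {'High Fiber': True, 'Low Sodium': False, 'Whole Grain': True, 'Low Sugar': False, 'Low Saturated Fat': True}
</pre>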
<br />
In the end both of these strategies, the traffic lights and the checkboxes, should significantly increase the probability that consumers are informed about the general nutritional value of their food product choices without an unreasonably long analysis period. Overall there is no good reason that the FDA and its surrogates should not establish and enforce such a labeling system.<br />
<br />
<br />
<br />
Citations – <br />
<br />
1. Cha, E, et Al. “Health literacy, self-efficacy, food label use, and diet in young adults.” Am. J. Health. Behav. 2014. 38(3):331-339.<br />
<br />
2. Campos, S, Doxey, J, and Hammond, D. “Nutrition labels on pre-packaged foods: a systematic review.” Public Health Nutr. 2011. 14(8). 1496-1506.<br />
<br />
3. Huang, T, et Al. “Reading nutrition labels and fat consumption in adolescents.” J. Adolesc. Health. 2004. 35(5):399-401.<br />
<br />
4. Storcksdieck, G, and Wills, J. “Nutrition labeling to prevent obesity: reviewing the evidence from Europe.” Curr Obes. Rep. 2012. 1(3):134-140.<br />
<br />
5. Lobstein, T, and Davies, S. “Defining and labelling ‘healthy’ and ‘unhealthy’ food.” Public Health Nutrition. 12(3):331-340.<br />
<br />
6. Food Standards Agency. Board Agrees Principles for Front of Pack Labelling. 2006. Food Standards Agency. <br />
<br />
7. Lobstein, T, Landon, J, and Lincoln, P. “Misconceptions and misinformation: the problems with guideline daily amounts (GDAs). A review of GDAs and their use for signaling nutritional information on food and drink labels.” National Heart Forum. 2007.<br />
<br />
8. Temple, N, and Fraser, J. “Food labels: a critical assessment.” Nutrition. 2014. 30:257-260.<br />
<br />
9. Hersey, J, et Al. “Effects of front-of-package and shelf nutrition labeling systems on consumers.” Nutr. Rev. 2013. 71:1-14.<br />
<br />
10. Hawley, K, et Al. “The science on front-of-package food labels.” Public Health Nutr. 2013. 16:430-439.<br />
<br />
11. Sutherland, L, Kaley, L, and Fischer, L. “Guiding Stars: the effect of a nutrition navigation program on consumer purchases at the supermarket.” Am. J. Clin. Nutr. 2010. 91:1090S-1094S. <br />
<br />
12. Grabenhorst, F, et Al. “Food labels promote healthy choices by a decision bias in the amygdala.” NeuroImage. 2013. 74:152-63.<br />
<br />
13. Pessoa, L, and Adolphs, R. “Emotion processing and the amygdala: from a ‘low road’ to ‘many roads’ of evaluating biological significance.” Nat. Rev. Neurosci. 2010. 11:773-783.<br />
<br />
14. Seymour, B, and Dolan, R. “Emotion, decision-making, and the amygdala.” Neuron. 2008. 58:662-671. <br />
<br />
15. Kessler, D. “Toward more comprehensive food labeling.” N. Engl. J. Med. 2014. 371(3):193-195.<br />
<br />
Addressing the HDL Problem in High Cholesterol Treatment<br />
Cardiovascular disease is still the biggest cause of death in the developed world, including the United States. One of the critical elements that influences this rate of death is the disruption of cholesterol homeostasis, especially in the context of increasing the risk of arteriosclerosis.1,2 One of the current principal medical therapies for managing high cholesterol is the administration of statins. However, while statins have demonstrated a relatively strong safety profile with minimal side effects, there are individuals who are unresponsive to treatment or may prefer a different option. Cholesterol concentrations are chiefly governed by both high-density lipoprotein (HDL) and low-density lipoprotein (LDL) concentrations. Statins address the LDL side of the equation through their inhibition of HMG-CoA reductase; it makes sense that the next step in producing another effective form of cholesterol treatment is to focus on HDL.<br />
<br />
HDL is one of five major lipoprotein groups that are responsible for transporting lipids like cholesterol, phospholipids and triglycerides. Both apolipoproteins, apoA-I and apoA-II, are required for normal HDL biosynthesis, with apoA-I making up roughly 70%.3 In contrast to LDL, HDL is responsible for moving lipids from cells, including those within artery wall atheroma, to other organs for excretion or catabolism, most notably the liver.4 Both HDL and LDL concentrations are indirectly measured through the concentrations of HDL-C and LDL-C due to the difficulties and costs associated with direct measurement. Since the 1970s HDL has been acknowledged as having an inverse relationship with risk for cardiovascular disease (CVD).5 This HDL-CVD relationship has also been conserved across different racial and ethnic populations.6 A seminal study known as the Framingham study also identified high LDL-C and low HDL-C levels as a strong predictor of CVD risk.7 Finally, it has also been noted that close to 30% of lipids are transported by HDL in healthy individuals.8<br />
<br />
The general belief is that HDL is able to lower the risk of cardiovascular disease through the inhibition and even reversal of atherogenesis by initiating the process of reverse cholesterol transport (RCT).9-11 RCT is the common term for the removal of cholesterol from peripheral cells and its transport to the liver. While RCT involves multiple steps, the major ones are the transfer of cholesterol from peripheral cells to HDL by the ATP-binding cassette transporter (ABCA1) through apoA-I and phospholipid interaction, the conversion of cholesterol to cholesteryl esters by lecithin-cholesterol acyltransferase (LCAT), and the removal of these esters through interaction with either the direct removal pathway, scavenger receptor class BI (SR-BI), or the indirect removal pathway, cholesteryl ester transfer protein (CETP).4,10,11<br />
<br />
CETP interaction involves the exchange of triglycerides from VLDL for the cholesteryl esters of HDL. The VLDL is converted to LDL that later enters the LDL receptor pathway, while the transferred triglycerides are degraded due to their instability in HDL, resulting in a smaller HDL lipoprotein that can begin to absorb new cholesterol molecules.12<br />
<br />
The strategy of manipulating HDL concentrations or interactions to produce better health outcomes is certainly not unique and has not gone unnoticed by the pharmaceutical community. For example, one of the initially more promising therapeutic treatments for high cholesterol was increasing expression of endogenous apoA-I due to its role in HDL synthesis. To this end some research has focused on using PPARgamma agonists to increase APOA1 gene transcription to eventually increase apoA-I concentration.13,14 ApoA-II has also received some attention because it appears required for normal HDL biosynthesis and metabolism. Increasing either apoA-I or apoA-II concentrations produces an increase in HDL-C levels and presumably HDL levels. <br />
<br />
In contrast to increasing apoA synthesis rates, there is already an effective means of increasing HDL-C levels via the supposed reduction of apoA catabolism through increasing nicotinic acid (niacin) concentrations.15 Niacin has demonstrated the ability to reduce HDL apoA-I uptake in hepatocytes in vitro.16 Whether this influence occurs via interaction with an HDL receptor or with G protein-coupled receptors (most notably GPR109A) is unclear,16,17 but what is known is that niacin reduces apoA catabolism and increases HDL-C concentration.15<br />
<br />
In addition to research on increasing HDL synthesis, other research has focused on reducing the degradation/loss of HDL by influencing the esterification and de-esterification pathways of HDL. As mentioned above, HDL-C is esterified to HDL-CE by LCAT. Low concentrations of LCAT in both humans and mice produce significant drops in HDL-C concentration and rapid catabolism of apoA-I and apoA-II, whereas high concentrations of LCAT result in significantly increased HDL-C concentrations.18,19 These results are more than likely due to feedback systems in that increased LCAT activity via higher LCAT concentrations increases conversion of HDL-C to HDL-CE, thus increasing the demand for HDL-C and its reactants (HDL and apoA-I/apoA-II). <br />
<br />
Of the two major end points for HDL-CE, labeling studies suggest that a majority of HDL-CE is transported to the liver via CETP exchange instead of through direct liver uptake via SR-BI.20 Therefore, CETP inhibitors, like JTT-705 and torcetrapib, are also viewed as an effective means of increasing HDL-C (and by association HDL) concentrations.21-23 Interestingly enough, there also appears to be a negative influence on LDL-C concentrations.4,21 However, despite this increase in HDL-C concentration from CETP inhibition, there is a question of whether this pathway actually reduces CVD. For example, large genetic and observational studies have contrasting results,24 but lean towards increased CETP concentrations increasing CVD probability; meanwhile inhibition of CETP does not seem to reduce CVD beyond standard rates (the reduction seen from not having elevated concentrations). This behavior may occur due to CETP negatively interacting with RCT.20<br />
<br />
Overall, despite the notion that higher native HDL levels (and higher HDL-C levels) are associated with lower rates of CVD and that all of the above methods have some ability to increase HDL-C concentrations, pharmaceutically derived increases of HDL-C levels, whether from direct HDL-C increases, niacin, or CETP inhibition, do not instill the same CVD health benefits as native levels.25,26 Isolated genetic variants also appear to have little to no effect; for example, a loss-of-function variant in LIPG raises HDL-C but does not change CVD probability.26,27 So what could be the reason behind this inability of HDL-C concentration alone to decrease CVD probability? <br />
<br />
One important element in the HDL pathway that has only been alluded to so far with regards to pharmaceutical intervention is the expression of the direct removal pathway through SR-BI. Various studies have identified that overexpression of SR-BI reduces HDL-C concentration and under-expression of SR-BI increases HDL-C concentration.28-31 Neither of these results should be surprising as SR-BI is an end-point pathway for eliminating HDL-C and/or HDL-CE, converting it back to HDL. However, the interesting aspect of this change in SR-BI expression is that increased SR-BI expression reduces the rate of arteriosclerosis and decreased SR-BI expression increases it.26 So how could SR-BI have this effect? <br />
<br />
SR-BI, which is encoded by the gene SCARB1, was identified as the primary liver-related HDL receptor decades ago.32 The principal role of SR-BI is the selective uptake of HDL-CE into hepatocytes and steroidogenic cells as well as, to a lesser extent, HDL-C.4,32 Most importantly, the interaction between SR-BI and HDL-C(E) results in the internalization of the whole HDL particle, the removal of the cholesterol, and the return of the non-cholesterol-carrying HDL to the bloodstream.33<br />
<br />
This absorption of HDL-C(E) and associated return of HDL could explain the reduced rate of arteriosclerosis relative to CETP interaction because, among other things, the SR-BI and HDL relationship triggers macrophage-derived RCT.34,35 Basically SR-BI returns HDL, not HDL-C(E), to the bloodstream ready to absorb more cholesterol; this readiness somehow signals the associated macrophages to induce greater rates of RCT. CETP, by contrast, does not reduce the cholesterol load of HDL as much due to its reliance on other limiting factors, leaving those HDL particles with reduced cholesterol absorption capacity and therefore less able to increase RCT rates. <br />
<br />
With this information about the functionality of SR-BI, a theory can be posited regarding why increasing HDL-C does not result in improved health outcomes. It makes sense to consider the idea that SR-BI is a form of limiting factor in the capacity of HDL to reduce the risk of CVD. Because CETP appears to manage a majority of HDL-C(E) reduction, it stands to reason that SR-BI expression is not significantly tied to HDL, HDL-C or HDL-CE concentrations. Therefore, when HDL or its cholesterol variants increase in concentration there is no corresponding increase in SR-BI. One possible explanation for this outcome is that a certain minimum concentration of cholesterol is required to circulate in the blood, which is managed by negative feedbacks that maintain SR-BI expression levels between a certain floor and ceiling.<br />
<br />
So why is SR-BI more important overall than CETP if CETP manages a majority of HDL reduction/elimination? Perhaps CETP has a limit to what type of HDL it can manage. If HDL gets too “big” via its total level of cholesterol absorption, the only means to remove that cholesterol could come from the direct pathway, i.e. SR-BI. However, if HDL concentrations outpace SR-BI expression by a significantly higher than normal level, then it stands to reason that significant amounts of HDL will become too big for CETP to manage. Eventually these HDL particles can break down (i.e. explode in a sense) while still circulating in the bloodstream, releasing all of the previously absorbed cholesterol and transformed cholesteryl esters. If this happens the cholesterol is not properly managed, which can result in an increased rate of arteriosclerosis and associated CVD despite the higher HDL concentrations. <br />
<br />
In the end, while statins have generally been impressive at controlling high cholesterol and its associated detrimental health effects, it is always wise to have alternative strategies. The most sought-after alternative to statins is a pharmaceutical agent that increases HDL(-C) concentrations due to their positive relationship with quality health outcomes in cholesterol-related events. However, numerous studies have produced disappointing results for agents that increase HDL levels with regards to cholesterol-related health outcomes, including the potential that negative events become more probable. So what can be done about this issue? <br />
<br />
Clearly, if the above proposed theory regarding SR-BI as a limiting factor in the effectiveness of HDL is accurate, then if one wants to raise HDL pharmaceutically to produce some form of health benefit, one must also increase SR-BI expression to properly manage the increased HDL and associated cholesterol concentrations. This process on its face should not be difficult, as there are already existing pharmaceutical agents as well as natural agents that appear to increase SR-BI expression, but the approach will demand proper study to establish its viability and safety over the long term. <br />
<br />
<br />
Citations – <br />
<br />
<br />
1. Koyama, T, et Al. “Genetic variants of SLC17A1 are associated with cholesterol homeostasis and hyperhomocysteinaemia in Japanese men.” Nature: Scientific Reports. 2015. 5:15888-15899.<br />
<br />
2. Arsenault, B, Boekholdt, S, and Kastelein, J. “Lipid parameters for measuring risk of cardiovascular disease.” Nat. Rev. Cardiol. 2011. 8:197-206. <br />
<br />
3. Lewis, G, and Rader, D. “New insights into the regulation of HDL metabolism and reverse cholesterol transport.” Circ. Res. 2005. 96:1221-1232. <br />
<br />
4. Rader, D. “Molecular regulation of HDL metabolism and function: implications for novel therapies.” The Journal of Clinical Investigation. 2006. 116(12):3090-3100.<br />
<br />
5. Miller, G, and Miller, N. “Plasma high-density-lipoprotein concentration and development of ischaemic heart disease.” Lancet. 1975. 1:16-19.<br />
<br />
6. Goff, D, Jr, et Al. “2013 ACC/AHA guideline on the assessment of cardiovascular risk: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines.” Circulation. 2014. 129(2):S49-S73.<br />
<br />
7. Kannel, W. “Lipids, diabetes, and coronary heart disease: insights from the Framingham Study.” Am. Heart J. 1985. 110:1100–1107.<br />
<br />
8. Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III). Executive summary of the third report of the National Cholesterol Education Program (NCEP). JAMA. 2001. 285:2486–2497.<br />
<br />
9. Ross, R, and Glomset, J. “Atherosclerosis and the arterial smooth muscle cell: proliferation of smooth muscle is a key event in the genesis of the lesions of atherosclerosis.” Science. 1973. 180:1332–1339.<br />
<br />
10. Barter, P, et Al. “Anti-inflammatory properties of HDL.” Circ. Res. 2004. 95:764-772.<br />
<br />
11. Mineo, C, et Al. “Endothelial and anti-thrombotic actions of HDL.” Circ. Res. 2006. 98:1352-1364. <br />
<br />
12. Agellon, L, et Al. “Reduced high density lipoprotein cholesterol in human cholesteryl ester transfer protein transgenic mice.” J. Biol. Chem. 1991. 266. 10796-10801.<br />
<br />
13. Tangirala, R, et Al. “Regression of atherosclerosis induced by liver-directed gene transfer of apolipoprotein A-I in mice.” Circulation. 1999. 100:1816-1822. <br />
<br />
14. Mooradian, A, Haas, M, and Wong, N. “Transcriptional control of apolipoprotein A-I gene expression in diabetes.” Diabetes. 2004. 53:513-520. <br />
<br />
15. Carlson, L. “Nicotinic acid: the broad-spectrum lipid drug. A 50th anniversary review.” J. Intern. Med. 2005. 258:94–114.<br />
<br />
16. Meyers, C, Kamanna, V, and Kashyap, M. “Niacin therapy in atherosclerosis.” Curr. Opin. Lipidol. 2004. 15:659–665.<br />
<br />
17. Tunaru, S, et Al. “PUMA-G and HM74 are receptors for nicotinic acid and mediate its anti-lipolytic effect.” Nat. Med. 2003. 9:352–355<br />
<br />
18. Kuivenhoven, J, et Al. “The molecular pathology of lecithin:cholesterol acyltransferase (LCAT) deficiency syndromes.” J. Lipid Res. 1997. 38:191–205.<br />
<br />
19. Ng, D. “Insight into the role of LCAT from mouse models.” Rev. Endocr. Metab. Disord. 2004. 5:311–318.<br />
<br />
20. Schwartz, C, VandenBroek, J, and Cooper, P. “Lipoprotein cholesteryl ester production, transfer, and output in vivo in humans.” J. Lipid Res. 2004. 45:1594–1607.<br />
<br />
21. De Grooth, G, et Al. “A review of CETP and its relation to atherosclerosis.” J. Lipid Res. 2004. 45:1967–1974.<br />
<br />
22. Kuivenhoven, J, et Al. “Effectiveness of inhibition of cholesteryl ester transfer protein by JTT-705 in combination with pravastatin in type II dyslipidemia.” Am. J. Cardiol. 2005. 95:1085–1088.<br />
<br />
23. Clark, R, et Al. “Raising high-density lipoprotein in humans through inhibition of cholesteryl ester transfer protein: an initial multidose study of torcetrapib.” Arterioscler. Thromb. Vasc. Biol. 2004. 24:490–497.<br />
<br />
24. Boekholdt, S, et Al. “Plasma levels of cholesteryl ester transfer protein and the risk of future coronary artery disease in apparently healthy men and women: the prospective EPIC (European Prospective Investigation into Cancer and nutrition)-Norfolk population study.” Circulation. 2004. 110:1418–1423.<br />
<br />
25. Rader, D, and Tall, A. “The not-so-simple HDL story: Is it time to revise the HDL cholesterol hypothesis?.” Nature medicine. 2012. 18(9):1344-1346.<br />
<br />
26. Zanoni, P, et Al. “Rare variant in scavenger receptor BI raises HDL cholesterol and increases risk of coronary heart disease.” Science. 2016. 351(6278):1166-1171.<br />
<br />
27. Haase, C, et Al. “LCAT, HDL cholesterol and ischemic cardiovascular disease: a Mendelian randomization study of HDL cholesterol in 54,500 individuals.” The Journal of Clinical Endocrinology & Metabolism. 2011. 97(2):E248-E256.<br />
<br />
28. Wang, N, et Al. “Liver-specific overexpression of scavenger receptor BI decreases levels of very low density lipoprotein ApoB, low density lipoprotein ApoB, and high density lipoprotein in transgenic mice.” Journal of Biological Chemistry. 1998. 273(49):32920-32926.<br />
<br />
29. Ueda, Y, et Al. “Lower plasma levels and accelerated clearance of high density lipoprotein (HDL) and non-HDL cholesterol in scavenger receptor class B type I transgenic mice.” Journal of Biological Chemistry. 1999 274(11):7165-7171.<br />
<br />
30. Varban, M.L, et Al. “Targeted mutation reveals a central role for SR-BI in hepatic selective uptake of high density lipoprotein cholesterol.” PNAS 1998. 95(8):4619-4624.<br />
<br />
31. Brundert, M, et Al. “Scavenger Receptor Class B Type I Mediates the Selective Uptake of High-Density Lipoprotein–Associated Cholesteryl Ester by the Liver in Mice.” Arteriosclerosis, thrombosis, and vascular biology. 2005. 25:143-148.<br />
<br />
32. Acton, S, et Al. “Identification of scavenger receptor SR-BI as a high density lipoprotein receptor.” Science. 1996. 271(5248):518-520.<br />
<br />
33. Silver, D, et Al. “High density lipoprotein (HDL) particle uptake mediated by scavenger receptor class B type 1 results in selective sorting of HDL cholesterol from protein and polarized cholesterol secretion.” J. Biol. Chem. 2001. 276:25287–25293.<br />
<br />
34. Zhang, Y, et Al. “Hepatic expression of scavenger receptor class B type I (SR-BI) is a positive regulator of macrophage reverse cholesterol transport in vivo.” J. Clin. Invest. 2005. 115:2870–2874. doi:10.1172/JCI25327.<br />
<br />
35. Rothblat, G, et Al. “Cell cholesterol efflux: integration of old and new observations provides new insights.” J. Lipid Res. 1999. 40:781–796.<br />
<br />
What is the Future of Education and the Workforce in the United States<br />
In recent years certain parties have begun to question the economic value of a college education based on changes in both the available number of jobs and the success rates of graduates of various majors in acquiring their intended and/or desired job. Overall it seems foolish to argue that a college education does not have any value. In fact, one could argue that a college education has never been more valuable relative to the ability to acquire a “quality” job. For example, a recent Pew Research Center study determined that an individual with only a high school education will earn, on average, about 62% of what an individual with a four-year degree will earn, down from 77% back in 1979.1<br />
<br />
Of course statistics like this appear impressive, but significant problems typically come from the use of averages. Few will dispute that certain jobs carry significantly higher financial windfalls than others, even when both require an individual to have at least a bachelor’s degree. Basically the wage curve for jobs in the United States has not increased in a linear manner; the wages associated with the top portion of “quality” jobs have dramatically increased over the years, leaving the rest behind in the proverbial dust. Therefore, if one were to remove the top 5% and bottom 5% of wages for jobs held by those with four-year degrees, it stands to reason that this 62% number would significantly increase to a number much closer to the 1979-derived 77%. Overall, due to education displacement and college graduates taking jobs they are “over-educated” for, it would not be surprising if overall bachelor’s degree wages have decreased in recent years, especially since wage growth in general has been so poor.<br />
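<br />
The trimming idea can be stated more precisely: drop the highest and lowest 5% of wages in each group before averaging, then recompute the earnings ratio. The sketch below uses fabricated wage samples purely to show the mechanics; it is not based on the Pew data.<br />
<br />
<pre>
# Illustrative sketch of trimming the top and bottom 5% of wages before comparing averages.
# The wage lists are fabricated for demonstration only; they are not the Pew figures.

def trimmed_mean(values, trim_fraction=0.05):
    """Mean after dropping the lowest and highest trim_fraction of observations."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k > 0 else ordered
    return sum(kept) / len(kept)

# Hypothetical annual wages (thousands of dollars)
degree_wages = [28, 35, 40, 42, 45, 48, 50, 52, 55, 58,
                60, 65, 70, 80, 95, 120, 180, 260, 400, 650]
hs_wages     = [22, 25, 27, 28, 30, 31, 32, 33, 34, 35,
                36, 38, 40, 42, 45, 48, 50, 55, 60, 70]

raw_ratio     = (sum(hs_wages) / len(hs_wages)) / (sum(degree_wages) / len(degree_wages))
trimmed_ratio = trimmed_mean(hs_wages) / trimmed_mean(degree_wages)
print(f"untrimmed ratio: {raw_ratio:.0%}, trimmed ratio: {trimmed_ratio:.0%}")
# Removing the extreme degree-holder wages pulls the two averages closer together,
# which is the point about the headline ratio being skewed by the top earners.
</pre>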
<br />
This element is an important consideration when contemplating the economic value of a college degree. One of the problems in modern society is the rise of the “gig” economy; basically, due to a much smaller pool of full-time jobs with benefits, individuals now must instead engage in numerous part-time, short-term tasks/opportunities with few to no benefits and much less money, both in total value and consistency of acquisition. This trend bucks the pattern of the past, where the advancement of technology facilitated job growth in both number and salary. <br />
<br />
Unfortunately technology may have begun to reach the tipping point where it is a net detriment to job growth instead of a net benefit. Combining this reality with the continued outsourcing of jobs by numerous corporate interests to lower-cost environments in other countries, it becomes fair to question the panacea of education that some tout with respect to job prospects. For example, there is still the simplistic notion that when individual A loses a job to another individual in a lower-cost environment or to a piece of technology, then individual A simply needs to acquire new skills and that education will “magically” produce a new quality job for this person. Clearly this belief is not supported by reality, for numerous well-educated individuals do not have quality jobs despite their determination and skill sets, and this trend appears to be worsening, not improving. <br />
<br />
Based on these conditions, instead of “upgrading” one’s position in the job market, the college degree has almost become a prerequisite to compete for a quality job, thereby making the college degree important even for those who are not prepared to excel in college. Strangely enough, the “importance” of a college degree has evolved rather inexplicably even for those jobs that in the past were filled by those without college degrees. The general expectations and duties of these jobs have not changed, yet a larger number of companies expect applicants to have college degrees for jobs that involve secretarial or simple logistics work. Why, what does the college degree bring in modern society that in the past was “excluded”? Unfortunately, with this increase in the general employment “importance” of a college degree the costs associated with acquiring one have also increased, thereby dramatically increasing the risks associated with failure, not only in acquiring the degree, but also in acquiring a job after earning the degree.<br />
<br />
To better identify these new risks one looks to the old adage that education is akin to investing in one’s future. In the past one could view going to college as investing 10 dollars for an 80% chance at making 50 dollars (a high quality job); upon failure one could frequently, but not always, still acquire 15 dollars (a lower quality job), the same 15 dollars one could acquire without having invested that 10 dollars at all. Now going to college is akin to investing 30 dollars for a 50% chance at making 50 dollars; otherwise one is limited to a lower probability than in the past of making 10 dollars (a lower quality job with a lower acquisition probability). While simplistic, this analogy demonstrates the almost irrational change that has occurred in the job market of modern U.S. society: more money must be invested for a lower probability of success at achieving a smaller average payday. Note that success is not only acquiring a degree, but also acquiring a job that is appropriate for that degree. <br />
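<br />
Using the illustrative dollar figures from the analogy above, the change can be summarized as a simple expected-value calculation. The fallback probabilities (90% in the past scenario, 60% in the current one) are added assumptions for the sake of the sketch, standing in for “frequently, but not always” and “a lower probability than in the past”.<br />
<br />
<pre>
# Expected net payoff for the college "investment" analogy above.
# Dollar figures come from the analogy; the fallback probabilities are assumptions.

def expected_net(cost, p_success, win, p_fallback, fallback):
    """Expected payoff minus upfront cost for a success/fallback gamble."""
    return p_success * win + (1 - p_success) * p_fallback * fallback - cost

past = expected_net(cost=10, p_success=0.80, win=50, p_fallback=0.90, fallback=15)
now  = expected_net(cost=30, p_success=0.50, win=50, p_fallback=0.60, fallback=10)
no_college_past = 15  # the lower-quality job was available without investing anything

print(f"past expected net: ${past:.2f}")   # $32.70
print(f"now expected net:  ${now:.2f}")    # -$2.00
print(f"no-college baseline (past): ${no_college_past}")
# Higher cost, a lower success probability and a weaker fallback shrink the expected payoff.
</pre>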
<br />
One interesting aspect of this degree acquisition involves those who feel community colleges should no longer be used as jumping-off points for traditional four-year educations, but instead should focus on becoming vocational schools that train individuals for specific jobs. Such a strategy is inherently questionable because the purpose of education in general is to produce individuals who can effectively rationalize, make positive contributions to society and responsibly participate in government, whether as an elected official or a voting citizen. A narrow educational experience in a vocational institution hardly has the capacity to aid in the achievement of such a goal. Thus, converting community colleges into “cogs in the machine” educational environments does not appear to be a responsible choice at this point.<br />
<br />
Furthermore, it is the second portion of this success equation, acquiring a job appropriate for the degree, that has failed in the modern economy. Acquiring college degrees is not the problem, with more degrees awarded, both proportionally and in total number, than at any other time; the problem is that such accomplishment has lost significant meaning because outsourcing and technology have shrunk the “quality” job pool. Thus, numerous individuals with four-year degrees have had to settle for jobs below what would be expected of individuals with such a level of education. <br />
<br />
In fact, the unemployment rate for recent college graduates remains higher than in the years before the Great Recession of 2008 and higher than for other age groups. Those graduates who do have jobs are more likely to be underemployed than past young college graduates as well as other age groups.2 Wages are also down for them relative to other age groups and their past peers.2 Further exacerbating these problems, these types of jobs seem to no longer have an effective and transparent advancement track. Basically the ability to steadily advance in a company, both in responsibilities and salary, has ceased to exist, in part due to the “gig” economy and in part due to unknown reasons (maybe in the name of more profits). <br />
<br />
Some might argue that this conclusion of a shrinking quantity of “quality” jobs is erroneous due to continuous claims of the need for qualified individuals in the STEM (science, technology, engineering, mathematics) fields. Unfortunately this counter-argument is complicated by the expected requirements of those jobs. First, not everyone is attending college for the purpose of receiving a STEM-based degree. Basically the “quality” job demand is not large or widespread, so should everyone simply attempt to acquire one of these degrees even if it is not in their field of interest or expertise? The problem with chasing the “hot” or “trending” degree, especially for some of the very specific fields within the STEM umbrella, is that those areas can contract in the blink of an eye based on changes in market conditions, leaving individuals with a near-worthless degree (as far as the job market is concerned). Part of the problem in this respect is that, not surprisingly, the public typically only hears about the successful individuals in these fields, not those who struggle or fail.<br />
<br />
Second, and more importantly, the limiting factor for acquiring a vast majority of the available STEM-based jobs is not education, but experience. Basically these jobs are reserved for individuals who have degrees in the appropriate field as well as at least 5 years of experience in that field. The problem is that there are not enough entry-level jobs in advanced fields for individuals to acquire this desired experience. Another problem is that the entry-level jobs that do exist do not shut out those with experience. Thus, individuals with 3 or fewer years of experience can vie for these entry-level jobs as well, and not surprisingly they have a higher probability of getting them over someone without any experience. In short a young graduate can receive the necessary education, but will still lack the most desired element, experience. Overall the fad in the early 21st century was referencing this glut of demand for STEM-qualified individuals, and a cooling science and tech environment has since reduced this demand, yet the public line has yet to acknowledge the reduction.<br />
<br />
This new mindset of a college degree being a prerequisite for a quality job, both in the present and in the future due to the lack of advancement opportunities, has hurt the process of education itself. Education has become a means to an end, a de facto commodity, instead of a tool to enrich the life of an individual, thereby turning education into a rote activity with reduced levels of creativity, fun and enjoyment for a number of individuals. This issue is further complicated by the fact that if everyone is acquiring an advanced education then its value becomes psychologically diluted; it is not viewed as a boon or achievement, but instead as “just something that you do”, stripping even more enjoyment from the educational process, especially for those who are not naturally motivated to learn. A side issue is that with so many people holding advanced degrees, employers are less likely to reward applicants for having one because it is simply expected.<br />
<br />
With education becoming more of a commodity it is treated more deterministically, in that people attempt to devise the “optimal” way to provide education, forgetting that students, especially in their formative years, do not learn the same way and have their own strengths and weaknesses. This mindset produces a “one size fits all” methodology, which due to the aforementioned differences will inherently produce winners and losers based on who is best supported by this universal education method. This mindset also has the significant potential to produce a single blueprint for the future of society, which is dangerous because flexibility and diversity are superior to rigidity. This new mindset is demonstrated in the testing culture that has arisen in the last 15 years in public education as well as the quasi-retreat from public schools to private and specialized charter institutions by wealthier families.<br />
<br />
This testing culture also facilitates a sprinter’s mindset where the only thing that matters is preparing for and succeeding on the next test or the next quiz, thus focusing on skills that are only relevant in the “now”, instead of a marathoner’s mindset where long-term retention of knowledge and skills is important in order to provide foundations to build upon when acquiring new and more complicated skills (i.e. the basis of quality learning and education). This mindset has also seeped over to the “future” of education in MOOCs and micro-degrees, which can largely be viewed as “cram” courses where 1-2 years of knowledge is compressed into a 2-3 month period. One wonders how effective this strategy will be in the long term, but it is difficult to see it as a net positive.<br />
<br />
Not only are students hurt by this new culture both in education and the job market, but so are teachers. The single mindset of “make sure students can pass the test” and the seemingly incessant testing schedule have fostered the idea of teachers as simple “cogs in the machine” as well, further eroding the importance placed on the position and diminishing unique and inventive teaching methodologies. Groups like Teach for America and their supporters contribute to this mindset by selling the idea that quickly trained amateurs, most of whom will be out of teaching within five years, can be as successful as, if not more successful than, fully trained and/or experienced teachers by simply following “the playbook”. <br />
<br />
A sad state of affairs is that these programs, in addition to charter schools, on average have not produced higher test scores or even more “educated” students with the exception of those privately funded institutions with much more money than public schools that hand-pick their student body ensuring high student quality and potential from the start. Failing to produce improved results is not the only accomplishment of these institutions; they have also driven a continuing public and private lack of respect for the professional teaching position, which when combined with continuing negative elements of institutional control as well as negative financial incentives, has created an environment where fewer individuals want to be teachers, thereby inherently reducing the total number of quality teachers in the educational pool. <br />
<br />
Not surprisingly teachers, like those in any occupation, gravitate towards the best financial, occupational and social environments, and with fewer quality teachers entering and remaining in the profession society has created small concentrated pockets of high teaching quality while leaving other areas significantly devoid of this element. The current environment regarding education has only exacerbated this division. This exodus of quality teachers and administrators to environments that frankly do not need their talents, combined with the lack of respect for the process of teaching, has created a serious financial issue for a number of schools. The idea that “anyone can teach” has created the erroneous philosophy of “schools can operate on shoestring budgets”. <br />
<br />
This philosophy is dangerous for two reasons: 1) a teaching environment with limited resources places unnecessary pressures on the staff and limits their ability to evolve and grow, evolution that would create a more positive and meaningful educational environment; 2) limited resources place psychological burdens on students, which commonly result in less motivation, shorter attention spans and greater levels of misbehavior, because such an environment makes future prospects seem so bleak that the overarching mindset becomes “what is the point?”<br />
<br />
The influence of the outside environment is a convenient and common omission made by teacher critics. When students are motivated, work hard and follow instructions, as well as are curious and ask questions because they want to learn, then even a mediocre teacher appears to be a great and inspirational teacher. When students are frustrated, starving, concerned for their safety, or do not respect the institution or its instructors, then even a creative and visionary teacher appears to be a failure unable to “reach” his/her students.<br />
<br />
The above discussion has raised two unquestionable issues regarding future job markets and education: 1) the number of available quality jobs relative to the number of applicants is trending downwards in both respects (fewer quality jobs and more applicants) and it is difficult to see a scenario in which this trend changes in the near future if government does not act; 2) the demand for a college degree and the deemed “tools to succeed in college” for even a sufficient opportunity to acquire a “quality” job has produced a single-mindedness about the educational process, destroying teaching diversity, prestige, and importance; this loss makes the evolution and maturation of a divergent number of intelligent personalities much more difficult, thus diminishing the overall quality of society. <br />
<br />
Focusing on the second issue first, some argue that the problems associated with any quality teacher exodus can be overcome by technology. Proponents of MOOCs and other online instruction methods are already hyping their transformational potential, especially for lowering class sizes. However, applying these mediums to education raises questions regarding personal instruction. For example what happens when a student has a question? Basically what is the dynamic of how a teacher is supposed to interact with a classroom of 20 students and another 10-15 “tele-educating” from their homes? What happens if the teacher is the one “tele-educating”? It is difficult to scale any success at the college level to the middle or high school level. <br />
<br />
While this is a single issue of many involving “tele-education”, an interesting element is how this inherently increased complexity of student/teacher interaction squares with the new modern narrative/trend in education of “anyone can teach”. Others talk of providing personalized education through technology. Such a goal seems incredibly ambitious and complicated, especially since proponents do not provide a clear definition for what a personalized education or personalized educational experience is supposed to represent. <br />
<br />
At the moment there are two significant problems facing the incorporation of technology into the current educational system. First, the incorporation of technology appears to be following an inverted need curve, basically the communities that could benefit the most from technology are the least likely to incorporate it due to lack of finances and lack of specialists; instead more wealthy communities have and will continue to incorporate technology while poorer and more rural communities will not. Therefore, instead of acting as an equalizing force in education narrowing the education gap between the rich and the poor or between whites and non-whites, as envisioned, technology is turning out to be simply another means in which the “haves” separate themselves from the “have nots”. Unfortunately for technology adopters this general funding issue appears to be semi-permanent until the financial culture surrounding the importance of education changes.<br />
<br />
Second, the utilization of technology as a negative force on education has been widely demonstrated numerous times already. Simply ask almost any teacher to name the most detrimental device to teaching and it stands to reason that a vast majority will reply “the smart phone”. Not only are smart phones huge sources of distraction during class both for the directly interacting student and any others he/she may choose to communicate with via text, but they have also been demonstrated to be useful tools for those who attempt to cheat on assignments and tests. Unfortunately most of the “expertise” that the younger generations are supposed to possess regarding the application of technology appears front-loaded in the entertainment field and lacking in the education and enlightening discovery fields.<br />
<br />
For example it could be successfully argued that technology as a whole has fostered an increased level of laziness and carelessness as students simply plagiarize from online pieces regardless of their accuracy. A number of students seem to believe that because something is online then it must be true; or they just “Wikipedia” something and call it a day, refusing to engage in any deeper level of analysis and understanding. Understand that Wikipedia can be a valuable resource for background as a jumping-off point, but too many students view it as the only needed reference. Part of the reason for this increased detrimental behavior in education is the fact that, as mentioned above, education itself has been marginalized into a results-only system, thus the actual process or methodology behind education is viewed as irrelevant; just “getting it done” is what matters. Overall before technology can actually become a boon in education these problems must be addressed, for technology cannot solve them, only exacerbate them. <br />
<br />
So what can be done about the marginalization of education? If one of the hallmark traits of the United States is the idea that anyone can rise from nothing to be something, and that education is a central element in increasing the probability of such success, then establishing a firm set of rules to foster an educational environment that favors some and opposes others, the system that currently exists, flies directly in the face of such an ideal. The overall solution is not a difficult one in that successful change needs to involve re-establishing education as a meaningful learning and growth experience, not as a means to an end (i.e. a job). <br />
<br />
The first step is to grant methodological freedom back to teachers decoupling their wages and tenure from standardized testing results. Instead evaluations can take place through exit interviews, written student evaluations and final test performance. Finals in given subjects should represent a cumulative knowledge and retention of what was supposed to be taught, thus final grades would be a meaningful measurement element regarding both teaching and learning success. Remember education is a two-way street between the teacher and students, quality instruction and quality reception/retention.<br />
<br />
Another part of accomplishing this larger goal is to reintroduce the value of elements previously vilified as meaningless, like recess for elementary school students and a wider array of liberal and performing arts for older students. Basically make education a more global and involved process rather than a systematic rote analysis of narrow topics that hold little excitement for most students, i.e. create a situation where individuals see value in actually going to school rather than having an environment that could easily be replicated at home through simple focal study. <br />
<br />
A meaningful element to stimulate the above process is for both schools and families to better identify the passions and interests of the students and then correlate those elements into the learning process by demonstrating how learning “perceived” mundane things like math and chemistry tie into those passions. This way education becomes an amplifying force for that passion rather than one that detracts and potentially changes it to something that may be more ill-suited for the individual.<br />
<br />
Another important element is to psychologically prepare students to embrace the discomfort of learning. Some argue that learning is not fun and education needs to reflect that, but it can be counter-argued that such an environment for a number of students has already been attained; this is a major problem for if students acknowledge learning and education as painful then they will be less interested in engaging in the process and will look for shortcuts (i.e. cheating). Instead one must focus on the discomfort of learning in the context that it is frustrating when one does not know something one wants to know, but proper instruction and hard/smart work makes that frustration ephemeral. <br />
<br />
Basically learning is only not fun when no progress is being made. If progress is being made (i.e. some knowledge is being acquired) then learning produces a noticeable sense of accomplishment and pain/frustration is limited and short-term. Therefore, one of the chief strategies in the educational process is to focus on why someone is not making progress and rectify it. This is not to say that education and learning are always effortless, but there is always a purpose to the effort.<br />
<br />
In summary some of the most important first steps to resolving the second problem, on its face at least, are to increase the value for students of attending school itself, couple individual passions with more mundane-seeming topics to demonstrate their overall value, and enhance the educational process by psychologically preparing students that learning is frustrating in the short term but satisfying in the long term, and that the adversity makes the process worth it. Also, while not discussed above, it is important to honestly evaluate students as well; if they are not ready to move on to the next level then they should not move on to the next level. However, while these goals are noteworthy and commendable, one could question whether they are even meaningful. <br />
<br />
A central problem in education is the influence of the future job market for the sad reality is if one’s education cannot be parlayed into meaningful employment then most will look unfavorably upon that education. The current dictum of society has deemed that diversity in education is not applicable to “maximizing” output efficiency for future employment, thus diversity in education is not properly funded. Another concern is that just having a degree may not be enough as studies have shown that most employers in the “quality” job fields focus on candidates with degrees from elite institutions foregoing even quality well-known and regarded public universities. So how can the process of education be decoupled from the job market? <br />
<br />
The most obvious solution is the return of a number of “quality” jobs that do not require a college degree, which would allow more freedom in education instead of degree chasing and resume padding, especially at the high school level (in order to gain entry into those elite colleges). Unfortunately achieving this solution appears very unlikely due to the continuing march of technology and the reduced spending power of young adults limiting the further expansion of the last holdouts of quality jobs not requiring an advanced degree, jobs in the entertainment industry. <br />
<br />
There was an initial belief that there would be a significant resurgence in domestic manufacturing, a past bastion of “quality” jobs, at some time in the future, driven by dramatically increasing oil prices born from supply issues (i.e. peak oil), which would increase transportation costs to the point where cheaper outsourced manufacturing environments would become prohibitively expensive. However, the surprising and significant drop in oil prices in recent years has dramatically decreased the probability of this hope coming to pass, in the near future at least. Overall as long as capitalism fuels the idea of chasing profits as the most important thing to a corporation, it is difficult to anticipate a change in the trend of fewer “quality” jobs available to those without advanced degrees. <br />
<br />
A second option is significantly increasing the salaries for various service jobs and the like (the “non-quality” jobs) until those jobs become “quality” jobs. There has been a significant movement towards increasing minimum wages in various cities and a smaller movement for increasing the Federal minimum wage to $12-$15 an hour. However, while California recently boasted the greatest success for this movement, it is difficult to see any type of major Federal legislation regarding wages in the near future, and a number of states are taking steps to neutralize the ability of cities to independently change wage policy. Incidentally the new law in California will be an interesting test case for the viability of increasing wages for service jobs, but because the increase is incremental, quality data will probably not be available until the early 2020s. Overall, regardless of the results from California, it is difficult to conclude that increasing the minimum wage is a valid overarching strategy; it will take hold in certain regions but not others, which unfortunately may further complicate income inequality.<br />
<br />
A third option is changing the admissions process for college by lessening the value of standardized testing and grades and increasing the value of interviews and critical thinking questions. Some would argue that such a process has taken hold in certain universities with optional weighting of SAT or ACT scores and more “holistic” admissions methods. This weighting change would also make sense with regards to grades; grades are significantly arbitrary, shaped by numerous uncontrollable environmental and academic circumstances; i.e. an A at high school x does not always carry the same weight as an A at high school y, and some high schools allow students greater amounts of extra credit, which conceals their actual knowledge of the subject through grade inflation.<br />
<br />
However, the chief problem with these current holistic methods is that universities are not transparent in their application, thus stripping significant credibility from the methods themselves. Also one could argue that these holistic methods are not tangible enough to identify individuals with distinctive and valuable viewpoints in order to justify selecting a high achiever from a less difficult environment versus a lower achiever from a more difficult and/or diverse environment, which could create problems. Whether this change corrects the “degree value” issue is questionable, but it can aid educational diversity and creativity at the middle and high school level, which is an important element in solving the overall problem between education and employment. <br />
<br />
It must be noted that changing the admissions process does nothing to manage increasing tuition costs, especially at elite universities, which is another problem unto itself. However, due to the economic demand problems facing most universities, which struggle to meet enrollment goals amid less interest in college and higher levels of competition, it is highly unlikely that significant drops in tuition will be seen in the near future. Government intervention may come from lower interest rates on various student loan programs, but without changes in the general economic system, increasing government grant programs seems wasteful.<br />
<br />
Another possible solution is to significantly limit the risk associated with taking more indirect educational pathways that society may find “inefficient”. One of the easiest ways to accomplish this diversification of educational philosophy is by ensuring individuals have available resources to pursue their own educational identities. The chief aspect of risk in modern society is financial; the way risk is absorbed is one of the elements that produces imbalance in society between rich and poor, where individuals with connections and/or resources are typically not punished when engaging in risky ventures and failing, whereas those without connections are commonly severely punished when engaging in the exact same behavior and also failing. <br />
<br />
One can point out numerous instances where an individual has talent in subject matter x, but needs to take a job in a much less desired and lower talent field in order to “pay the bills” because the potential risk associated with attempting to gain employment in the talent field is too great, in part due to “secret handshake” connections having nothing to do with skills or education. People like to believe that talent eventually wins out, but modern society has demonstrated too many times that such is not the case.<br />
<br />
The most direct way of limiting the negative repercussions of risk is establishing a guaranteed basic income (GBI), which would provide basic living resources for all adult citizens with minor to clean criminal histories within a certain income bracket, eliminating the need to take jobs solely to survive. A GBI creates freedom that will increase the probability that individuals will maximize their educational opportunities, talents and passions because of this mitigation of risk. While increasing wages is a nice idea, there is a question as to how much it will actually affect poverty due to potentially reduced work hours and the possibility of increased taxes due to tax bracket changes, whereas a GBI would directly address poverty on a meaningful level.<br />
<br />
Some may attempt to argue that a GBI will foster laziness and a lack of ambition if individuals receive a certain amount of money for simply existing, but such arguments fail to acknowledge that a GBI would only contribute to survival; it would be very difficult to live comfortably, enjoy luxury and make a mark on the world by forgoing a job only to live on the GBI. Also, without things to do, individuals would quickly become bored and would look to accomplish things to productively fill their time. Finally a GBI would allow individuals to better “invest” in themselves by providing them the necessary seed money to get started. Realistically there is little accuracy in the argument that a GBI would “corrupt” the work ethic of society.<br />
<br />
While the application of a GBI would be the most effective means to decouple the restrictions and risks of employment on education, it would definitely be a significant task to accomplish due to the preconceived notions regarding the application of capitalism in human society. This perception is a problem, for a number of individuals only regard a task as having value when someone is paid to do it, which is a dangerous attitude to have. Overall despite a GBI making sense to all major political parties and supporting most of their fundamental philosophical economic beliefs, its actual passage is presumed to be difficult, in part because no one has the guts to try.<br />
<br />
A concerning element with the issue of employment and education is that it should be clearly obvious to anyone that the current method is not productive, does not benefit most individuals or society as a whole and cannot hold. So why did the current method ever rise in the first place and why does anyone actually support it? A cynic might think that such a system is desired by the powerful, those who have championed it in the first place by passing legislation like No Child Left Behind or Race to the Top, because it benefits their children, but provides far less benefit and even potential detriment to less wealthy families and children. Therefore, in such a system individuals who could become competition for these wealthier children are handicapped before the competition really begins. Hopefully this is not the intent of the supporters of the current system, but unfortunately there have been too many scenarios in human history where one individual/group has had no problems “screwing over” another to get ahead even when they do not need to.<br />
<br />
In the end a change needs to be made at the basic economic level because even if the appropriate changes are made at the educational level at this point in time, the end result will simply be a society filled with a larger number of well-educated under-employed, if employed at all, frustrated and even possibly angry individuals. Overall it does not appear that the form of capitalism currently practiced by the United States will be able to properly manage this trend and therefore, must evolve in some manner. Whether this evolution is the administration of a GBI, the re-localization of manufacturing, the general abandonment of the “absolute profit above all else” mindset for most corporations or some other significant change is uncertain, except for the fact that such a change must happen. The increasing detrimental economic link between employment and education should simply be viewed as a significant and early warning signal.<br />
<br />
<br />
==<br />
Citations – <br />
<br />
1. The Rising Cost of Not Going to College. Pew Research Center. Feb 11, 2014. <br />
http://www.pewsocialtrends.org/2014/02/11/the-rising-cost-of-not-going-to-college/<br />
<br />
2. Davis, A., Kimball, W., and Gould, E. “The Class of 2015: Despite an Improving Economy, Young Grads Still Face an Uphill Climb.” Economic Policy Institute. May 27, 2015. http://www.epi.org/publication/the-class-of-2015/

Why is Society Ignoring the Easiest Path to a Low Carbon Energy Infrastructure by Rejecting Nuclear Power<br />
For decades certain parties have dreamed of a “renewable” energy reality with the sun and/or wind providing the lion’s share, if not all, of the energy for a given society. Unfortunately, decades removed from those initial dreams, society is little closer to that reality. Solar and wind proponents would argue that such a statement is foolhardy because the total percentage of energy generation from these sources rises ever higher year after year. However, these same proponents fail to acknowledge, or even realize, that neither solar nor wind has had to face any real test supporting its viability as the chief energy generator. Can one say that an individual is really closer to passing a test when his percent correct has increased from 1% to 6%? <br />
<br />
The lack of sufficient penetration has postponed effective identification of what type of integration methodologies will be required to avoid consistent brownouts due to the intermittency of these technologies. However, it is known that battery technology for storage is still in its infancy, especially on a mass scale, and little discussion is given to the significant shortfall in numerous rare earths needed to ensure solar and wind economic viability at the scale demanded; for solar, economic viability is questionable even with these rare earths. Also there is a lack of general understanding regarding the required levels of redundancy to create the storage reserve. Despite these real unanswered questions where theory is stacked against solar and wind supporters, groups like ARPA-E continue to search for the “next energy breakthrough”, commonly to support the expansion of wind and solar, while seemingly ignoring the fastest and most stable route to a no/low carbon emission energy future… nuclear power.<br />
<br />
No one can dispute the stability, low to no carbon emissions and base-load power generation ability of nuclear power. The failures associated with the widespread adoption of fission-based nuclear technologies, including the development of breeder reactors, have not been the result of technical flaws, roadblocks produced by the laws of physics, safety profiles or even overall capital and operational costs, but instead have been the result of a direct campaign against nuclear power based only upon paranoia, overreaction, fear and opposing economic interests. <br />
<br />
Some may argue against nuclear power by citing certain projects that experienced large delays in construction and cost overruns. This criticism has valid and invalid points. The problem with simply citing a construction delay or cost overrun is that almost no construction project in the history of humanity, be it a complex structure like a nuclear power plant or wind farm or a more simplistic structure like a corner grocery store, has come in on time and on budget. The entire predictive process for the construction is consistently fraught with optimistic estimations and assumptions in an effort to win the “bid” for the project, either through associated agencies like subcontractors or to win approval for the project as a whole. Therefore, time and cost overruns should be treated as the norm, not the exception, for any construction project.<br />
<br />
However, optimistic estimations cannot explain all of the cost overruns. Another reason nuclear power appears more expensive than it actually should be is the lack of uniformity/standardization in design. For example when considering breeder reactors several different reactor prototypes have been proposed and even had initial construction periods. Anyone with any design experience knows that the most expensive type of product is the first working prototype (i.e. version 1.0). Due to the lack of coordination and cooperation between nations, instead of six or seven countries working together on one universal reactor design, economic competition has created an environment with numerous high level generation II to generation III breeder reactor version 1.0s, which has further increased costs. <br />
<br />
Another rationale for cost increases with regards to nuclear power, especially breeder reactors, is simple short-sightedness in long-term cost-benefit analysis. Basically breeder reactors remain more expensive (i.e. not directly cost-competitive) than more standard thermal reactors because research and development into breeders was quasi-sabotaged for decades by cheap uranium prices and corresponding economic incentives. So instead of acknowledging a time in the future when uranium may not be cheap, due to potential shortages or more expensive extraction methods, or simply understanding that nuclear power needed to evolve to be more effective in general and preparing for this reality with proper planning, both private corporations and governments elected to take advantage of short-term gains that have now created long-term losses. <br />
<br />
Basically capital costs associated with breeder reactors have been heavily influenced by the lack of standardization and the lack of a devotion to the continuous evolution of their design and construction. Any economist will sing the praises of assembly line and scale economics at dramatically reducing costs. Nuclear, especially breeders, has not been able to engage in these types of processes because of this “start-stop” mentality due to uranium prices, lack of long-term thinking, which is still plaguing the energy environment with so much short-term focus on solar and wind, and lack of cooperation among companies and governments.<br />
<br />
Another issue that has been blown out of proportion is the danger of reprocessed material being siphoned off and/or stolen for the production of nuclear weapons. One of the original reprocessing methodologies, PUREX, certainly warranted concern because it is able to produce concentrations of pure plutonium after completion; however, PUREX is certainly not the only reprocessing method. There are a number of other methods, most of which make plutonium isolation and extraction, and thus weaponization of the reprocessed material, nearly impossible. Also appropriate safety measures can easily be applied to eliminate the potential seizure of any “weaponizable” material. If terrorists were to acquire nuclear weapons it would be from some secret lab in Iran or from North Korea rather than from a modern breeder reactor.<br />
<br />
The final issue is the most depressing one when it comes to nuclear opposition, the overreaction to a meltdown. Overall there have only been two legitimate meltdowns in history, Chernobyl and Fukushima Daiichi. The events at Three Mile Island actually demonstrated what is supposed to happen when safety procedures are properly applied. The “demonization” of nuclear power at the hands of Chernobyl is especially ridiculous when considering both the technology at the time and the circumstances of the meltdown. If similar consideration were given to the airline industry then modern aviation would shut down because a Wright Brothers’ era plane happened to crash. Of course that would never happen, which demonstrates the serious bias against nuclear power possessed by certain entities.<br />
<br />
Concerning Fukushima Daiichi, a power plant designed in the 1960s and built in one of the worst regions of the country it could have been relative to safety, it still required a once-in-a-thousand-years natural disaster to produce any negative outcome, an outcome owed in large part to a lack of basic contingency safety protocols; yet these failures were heavily and unjustifiably propagandized as inherent to nuclear technology instead of what they actually were: simple economic laziness/greed.<br />
<br />
If nuclear power is the answer to addressing global warming, what does that make of the other contenders? Clearly anything that produces significant quantities of CO2 or other greenhouse gases is out due to global warming issues, thus coal, oil and natural gas are non-starters. The idea of natural gas as a “bridge” from coal to a low-CO2 emission source may have been an option two or three decades ago, but it is certainly not a cost-effective transition option now; despite the money the U.S. is wasting on it, relying on natural gas is a fool’s errand. <br />
<br />
Geothermal is an option that would have been interesting to study, particularly the enhanced geothermal systems (EGS) methodology, as a realistic competitor to nuclear, but with the pertinent issue involving the potential progression of tectonic activity (periodic magnitude 2-3 earthquakes under initial EGS tests; with time would this magnitude increase?) there does not appear to be adequate time to return to the drawing board, so to speak, if earthquake magnitude progression were indeed a feature of EGS. Pipe dreams like tidal power and microwave/satellite solar are either boondoggles or do not have nearly enough momentum and potential to even be considered viable responses. Fusion, of either the hot or cold variety, seems no closer now than two or three decades ago. Thus, the only valid competitors for nuclear appear to be terrestrial solar and wind power. <br />
<br />
The biggest problem with both wind and solar is the intermittency associated with their energy generation. Try as they might to mitigate its importance, wind and solar proponents cannot in good conscience ignore the additional costs, maintenance, storage and redundancies required to compensate for this deficiency, which raise the costs associated with both solar and wind to levels that far exceed nuclear power. Without the need for storage and the redundant capacity to fill that storage, solar and wind are cheaper, which is the story solar and wind proponents sell the public; however, without storage and fill redundancy, it is logical to suggest that solar and wind as the principal energy providers will do nothing but produce rolling brownouts and blackouts. Unfortunately the current penetration structure of wind and solar does not provide any test cases to demonstrate these realities.<br />
<br />
Another problem associated with wind and solar is that measuring their production via nameplate capacity commonly results in optimistic to unrealistic analysis. For example a wind farm reporting a nameplate capacity at 200 MW means that it produces 200 MW when functioning at optimal capacity. Unfortunately to actually achieve this maximum generation result, the wind needs to be blowing within the optimum speed range over the entire farm simultaneously, which is a meaningful statistical achievement; can it happen… yes; does it happen frequently, not even close. Furthermore the statistical probability of this occurring over multiple wind farms is even more unlikely. Basically the greater nameplate capacity built into this type of system, either within a single farm or throughout multiple farms, will result in an overall reduction in the expected maximum capacity that can be feasibly attained relative to the actual nameplate capacity.<br />
<br />
In short it is unrealistic for a large wind producer to ever reach 100% nameplate at any given time and the more capacity that exists the lower percentage of the maximum that can actually be reached. For example (note these numbers are for explanation purposes not empirically derived, but accurately demonstrate the trend) a wind system with 3000 MW of nameplate will be able to achieve an average maximum generation of 2500 MW (83%) whereas a wind system with 4000 MW of nameplate will be able to achieve an average maximum generation of 3100 MW (77.5%). Of course these are only maximum values that are attained for a few seconds to minutes at a time; actual average wind capacity values for days to months range from 25-35% and have remained within this range for decades and show little sign of changing, despite certain levels of hype, hence the need for storage and redundancy to fill that shortage.<br />
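<br />
As a rough illustration of how these percentages interact, the Python sketch below uses the same explanatory, non-empirical numbers as above to convert nameplate capacity into an achievable peak, a long-run average (using an assumed 30% capacity factor from the 25-35% range), and the gap between the nameplate rating and that average which other generation or storage would have to cover.<br />
<br />
# Illustrative sketch only (numbers mirror the essay's example, not empirical data):
# translate wind nameplate capacity into achievable peak, long-run average output,
# and the gap between nameplate and average that storage/other sources must cover.

def wind_output(nameplate_mw, achievable_max_fraction, capacity_factor):
    peak_mw = nameplate_mw * achievable_max_fraction   # brief best-case output
    average_mw = nameplate_mw * capacity_factor        # long-run average output
    gap_mw = nameplate_mw - average_mw                 # shortfall versus the nameplate rating
    return peak_mw, average_mw, gap_mw

# 3000 MW at ~83% achievable max and 4000 MW at ~77.5%, with a 30% capacity factor.
for nameplate, max_frac in [(3000, 0.83), (4000, 0.775)]:
    peak, avg, gap = wind_output(nameplate, max_frac, 0.30)
    print(f"{nameplate} MW nameplate -> peak ~{peak:.0f} MW, "
          f"average ~{avg:.0f} MW, gap to nameplate ~{gap:.0f} MW")
<br />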
<br />
Another concern with both wind and solar generation is that their production potential changes significantly during winter months. The loss of solar during the winter is of no surprise to anyone that actually pays attention to general climate patterns; however, wind is trickier because while the overall average “amount” of wind does not seem to have any significant level of variance between seasons, its daily levels typically vary more during the winter than other months. Basically during winter months there is a higher probability that wind values depart from the mean both in magnitude and direction (i.e. positively or negatively). These larger departures place greater pressure on plant operators to smooth power curves and properly incorporate the energy produced from wind into the mix with other energy mediums. Remove those other more stable energy mediums and integration becomes even more difficult.<br />
<br />
A number of solar and wind proponents have put forth the idea that smart grids will act as a panacea of sorts for the issues associated with integration, addressing load balancing, peak curtailment and demand response among other potential problems. However, the scale of smart grid deployment has been much lower than expected over the last decade despite attempts to invest billions of dollars in the process. Part of this significant delay is that some communities are rebelling against the installation of smart meters, central elements of the smart grid, even when the costs of maintenance and installation are deferred to the utility company. While most of the reasons for the rejection of smart meters are thought to be questionable, it does not appear that smart meter detractors will be easily talked out of their current position. <br />
<br />
For example a portion of this resistance is concern about the safety of the electromagnetic fields and/or radiation that could emanate from the smart meter. Unfortunately smart meters may have entered that cell phone zone when it comes to radiation, in that even if they are safe it may be impossible to convince some people of that fact, and one can easily have an environment of “dueling” experts. Also, unlike cell phones, smart meters do not have that “necessary for existence in society” reputation that cell phones seem to have.<br />
<br />
Another problem for smart meters is a resistance by utility companies themselves to install them, unless someone else is paying the bill, due to a lack of standards for how the devices are connected to the grid and communicate with each other. Basically no utility company wants to commit to a given format/design because that format may not be the one that “wins”, and thus that preemptive commitment would result in significant financial losses. The situation is similar to the problem with the expansion of electric cars. Currently the infrastructure to support electric cars is basically non-existent outside of certain areas in California because those responsible for building it are waiting for electric car sales to increase to the point that justifies building it, but without an infrastructure few individuals have interest in buying an electric car, in part due to the worry that the infrastructure will never be built to support the purchase. One side has to take the leap, but neither side is willing to do so. <br />
<br />
Even if smart meter installation was as widespread as hoped, smart grid proponents have acknowledged the problems associated with securing the flow of information and energy within the system. Currently there are valid concerns regarding how prone the system is to being hacked, which raises questions regarding the long-term security and safety of a smart grid. This is not to say that smart meters, and in large part a smart grid, do not have a role to play or cannot be safe, but the issues associated with their adoption and safety place a burden on their speedy application and mass testing that significantly damages the viability of a dominant wind and solar energy infrastructure.<br />
<br />
Another issue with wind power that is not commonly considered is whether or not the general price of wind power is close to its minimum, in that with a vast majority of the high-value wind collection land masses already being utilized, newer wind turbines will have less naturally efficient areas in which to generate power. Realistically this issue should not produce an environment where traditional wind power starts significantly increasing in price, but instead it would counteract any cost savings from further technological advancement in wind turbines. The real question regarding future costs associated with wind power is the storage level and medium. <br />
<br />
A further problem for solar/wind supporters is that even some of the “champion countries” of renewables are not seeing the carbon emission reduction numbers that theory and general behavior would suggest. While in isolation Denmark’s wind generation numbers look impressive, they are not consistent, to the point where Denmark relies heavily on energy transfers to and from neighboring countries. Basically if these transfers did not exist Denmark would be in a state of constant brownout due to wind intermittency. <br />
<br />
Currently this transfer process is stable because of the more consistent generation mediums possessed by other European countries, most notably natural gas and Swedish and Norwegian hydropower. However, the transfers that currently reduce volatility in Denmark’s energy markets would become incredibly difficult, if not impossible, to secure if the rest of Europe adopted similar wind generation percentage profiles. Basically while wind proponents like to cite Denmark as the poster child for “what wind can do for you”, its close proximity to Swedish and Norwegian hydropower provides a very unique environment that is not technically or economically replicable for other countries. <br />
<br />
Also despite investing heavily in wind and solar power over the last decade, Germany has not meaningfully reduced the level of coal and natural gas derived energy production. In fact German CO2 emissions in the energy sector, the most critically relevant area for judging the impact of renewables, increased year-over-year in 3 of the last 4 years for which information is available (2012, 2013 and 2015). The reduction of CO2 emissions in 2014 relative to 2013 is also somewhat marred, for it is highly probable that this reduction occurred because of lower energy consumption during a much warmer than average winter. So while the share of renewable sources of energy in Germany continues to expand, the CO2 emissions from the sector they represent are not dropping, which speaks poorly of the ability of renewables like solar and wind to quickly cut energy-derived CO2 emissions, which is exactly what needs to occur to combat global warming.<br />
<br />
Note that the issue concerning winter temperatures is also a big deal in Germany because of the lack of available renewables during that time period; solar is almost non-existent in Germany during the winter, netting a typical average capacity factor of 10-11%, and wind generation is rather erratic.<br />
<br />
Some could argue that this result has been heavily influenced by the decision to suspend operation of the German nuclear power plant fleet with the intent of its future decommissioning. While this decision certainly has resulted in greater coal and natural gas use, the problem is that there was little reduction of energy-derived carbon emissions even before the decision to suspend nuclear power use in Germany; instead most of the overall reduction stemmed from the measurement baseline being 1990, right after the integration of heavily industrialized East Germany into West Germany, which produced an artificially high point of reference. <br />
<br />
Finally one of the troubling aspects of the solar and wind proponent argument is a questionable interpretation of time. They properly acknowledge that ceasing carbon emissions must occur quickly, yet do not acknowledge that creating the type of solar/wind energy infrastructure to actually accomplish this reality will take a long time. Part of this apparent contradiction is that supporters are emboldened by the solar and especially wind percentage growth rates over the last decade as justification for the superiority of wind and solar despite these growth rates not representing meaningful penetrations into global energy markets. Basically wind and solar are still at best small supplemental energy producing elements.<br />
<br />
Furthermore another problem, as mentioned before, is that a number of proponents believe that once society “actually” commits to a solar/wind energy infrastructure future, the problems and issues associated with this system will magically disappear, with Master Plan #1 succeeding without qualm or failure. It is akin to attempting to build a railroad track ahead of a speeding train… everything must go perfectly for it to work, and anyone who thinks that any of the current infrastructure plans pushed by solar and wind proponents is anywhere remotely viable is, quite frankly, a fool. <br />
<br />
At the present time the best idea to combat global warming is for the entire global community to agree on a single design for a nuclear fission breeder reactor and then allocate resources to begin the specialization required for manufacturing its components and training the necessary construction and operational personnel. The simple fact is that too many questions and inefficiencies exist in any feasible plan to defeat global warming via the utilization of mass solar and wind energy generation; so much so that foregoing nuclear in favor of solar and wind is a recipe for disaster. Overall global cooperation through the initiation of a real and new nuclear renaissance is the most effective, economical and direct way to combat global warming while maintaining a consistent and reliable energy infrastructure in the developed world as well as allowing energy impoverished nations the ability to advance their energy consumption profiles without endangering the environment.

Treating Cancer Through Metastasis Neutralization and Possible Activation<br />
While cancer in any form is potentially dangerous to a patient, it is widely acknowledged that only a small percentage of primary tumors are threatening to the life of a patient in the interim. A vast majority of cancer patient deaths occur due to cancer metastasis. Metastasis is a complex pathway of molecular interactions whose end result involves the departure of a group of tumor cells from the primary tumor into the bloodstream and eventual invasion into other tissues, resulting in the formation of additional tumors. Not surprisingly the process is governed by a number of complex pathways that are not fully understood. Despite this lack of knowledge regarding metastasis, it is clear that one of the best strategies for addressing cancer would be to create a therapeutic regimen that prevents metastasis from occurring in the first place, allowing physicians ample time to eradicate the primary tumor with no legitimate threat of recurrence. <br />
<br />
Of the agents thought to be involved in cancer metastasis chemokine receptor CXCR4 is a promising agent of study and potential therapeutic target. Chemokines are a group of low molecular weight cytokines that induce chemotaxis, most of the time as chemoattractants, largely in leukocytes, endothelial and epithelial cells.1 Chemokines are commonly classified into CC, XC, CXC or CX3C designations based on the positioning of their respective conserved cysteine residues.1 One of the key normal functions of chemotaxis is to facilitate the movement of pro-inflammatory cells to the site of inflammation, including immune cells, normally after some form of injury. CXCR4 is an attractive target because of both strong anecdotal and experimental evidence regarding its overall expression and role in tumor malignancy and metastasis.1-5<br />
<br />
CXCR4 functions as a G protein-coupled receptor (GPCR) that principally binds stromal cell-derived factor 1 (SDF-1), which is also known as CXCL12. With regards to cancer, CXCR4 plays a significant role in directional migration through activation of actin polymerization6,7 as well as invasion and adhesion, which influence the overall level of aggression for a tumor. There is also some evidence suggesting that CXCR4 plays a role in angiogenesis as well.4,5<br />
<br />
CXCR4 can undergo four major changes to influence its functionality: homo or heterodimerization (with CCR2, CCR5, CXCR7 or CD4), phosphorylation, glycosylation, or sulfation.8-14 Unfortunately limited information is known about functional changes associated with dimerization in cancer for almost all studies involving CXCR4 dimerization relate to HIV, but it is thought that such changes enhance CXCL12 binding. It is also believed that dimerization typically occurs internally before CXCR4 is expressed on the cell surface, typically as oligomers. This oligomeric structure persists in the plasma membrane.14 <br />
<br />
Phosphorylation occurs principally at serine residue number 339 (Ser339) after exposure to either CXCL12, epidermal growth factor (EGF), or phorbol ester, and it is believed that phosphorylation may also occur to a much smaller extent at Ser324, Ser325, and Ser330.12 Phosphorylation is important for increasing the probability of receptor internalization and secondary messenger activation. On a side note, mono-ubiquitination occurs at Lys327, Lys331 or Lys333.13 Glycosylation of human CXCR4 only appears to occur at Asn11 and seems to serve no unique function other than stabilizing CXCL12 binding (lack of glycosylation reduces binding efficiency).1 <br />
<br />
Sulfation, which takes place primarily at Tyr21 and does not appear to occur at two other potential sites (Tyr7 and Tyr12),15 may be the most interesting modification regarding CXCL12 binding probability and functionality.16 When CXCL12 binds to CXCR4 there is a specific site interaction between sulfated Tyr21 on CXCR4 and Arg47 on CXCL12.15 One of the reasons that sulfation appears important is that one study demonstrated that while highly metastatic NPC cells and non-metastatic NPC cells expressed similar levels of CXCR4, both via mRNA and protein, only high levels of sulfated CXCR4 resulted in high metastatic potential.17<br />
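<br />
For quick reference, the sketch below collects the CXCR4 post-translational modifications discussed above into a single Python data structure; the residue numbers and roles are simply those cited in the preceding paragraphs, restated for convenience rather than drawn from any additional source.<br />
<br />
# Summary-only sketch: CXCR4 post-translational modifications as described in the text above.
cxcr4_modifications = {
    "phosphorylation": {
        "primary_site": "Ser339",
        "minor_sites": ["Ser324", "Ser325", "Ser330"],
        "role": "promotes receptor internalization and secondary messenger activation",
    },
    "mono_ubiquitination": {"sites": ["Lys327", "Lys331", "Lys333"]},
    "glycosylation": {
        "sites": ["Asn11"],
        "role": "stabilizes CXCL12 binding",
    },
    "sulfation": {
        "sites": ["Tyr21"],  # Tyr7 and Tyr12 do not appear to be used
        "role": "sulfated Tyr21 contacts Arg47 on CXCL12; linked to metastatic potential",
    },
    "dimerization": {
        "partners": ["CXCR4 (homo)", "CCR2", "CCR5", "CXCR7", "CD4"],
        "role": "thought to enhance CXCL12 binding; occurs before surface expression",
    },
}

for modification, details in cxcr4_modifications.items():
    print(modification, "->", details)
<br />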
<br />
While there are a number of tyrosine sulfation pathways, with regards to CXCR4 one of the more prominent interactions involves the action of the latent membrane protein 1 (LMP1). While LMP1 concentration changes are not universal to CXCR4 activity increases, LMP1 interacts with EGF receptors, which is thought to be one of the early steps in inducing Tyrosylprotein sulfotransferase-1 (TPST-1) dependent tyrosine sulfation of CXCR4.15 However, there is the lingering question of whether sulfation is the chicken or the egg, is it a chief agent in how CXCR4 activation influences the potency of cancer or is it a result of that activation?<br />
<br />
CXCR4 activation also significantly increases the overall expression of matrix metalloproteinases (MMPs), especially MMP-2, MMP-3, MMP-7 and MMP-9.18,19 One of the major triggers for this action could involve activated CXCR4 guiding bone marrow-derived cells (BMDCs) to their pre-metastatic niche, which then triggers the BMDCs to initiate the release of various pro-metastatic elements including various MMPs, most notably MMP-7 and MMP-9.20,21 CXCR4 is also involved in the homing of cells into the endosteal HSC niche,22 which facilitates the expression of SUMO-specific protease 1 that regulates MMP-9 as well.23<br />
<br />
As stated above one of the key steps in metastasis is the proteolytic degradation of the extracellular matrix (ECM) in which various MMPs are critical agents. Initially MMPs were thought to be degenerative proteases that were limited to cleaving matrix components, but that role has expanded to include the release of growth factors and other bioactive peptides localized at cleaved extracellular matrices.24-26 While there are up to 26 known MMPs (MMP-1, MMP-2, MMP-3, …) only a few have demonstrated significant roles in both cancer growth and cancer metastasis. Of these select few MMPs that play a prominent role in cancer, MMP-3 and MMP-7 appear to be the most important. <br />
<br />
MMP-7 (a.k.a. matrilysin) is the smallest MMP, is commonly expressed in epithelial tumor cells instead of interstitial cells,27 and has numerous substrates in the ECM including collagen fibers, laminin, gelatin, proteoglycans and elastin.28,29 MMP-7 is commonly over-expressed in various types of cancer including, but not limited to, non-small cell lung, pancreatic, oral squamous cell carcinoma, colorectal, prostate, stomach and papillary thyroid carcinoma.28,30-34 In addition to its ECM degradation role, MMP-7 can also break down cell surface proteins, which aids cancer cell proliferation through the regulation of apoptosis and angiogenesis as well as helping evade immune system detection.34-36 <br />
<br />
Furthermore, as mentioned above, MMP-7 is thought to increase expression of MMP-2 and MMP-9 to aid in ECM degradation and other pro-cancer actions.37,38 Finally, and not surprisingly, MMP-7 levels increase in response to decreased blood glucose levels, for falling glucose signals a low-quality or deteriorating principal environment for the tumor, which should trigger elements responsible for assisting metastasis like MMP-7. However, while MMP-7 appears to be the most active MMP in most cancers, its activation may only be a downstream event caused by MMP-3.<br />
<br />
An early action taken by a member of the MMP family typically involves MMP-3 cleaving decorin, releasing transforming growth factor-beta, and cleaving transforming growth factor-alpha (TGF-a), which activates the MAP-kinase pathway.39 This activation can later activate MMP-7, which as previously mentioned leads to the activation of MMP-2 and MMP-9.40 In addition to activating other MMPs like -2, -7 and -9, MMP-3 can also promote genomic instability and epithelial–mesenchymal transition (EMT) through the activation of Rac1b, which stimulates both the production and release of intracellular mitochondrial superoxide.41,42 MMP-3 appears to be the dominant expression route for Rac1b.41<br />
<br />
Due to the important role MMPs play in inducing both metastasis and possible anti-apoptosis protection, a number of researchers have viewed MMP inhibition as a promising treatment option. However, clinical trials investigating the viability of synthetic MMP inhibitors, which mimic the action of the endogenous tissue inhibitors of metalloproteinases (TIMPs), have not proven successful.43 One of the major theories behind this failure is that during tumor development MMPs have different roles depending on tumor progression and the other molecules present in the tumor microenvironment. Some studies have demonstrated anti-tumor effects for certain MMPs, most notably MMP-3, MMP-8, MMP-9, MMP-12, and a newer MMP, MMP-26, which may be a naturally protective MMP.43-47<br />
<br />
Clearly this dual behavior makes targeting MMPs directly for therapeutic purposes difficult, as demonstrated clinically; thus it could be more productive to focus on important upstream MMP triggers like CXCR4, which appear to activate MMPs at a time when their interaction with the tumor microenvironment will produce a net negative for the patient.<br />
<br />
CXCR4 can also activate the p110-beta isoform of PI3K, resulting in the eventual synthesis of phosphatidylinositol (3,4,5)-trisphosphate, which leads to the phosphorylation of protein kinase B/Akt and mTOR pathways, most notably activation of p70S6K and 4E binding protein 1.7,48,49 Not surprisingly, the mTOR inhibitor rapamycin reduces the extent of p70S6K and 4E binding protein 1 activation in a CXCL12/CXCR4 environment.7,19,50 Furthermore CXCR4 also activates elements in the Src family of protein tyrosine kinases, which aid the activation of focal adhesion elements like Crk, paxillin, and tyrosine kinase/Pyk2.51<br />
<br />
For a long time CXCR4 was thought to be the sole receptor for CXCL12 until CXCR7 was identified, potentially complicating the role of CXCR4. Similar to CXCR4, CXCR7 is expressed at a much higher rate in malignant cancer cells versus normal cells and binds CXCL12 with high affinity.10,52 However, despite the significant similarities between CXCR4 and CXCR7, CXCR7 does not appear to play a meaningful role in cancer development or metastasis.53,54 The role of CXCR7 appears to involve the migration of primordial germ cells or interneurons.55 Its increased expression may simply be the result of dramatically increased levels of CXCL12 in the localized environment. Interestingly enough, CXCR7 may prove to be a possible therapeutic element as an indirect natural competitive inhibitor of sorts, for every CXCL12 molecule that binds to CXCR7 is no longer available to bind to CXCR4.<br />
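To make that "ligand sink" idea concrete, the sketch below partitions a fixed amount of CXCL12 between CXCR4 and CXCR7 using simple equilibrium binding. This is only an illustrative toy model; the function names, receptor concentrations and dissociation constants are invented placeholders, not values taken from the cited studies.<br />

```python
# Illustrative (hypothetical) equilibrium sketch: how CXCR7 could act as a
# "sink" for CXCL12, lowering the amount of ligand available to CXCR4.
# All concentrations and Kd values are arbitrary placeholders.

def free_ligand(l_total, receptors):
    """Solve L_free + sum(R_i * L_free / (Kd_i + L_free)) = L_total by bisection.
    `receptors` is a list of (R_total, Kd) pairs (same concentration units)."""
    lo, hi = 0.0, l_total
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        bound = sum(r * mid / (kd + mid) for r, kd in receptors)
        if mid + bound > l_total:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def cxcr4_occupancy(l_total, r4, kd4, r7=0.0, kd7=1.0):
    """Fraction of CXCR4 occupied by CXCL12, with or without CXCR7 present."""
    recs = [(r4, kd4)] + ([(r7, kd7)] if r7 > 0 else [])
    l_free = free_ligand(l_total, recs)
    return l_free / (kd4 + l_free)

# Placeholder numbers (nM): 10 nM CXCL12, 5 nM CXCR4 (Kd 5 nM),
# plus 20 nM CXCR7 with tighter binding (Kd 0.5 nM).
without_cxcr7 = cxcr4_occupancy(10.0, r4=5.0, kd4=5.0)
with_cxcr7 = cxcr4_occupancy(10.0, r4=5.0, kd4=5.0, r7=20.0, kd7=0.5)
print(f"CXCR4 occupancy without CXCR7: {without_cxcr7:.2f}")
print(f"CXCR4 occupancy with CXCR7:    {with_cxcr7:.2f}")
```

Under these made-up numbers the presence of abundant, high-affinity CXCR7 drops CXCR4 occupancy from roughly 60% to under 10%, which is the qualitative behavior the "competitive inhibitor of sorts" argument relies on.<br />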
<br />
Earlier it was mentioned that CXCR4 might have a role in tumor growth and/or angiogenesis. If true, then this result is complicated because cancer growth at established tumor sites is more rapid in the absence of CXCR4 rather than its presence.53 Therefore, it may be that while CXCR4 assists growth immediately after invasion, its continued presence becomes detrimental for the tumor because it helps induce metastasis, thereby diverting resources like recruited and differentiated endothelial cells or progenitors from the original tumor towards elements that will be involved in the metastasis, or even attracting metastatic elements already in the bloodstream. <br />
<br />
CXCR4 also appears to interact with another potentially important factor in cancer metastasis, macrophage migration inhibitory factor (MIF). MIF is a pro-inflammatory cytokine that plays an important role in inflammation and immune response and is expressed at a higher than normal rate during numerous cancer-related processes like cell proliferation, angiogenesis and anti-tumor immune interaction.56-58 High MIF concentrations have also been associated with poor outcomes in lymphoma, melanoma and colon cancer.59,60 CXCR4 interacts with MIF through the formation of a MIF receptor complex with CD74, which further enhances MIF-stimulated AKT activation.61<br />
<br />
There is some thought that because MIF lacks a conventional secretion pathway it requires caspase-1 activity for proper secretion.62 Also, the Golgi-associated protein p115 may be essential for the transport of MIF from the perinuclear ring to the plasma membrane and then out of the cell.63 In addition to aiding metastasis, MIF is also thought to confer some level of apoptosis resistance on cancer cells, favoring those with androgen-dependency over those with androgen-independence, but that resistance may be tied to CXCR4 interaction.64<br />
<br />
With CXCR4 having its “fingerprints” on a number of pro-cancer processes, an additional element that fortunately makes it an attractive therapeutic target is its natural role in the body. In non-cancerous tissue CXCR4 is expressed on hematopoietic cells like CD34+ HSCs, B-lymphocytes, neutrophils, monocytes, macrophages, and microglia.65 CXCR4 or CXCL12 knockouts in mice result in impaired hematopoiesis through reduced hematopoietic stem cell (HSC) trafficking, which results in heart and brain defects as well as defective vascularization, commonly producing embryonic death;66 in adults CXCR4 is important in HSC homing to the bone marrow microenvironment and in lymphocyte trafficking.65 However, most of the time CXCR4 expression in normal cells is low, unless the body has been recently injured. Therefore, treatments that limit CXCL12/CXCR4 pathway activation should result in limited negative side effects for healthy non-injured individuals. <br />
<br />
Another potential benefit from CXCR4 inhibition could involve reducing the probability of chemotherapy agent resistance, including Docetaxel (DTX) resistance. Some research suggests that the CXCL12-CXCR4 pathway interacts with p21-activated kinase 4 (PAK4)-induced LIM domain kinase 1 (LIMK1) via phosphorylation to reduce the ability of DTX to stabilize microtubules, stabilization that typically results in cell cycle arrest during the G2/M phase.67 Basically CXCR4 activation provides additional protection against cell death for tumors when exposed to DTX. This result suggests that LIMK1 could have a role similar to a microtubule-associated protein (MAP) depending on whether or not it is phosphorylated. Therefore, this chemotherapy-resistance pathway has two principal inhibition targets, CXCR4 or PAK4, for negatively influencing prospective chemo resistance.<br />
<br />
Existing potential therapies involving CXCR4 have focused largely on inhibiting the binding capacity of CXCR4, most notably either through the use of AMD3100, a specific CXCR4 antagonist, or the synthetic peptide TM4.14,68,69 AMD3100 (a.k.a. plerixafor) is a small molecule with two cyclam rings connected by a phenylene linker; nitrogens on each ring form charge-charge interactions with carboxylate groups on CXCR4, which inhibits CXCL12 binding.70-72<br />
<br />
Plerixafor is most commonly used as a pre-treatment element for chemotherapy where, as mentioned, CXCR4 disruption reduces the probability of hematopoietic stem cells homing to bone marrow, thereby increasing their circulation in the bloodstream and allowing for their collection for transplantation after chemotherapy regimens.73,74 Plerixafor has also proven promising as an anti-cancer treatment via its ability to reduce cancer cell chemotherapy resistance by either neutralizing the CXCL12-CXCR4 pathway or reducing the physical attachment of various micro-environment critical cells, as would be expected for a CXCR4 inhibitor.<br />
<br />
An interesting side effect of plerixafor treatment is that surface expression of CXCR4 increases both in vitro and in vivo.65 One possible explanation for this outcome could be that the principal signals that induce CXCR4 expression continue while plerixafor prevents CXCL12 from binding CXCR4; CXCL12 binding normally leads to internalization and activation of secondary pathways. When there is no CXCL12 binding there is no CXCR4 internalization, but the pathways driving CXCR4 expression towards the cell surface continue, thus explaining the overall increase in CXCR4 expression. If this tendency is accurate then long-term treatment with plerixafor alone may not be beneficial because the increased surface expression will substitute for the CXCR4 “removed” by plerixafor interaction. Basically plerixafor works well alone in the short-term, but may not work well alone in the long-term, which may be the same fate of all binding-site competitive CXCR4 inhibitors. <br />
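The explanation above amounts to a simple balance between delivery of CXCR4 to the cell surface and its ligand-driven internalization, which can be illustrated with a toy steady-state model. The sketch below is purely illustrative; the rate constants, concentrations and the competitive-antagonism form are assumptions, not parameters from the cited work.<br />

```python
# Toy kinetic sketch (not from the cited papers) of why blocking CXCL12 binding
# could raise surface CXCR4: delivery to the membrane continues while
# ligand-driven internalization is suppressed. All numbers are arbitrary.

def surface_cxcr4(cxcl12, plerixafor, kd=5.0, ki=0.5,
                  k_syn=1.0, k_basal=0.05, k_int=0.5):
    """Steady-state surface receptor level for a simple synthesis/removal model.

    k_syn   - delivery of new CXCR4 to the surface (a.u. per hour)
    k_basal - constitutive turnover (1/h)
    k_int   - extra internalization rate when CXCL12 is bound (1/h)
    Competitive antagonism is modeled by shifting the apparent Kd by (1 + [I]/Ki).
    """
    apparent_kd = kd * (1.0 + plerixafor / ki)
    f_bound = cxcl12 / (apparent_kd + cxcl12)      # fraction of CXCR4 bound by CXCL12
    return k_syn / (k_basal + k_int * f_bound)     # dR/dt = 0 solved for R

for dose in (0.0, 1.0, 10.0, 100.0):               # assumed plerixafor doses (nM)
    level = surface_cxcr4(10.0, dose)
    print(f"plerixafor {dose:6.1f} nM -> surface CXCR4 {level:.1f} a.u.")
```

Running this shows surface CXCR4 rising monotonically with antagonist dose, which is the qualitative behavior the paragraph proposes; it also makes clear why the effect depends on continued synthesis rather than on any property of plerixafor itself.<br />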
<br />
Apart from preventing metastasis, treating the primary tumor is also an important task, especially when surgical options are unavailable. One promising potential therapeutic agent that can influence both primary tumors and metastasis is salinomycin (SAL), which has demonstrated an effective ability to kill cancer cells via a perceived mixture of apoptotic and autophagic cell death in breast, prostate, brain, blood, liver, pancreatic and lung cancers with no immediate lethal toxicity.75-77 Initially it was reported that SAL was toxic to certain neuronal cells (dorsal root ganglion in mice) at 1 uM, but this toxicity was neutralized when paired with factors that inhibited mitochondrial Na+/K+ exchangers with no resultant change in cancer cell cytotoxicity.78,79<br />
<br />
One of the chief advantages of SAL in treating cancer is that it uses a different methodology from more common chemotherapy drugs like Doxorubicin, Cisplatin, Gemcitabine, Temozolomide, Trastuzumab, Imatinib, etc.75,80-82 SAL has a preference for targeting cancer stem cells (CSCs), which reduces the probability of cancer recurrence after primary tumor removal.75 CSCs are important to address in treatment because they are commonly thought of as another element responsible for driving the core of cancer metastasis after responding to various signal triggers, as well as driving cancer recurrence after the primary tumor is eliminated due to their ability to more frequently resist anti-cancer therapies. Thus, addressing CSCs, either directly or indirectly, is a critical part of addressing both cancer itself as well as its metastasis.<br />
<br />
Another interesting behavioral aspect of SAL is amplified effectiveness under hypoxia or starvation conditions. This result makes sense on two different levels: first, it is thought that one means by which SAL triggers cell death is through damage to the mitochondria, in part due to its being a potassium ionophore, promoting hyperpolarization of the mitochondria, which decreases ATP availability and triggers caspases-3, -8 and -9, a consequence worsened by starvation conditions.76,83 Second, its effectiveness against CSCs is enhanced by the reaction of the tumor to hypoxia. In hypoxia the primary tumor will begin to focus an effort to metastasize due to the negative environment that currently exists; one step in this process requires the recruitment and creation of CSCs, which reduce resource availability for the primary tumor, yet those CSCs are more effectively eliminated by SAL than the cells that comprise the primary tumor. <br />
<br />
The ability of SAL to “cooperate” with other anti-cancer drugs directly is questionable, for most of the benefit from co-therapy between SAL and a given drug x appears indirect.84 This result is somewhat interesting because there is some evidence to suggest that SAL can also function as an efflux pump inhibitor, with such pumps commonly operated by a p-glycoprotein;85-87 efflux pumps increase the ability of cancer cells to remove chemotherapy agents before the inducement of cell death, so inhibiting them would increase tumor susceptibility to these chemotherapy agents. However, Metformin (METF), which is thought to lower circulating insulin levels and stimulate AMPK-mediated suppression of mTOR, along with having some anti-cancer properties in thyroid, prostate, gastric and breast cancers and glioblastoma,88-90 seems to have some form of direct enhancing cooperative relationship with SAL.84<br />
<br />
This combination activity results in the “unspecific” inhibition of EGFR and HER2/HER3, leading to reduced concentrations of AKT and ERK1/2 via an unknown mechanism.84 However, based on SAL activity when acting alone, it seems appropriate to suggest that this “inhibition” is born from a reduction in available receptors due to cancer cell or associated cell death. <br />
<br />
Another mechanism for inducing cancer death that includes SAL interaction is autophagy. In its most basic form autophagy involves the “self-digestion” of intracellular elements via the vacuolar lysosomal degradation pathway to recycle cytoplasmic constituents.91 Autophagy is typically used to prevent the accumulation of damaged proteins and organelles, largely born from cell damage due to outside agents; for cancer cells those agents would be anti-cancer drugs. This process also reduces the production potential of reactive oxygen species (ROS) that negatively impact cell survival. <br />
<br />
There is reason to believe that SAL can interfere with autophagy in cancer cells by inhibiting lysosomal activity driven by cathepsins.85 Interestingly enough this activity occurs without impacting the lysosomal compartment.85 This aspect of SAL interaction is not surprising due to structural similarities with both nigericin and monensin, which behave similarly as antiporters themselves, but the lack of change in lysosomal pH after SAL treatment suggests a different pathway. <br />
<br />
Also the behavior of SAL runs counter to ATG7 expression, where ATG7 acts as a protector of sorts ensuring the proper functionality of autophagy. For breast cancer cells, and more than likely other forms of cancer, aldehyde dehydrogenase 1 positive cells (ALDH+) promote autophagy.85,92 Thus, SAL works directly against ALDH as well as competing to “thwart” ATG7 autophagy protection. In some instances ATG7 is able to neutralize the dual ability of SAL to kill cancer cells via induction of apoptosis while inhibiting autophagy. Therefore, inhibiting the activity of ATG7 may provide a useful co-therapy with SAL to significantly neutralize the ability of cancer cells to build resistance to SAL-related apoptotic activity.<br />
<br />
Another popular proposed mode of action for SAL against cancerous agents is its interaction with Wnt signaling and its relation to b-catenin. The interaction between Wnt and b-catenin begins when Wnt binds to the frizzled (Fzd) receptor and then that complex binds to lipoprotein receptor-related protein 5 or 6 (LRP5/6) co-receptors, leading to a ternary complex that typically exists at the cell surface.93 The presence of this complex can trigger phosphorylation of either LRP, leading to the recruitment of axin, which then undergoes endocytosis.93 The end result of this entire process is the breakdown or inactivation of the adenomatous polyposis coli (APC)-Axin complex, which is responsible for b-catenin elimination. <br />
<br />
This process is important because b-catenin accumulation leads to its nuclear translocation and can even increase expression of Wnt genes in tumor cells located at the invasive front, which have more interaction with growth factors and cytokines including hepatocyte growth factor.94 This interaction may even create a positive feedback loop of sorts.94,95 The nuclear translocation of b-catenin is thought to play some role in tumor cells experiencing cell-cycle arrest and EMT via the loss of E-cadherin expression, creating some form of cancer stem cell with increased migration/metastasis potential.96,97<br />
<br />
SAL interferes with the Wnt pathway by degrading the LRP6 protein and possibly the LRP5 protein, which obviously reduces the probability that they form a complex with Wnt and are later phosphorylated to activate the complex.85 The importance of LRP proteins is further supported by a level of suppression of breast cancer tumor growth after treatment with an LRP antagonist, Mesoderm development (Mesd).98,99<br />
<br />
However, SAL is not a cure-all when it comes to this supposed pro-cancer pathway, for the Wnt complex is not the only significant interaction that inhibits b-catenin destruction. Expression of platelet-derived growth factors (PDGF) can induce the tyrosine phosphorylation of p68 via c-Abl kinase.100 After phosphorylation p68 can bind b-catenin and inhibit GSK3b-mediated phosphorylation, reducing the probability that b-catenin is eliminated and thereby increasing the probability of its nuclear localization.100 It is also thought that EGF and TGF-b can induce p68 phosphorylation via receptor tyrosine kinases.100,101<br />
<br />
Another possible strategy to deal with LRP protein complex interaction involves the inhibition of vacuolar H+-adenosine triphosphatase (V-ATPase) using an agent like archazolid.102 LRP6 phosphorylation and internalization appears to require V-ATPase. The general role of V-ATPase involves the trafficking of intracellular vesicles and organelles near the plasma membrane. Not surprisingly it also pumps protons, leading to the acidification of vesicles, which promotes endocytosis.103<br />
<br />
Another element in cancer development that has garnered attention for metastasis is the role of carcinoma-associated fibroblasts (CAFs). Tumor invasion is heavily influenced by the tumor microenvironment, especially the types of non-tumor cells present. Recruitment of various types of fibroblasts leads to the production of soluble factors and extracellular matrix (ECM) remodeling, usually through actin changes and cell migration driven by MMPs and by Rho targeted via ubiquitination and SUMO pathways,104 as well as global DNA hypomethylation and recruitment of mesenchymal stromal cells; these changes increase the viability of future invasion.105-108<br />
<br />
The cancer stroma is typically populated by various concentrations of fibroblastic cell groups that make up CAFs and are commonly divided into myofibroblast (MFs) and non-myofibroblast populations (non-MFs).105 MF populations have received much more attention than non-MF populations more than likely due to the diversity of the non-MF populations. It is thought that CAFs differ significantly from normal fibroblasts and myoblasts, but there is little information regarding the extent of these differences.105,109<br />
<br />
CAFs are heavily involved in various pro-cancer pathways like tumor necrosis factor alpha (TNFa), IL-1 and IL-6 signaling,105,110,111 which promote invasion, immune suppression and angiogenesis through the secretion of SDF-1, TGF-b, hepatocyte growth factor (HGF), PDGFs, or vascular endothelial growth factors (VEGFs), principally driven by FSP-1- or PDGF receptor alpha-positive stromal fibroblasts.112-116<br />
<br />
Not surprisingly, if CAFs are thought to play a role in all of these pro-cancer processes, targeting them would prove useful for developing effective therapies. A number of proposals have been made regarding PDGF receptor inhibitors, SUMO inhibitors, Met receptor inhibitors or HGF inhibitors; however, on its face it is difficult to envision how to effectively target the “right” CAFs due to the widely diverse population of cells within the stroma. Interestingly enough some possibly contradictory research could provide some insight. <br />
<br />
As mentioned above, CAFs in the stroma are typically defined as either MF or non-MF, but both of these groups can be activated and/or transformed as well. One particular group to question is activated non-transformed MFs, which express alpha-smooth muscle actin (a-SMA). Some believe that these cells have “anti-cancer” activity instead of “pro-cancer” activity. Support for this mindset comes from studies of early and late stages of pancreatic cancer outcomes, clinical correlation between high a-SMA levels and improved survival on a general level, and studies of resected tumors.117-120 Also there is some question as to whether or not a-SMA positive MF cells increase hyaluronic acid concentration.121 The anti-cancer attributes of MFs appear to stem from aiding both innate and adaptive immune responses via increased fibrosis.117<br />
<br />
Whether these MFs are pro-cancer or anti-cancer elements is important to deduce because anti-cancer therapies tend to kill indiscriminately around the principal tumor and its microenvironment, including a-SMA positive MFs. If these particular MFs are anti-cancer then these drugs are inherently less effective because while they are killing cancerous elements they are also killing anti-cancer elements. <br />
<br />
The final possible important element of CAFs and their role in cancer is their ability to produce exosomes. For example, in breast cancer CAFs secrete CD81+ exosomes that can induce the planar cell polarity (PCP) signaling pathway, targeting Wnt and influencing the polarity of carcinoma cells.122 Internalization of these exosomes also promotes Wnt11-PCP induction via autocrine Frizzled receptor signaling, leading to an increased probability of pulmonary metastases.112 Targeting these exosomes may be a valid therapeutic strategy for reducing cancer potency.<br />
<br />
As mentioned above, directly targeting CAFs via therapies may be difficult, but one potential candidate could be TNF receptor associated factor 6 (TRAF6). At least for squamous cell carcinoma (SCC), TRAF6 plays a role in enabling nuclear factor kappa B (NF-kB) signaling to activate a number of downstream pathways for CAFs, like Akt, Src-family kinases, IKK, IL-1beta and p38, and can regulate the formation of Cdc42-dependent F-actin microspikes.105,107,123 While the exact role of Cdc42 is unclear, TNFa plays a large role in promoting invasion in SCC and TRAF6 plays a significant role in producing sufficient TNFa concentration. The reduction of either one of these pathways significantly reduces cancer invasion, possibly due to the K63 ubiquitin ligase activation associated with TRAF6.124<br />
<br />
The final issue when addressing cancer metastasis is developing a strategy to promote delivery of the anti-cancer agents, increasing the probability of positive action, especially by ensuring the agents act in proximity to their targets. A reason behind this strategy is that some agents, like the rather useful SAL, demonstrate poor aqueous solubility, which restricts their ability to be administered through a more standard IV strategy.125 Not surprisingly, nanoparticles have become the most attractive vessel for transporting anti-cancer agents to cancer sites. <br />
<br />
Overall nanoparticles are advantageous due to their low to non-existent immunogenic activity, which reduces complications and increases their lifespan in the bloodstream; their natural and generally safe biodegradability and biocompatibility; and their general design flexibility for producing the right type of particle for the given job. For example some nanoparticle structures use polypeptides with elastin and hydrophilic properties in an effort to produce immune-tolerant elements. These elements are commonly referenced with the acronym iTEP.126 However, nanoparticles need a form of “navigation” system to reach the appropriate target. The two most common targeting strategies are the use of antibodies or the use of aptamers. <br />
<br />
Antibody targeting in some respects is the “old reliable” while aptamer targeting is somewhat new. Aptamers are composed of either oligonucleotides (DNA or RNA) or peptides that are able to bind a specific target molecule. The major advantages of aptamers are their molecular specificity, their lack of immunogenicity, and their low molecular weights. These latter two advantages, along with ease of production, have further increased the popularity of aptamers versus antibodies regarding therapeutic targeting strategies, including those dealing with cancer. A number of aptamers have already been developed for use in cancer treatment.127,128<br />
<br />
Regardless of navigation methodology, the drug delivery vessel must have the right navigation point. One molecule that has drawn interest for potential cancer targeting ability is hyaluronic acid (HA). HA typically binds to CD44, a receptor commonly over-expressed on numerous types of tumors.129 Furthermore, HA is frequently broken down by tumor cells via hyaluronidases (Hyals), which are widely thought to experience concentration increases in various cancers including prostate, bladder, colorectal, brain and breast, based on the increased levels of low-molecular-weight HA fragments found in these tumors versus normal cells.130-133<br />
<br />
The general process of HA catabolism involves binding to CD44, resulting in its breakdown into smaller elements by Hyal-2 while still on the cell surface, forming what is known as a caveola. This caveola eventually becomes an endosome that fuses with lysosomes, resulting in the further degradation of HA fragments into tetrasaccharides by Hyal-1.134,135 Based on this process one could theorize that a self-assembled nanoparticle comprised of HA would serve as an effective means of drug delivery to the tumor site, both through targeting and through its degradation, a belief that has been supported by early empirical results.129<br />
<br />
In addition to HA, it is widely thought that cluster of differentiation 133 (CD133) is a positive stem cell marker for both normal and cancerous tissue and is thought to be a critical agent in identifying CSCs. For example it is common for CD133+ cancer cells to form mammospheres that can initiate tumor growth in non-tumor cells. Due to the importance of CD133 expressing cells, an RNA aptamer (A15) has already been developed that binds to CD133 for use as a “tracking” marker of sorts and some groups have already explored the idea of using A15 as a drug delivery targeting agent.136<br />
<br />
However, it must be noted that while CD133 expressing cells appear to be the most important in the CSC pool, tumors do produce CSCs that express other surface receptors while not expressing CD133. For example in osteosarcoma, CD133, CD117 and Stro-1 are all considered to be legitimate CSC markers.48 Additionally there is some evidence to support the idea that CSCs can convert to non-CSCs and back again.137 Therefore, while targeting CD133 is clearly an appropriate strategy for treating CSCs, it may not be the only targeting strategy necessary to eliminate CSCs.<br />
<br />
The type of nanoparticle is only part of the issue involving drug delivery. Another important element is whether or not the principal drug should have other elements encapsulated with it to increase efficacy. For example, SAL delivery involves a charged hydrophobic drug trapped inside the hydrophobic core of a micelle-like nanoparticle; these interacting charges can increase destabilization potential, leading to ineffective drug application, which has been seen in past studies.125,126 Therefore, this charge interaction needs to be neutralized. <br />
<br />
An early candidate for aiding SAL stability was N,N-dimethyloctadecylamine (DMOA) due to its similarly hydrophobic nature yet positive charge, which is obviously counter to the negative charge of SAL. Unfortunately DMOA proved too toxic for this role.126 Fortunately it has a less toxic analogue in N,N-dimethylhexylamine. However, this reduced toxicity comes at the price of reduced hydrophobic strength due to a shorter hydrocarbon chain.126 Thus, researchers have added alpha-tocopherol as a second hydrophobic agent to enhance internal hydrophobicity and increase stability, which appears to work well.126<br />
<br />
In the end, while metastasis is still a process with a number of question marks associated with its occurrence and action, there do appear to be certain elements with important roles in its successful occurrence and function regardless of those question marks. First, it is quite clear that any therapy will have to involve some form of drug cocktail to cover multiple metastasis pathways, including treatment of the principal tumor via either drugs or surgery. With this strategy in mind one interesting combination would involve the use of SAL in HA nanoparticles, some form of CXCR4 inhibitor (something like plerixafor should be sufficient when not applied by itself), and a standard chemotherapy drug like Docetaxel.<br />
<br />
Another possible addition to this cocktail could be an anti-angiogenesis drug. In recent treatment history anti-angiogenesis drugs have had a poor record as anti-cancer agents despite the sound theoretical reasoning that reducing growth resources should reduce cancer growth potential. The failure of anti-angiogenesis drugs more than likely occurred because the induced hypoxic environment increased rates of metastasis. However, this increased metastasis may become a benefit when the anti-angiogenesis drug is used in combination with a CXCR4 inhibitor and/or SAL, which could speed cancer death by eliminating the metastatic elements rather than attempting to eliminate the principal tumor. Of course this combination and the possible positive outcome are only theoretical; without appropriate empirical evidence the addition of an anti-angiogenesis agent may not provide a benefit, similar to how such agents function currently. <br />
<br />
While promising gains have been made in recent years in immunotherapy-based techniques to combat cancer, it is important to acknowledge that overall there is no magic bullet. However, the above potential cocktail should be able to cover both primary tumor elimination through multiple destruction pathways and metastasis neutralization via elimination of CSCs, as well as the major pathways that drive the preparation and activation of metastasis itself. This method alone or in combination with a proven effective immunotherapy technique could provide a legitimate anti-cancer therapy for various stages of cancer development. <br />
<br />
<br />
<br />
Citations – <br />
<br />
1. Deng, X, et Al. “Posttranslational modifications of CXCR4: implications in cancer metastasis.” Receptors and Clinical Investigation. 2014. 1-6:e63. <br />
<br />
2. Zlotnik, A. “Chemokines and cancer.” Int. J. Cancer. 2006. 119:2026-2029.<br />
<br />
3. Muller, A, et Al. “Involvement of chemokine receptors in breast cancer metastasis.” Nature. 2001. 410:50-56. <br />
<br />
4. Furusato, B, et Al. “CXCR4 and cancer.” Pathol. Int. 2010. 60:497-505.<br />
<br />
5. Liekens, S, Schols, D, and Hatse, S. “CXCL12-CXCR4 axis in angiogenesis, metastasis and stem cell mobilization.” Curr. Pharm. Des. 2010. 16:3903-3920. <br />
<br />
6. Oh, Y, et Al. "Hypoxia induces CXCR4 expression and biological activity in gastric cancer cells through activation of hypoxia-inducible factor-1alpha." Oncol. Rep. 2012. 28:2239-2246.<br />
<br />
7. Lee, H, et Al. “CXC chemokines and chemokine receptors in gastric cancer: from basic findings towards therapeutic targeting.” World. J. Gastroenterol. 2014. 20(7):1681-1693.<br />
<br />
8. Rodriguez-F, J, et Al. "Blocking HIV-1 infection via CCR5 and CXCR4 receptors by acting in trans on the CCR2 chemokine receptor." EMBO. J. 2004. 23:66-76.<br />
<br />
9. Sohy, D, et Al. “Hetero-oligomerization of CCR2, CCR5, and CXCR4 and the protean effects of “selective” antagonists.” J. Biol Chem. 2009. 284:31270-31279.<br />
<br />
10. Levoye, A, et Al. “CXCR7 heterodimerizes with CXCR4 and regulates CXCL12-mediated G protein signaling.” Blood. 2009. 113:6085-6093.<br />
<br />
11. Basmaciogullari, S, et Al. "Specific interaction of CXCR4 with CD4 and CD8alpha: functional analysis of the CD4/CXCR4 interaction in the context of HIV-1 envelope glycoprotein-mediated membrane fusion." Virology. 2006. 353:52-67.<br />
<br />
12. Woerner, B, et Al. “Widespread CXCR4 activation in astrocytomas revealed by phospho-CXCR4-specific antibodies.” Cancer Res. 2005. 65:11392-11399.<br />
<br />
13. Marchese, A, Benovic, J. "Agonist-promoted ubiquitination of the G protein-coupled receptor CXCR4 mediates lysosomal sorting." J. Biol. Chem. 2001. 276:45509-45512.<br />
<br />
14. Wang, J, et Al. “Dimerization of CXCR4 in living malignant cells: control of cell migration by a synthetic peptide that reduces homologous CXCR4 interactions.” Mol. Cancer. Ther. 2006. 5(10):2474-2483.<br />
<br />
15. Veldkamp, C, et Al. "Recognition of a CXCR4 sulfotyrosine by the chemokine stromal cell-derived factor-1 alpha (SDF-1alpha/CXCL12)." J. Mol. Biol. 2006. 359:1400-1409.<br />
<br />
16. Xu, J, et Al. "Tyrosylprotein sulfotransferase-1 and tyrosine sulfation of chemokine receptor 4 are induced by Epstein-Barr virus encoded latent membrane protein 1 and associated with the metastatic potential of human nasopharyngeal carcinoma." PLoS ONE. 2013. 8(3):e56114. <br />
<br />
17. Hu, J, et Al. “The expression of functional chemokine receptor CXCR4 is associated with the metastatic potential of human nasopharyngeal carcinoma.” Clin. Cancer Res. 2005. 11:4658-4665.<br />
<br />
18. Fanelli, M, et Al. “The influence of transforming growth factor-alpha, cyclooxygenase-2, matrix metalloproteinase (MMP)-7, MMP-9, and CXCR4 proteins involved in epithelial-mesenchymal transition on overall survival of patients with gastric cancer.” Histopathology. 2012. 61:153-161.<br />
<br />
19. Hashimoto, I, et Al. “Blocking on the CXCR4/mTOR signalling pathway induces the anti-metastatic properties and autophagic cell death in peritoneal disseminated gastric cancer cells.” Eur. J. Cancer. 2008. 44:1022-1029.<br />
<br />
20. Cui, K, et Al. “The CXCR4-CXCL12 pathway facilitates the progression of pancreatic cancer via induction of angiogenesis and lymphagiogenesis.” Journal of Surgical Research. 2011. 171(1):143-150.<br />
<br />
21. Kaplan, R, et Al. “VEGFR1-positive haematopoietic bone marrow progenitors initiate the pre-metastatic niche.” Nature. 2005. 438(7069):820-827.<br />
<br />
22. Taichman, R, et Al. “Use of the stromal cell-derived factor-1/CXCR4 pathway in prostate cancer metastasis to bone.” Cancer Research. 2002. 62(6):1832-1837.<br />
<br />
23. Wang, Q, et Al. “SUMO-specific protease 1 promotes prostate cancer progression and metastasis.” Oncogene. 2013. 32(19):2493-2498.<br />
<br />
24. McCawley, L and Matrisian, L. “Matrix metalloproteinases: multifunctional contributors to tumor progression.” Mol. Med. Today. 2000. 6: 149–156.<br />
<br />
25. Sternlicht, M and Werb, Z. “How matrix metalloproteinases regulate cell behavior.” Annu. Rev. Cell Dev. Biol. 2001. 17: 463–516. <br />
<br />
26. Egeblad, M and Werb, Z. “New functions for the matrix metalloproteinases in cancer progression.” Nat. Rev. Cancer. 2002. 2: 161–174.<br />
<br />
27. Leeman, M, Curran, S, and Murray, G. “New insights into the roles of matrix metalloproteinases in colorectal cancer development and progression.” J. Pathol. 2003. 201:528-534.<br />
<br />
28. Yang, B, et Al. “Expression and prognostic value of matrix metalloproteinase-7 in colorectal cancer.” Asian Pacific J. Cancer Prev. 2012. 13:1049-1052.<br />
<br />
29. Woessner, J, Jr. and Taplin, C. “Purification and properties of a small latent matrix metalloproteinase of the rat uterus.” J. Biol. Chem. 1988. 263:16918-16925.<br />
<br />
30. Ito, Y, et Al. “Inverse relationships between the expression of MMP-7 and MMP-11 and predictors of poor prognosis of papillary thyroid carcinoma.” Pathology. 2006. 38:421-425.<br />
<br />
31. de Vicente, J, et Al. "Expression of MMP-7 and MT1-MMP in oral squamous cell carcinoma as predictive indicator for tumor invasion and prognosis." J. Oral. Pathol. Med. 2007. 36:415-424.<br />
<br />
32. Liu, H, et Al. "Predictive value of MMP-7 expression for response to chemotherapy and survival in patients with non-small cell lung cancer." Cancer Sci. 2008. 99:2185-2192.<br />
<br />
33. Koskensalo, S et Al. “MMP-7 overexpression is an independent prognostic marker in gastric cancer.” Tumour. Biol. 2010. 31:149-155.<br />
<br />
34. Davies, G, Jiang, W, and Mason, M. “Matrilysin mediates extracellular cleavage of E-cadherin from prostate cancer cells: a key mechanism in hepatocyte growth factor/scatter factor-induced cell-cell dissociation and in vitro invasion.” Clin. Cancer Res. 2001. 7:3289-3297.<br />
<br />
35. Mitsiades, N, et Al. “Matrix metalloproteinase-7-mediated cleavage of Fas ligand protects tumor cells from chemotherapeutic drug cytotoxicity.” Cancer Res. 2001. 61:577-581.<br />
<br />
36. Li, Q, et Al. “Matrilysin shedding of syndecan-1 regulates chemokine mobilization and transepithelial efflux of neutrophils in acute lung injury.” Cell. 2002. 111:635-646.<br />
<br />
37. Noe, V, et Al. “Release of an invasion promoter E-cadherin fragment by matrilysin and stromelysin-1.” J. Cell. Sci. 2001. 114:111-118.<br />
<br />
38. Lynch, C, et Al. “MMP-7 promotes prostate cancer-induced osteolysis via the solubilization of RANKL.” Cancer Cell. 2005. 7:485-496<br />
<br />
39. Imai, K, et Al. “Degradation of decorin by matrix metalloproteinases: identification of the cleavage sites, kinetic analyses and transforming growth factor-beta1 release.” Biochem. J. 1997. 322: 809–814.<br />
<br />
40. Vandooren, J, Van den Steen, P, and Opdenakker, G. “Biochemistry and molecular biology of gelatinase B or matrix metalloproteinase-9 (MMP-9): the next decade.” Crit Rev Biochem Mol Biol. 2013. 48: 222–272.<br />
<br />
41. Radisky, D, et Al. “Rac1b and reactive oxygen species mediate MMP-3-induced EMT and genomic instability.” Nature. 2005. 436:123-127.<br />
<br />
42. Kheradmand, F, et Al. "Role of Rac1 and oxygen radicals in collagenase-1 expression induced by cell shape change." Science. 1998. 280:898-902.<br />
<br />
43. Khamis, Z, et Al. “Evidence for a pro-apoptotic role of matrix metalloproteinase-26 in human prostate cancer cells and tissues.” Journal of Cancer. 2016. 7:80-87.<br />
<br />
44. Martin, M, and Matrisian, L. “The other side of MMPs: protective roles in tumor progression.” Cancer metastasis Rev. 2007. 26(3-4):717-724.<br />
<br />
45. McCawley, L, et Al. “A protective role for matrix metalloproteinase-3 in squamous cell carcinoma.” Cancer Res. 2004. 64(19):6965–6972.<br />
<br />
46. Kerkelä, E, et Al. “Metalloelastase (MMP-12) expression by tumour cells in squamous cell carcinoma of the vulva correlates with invasiveness, while that by macrophages predicts better outcome.” J. Pathol. 2002. 198(2):258–269.<br />
<br />
47. Vilen, Suvi-Tuuli, et Al. “Fluctuating roles of matrix metalloproteinase-9 in oral squamous cell carcinoma.”<br />
<br />
48. Balkwill, F. “The chemokine system and cancer.” J. Pathol. 2012. 226:148-157.<br />
<br />
49. Burger, J. “Chemokines and chemokine receptors in chronic lymphocytic leukemia (CLL): from understanding the basics towards therapeutic targeting.” Semin Cancer Biol. 2010. 20:424-430.<br />
<br />
50. Chen, G, et Al. "Inhibition of chemokine (CXC motif) ligand 12/chemokine (CXC motif) receptor 4 axis (CXCL12/CXCR4)-mediated cell migration by targeting mammalian target of rapamycin (mTOR) pathway in human gastric carcinoma cells." J. Biol. Chem. 2012. 287:12132-12141.<br />
<br />
51. Luker, K, and Luker, G. “Functions of CXCL12 and CXCR4 in breast cancer.” Cancer Lett. 2008. 238:30-41.<br />
<br />
52. Lee, H, et Al. “Chemokine (C-X-C motif) ligand 12 is associated with gallbladder carcinoma progression and is a novel independent poor prognostic factor.” Clin Cancer Res. 2012. 18:3270-3280. <br />
<br />
53. Choi, Y, et Al. “CXCR4, but not CXCR7, discriminates metastatic behavior in non-small cell lung cancer cells.” Mol. Cancer Res. 2014. 12(1):38-47.<br />
<br />
54. Carbajal, K, et Al. "Migration of engrafted neural stem cells is mediated by CXCL12 signaling through CXCR4 in a viral model of multiple sclerosis." PNAS. 2010. 107(24):11068-11073.<br />
<br />
55. Sanchez-Alcaniz, J, et Al. “CXCR7 controls neuronal migration by regulating chemokine responsiveness.” Neuron. 2011. 69(1):77-90.<br />
<br />
56. Tawadros, T, et Al. “Release of macrophage migration inhibitory factor by neuroendocrine-differentiated LNCaP cells sustains the proliferation and survival of prostate cancer cells.” Endocrine-Related Cancer. 2013. 20:137-149.<br />
<br />
57. Calandra, T, and Roger, T “Macrophage migration inhibitory factor: a regulator of innate immunity.” Nature Reviews Immunology. 2003. 3:791–800.<br />
<br />
58. Bucala, R, and Donnelly, S. “Macrophage migration inhibitory factor: a probable link between inflammation and cancer.” Immunity. 2007. 26:281–285.<br />
<br />
59. Meyer-Siegler, K, Leifheit, E, and Vera, P. “Inhibition of macrophage migration inhibitory factor decreases proliferation and cytokine expression in bladder cancer cells.” BMC Cancer. 2004. 4:34.<br />
<br />
60. Muramaki, M, et Al. “Clinical utility of serum macrophage migration inhibitory factor in men with prostate cancer as a novel biomarker of detection and disease progression.” Oncology Reports. 2006. 15:253–257.<br />
<br />
61. Schwartz, V, et Al. “A functional heteromeric MIF receptor formed by CD74 and CXCR4.” FEBS Letters. 2009. 583:2749–2757.<br />
<br />
62. Keller, M, et Al. “Active caspase-1 is a regulator of unconventional protein secretion.” Cell. 2008. 132:818–831.<br />
<br />
63. Merk, M, et Al. “The Golgi-associated protein p115 mediates the secretion of macrophage migration inhibitory factor.” Journal of Immunology. 2009. 182:6896–6906.<br />
<br />
64. MIF induces cell proliferation via sustained activation of ERK1/2 MAPKs and promotes cell survival through the inhibition of p53 and the activation of PI3K/AKT signaling.<br />
<br />
65. Teicher, B, and Fricker, S. “CXCL12 (SDF-1)/CXCR4 pathway in cancer.” Clin. Cancer Res. 2010. 16(11):2927-2931.<br />
<br />
66. Ratajczak, M, et Al. "The pleiotropic effects of the SDF-1 CXCR4 axis in organogenesis, regeneration and tumorigenesis." Leukemia. 2006. 20:1915-1924.<br />
<br />
67. Bhardwaj, A, et Al. "CXCL12/CXCR4 signaling counteracts docetaxel-induced microtubule stabilization via p21-activated kinase 4-dependent activation of LIM domain kinase 1." Oncotarget. 2014. 5(22):11490-11500.<br />
<br />
68. Yasumoto, K, et Al. “Role of the CXCL12/CXCR4 axis in peritoneal carcinomatosis of gastric cancer.” Cancer Res. 2006. 66:2181-2187.<br />
<br />
69. Burger, J, and Peled, A. “CXCR4 antagonists: targeting the microenvironment in leukemia and other cancers.” Leukemia. 2009. 23:43–52.<br />
<br />
70. Rosenkilde, M, et Al. “Molecular mechanism of AMD3100 antagonism in the CXCR4 receptor.” J. Biol. Chem. 2004. 279:3033–3041.<br />
<br />
71. Hatse, S, et Al. “Chemokine receptor inhibition by AMD3100 is strictly confined to CXCR4.” FEBS Lett. 2002. 527:255–62.<br />
<br />
72. Fricker, S, et Al. “Characterization of the molecular pharmacology of the G-protein coupled chemokine receptor, CXCR4.” Biochem Pharmacol. 2006. 72:588–96.<br />
<br />
73. Flomenberg, N, et Al. “The use of AMD3100 plus G-CSF for autologous hematopoietic progenitor cell mobilization is superior to G-CSF alone.” Blood. 2005. 106:1867-1874.<br />
<br />
74. DiPersio, J, et Al. “Plerixafor and G-CSF versus placebo and G-CSF to mobilize hematopoietic stem cells for autologous stem cell transplantation in patients with multiple myeloma.” Blood. 2009. 113:5720-5726.<br />
<br />
75. Jaganmohan, R., et Al. “Glucose starvation-mediated inhibition of salinomycin induced autophagy amplifies cancer cell specific cell death.” 2015. Oncotarget. 6(12):10134-10146.<br />
<br />
76. Jangamreddy, J, et Al. “Salinomycin induces activation of autophagy, mitophagy and affects mitochondrial polarity: differences between primary and cancer cells.” Biochimica et biophysica acta. 2013. 1833(9):2057-2069.<br />
<br />
77. Ghavami, S, et Al. “Autophagy and apoptosis dysfunction in neurodegenerative disorders.” Progress in neurobiology. 2014. 112:24-49.<br />
<br />
78. Boehmerle, W, and Endres, M. “Salinomycin induces calpain and cytochrome c-mediated neuronal cell death.” Cell death & disease. 2011. 2:e168.<br />
<br />
79. Boehmerle, W, et Al. “Specific targeting of neurotoxic side effects and pharmacological profile of the novel cancer stem cell drug salinomycin in mice.” Journal of molecular medicine. 2014. 92(8):889-900.<br />
<br />
80. Oak, P, et Al. “Combinatorial treatment of mammospheres with trastuzumab and salinomycin efficiently targets HER2-positive cancer cells and cancer stem cells.” International journal of cancer. 2012. 131(12):2808-2819.<br />
<br />
81. Parajuli, B, et Al. “Salinomycin inhibits Akt/NF-kappaB and induces apoptosis in cisplatin resistant ovarian cancer cells.” Cancer epidemiology. 2013. 37(4):512-517. <br />
<br />
82. Zhang, G, et Al. “Combination of salinomycin and gemcitabine eliminates pancreatic cancer cells.” Cancer letters. 2011. 313(2):137-144.<br />
<br />
83. Jangamreddy, J and Los, M. “Mitoptosis, a novel mitochondrial death mechanism leading predominantly to activation of autophagy.” Hepatitis monthly. 2012. 12(8):e6159.<br />
<br />
84. Xiao, Z, et Al. “Metformin and salinomycin as the best combination for the eradication of NSCLC monolayer cells and their alveospheres (cancer stem cells) irrespective of EGFR, KRAS, EML4/ALK and LKB1 status.” Oncotarget. 2014. 5(24):12877-12891.<br />
<br />
85. Yue, W, et Al. “Inhibition of the autophagic flux by salinomycin in breast cancer stem-like/progenitor cells interferes with their maintenance.” Autophagy. 2013. 9(5):1-16.<br />
<br />
86. Fuchs, D, et Al. “Salinomycin overcomes ABC transporter-mediated multidrug and apoptosis resistance in human leukemia stem cell-like KG-1a cells.” Biochem Biophys Res Commun. 2010. 394:1098-104.<br />
<br />
87. Riccioni, R, et Al. “The cancer stem cell selective inhibitor salinomycin is a p-glycoprotein inhibitor.” Blood Cells Mol Dis. 2010. 45:86-92.<br />
<br />
88. Chen, G, et Al. “Metformin inhibits growth of thyroid carcinoma cells, suppresses self-renewal of derived cancer stem cells, and potentiates the effect of chemotherapeutic agents.” Journal of Clinical Endocrinology & Metabolism. 2012. 97(4):E510-E520.<br />
<br />
89. Brown, K, et Al. “Metformin inhibits aromatase expression in human breast adipose stromal cells via stimulation of AMP-activated protein kinase.” Breast cancer research and treatment. 2010. 123(2):591-596. <br />
<br />
90. Isakovic, A, et Al. “Dual antiglioma action of metformin: cell cycle arrest and mitochondria-dependent apoptosis.” Cellular and molecular life sciences. 2007. 64(10):1290-1302.<br />
<br />
91. Mizushima, N. “Autophagy: process and function.” Genes Dev. 2007. 21:2861-73;<br />
<br />
92. Mortensen, M, et Al. “The autophagy protein Atg7 is essential for hematopoietic stem cell maintenance.” J Exp Med. 2011. 208:455-67.<br />
<br />
93. Lu, D, et Al. “Salinomycin inhibits Wnt signaling and selectively induces apoptosis in chronic lymphocytic leukemia cells.” PNAS. 2011. 108(32):13253-13257.<br />
<br />
94. Tamai, K, et Al. “A mechanism for Wnt coreceptor activation.” Mol Cell. 2004. 13:149–156.<br />
<br />
95. Zeng, X, et Al. “A dual-kinase mechanism for Wnt co-receptor phosphorylation and activation.” Nature. 2005. 438:873–877.<br />
<br />
96. Fodde, R, and Brabletz, T. "Wnt/b-catenin signaling in cancer stemness and malignant behavior." Current Opinion in Cell Biology. 2007. 19:150-158.<br />
<br />
97. Jung, A, et Al. “The invasion front of human colorectal adenocarcinomas shows co-localization of nuclear b-catenin, cyclin D1, and p16INK4A and is a region of low proliferation.” Am J Pathol. 2001. 159:1613-1617.<br />
<br />
98. Liu, C, et Al. “LRP6 overexpression defines a class of breast cancer subtype and is a target for therapy.” PNAS. 2010. 107:5136–5141.<br />
<br />
99. Zhang, J, et Al. “Wnt signaling activation and mammary gland hyperplasia in MMTV-LRP6 transgenic mice: Implication for breast cancer tumorigenesis.” Oncogene. 2010. 29:539–549.<br />
<br />
100. Yang, L, Lin, C, and Liu, Z. “P68 RNA helicase mediates PDGF-induced epithelial mesenchymal transition by displacing axin from b-catenin.” Cell. 2006. 127:139-155.<br />
<br />
101. He, X. “Unwinding a path to nuclear b-catenin.” Cell. 2006. 127:40-42.<br />
<br />
102. Schempp, C, et Al. “V-ATPase inhibition regulates anoikis resistance and metastasis of cancer cells.” Mol. Cancer. Ther. 2014. 13(4):926-937.<br />
<br />
103. Marshansky, V, and Futai, M. “The V-type H+ ATPase in vesicular trafficking: targeting regulation and function.” Curr. Opin. Cell Biol. 2008. 20:415-426.<br />
<br />
104. Ridley, A. “Rho GTPases and actin dynamics in membrane protrusions and vesicle trafficking.” Trends Cell Biol. 2006. 16(10):522–529.<br />
<br />
105. Chaudhry, S, et Al. “Autocrine IL-1beta-TRAF6 signalling promotes squamous cell carcinoma invasion through paracrine TNFalpha signaling to carcinoma-associated fibroblasts.” Oncogene. 2013. 32(6):747-758.<br />
<br />
106. Gaggioli, C, et Al. "Fibroblast-led collective invasion of carcinoma cells with differing roles for RhoGTPases in leading and following cells." Nat. Cell Biol. 2007. 9(12):1392-1400.<br />
<br />
107. Wang, K, et Al. "TRAF6 activation of PI 3-kinase-dependent cytoskeletal changes is cooperative with Ras and is mediated by an interaction with cytoplasmic Src." J. Cell Sci. 2006. 119(Pt 8):1579-1591.<br />
<br />
108. Nystrom, M, et Al. “Development of a quantitative method to analyse tumour cell invasion in organotypic culture.” J Pathol. 2005. 205(4):468–475.<br />
<br />
109. Kalluri, R, and Zeisberg, M. “Fibroblasts in cancer.” Nat Rev Cancer. 2006. 6(5):392–401.<br />
<br />
110. Erez, N, et Al. “Cancer-Associated Fibroblasts Are Activated in Incipient Neoplasia to Orchestrate Tumor-Promoting Inflammation in an NF-kappa B-Dependent Manner.” Cancer Cell. 2010. 17(2):135–147.<br />
<br />
111. Stuelten, C, et Al. “Breast cancer cells induce stromal fibroblasts to express MMP-9 via secretion of TNF-alpha and TGF-beta.” J Cell Sci. 2005. 118(Pt 10):2143–2153.<br />
<br />
112. Polanska, U, and Orimo, A. “Carcinoma-associated fibroblasts: non-neoplastic tumor promoting mesenchymal cells.” J. Cell. Physiol. 2013. 228:1651-1657.<br />
<br />
113. Guo, X, et Al. “Stromal fibroblasts activated by tumor cells promote angiogenesis in mouse gastric cancer.” J Biol Chem. 2008. 283:19864–19871.<br />
<br />
114. Hanahan, D, and Coussens, L. “Accessories to the crime: Functions of cells recruited to the tumor microenvironment.” Cancer Cell. 2012. 21:309–322.<br />
<br />
115. Polanska, U, Mellody, K, and Orimo, A. “Tumour-promoting stromal myofibroblasts in human carcinomas.” Cancer Drug Discov Dev (Springer Chapter). 2010. 16:325–349.<br />
<br />
116. Togo, S, et Al. “Carcinoma-associated fibroblasts are a promising therapeutic target.” Cancers. 2013. 5:149–169.<br />
<br />
117. Ozdemir, B, et Al. “Depletion of carcinoma-associated fibroblasts and fibrosis induces immunosuppression and accelerates pancreas cancer with reduced survival.” Cancer Cell. 2014. 25:1-16.<br />
<br />
118. Armstrong, T, et Al. “Type I collagen promotes the malignant phenotype of pancreatic ductal adenocarcinoma.” Clin. Cancer Res. 2004. 10:7427–7437.<br />
<br />
119. Wang, W, et Al. “Intratumoral a-SMA enhances the prognostic potency of CD34 associated with maintenance of microvessel integrity in hepatocellular carcinoma and pancreatic cancer.” PLoS ONE. 2013. 8:e71189.<br />
<br />
120. Omary, M, et Al. “The pancreatic stellate cell: a star on the rise in pancreatic diseases.” J. Clin. Invest. 117:50–59.<br />
<br />
121. Jacobetz, M, et Al. “Hyaluronan impairs vascular function and drug delivery in a mouse model of pancreatic cancer.” Gut. 2013. 62:112–120.<br />
<br />
122. Luga, V, et Al. “Exosomes mediate stromal mobilization of autocrine Wnt-PCP signaling in breast cancer cell migration.” Cell. 2012. 151:1542–1556.<br />
<br />
123. Yang, W, et Al. “The E3 ligase TRAF6 regulates Akt ubiquitination and activation.” Science. 2009. 325(5944):1134–1138.<br />
<br />
124. Windheim, M, et Al. “Interleukin-1 (IL-1) induces the Lys63-linked polyubiquitination of IL-1 receptor-associated kinase 1 to facilitate NEMO binding and the activation of IkappaBalpha kinase.” Mol Cell Biol. 2008. 28(5):1783–1791.<br />
<br />
125. Zhang, Y, et Al. “The eradication of breast cancer and cancer stem cells using octreotide modified paclitaxel active targeting micelles and salinomycin passive targeting micelles.” Biomaterials. 2012. 33(2):679–691.<br />
<br />
126. Zhao, P, et Al. “iTEP nanoparticle-delivered salinomycin displays an enhanced toxicity to cancer stem cells in orthotopic breasts tumors.” Mol. Pharmaceutics. 2014. 11:2703-2712.<br />
<br />
127. Barbas, A, et Al. “Aptamer applications for targeted cancer therapy.” Future Oncol. 2010. 6(7):1117–1126.<br />
<br />
128. Ni, M. “Poly(lactic-co-glycolic acid) nanoparticles conjugated with CD133 aptamers for targeted salinomycin delivery to CD133+ osteosarcoma cancer stem cells.” International Journal of Nanomedicine. 2015. 10:2537-2554.<br />
<br />
129. Choi, K, et Al. “Smart nanocarrier based on PEGylated hyaluronic acid for cancer therapy.” American Chemical Society Nano. 2011. 5(11):8591-8599.<br />
<br />
130. Lokeshwar, V, et Al. “Hyaluronidase in prostate cancer: a tumor promoter and suppressor.” Cancer Res. 2005. 65:7782–7789.<br />
<br />
131. Chao, K, Muthukumar, L, and Herzberg, O. “Structure of Human Hyaluronidase-1, a Hyaluronan Hydrolyzing Enzyme Involved in Tumor Growth and Angiogenesis.” Biochemistry. 2007. 46:6911–6920.<br />
<br />
132. Franzmann, E, et Al. “Expression of Tumor Markers Hyaluronic Acid and Hyaluronidase (Hyal1) in Head and Neck Tumors.” Int. J. Cancer. 2003. 106:438–445.<br />
<br />
133. Bourguignon, L, et Al. "CD44 interaction with Na+-H+ exchanger (NHE1) creates acidic microenvironments leading to hyaluronidase-2 and cathepsin B activation and breast tumor cell invasion." J. Biol. Chem. 2004. 279:26991–27007.<br />
<br />
134. Stern, R. “Hyaluronidases in Cancer Biology.” Semin. Cancer Biol. 2008, 18:275–280.<br />
<br />
135. Coradini, D, and Perbellini, A. "Hyaluronan: A Suitable Carrier for an Histone Deacetylase Inhibitor in the Treatment of Human Solid Tumors." Cancer Ther. 2004. 2:201–216.<br />
<br />
136. Shigdar, S, et Al. “RNA aptamers targeting cancer stem cell marker CD133.” Cancer Lett. 2013. 330(1):84–95.<br />
<br />
137. Adhikari, A, et Al. "CD117 and Stro-1 identify osteosarcoma tumor-initiating cells associated with metastasis and drug resistance." Cancer Res. 2010. 70(11):4602–4612.<br />

Does Criticism of Oscar Nominations Ring Hollow?<br />
With the conclusion of the 2016 Oscar nominations and the lack of any black individuals in any of the major categories, the black populace has commenced with its criticism of such an outcome via social media tags like #OscarsSoWhite or by calling the Oscars “the White BET Awards”. Unfortunately almost all of this criticism represents a significant problem in society, not because one may disagree with the nomination results, but instead because the criticism is sound-bite in nature without any substance behind it. <br />
<br />
Overall there are two principal motivations behind criticizing these nominations. First, certain individuals feel that the Oscars missed an opportunity to promote diversity by not having black nominees (or other minorities, but it is unlikely that most in the black community care about the nomination of a non-black minority). The underlying motivation of this argument is that every year the Oscars should nominate at least one black individual, i.e. basically there should be a “diversity” quota. <br />
<br />
Congratulations to anyone with this belief, for you are a racist. You believe that someone should receive different treatment based on the color of that individual’s skin. Oscar nominations do not fall under the umbrella of “affirmative action” because acting, directing, or writing performances are not significantly hindered by whatever racism may have affected an individual’s upbringing or the acquisition of the role; once the role is secured, those issues become irrelevant to the performance itself. To argue otherwise would be akin to saying Wrestler A in the U.S. Olympic Trials should start with a 2-point advantage over Wrestler B due to Wrestler A’s background. College admissions and employment opportunities are much more complex and need deeper analysis; an Oscar nomination is a direct merit-based judgment.<br />
<br />
So now that the ridiculous idea that the Oscars need diversity for the sake of diversity has been revealed for what it is, a race-based quota system, it is time to address the second and legitimately arguable motivation for criticism: the idea of the snub. Unfortunately for individuals supporting a particular person or movie for an Oscar, almost every year there are more deserving candidates than there are nominations; therefore, a number of deserving individuals will clearly not be nominated. Where is the criticism from the black community about the lack of nominations for Steve Carell (The Big Short), Michael Keaton (Spotlight) or Johnny Depp (Black Mass), or does their lack of nominations not matter because they are white? <br />
<br />
If one feels that Idris Elba (Beasts of No Nation), Will Smith (Concussion), Ryan Coogler (director of Creed), etc. should have been nominated, then one must articulate the rationale for why a nomination was deserved. One must then go further and discuss why nominee X did not deserve his/her nomination, such that it would have been appropriate for one of the above individuals to have taken nominee X’s place. Failing to conduct such an analysis simply places one in the first criticism camp, i.e. you are a racist, because you have failed to produce a rational argument against the decision of the nomination committee beyond “there should be a black person simply for faux diversity reasons”. <br />
<br />
Now some individuals could contend that the nominations are flawed because, out of all of the possibly worthy contenders, there were no black nominees. While on its face this argument may seem convincing, the chief problem is that it fails to take into consideration the demographic participation rate among both overall contenders and those who would be deemed worthy contenders. The black participation rate in the movie industry is rather low on both counts; for example, among the worthy candidates for Best Actor (most likely the strongest category for a black nominee), the sheer math puts the worthy black participation rate at no more than 20%, which significantly reduces the probability of any one of them receiving a nomination.<br />
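<br />
As a rough back-of-the-envelope illustration of that probability claim (the 20% figure is the estimate above; the five slots and the assumption that worthy candidates are equally likely to be picked are simplifications for illustration only):<br />
<pre>
# Rough illustration only: assumes nominees are drawn uniformly at random
# from the pool of "worthy" candidates, which is a deliberate simplification.
worthy_black_share = 0.20   # share of worthy candidates who are black (estimate above)
slots = 5                   # Best Actor nomination slots

# Probability that none of the five slots goes to a black candidate
p_all_white = (1 - worthy_black_share) ** slots
print(f"P(no black nominee in this category) ~ {p_all_white:.0%}")  # ~33%
</pre>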
<br />
The above argument would have much more merit if the black participation rate were higher. For example, if the NBA All-Star team selection were carried out by a committee and both teams were selected without any black participants, then one would have a much stronger argument for racial motivations behind the selection because of both the number of available black participants and their overall quality. However, in the movie business the participation rate hardly matches that seen in the NBA.<br />
<br />
Some could counter-argue that the low participation rate itself implies racism in the movie business, but such an argument is difficult to make because of simple demographics. A number of people, including many in the black populace, seem to forget that black individuals make up only approximately 13% of the U.S. population. So it is not unreasonable for a number of fields to have similar participation rates, because the overall population is small to begin with, as is the pool of individuals with sufficient talent to participate and even excel in these fields (a pool that is a similar percentage across all demographics, not just blacks). <br />
<br />
Overall, if one wants to criticize the Oscar nominations, then do so using rational analysis regarding why candidate X should have been nominated over nominee Y. That way the criticism will at least have a measurable element by which to judge its validity. Otherwise, if one forgoes this type of rational analysis, one is simply acknowledging that he/she is a racist.<br />
<br />
Should the United States adopt a different system from the current Opt-In system for organ procurement?<br />
One of the more widely acknowledged problems in healthcare, one that receives some attention yet sees little actually done about it, is the lack of available organs for transplant. Based on recent data at least 114,000 people in the United States are waiting for an organ transplant that would significantly increase their remaining lifespan.1,2 Unfortunately most of those waiting will die before receiving that desired organ due to the dramatic gap between the available supply of organs for transplant and the number of people waiting for one. <br />
<br />
To understand and appreciate the extent of this gap: according to the Scientific Registry of Transplant Recipients (SRTR), between 2000 and 2009 the annual number of deceased organ donors (the most viable source for most types of transplants) in the U.S. increased from 5,985 to 8,022, whereas the number of individuals waiting for a transplant increased from 74,635 to 111,027.3 In the first half of the 2010s there was no significant deviation from this trend. Note that this increase in the waiting list occurred despite an increase in organ transplants. In either absolute numbers or relative percent change, there is an increasing gap between available organs and those who need them. While various areas of biological research are working to create an environment where new organs can be grown in a lab with low rejection probabilities, thus significantly mitigating this supply problem, such a reality still appears to be a long way off. Therefore, should changes be made to the current organ donation system to speed the closure of this gap and save lives? <br />
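<br />
A quick calculation with the SRTR figures just cited makes the widening gap explicit:<br />
<pre>
# SRTR figures cited above (2000 vs. 2009)
donors_2000, donors_2009 = 5_985, 8_022
waitlist_2000, waitlist_2009 = 74_635, 111_027

gap_2000 = waitlist_2000 - donors_2000   # 68,650
gap_2009 = waitlist_2009 - donors_2009   # 103,005

print(f"Donor pool grew {donors_2009 / donors_2000 - 1:.0%}")          # ~34%
print(f"Waiting list grew {waitlist_2000 and waitlist_2009 / waitlist_2000 - 1:.0%}")  # ~49%
print(f"Gap grew from {gap_2000:,} to {gap_2009:,} ({gap_2009 / gap_2000 - 1:+.0%})")  # ~+50%
</pre>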
<br />
While each state has its own laws on organ donation, the general model has always followed the Uniform Anatomical Gift Act (UAGA), which was first passed in 1968 and amended in both 1987 and 2006. UAGAs are created by the National Conference of Commissioners on Uniform State Laws (NCCUSL) as a means to create uniformity among states on various laws where uniformity makes sense due to a lack of special circumstances; however, in the end states have the option to adopt the act, decline it, or simply use it as a skeleton for their own laws. The original 1968 UAGA established the general goal of organ donation as a system based on altruism through voluntarism, due to the opt-in nature of the program, and created legislative guidelines for donation of fetal organs and tissues.4 <br />
<br />
In 1987 the UAGA experienced two significant changes among other smaller ones: first, it was amended so that a person may not “knowingly, for valuable consideration, purchase or sell a part for transplantation or therapy, if removal of the part is intended to occur after the death of the decedent.” Second, a narrow form of presumed consent was added whereby a medical examiner could remove any needed organs or tissue in the absence of any objection by the decedent or the decedent’s next of kin.5 This presumed consent addition was not unique, for numerous states already had similar regulations in their organ donation laws, mostly concerning cornea removal.<br />
<br />
In 2006 the UAGA was further revised to remove the presumed consent regulations, largely due to a number of lawsuits filed against those measures.6,7 Almost all states followed the pattern of the UAGA by either officially enacting its recommendations or amending their own state laws to track closely with them, including the removal of their own presumed consent regulations, with only a few states retaining very restrictive guidelines concerning cornea removal.6 <br />
<br />
As noted above, characterizing organ donation as an altruistic gesture is certainly a nice idea in theory, especially when 90+% of people support organ donation and 70+% of polled individuals would consider being an organ donor. In reality, however, only about 42% of U.S. adults are registered organ donors when lives are on the line; clearly theory and reality are in conflict.1,2<br />
<br />
The difference between those who claim to be interested in being a donor and those who actually are donors suggests some significant problems with the opt-in system. One of the major issues appears to be physicians adhering to the wishes of next of kin not to harvest organs even though the decedent was a registered organ donor; a decision that makes no sense. There can also be problems with organ procurement agents obtaining referrals from donors.7 A lack of public campaigning to raise awareness regarding the organ shortfall and the benefits of organ donation has also played a role in the lower than expected donor rates. Finally, another less fixable problem is the psychological reluctance of most individuals to contemplate death and plan appropriately for it. This “kicking the can” strategy concerning death typically creates numerous problems when handling end of life decisions, including issues involving organ donation. So, if these are the problems associated with the opt-in system, what other options could increase the number of individuals willing to donate?<br />
<br />
One way to close the gap and improve donation rates is to provide an incentive for individuals to be “altruistic” (the irony of having to provide incentives for individuals to be altruistic is somewhat hilarious). However, with the sale of organs illegal, incentives must be creative in a sense, but also of significant value. Israel and Singapore are two countries that utilize a low-cost incentive program that involves influencing organ allocation. In the U.S. a national waiting list is maintained where transplant candidates are ranked largely based on their overall health (how long they have left to live without the organ) and when their name was placed on the list. However, in the determination of who receives an organ there is no “bonus” for those who are themselves registered donors. The priority rule, or preferred donation, system used by Israel and Singapore provides some level of preference to future donors over those who do not plan to be future donors. <br />
<br />
For example, in Israel potential organ recipients are rated on a multi-point scale, and whether or not they are planning to be a donor is part of those criteria.8 Additional consideration can be gained if a direct family member of a potential recipient has signed a donor card or has already donated in the past, be it as a live non-designated donor or a deceased donor.8 In Israel this program largely arose from the perceived repugnance of the fact that a number of individuals were willing to accept an organ transplant but would never be willing to donate an organ, even after death. For Israel this program, along with other small supplemental incentive programs, dramatically increased the rate of organ donations, especially in the early years of its adoption, 2011 and 2012.9<br />
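<br />
The general mechanics of such a priority rule can be sketched roughly as follows; the weights and bonuses are purely hypothetical placeholders, not the actual Israeli or Singaporean point values, and a real allocation involves far more medical criteria:<br />
<pre>
# Hypothetical sketch of a priority-rule allocation score.
# The weights below are illustrative placeholders, NOT the actual
# Israeli or Singaporean point values.
def allocation_score(medical_urgency, years_waiting, is_registered_donor,
                     family_registered_or_donated):
    score = 10 * medical_urgency          # e.g. urgency rated 0-10 by physicians
    score += 2 * years_waiting            # time already spent on the waiting list
    if is_registered_donor:
        score += 15                       # bonus for having signed a donor card
    if family_registered_or_donated:
        score += 5                        # smaller bonus for donor kin
    return score

# Two otherwise identical candidates: the registered donor ranks higher.
print(allocation_score(7, 3, True, False))    # 91
print(allocation_score(7, 3, False, False))   # 76
</pre>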
<br />
While the initial logic of the priority rule program appears sound, for it makes sense that future organ donors should receive some level of priority over those who do not plan to donate, there are some important issues. The first problem is that the system in Israel is not legally binding: an individual can agree to become a donor but back out later. This creates problems on both fronts. Instituting a rule that a person who has agreed to be a donor can never withdraw would see an immediate court challenge that would probably result in the elimination of such a condition. However, if the system remains as is, one can simply declare donor intentions when it is advantageous and withdraw when it is no longer advantageous, making a mockery of the system. One way to address this issue may be instituting a time limit where no benefits are acquired until an individual has declared donor intentions for at least x number of years, thus at least screening out individuals who join solely for selfish short-term reasons.<br />
<br />
The second problem is that such a system raises potential moral questions when non-medical elements other than time are introduced into the organ selection process. While on its face such a system appears to have a “tit-for-tat” character, it would not be hard to produce a slippery slope argument. A common argument would be that the individual who agrees to become a donor is receiving preferential treatment because he/she is offering something of value to the organ bank, replacing the used organ as well as offering others. Some could then ask how, in this environment, it is justified for a poor alcoholic to receive a liver over a philanthropic millionaire, since the millionaire provides dramatically more benefit to society if he/she survives. While this argument should be irrelevant because the priority rule system only addresses organ donation specifically… sometimes certain parties just need a small window of opportunity to change a system significantly, and the United States does not have a quality track record for societal fairness.<br />
<br />
The third problem is that such a system could be unconstitutional on the grounds that it would violate equal protection, in that government could provide an organ to one individual over another based on non-medical factors. For a violation of equal protection it must be determined that the groups being compared are similarly situated; otherwise government or another agency can apply different standards as long as those standards are not discriminatory.10 It is unknown how a court would rule in this case because providing a benefit based only on the receiver having made a non-binding declaration of donation would require the court to determine the intent of the parties and whether that intent makes the groups distinguishable, which could potentially open a nasty can of legal worms.7 Any “perks” for donor kin would be an instant no-go because granting a benefit to someone simply on the basis of relation is inherently discriminatory. <br />
<br />
The fourth and final major issue is whether religious objections to organ donation would cause problems for such a system in a discriminatory fashion. Initially it appears that religious objections should not cause a significant problem, because discriminatory intent requires that the principal purpose of a law in the eyes of its creators be to produce discrimination; if the law is neutral and indirect discrimination is simply derived from its enforcement, then no legal discrimination exists. This general legal structure was noted in Personnel Administrator of Massachusetts v. Feeney.11 However, while religious objections should not be a problem, religion can make these types of things more complicated. <br />
<br />
Overall these potential issues do raise questions about the true value of changing the existing opt-in system to a priority rule donation system. So if a priority rule system is not preferred, what other options remain? Another possible system for donor expansion, the mandated choice system, removes the passivity from the opt-in system while maintaining its spirit.<br />
<br />
Execution of the mandated choice system is rather straightforward: when individuals over the age of 18 acquire or renew their driver’s license they are asked whether or not they wish to be an organ donor. This system attempts to maintain the altruistic character of organ donation while eliminating the obligation of the potential donor to initiate the process of becoming a donor. Such a system was utilized in both Texas and Virginia before other systems replaced it and is currently operating in Illinois, Colorado and California.12 Of course there are certain conditions that must be followed beyond simply asking “Do you want to be an organ donor?” <br />
<br />
For example, the American Medical Association has noted that in the mandated choice system the asked individual must be properly informed regarding the elements involved in organ donation, to ensure that the individual understands the procedure and can be regarded as meeting the principles of informed consent.13 Also, some might argue that a mandated choice system is unconstitutional on First Amendment grounds, in that an individual has the right not to speak, and asking the question of organ donation without providing a means to simply not answer without consequence would be unlawful.<br />
<br />
Realistically the First Amendment argument more than likely fails if the question embodying the mandated choice system is asked in a neutral manner with no legitimate attempt to favor a particular decision. With this condition in mind, the question medium would more than likely have to be paper, for one could interpret certain pressures upon an individual asked verbally whether or not he/she wants to be an organ donor. Such pressures are commonly associated with “being put on the spot”, which can favor a yes response over a no response, especially with a moral issue like organ donation and when asked by a government official (a DMV employee). The question itself should simply ask “Would you like to make your organs available for transplant into other parties after your death?” or something similar; just a neutral question with no positive or negative overtones. <br />
<br />
The success of mandated choice programs has varied: both Texas and Virginia eventually overturned their programs because of strong negative reaction from the public, including an increase in donation rejection in Texas of up to 80%,14 whereas in Illinois organ donation participation has increased to 60%.7 It is difficult to reconcile these two results. The best potential explanation may simply be political, in the context of how individuals view government involvement in society. Both Texas and Virginia lean more conservative, and some may simply be offended that government even asks in the first place, while the more liberal-leaning Illinois is not offended by such behavior. Overall, if this is the case then it is difficult to see how a mandated choice program would make significant in-roads towards increasing organ donation rates, as potential increases in some places may be offset by potential decreases in others.<br />
<br />
The final major option to increase organ donation rates would be to simply return to the presumed consent days (i.e. opt-out over opt-in), yet expand the program to include all organs, not simply corneas or John/Jane Does. Clearly, to ensure significant positive change this presumed consent program would have to be hard/strong (after death, if the individual did not opt out then next of kin have no say in the issue of organ donation) versus soft/weak (next of kin can still reject organ donation for the deceased). Not surprisingly, such a change could produce strong objections from some individuals, for presumed consent/opt-out would in essence redefine who owns a deceased’s organs from the next of kin to the government. Others would argue that such a policy poses a direct attack on individual liberty, autonomy and privacy by restricting freedom of choice, the very factors that some believe grant acceptability to an opt-in system.<br />
<br />
The notion that an individual loses liberty, autonomy and privacy in an opt-out system is basically ridiculous. In short there is no threat to these elements in such a system because the individual has sufficient opportunity to declare their intentions to not be an organ donor while still alive. Once an individual dies the rights associated with liberty, autonomy, privacy, etc. are heavily handicapped, thus eliminating any meaningful violations in this circumstance.<br />
<br />
Any minor opposition on the grounds that legislating altruism is not a responsibility of the government is a non-starter, because establishing an opt-out system over the current opt-in system is a matter of public health, due to the significant gap between available organs and individual need, not a matter of altruism. Again, there is no violation of personal autonomy because the individual is dead, and thus no longer possesses the capacity for autonomy. <br />
<br />
Also, some could argue that soft/weak presumed consent provides respect for the decedent’s relatives, who are more than likely grieving the loss, by allowing them to preserve the state of the loved one. However, there is a note of hypocrisy in this idea, for individuals in a presumed consent system can opt out, so why should the wishes of the next of kin supersede the wishes of a decedent who never opted out? Any “psychological” detriment borne by the next of kin under a hard/strong presumed consent system is the fault of their own selfishness and/or arrogance, not the system.<br />
<br />
A more relevant issue is the question of next of kin property rights. Although it may sound grisly, when an individual dies the body and its contents (in a sense) basically become property of the next of kin, especially in relation to burial rights. Some next of kin could challenge a presumed consent system on the grounds that it interferes with property rights or even religious services. However, in cases involving previous presumed consent laws, courts have almost always sided against such claims. For example, in Tillman v. Detroit Receiving Hosp., a Michigan court ruled that the state’s presumed consent law for cornea extraction did not violate the privacy right of the decedent or her next of kin and that the cornea removal did not constitute sufficient mutilation to void such action.15<br />
<br />
However, one could argue that this case only involved cornea extraction, not the extraction of numerous other organs, which would occur in a more thorough presumed consent system. While there certainly would be more incisions made in the individual, the body would receive the appropriate remediation through stitches and, if properly dressed, should have no significant mutilation or aesthetic issues beyond those of an individual who simply had corneas removed. <br />
<br />
Moreover, pertaining to the issue of next of kin property, government should be able to utilize eminent domain to support the acquisition of the decedent’s organs. Eminent domain is the power of government to take private property for public use and, while it commonly refers to land, it should also be applicable in a presumed consent environment when the individual did not choose to opt out as a donor. Clearly in a presumed consent environment the government would exert authority over the organs, but not the body, unless legally required. <br />
<br />
How could the government manage the Takings Clause of the Fifth Amendment in such a scenario? Overall one could simply validate the acquisition of the organs under the “public use” requirement, for the organs are certainly going to be used for a “public purpose” through the increase of deceased organ donation rates resulting in more lives saved. From the standpoint of organ value, realistically no monetary compensation could be expected: the next of kin, who would own the organs, hold a product with ephemeral functionality and, because one cannot legally sell an organ, a monetary value of zero dollars. <br />
<br />
The ephemeral nature of organ functionality is important because it cannot be argued that the organs may have monetary value later due to a change in the law, such that compensation would be required to satisfy this potential future value. Therefore just compensation or “fair market value” is simply zero dollars. This reality is helpful because it avoids questions regarding organ value in the context of the various associated parties like the government, the next of kin or a potential recipient. However, while it can be argued that technically the government would not have to pay any monetary sum to the next of kin for the acquisition of the organs, it stands to reason that the government could make a small good-faith gesture tied to addressing funeral costs (i.e. 200-500 dollars).<br />
<br />
A final initial question regarding a presumed consent system is whether or not it actually increases the number of available organs. Official literature can be somewhat murky on this issue, for some studies find that when controlling for outside factors presumed consent does increase organ donation rates significantly,16,17 but others suggest that the causal link is not appropriate due to the level of heterogeneity that exists in transplantation systems throughout the world.18<br />
<br />
The chief problem with analyzing existing systems is the lack of strong data regarding hard/strong presumed consent. Of the more studied presumed consent systems in Europe (Austria, Belgium, France, Italy, Norway, Spain and Sweden), only Austria typically operates in genuine practice as a hard/strong presumed consent system, whereas the others are either soft/weak or physicians default to a soft/weak system for various reasons.12 Unfortunately most soft/weak systems differ little from an opt-in system, thus identifying any significant differences becomes quite difficult, limiting the value of the analysis.<br />
<br />
Interestingly enough, the studies focus so much on potential confusion associated with multiple factors in the donation environment that they discard simple logic: under a strong presumed consent system everyone who does not opt out becomes a donor upon death, so it stands to reason that donation rates would significantly increase on two counts. First, no one who has opted in should opt out, thus theoretically the worst a presumed consent system does is break even. Second, studies have shown that most of the time people choose the assigned default option, possibly due to a lack of strong feelings or a lack of desire to spend the resources and time required to change the option.19-22<br />
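<br />
The default effect described in those studies can be illustrated with a toy calculation; the population shares below are hypothetical and chosen only to show the mechanism:<br />
<pre>
# Toy illustration of the default effect. The population shares are hypothetical.
strongly_for     = 0.40   # would register regardless of the default
strongly_against = 0.10   # would decline (or opt out) regardless of the default
indifferent      = 0.50   # simply keep whatever the default is

registered_opt_in  = strongly_for                # indifferent people never sign up
registered_opt_out = strongly_for + indifferent  # indifferent people never opt out

print(f"Opt-in registration:  {registered_opt_in:.0%}")   # 40%
print(f"Opt-out registration: {registered_opt_out:.0%}")  # 90%
</pre>
Whatever the real shares turn out to be, as long as a meaningful fraction of people simply keep the default, opt-out cannot do worse than opt-in and will usually do considerably better, which is the two-part logic laid out above.<br />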
<br />
In the end, among the three major donation methods beyond opt-in that could potentially increase the number of available organs for transplant, all three have their positives and negatives. Priority rule systems incentivize the donation process, but have numerous holes in how this incentive is effectively applied and could face a number of constitutional issues in their application. Mandated choice systems effectively transfer the initiation of the donation process from the individual to the government, significantly eliminating situations of ignorance about the existence and operation of the donation system; however, there are questions about how effective such a system is in actually increasing the number of organs available for donation. Presumed consent systems increase the available number of organs and do not appear to have any obvious legal issues, but must address methodological opposition largely brought on by government paranoia and contempt in some camps, as witnessed by the failure to establish such a system in New York, Illinois, and Colorado. <br />
<br />
Overall it appears that a presumed consent program is the best option for quickly increasing the total number of available organs for transplant, but pursuit of this strategy must involve a strong commitment to establishing such a system over the personal objections of a number of individuals despite the ability to opt-out. <br />
<br />
<br />
Citations – <br />
<br />
1. 2012 National Donor Designation Report Card by the Donate Life America: http://donatelife.net/2012-national-donor-designation-report-card-released/<br />
<br />
2. 2013 National Donor Designation Report Card by the Donate Life America: <br />
http://donatelife.net/2013-national-donor-designation-report-card-released/<br />
<br />
3. Scientific Registry of Transplant Recipients 2012: http://srtr.transplant.hrsa.gov/annual_reports/2012/Default.aspx<br />
<br />
4. Uniform Anatomical Gift Act (1968). http://www.uniformlaws.org/shared/docs/anatomical_gift/uaga%201968_scan.pdf<br />
<br />
5. Revised Uniform Anatomical Gift Act (1987). http://www.uniformlaws.org/shared/docs/anatomical_gift/uaga87.pdf<br />
<br />
6. Revised Uniform Anatomical Gift Act (2006).<br />
http://www.uniformlaws.org/shared/docs/anatomical_gift/uaga_final_aug09.pdf<br />
<br />
7. August, J. “Modern Models of Organ Donation: Challenging Increases of Federal Power to Save Lives.” Hastings Constitutional Law Quarterly. 40(2):393-422.<br />
<br />
8. Lavee, J. “A New Law for Allocation of Donor Organs in Israel.” The Lancet. 2010. 375(9720):1131-1133.<br />
<br />
9. Even, D. “Dramatic Increase in Organ Transplants Recorded in Israel in 2011.” Haaretz. Jan. 12, 2012. http://www.haaretz.com/dramatic-increase-in-organ-transplants-recorded-in-israel-in-2011-1.406824<br />
<br />
10. Vacco v. Quill, 521 U.S. 793, 799. 1997.<br />
<br />
11. Personnel Adm’r of Massachusetts v. Feeney, 442 U.S. 256. 1979.<br />
<br />
12. Rodriguez, S. “No Means No, But Silence Means Yes? The Policy and Constitutionality of the Recent State Proposals for Opt-Out Organ Donation Laws.” FIU Law Review. 7:149-186.<br />
<br />
13. AMA Recommendation. Opinion 2.155 – Presumed Consent and Mandated Choice for Organs from Deceased Donors. American Medical Association. http://www.ama-assn.org/ama/pub/physician-resources/medical-ethics/code-medical-ethics/opinion2155.page?<br />
<br />
14. Siminoff, L, and Mercer, M. “Public Policy, Public Opinion, and Consent for Organ Donation.” Camb Q Healthc Ethics. 2001. 10(4):377-86.<br />
<br />
15. Tillman v. Detroit Receiving Hosp., 360 N.W.2d 275, 277. 1984.<br />
<br />
16. Hawley, Z, Li, D, and Schnier, K. “Increasing Organ Donation via Changes in the Default Choice or Allocation Rule.” Journal of Health Economics. 2013. 32(6):1117-1129.<br />
<br />
17. Abadie, A, and Gay, S. “The Impact of Presumed Consent Legislation on Cadaveric Organ Donation: A Cross-Country Study.” Journal of Health Economics. 2006. 25(4):599-620.<br />
<br />
18. Boyarsky, B, et al. “Potential Limitations of Presumed Consent Legislation.” Transplantation. 2012. 93(2):136-140.<br />
<br />
19. Samuelson, W, and Zeckhauser, R. “Status Quo Bias in Decision Making.” Journal of Risk and Uncertainty. 1988. 1(1):7-59.<br />
<br />
20. Madrian, B, and Shea, D. “The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior.” Quarterly Journal of Economics. 2001. 116(4):1149-1187.<br />
<br />
21. Johnson E, and Goldstein, D. “Defaults and Donation Decisions.” Transplantation. 2004. 78(12): 1713-1716. <br />
<br />
22. Gäbel, H. “Donor and Non-Donor Registries in Europe.” Stockholm, Sweden: on behalf of the Committee of Experts on the Organizational Aspects of Co-operation in Organ Transplantation of the Council of Europe. 2002.<br />
<br />
What are Mars Analogue Missions Really Studying?<br />
While various parties argue back and forth about whether humanity has progressed far enough technologically to colonize Mars, technology alone will not determine the success of such a venture. Interpersonal relationships, and how the first colonists are able to work together to augment their strengths and mitigate their weaknesses, will also be an incredibly important element in producing success. With this in mind NASA and other space-based organizations have undertaken occasional training experiments, or analogue missions, in an attempt to simulate a mission to Mars. These missions typically take place in specific locations in Hawaii or Antarctica that are suitable for simulating the Martian environment, insofar as Earth can simulate Mars; however, do these analogue missions have the appropriate goals and tasks for the participating individuals to properly simulate a Martian colonization party? <br />
<br />
The major goal of these simulation experiments is to assess how different individuals interact with each other over a fixed continuous time period in a confined space, simulating conditions both during travel to Mars and after landing, in an effort to understand and predict potential positive and negative behavior among the colonizing party. However, the environment these individuals are commonly thrust into is not similar to the one that will be faced by the initial colonists. While the inside habitat-outside habitat transition is properly simulated through the use of space suits, the activities of the participants within the habitat are more focused on specific scientific studies, not on building and developing the operational structure of the shelter. In short, these experiments focus too much on simulating a developed Martian habitat versus a developing one.<br />
<br />
For example, one of the first major issues for a Martian colonization mission is that there will be little to no food grown on site. The first colonists will bring a significant amount of food with them when first traveling to Mars, but almost every expert agrees that some means of growing sufficient amounts of food on Mars will have to be developed soon after arrival, for it is too costly to continue to re-supply from Earth. Unfortunately these simulation experiments do not appear to be modeling this critical element of Mars colonization. This is a missed opportunity because there are open questions about the most effective way to grow food on Mars, and various simulations starting from scratch could produce important information for determining which method would be the most successful in a real colonization mission. Every scientist and engineer knows that laboratory simulations/conclusions and in-field simulations/conclusions can differ radically.<br />
<br />
One of the sub-questions on this issue is what type of food growth system would be optimal for Martian colonization, both in the interim and for expansion. Various options, such as hydroponics, aquaponics, aeroponics, cultivating Martian soil, etc. exist, and analogue missions would be an effective means to produce higher quality cost, effort and efficiency estimates of these systems, both alone and in combination with each other, apart from ISS analysis and laboratory hypotheses. Aeroponics is NASA’s leader in the “clubhouse”, but would it work well as the initial on-site food provider for the first Martian colony? <br />
<br />
Furthermore, due to the significant reduction in gravity on Mars, colonists will have to engage in a rigorous exercise program to reduce the potency of the negative physiological effects associated with that lower gravity. Simulating the necessary exercise in these analogue missions cannot help study how it influences health on Mars, due to the lack of similar gravity, but it can help study how such levels of vigorous exercise would influence energy levels and food consumption along with interpersonal relationships. Unfortunately this potentially valuable information is not acquired in these simulations because the participants are not instructed to exercise in such a way.<br />
<br />
Another important connective element that is lacking in these simulations, due in large part to the absence of the elements above, is the change in stress that would accompany these behaviors and goals. While it is not ethical to emulate the life threatening conditions that failure would bring, general failure to complete necessary tasks would create tension and stress through challenges to the pride of the individuals involved, and thus would better emulate real Martian colonization conditions. In these moments of stress, potential problems within the group dynamics can be identified that would not exist when stress levels are not elevated, leading to a better understanding of how to manage failures in the real colonizing party.<br />
<br />
In the end, analogue missions are important for various reasons, and while human factor elements are certainly important to study and general simulation research strategies have their place, there need to be more simulations that mimic what colonists will experience when first landing on Mars, to better create a methodology for maximizing the success of a Martian colonization mission. Overall, without expanding the scope of analogue missions to reflect the realities of Martian colonization, one wonders what the point of conducting these missions actually is, for they are certainly not preparing colonists for the most important part of the colonization: establishing the environment and behaviors that increase the probability of long-term survival.<br />
<br />
Wanted: Reasonable and Intelligent Gun Policy<br />
One thing that cannot be argued is that the number of gun deaths in the average year in the United States, including various mass shootings, demonstrates that current gun policy does not work. The delusional idea harbored by some members of the National Rifle Association (NRA) that an effective means to address gun violence would involve weakening existing gun laws in an effort to arm more “good guys” with firearms to combat the bad guys demonstrates a genuine lack of intelligence and/or caring about the problem. While the specific intricacies of gun policy are better left to those with more experience, there are certain elements that appear to be “no-brainers” of sorts when producing effective and meaningful gun policy in an effort to limit the number of violent gun-related deaths.<br />
<br />
Clearly the lingering holes in the requirements for legally required background checks are unacceptable in any reasonable gun policy. Private sellers, i.e. individuals who “claim” they are not engaged in the business of selling guns, should be required, just like all licensed federal dealers, to perform background checks for every transfer other than legal gun transfers to direct family members (i.e. father to son, sister to brother, etc.). The reason most arguments that private sellers should not be subjected to performing background checks fall flat is that a vast majority of these transfers occur at gun shows or online, where it is much easier for individuals who know they would fail a background check to acquire a firearm. It is a matter of public safety first and foremost; also, there is no “right” to sell a gun without any type of legal condition or restriction. Overall there is no rational argument against requiring a private seller to conduct a background check. <br />
<br />
Another reasonable step in the right direction would be the repeal of the Tiahrt Amendments, which make it much more difficult for authorities to access and use gun trafficking data to pursue criminals and prevent criminal activity, especially via a dealer that is knowingly and willingly violating the law. Eliminating the Tiahrt Amendments would allow authorities to require dealers to conduct inventories of their stock (more than likely annual accounting would be appropriate) to increase the probability that unaccounted-for inventory is detected much sooner than currently occurs. Also, federal agencies would be able to retain completed gun background checks for longer than 24 hours, increasing the probability that authorities discover, arrest and convict straw purchasers (i.e. individuals who purchase guns for individuals who would not pass a background check). <br />
<br />
Hmm, all of these changes seem completely reasonable, especially the last one, for a background check is not like DNA; authorities will not be able to utilize an existing background check to “set up” someone as a criminal when they are not one, and to think otherwise is to simply give in to paranoia. Furthermore, allowing the ATF to create an electronic database of gun records that is easily searchable will dramatically increase the ability of the proper authorities to manage and address gun-related issues, especially criminal activity, more efficiently. As long as this database is not made public there should be no issue with its existence. As a point of note, most of these issues were addressed in both The Fix Gun Checks Act of 2011 and The Fix Gun Checks Act of 2013. <br />
<br />
Another good policy choice is to expand to the federal level the procedures and policies contained within the Maryland Firearms Safety Act of 2013. While this policy has numerous positive and practical elements, one of the most important is requiring certification of firearms training before an individual is legally allowed to purchase a handgun. Note that with this expansion the law would be akin to the National Minimum Drinking Age Act in that states would not have to abide by the law, but they would not receive certain levels of federal funding if they failed to do so. Therefore, if state x did not want to legally mandate that residents have firearm certification training before purchasing a gun, that would be allowed, but state x would also not receive some significant percentage of federal aid.<br />
<br />
Arguments against the policy of this bill, both in Maryland and in any federal expansion of it, range from the paranoid to the ridiculous. For example, one argument against requiring gun training for the purchase of a weapon has been to analogize it to requiring an individual to have training in public speaking in order to speak in public. It is rather self-explanatory why such an analogy is foolish. However, for the sake of completeness, why is this “point of argument” ridiculous? The chief difference between guns and speech is an issue of public safety. A significant portion of gun use results in negative public health consequences, including death, whether or not the acquisition of the gun was legal, versus both the assumption and the reality that almost no portion of speech is expected to produce, or produces, negative public health consequences. <br />
<br />
Also, like the 2nd Amendment, the 1st Amendment is not universal in its protection, in that not all speech is protected. For instance, one cannot falsely yell fire in a crowded theater or other venue, nor can an individual use speech to incite a group of individuals to “string up” some Jews, Blacks or Christians, etc. Furthermore, the use of speech occurs at a far greater level than gun use, making it far more burdensome for both society and individual functionality to require public speaking courses than to require gun training. <br />
<br />
One widely used argument in favor of gun training licenses has been a comparison to automobiles: if one has to have a license to operate an automobile, it makes sense that one should have to have a license to purchase a gun. Opponents have countered this analogy by stating that one does not need a driver’s license to purchase a car, thus one should not need a license to purchase a gun. On its face this counterargument seems to have value, until one looks deeper. It breaks down rather quickly in the realm of practicality, for almost no one purchases a car without the intention of it being driven, either by the buyer or someone else, and the same logic applies to a gun: who spends hundreds of dollars on a gun without the prospect of using it at some point in the future? The only common instance where the purchase of either a car or a gun is made without the prospect of using it is for collector purposes, which entails older models. It would not be difficult to create a hard cutoff date, i.e. all guns older than 1970 or some other year decades ago would not require a license.<br />
<br />
Others would stubbornly avoid the issues of public health and practicality and argue that gun purchase is a right but car purchase is not, thus the above analogy is irrelevant because the Maryland policy violates the Constitution. The problem with this counter-argument is that firearm purchase is not a guaranteed universal right; courts of both conservative and liberal leaning have ruled numerous times that government in all its forms can place reasonable restrictions on arms purchase and possession. Requiring the acquisition of a training license for the purchase of a particular type of firearm in a clear and transparent way, with no additional requirements beyond existing current training methods, does not produce an unreasonable burden for the purchase of a weapon, which is exactly what the Maryland and federal courts have ruled regarding the Maryland law.<br />
<br />
One point of interest is why pro-gun individuals are so knee-jerk against even reasonable and intelligent gun policy. The best possible explanation appears to be simple “slippery slope paranoia”: they believe that giving the government any power to restrict access to weaponry for private citizens will eventually lead to an environment where government restricts access to all weaponry. <br />
<br />
Of course such a belief is incredibly foolish on two separate grounds. On legal grounds this belief fails because the Supreme Court ruling in District of Columbia v. Heller basically created a floor regarding what the government can do with respect to limiting private access to weapons. On non-legal grounds, if a tyrannical government ever arose in the U.S., the idea that private citizens would be able to effectively fight against it (and the U.S. armed forces) without vast levels of support from foreign nations is laughable and utterly delusional, thus any “restrictions” on private firearm access established by this new tyrannical government would be irrelevant. Overall, all slippery slope paranoia is doing for pro-gun individuals is making them indirectly responsible for more people dying due to gun violence.<br />
<br />
The issue of creating a training course is not a significant hurdle because numerous training courses of varying degrees already exist. The only additional requirement beyond the standard training course would be the necessity of live fire training. Basically, to purchase a gun, the training course would require individuals to fire that type of firearm, at least if it has a meaningfully different firing character. For example, firing a Beretta is significantly different from firing a shotgun, but not significantly different from firing a Desert Eagle. Basically, individuals should actually know how to use the weapon they intend to purchase versus simply “thinking” they know how to use it. For most training courses this live fire requirement is already addressed. <br />
<br />
The length of time that a purchase license would last is an interesting question, because if it is too long the acquired skills that are the purpose behind gaining the license diminish, but if it is too short individuals may be pressured into a purchase out of fear that the license will expire. Overall the total time period would be up to the states, but it stands to reason that a license length of 1 to 5 years would seem most appropriate. The issue of renewal is of limited importance because most individuals do not purchase a large number of guns, period, let alone over a long period of time; therefore there is little concern about any “annoyance” factor involved in having to renew a license for a fourth gun purchase x years after the first purchase. <br />
<br />
In the end, anyone who cares about reasonable and effective gun policy, which should be almost everyone because continued gun violence does not serve a positive purpose for any law-abiding citizen, must realize that both closing background check loopholes and ensuring that gun purchasers have proper training are important elements of accomplishing this goal. Some may argue that background checks are too slow and training is an inconvenience to buyers; however, purchasing a gun should not be a spontaneous action. Whether the gun will be used for hunting or self-protection, neither motive is short-term critical, thus the inability of the buyer to acquire the gun immediately on the same day does not produce a burdensome disadvantage when weighed against the benefits background checks and training provide to society as a whole. Overall, a certain portion of the population has to realize that some measures to ensure appropriate and responsible distribution of firearms will NOT result in the loss of any appropriate firearm access. Continuing to oppose such rudimentary and reasonable policy does nothing but increase the probability of more death.<br />
<br />
Are changes needed in probation and parole (community supervision) protocols?<br />
When the topic of prison is brought up, most of the time the conversation focuses on the events that lead to an individual’s incarceration: the arrest and the trial. A significant amount of words and ink has been spent on the inequities of the system on both economic (still very true) and racial (becoming less true) grounds. In fact a number of individuals continue to argue that because minorities make up a disproportionate amount of inmates relative to their population demographics, the criminal justice system is biased. <br />
<br />
Unfortunately, while these individuals are very quick to point to drug-related criminal offenses as a significant driver of this bias, conveniently forgetting that almost all of the individuals convicted of these crimes are guilty, these same individuals fail to discuss another important and pertinent issue that afflicts most poorer former criminals: the nature of probation and parole and its role in influencing the prison population. Instead of arguing bias that is at worst false or at best hard to prove, perhaps these individuals should turn their attention to an actual problem demanding reform in the criminal justice system: the role and influence of probation and parole.<br />
<br />
For the purpose of this discussion it must be noted that parole is a sub-category under the broader designation of community supervision. Community supervision is commonly defined as allowing convicted criminals to serve sentences in the community; if no jail time is involved the supervision is more specifically referred to as probation, and if it involves early release from jail the supervision is referred to as parole. This is an important note because individuals sometimes use the terms community supervision and parole interchangeably, which is not entirely accurate. However, probation and parole are inherently intertwined on a meaningful level. <br />
<br />
When probation and parole are actually discussed, one of the central arguments for increasing their utilization concerns the general per-inmate costs associated with incarceration. It is not uncommon to hear some prison reform activists recite by rote that incarceration costs per inmate are absurdly high, with a national average exceeding $30,000; one study in particular calculated a cost of $31,286 per inmate.1 While on its face this number seems remarkably high and irresponsible, the problem with this figure is that, while technically accurate, it does not accurately portray the actual costs associated with prison. <br />
<br />
For example, that same study found that while $31,286 per inmate was the national average, in Kentucky it cost only $14,603 per inmate versus a whopping $60,076 per inmate in New York.1 Remember that these are per-inmate averages, not absolute totals, so this radical disparity cannot be explained by simply stating that there are more people incarcerated in New York than in Kentucky. So the question is why such a disparity exists.<br />
<br />
The simple answer is that most prison-related costs fall into two major categories: 1) capital and direct operational costs; 2) employee-related costs like salaries and pensions. Unfortunately for the “prisons cost too much, thus laws need to be changed/nullified” crowd, both of these elements have low rates of elasticity. This is one of the principal reasons why an inmate costs almost $46,000 more to house in New York than in Kentucky: the average cost of living in New York is much higher than in Kentucky, thus prison officials and other employees command higher salaries, and building and maintaining the prisons themselves costs more. So what is the response, significantly reduce the salary and/or benefits of New York based prison employees? <br />
<br />
One might counter that the goal should be to significantly lower the prison population, but such a result should only have a marginal influence on prison costs. Due to the costs associated with employee salaries and benefits, along with the nature of prison operation, costs should not drop in any meaningful proportional relationship with a decrease in the prison population. While it is true that fewer inmates should produce some overlap of employee duties in the prison system, leading to staff reductions, the reason for this lack of proportional change comes down to two major realities: 1) the structured and static prison environment and its operation do not produce large quantities of employee overlap; 2) a number of prisons are already short-staffed, meaning that reducing the prison population will reduce the level of burden for certain employees, but not employee-related costs.<br />
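<br />
To see why spending does not fall in proportion to the population, consider a toy example; the cost split is hypothetical and only assumes, as argued above, that most spending is fixed:<br />
<pre>
# Toy illustration of why prison costs fall far less than proportionally
# with the inmate population. The cost split is hypothetical; the only
# assumption is that most spending is fixed (facilities, salaries, pensions).
FIXED_COSTS       = 250_000_000  # facilities, staffing, pensions (largely inelastic)
VARIABLE_PER_HEAD = 6_000        # food, medical care, clothing per inmate

def total_cost(population):
    return FIXED_COSTS + VARIABLE_PER_HEAD * population

def cost_per_inmate(population):
    return total_cost(population) / population

print(f"{cost_per_inmate(10_000):,.0f} per inmate at 10,000 inmates")  # ~31,000
print(f"{cost_per_inmate(9_000):,.0f} per inmate at 9,000 inmates")    # ~33,778

savings = 1 - total_cost(9_000) / total_cost(10_000)
print(f"A 10% population drop cuts total spending by only {savings:.1%}")  # ~1.9%
</pre>
Under these assumptions the per-inmate figure actually rises as the population falls, while total spending barely moves, which is the crux of the elasticity problem described above.<br />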
<br />
Therefore, for early inmate release to actually play a significant role in reducing costs associated with prison, the number released would have to be large (double-digit percentages), which would raise questions regarding who was being released and why. It is also worth noting that drug-related offenses are estimated to make up about 17-20% of the total prison population (depending on exact definitions), down from a peak of 22% in 1990, thus decriminalizing minor possession offenses will do little to change the size of the prison population.2,3<br />
<br />
The failure of the economic argument does not dismiss the necessity of prison reform, especially in the area of community supervision. However, instead of trying to stretch an economic argument that is just not accurate, the argument should be made from the perspective of social justice and morality. So what is it about these elements of community supervision, especially parole (which carries a more damning societal element than probation), that needs reform?<br />
<br />
One could make the argument that community supervision programs have already been widely utilized, regardless of whether or not the motivation was to limit the inmate population, for the “participation” rate has increased from 800,000 in 1970 to more than 4.75 million in 2013.4-6 Unfortunately this increase has not translated into a dramatic decrease in incarceration, for it is thought that at least 1/3 of all inmates are incarcerated for probation or parole violations.4,5 However, it is important to note that these are broad statistics and do not narrowly define why these individuals in community supervision eventually end up in prison. It is certainly valid to presume that some are incarcerated for routine violations of protocol whereas others have committed new crimes that result in a parole violation in addition to the criminal charges for the new crime.<br />
<br />
To the point of the protocol violators, this raises another question: how much protocol violation is tolerable? For example, if an individual continues to skip meetings with a parole officer, such behavior is an indication that this individual does not respect the process or even its most simple rules; it therefore makes sense to anticipate an increased probability of future criminal behavior, and violating this individual's parole would be appropriate. <br />
<br />
The benefits of parole for both the state/prison and the inmates are rather obvious: 1) parole can be a means to foster reflection and behavior change, lowering the probability of negative actions while in prison and of future recidivism once released; 2) parole can act as a controlled means to reduce the prison population without significantly increasing the risk to public safety through careful selection of who is released. However, while there are benefits, the operational concerns with parole fall into two categories: 1) the process of receiving parole; 2) the process of maintaining good standing while on parole.<br />
<br />
Parole boards are utilized to determine whether or not an individual is suitable for parole, largely due to their focus and specialization in judging the risk factors associated with the probability to re-offend. In fact parole boards and parole itself support the idea of a more evidence-based methodology in the criminal justice system, especially with regards to sentencing (i.e. the use of risk assessment and comparative relationships and examples to help make decisions about sentences, both in their initial assignment and their suspension).7<br />
<br />
However, one of the interesting questions regarding the methodology of a parole board is its heavy focus on risk assessment and almost complete lack of focus on value assessment. Parole boards essentially only judge the potential negative outcomes of releasing an individual, so individuals whose scores are not negative enough, i.e. remain above some pre-determined threshold established by the board, have the possibility of receiving parole. However, all "scores" will be negative because no positive potential is meaningfully judged. What would happen to the number of paroles granted if parole boards also analyzed what positive things the individual in question could do for the community? <br />
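<br />
As a toy illustration of the difference, consider a minimal sketch of the two scoring philosophies; the factor names, weights and threshold below are invented for illustration and do not correspond to any board's actual instrument.<br />
<br />
<pre>
# Hypothetical sketch: contrasting a risk-only parole score with one that also
# credits potential positive contributions. All factor names, weights and the
# threshold are invented assumptions, not any real board's instrument.

RISK_THRESHOLD = -5.0  # release only if the combined score stays above this

def risk_only_score(risk_factors):
    # risk factors count against release; the score can never be positive
    return -sum(risk_factors.values())

def risk_and_value_score(risk_factors, value_factors):
    # risk factors count against release, value factors count toward it
    return -sum(risk_factors.values()) + sum(value_factors.values())

candidate_risk = {"prior_violations": 2.0, "offense_severity": 3.5}
candidate_value = {"vocational_training": 1.5, "family_support": 2.0, "mentoring": 1.0}

risk_score = risk_only_score(candidate_risk)                        # -5.5
full_score = risk_and_value_score(candidate_risk, candidate_value)  # -1.0

print(risk_score > RISK_THRESHOLD)  # False: denied under a risk-only rubric
print(full_score > RISK_THRESHOLD)  # True: granted once potential value is credited
</pre>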
<br />
Some have argued that parole boards have no incentive to change the way they operate because there are no perceived costs associated with how they operate.7 Basically there is no retention cost assigned to a parole board for the social and economic costs of continuing to incarcerate an individual, and there is no reward given to parole boards for releasing individuals who do good things in a community. This lack of retention cost is thought to establish a very high bar for granting parole in normal circumstances because, again, the only thing assessed is whether or not an individual will produce negative outcomes for society when released on parole.<br />
<br />
Another interesting aspect that is not commonly addressed with regards to parole is the idea of a "lack of failure". There almost appears to be a motivation to ensure that no parolee ever commits another crime. This motivation is completely unrealistic unless one limits the granting of parole so heavily that only a handful of individuals ever receive it. Some would argue that is exactly what has happened. The motivation for granting parole must accept the fact that some parolees will commit new crimes in order to ensure a valid parole system. Otherwise, without a valid parole system the idea of prison as a rehabilitation tool loses credibility because individuals who change and/or mature during their sentence and have higher probabilities of being productive members of society will still have to absorb the full cost of their previous criminal behavior. Society also misses out on the benefits that a number of these individuals could provide. <br />
<br />
It must be noted that while attempting to measure benefits as well as risk is important, parole boards must never be forced to release a certain target or quota of inmates. Such a quota system would defeat the purpose of the evaluation process for it would force the parole board to change the system from an absolute judgment to a relative judgment. Basically the board would have to evaluate whether or not prisoner A was “safer” than prisoner B, not whether prisoner A was actually safe. It would be akin to a curve system in education where the top 10% of a class received As regardless of their actual performance, i.e. someone with a 55% in the class in the top 10% would receive an A even though 55% is certainly not A-level performance. <br />
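<br />
The curve analogy can be made concrete with a small sketch; the scores below are invented purely to show how a relative (curve/quota) rule can pass a candidate that an absolute standard would reject.<br />
<br />
<pre>
# Hypothetical sketch of the curve analogy: a relative (quota/curve) rule can
# pass candidates that an absolute standard would reject. Scores are invented
# for illustration only.

scores = [55, 52, 50, 48, 45, 44, 42, 40, 38, 35]  # class exam percentages

ABSOLUTE_CUTOFF = 90   # absolute rule: an A requires 90% or better
CURVE_FRACTION = 0.10  # curve rule: the top 10% of the class gets an A

def absolute_pass(score):
    return score >= ABSOLUTE_CUTOFF

def curve_pass(score, all_scores, fraction=CURVE_FRACTION):
    cutoff_rank = max(1, int(len(all_scores) * fraction))
    top_scores = sorted(all_scores, reverse=True)[:cutoff_rank]
    return score >= min(top_scores)

best = max(scores)               # 55
print(absolute_pass(best))       # False: 55% is not A-level performance
print(curve_pass(best, scores))  # True: 55% happens to be in the top 10%
</pre>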
<br />
Of course with respect to the revocation of parole one can get distracted by statistics like 50% of individuals in U.S. jails and 33% in prisons being there due to parole or probation violations. However, while some may view these statistics with shock, what is not being asked is why so many individuals on probation or parole are violating the terms of that condition. Are the parole conditions too stringent/unreasonable? Do the individuals not have effective opportunities to "change their lives" once leaving prison? Are parolees just disrespectful of the law and the conditions of their parole? For example, failing drug tests is completely on the parolee, for no one is forcing illegal drugs into their body. One study in 2004 determined that within thirteen states around 25% of those on parole were returned to jail for "technical" violations, so clearly some concern is warranted.5<br />
<br />
One explanation for so many probationers and parolees going to jail is that various laws allow most states to impose broad release conditions upon parolees. Basically the one real criterion is that the condition governing the continued release must be less punitive than prison. Parolees can challenge conditions based on a perceived violation of constitutional rights, but a vast majority of court decisions have ruled in favor of the state under the premise that certain "rights" are diminished during the period of supervision.5 This and other "restrictions" of the rights of formerly incarcerated individuals is certainly a continuing problem. <br />
<br />
So what are the core problems regarding the parole structure? First, the procedure for granting parole appears to place too much emphasis on avoiding direct negative actions versus looking at what positives an individual can produce for society. This system limits the number of people who are able to "qualify" for parole. Second, there does not seem to be a cohesive and universal system of requirements for parolees; the existing "pick-and-choose" system appears too restrictive and capricious. Third, there is too much discretion for parole officers in deciding what is a violation and what can "slide". While it certainly can be argued that all violations are the responsibility of the parolee, it is reasonable to have a transparent understanding of when a violation will actually be judged as a violation and when it will not. Fourth, when parole violations are recorded, the time taken to process the violation is frequently unreasonably long, resulting in longer temporary incarceration periods.<br />
<br />
One of the principal questions regarding the state response to individuals on parole is: what is the responsibility of the state to "ward off" or limit the probability that individuals commit new criminal activity? For example there are a number of situations where certain conditions of parole are tied to reasonable, actionable risks associated with a particular individual, i.e. prohibitions on purchasing or possessing weapons for violent offenders, on drug use for those on parole for drug related crimes, or on interacting with known or suspected associates from past criminal activities in general. Clearly such restrictions are designed to limit the opportunity that an individual has to commit additional crimes after being paroled. However, are such restrictions appropriate? Interestingly enough, whether or not these strategies actually reduce the probability of new crimes is almost irrelevant because none of the above restrictions are so burdensome that, if specifically and transparently assigned to the parolee, following them would interfere with the individual living his/her life.<br />
<br />
However, what about broader rules like curfews, alcohol consumption prohibitions, required participation in educational and/or drug treatment programs, or even clerical paperwork such as submitting financial forms to a parole officer? It can be argued that these broader rules do place an unjustified burden on the lives of parolees, especially relative to how they influence the probability that said individual will commit new criminal action. For example, will having to file a monthly income statement with a parole officer really stop individual A from engaging in future criminal activity? It stands to reason that the answer is no, unless the individual was convicted of some form of financial fraud. <br />
<br />
Therefore such a condition does nothing but add additional burden to the life of the parolee. Some studies have suggested that rehabilitative interventions, like drug treatment, can actually increase the probability that parole is revoked, more than likely because they add factors that can lead to parole being revoked.8 Whether these violations are born directly from the additional requirements like drug treatment, or from the increased scrutiny that comes with participating in them, is unclear. <br />
<br />
Realistically it seems that the only genuine broad/generic condition that should be assigned to a parolee is a consistent meeting time between him/her and the appropriate parole agent. Such a meeting should be used as a forum for discussion and counseling (in a sense) to provide the parolee with some level of support, versus an interrogation or "visit to the principal's office". These meetings should also provide a forum for parolees to manage any issues associated with the conditions of their parole. Outside this meeting, broad rules seem to have no value, and revoking parole should only involve purposeful abandonment of these meetings, violation of any specific restrictions as discussed above tied to previous criminal activity, or the commission of new crimes. Eliminating these unnecessary "for societal security" restrictions should also eliminate some of the capriciousness over whether or not a particular parole violation is actually written up as a violation; limiting the number of restrictions gives weight to those remaining restrictions, making their violation actually mean something.<br />
<br />
Overall though it is important to note that while some attempt to link the rate of parole violations to improper parole rules and restrictions, a parole violation can occur for numerous reasons. It is certainly possible that a sanction is so restrictive that compliance is unlikely or that a sanction is applied inappropriately to a given candidate, but it is also possible that a large rate of parole violation is valid in that individuals are willingly violating parole due to their inability or disinterest in following the assigned restrictions, and the corresponding parole officers are doing their jobs well and appropriately in violating those individuals' parole. While making direct "slippery slope" arguments in association with criminal activity is questionable, it is certainly reasonable to suggest that individuals who do not respect the law on the misdemeanor level, such as typical parole violations, will have less respect for it on the felony level as well. Therefore, it is important to understand why there are so many parole violations rather than simply reacting to the fact that there are so many as evidence that the system "doesn't work". <br />
<br />
Some individuals argue that one factor that should have more weight in both awarding and revoking parole is the age of the individual. Age is thought to be the greatest influencing factor on the probability of criminal activity, with that probability peaking around the mid-20s and then steadily dropping as the individual ages, all other factors remaining equal. In addition, the older an individual is, the more difficult it may be for that individual to comply with various parole derived restrictions due to other occupational and familial obligations.<br />
<br />
Furthermore, in the vein of reducing probation violations, a number of proponents like to point to a reform undertaken in Hawaii called Hawaii's Opportunity Probation with Enforcement (HOPE). These proponents seem to regard HOPE as a modern answer or improvement to a cumbersome system. HOPE proponents argue that in the past, when criminals on probation for drug-related crimes would violate their probation, the response would frequently be slow and disproportionate to the gravity of the violation, thereby resulting in wasted state resources and undue burden on the violator. In HOPE the response to program violations, like drug test failures/skipped tests or missed probation/parole meetings, involves certain and swift sanctions, typically a few days to a week in jail. HOPE proponents also point to the initial arraignment period being conducted for a large group in open court, which is thought to save time and money versus conducting individual arraignments.<br />
<br />
One study, often cited by HOPE and other parole reform proponents, found in a randomized controlled trial that HOPE probationers were 55% less likely to be arrested for a new crime, 72% less likely to use drugs, 61% less likely to miss appointments, 53% less likely to have probation revoked and, on average, sentenced to 48% less prison time than the control group.9 However, while proponents have sung the praises of this study to validate the superiority of HOPE over more conventional programs, there appear to be some valid criticisms of the study. <br />
<br />
For example, there are questions about the study over-emphasizing the influence of weak key elements and under-emphasizing other active elements, producing bias in favor of the HOPE model; a failure to effectively control for other factors that may have led to HOPE participants experiencing a lower level of criminal activity versus the control; an over-focus on the amount of criminal activity perpetrated by both groups (HOPE and control) instead of the type and severity of that criminal activity; and, finally, incomplete analysis regarding the potential psychological influence of administering harsh sanctions, like multiple days in jail for a few failed drug tests, absent any other criminal action.10<br />
<br />
Among these concerns is the issue of the validity of HOPE as a "panacea" for all areas experiencing the need for probation reform. For example, HOPE includes a variety of other, somewhat small-time offenders, e.g. sex crimes, property crimes, assault, but the only real evaluation study is the aforementioned one on drug offenders. Also, similar to the issue of psychological ramifications, there is anecdotal evidence suggesting that a number of HOPE probationers with no history of violent crime advanced to committing violent crimes, perhaps due to the incarceration born from HOPE violations. Basically, while HOPE probationers appear to commit less overall crime versus controls, there may be a higher probability that a HOPE probationer will commit a more violent or "high-value" crime than a control. <br />
<br />
Some proponents argue that these problems exist not due to a problem in the methodology and practice of the HOPE program, but instead due to a lack of resources to properly execute the methodology; basically, HOPE would work just fine if there were more police officers and judicial resources. However, this argument is rather hollow, for clearly the resources to make the program work "just fine" are not available. Overall the HOPE program has a number of champions and a number of detractors in the Hawaiian government and justice system, thus looking to apply it to other regions of the United States as a practical means of probation reform appears premature. <br />
<br />
There are certainly problems in community supervision protocols, but some might argue that over the past few years for which data are available, community supervision rates have steadily declined. Such a statement is correct, for between 2007 and 2013 the number of adults under community supervision declined from 5,119,300 to 4,751,400, a drop of approximately 7%.6 Most of this drop can be attributed to a drop in the number of probationers (about 95% of the total drop). However, the reason for this drop is unclear: are fewer people being punished at all, or are more people simply being incarcerated rather than put on probation? <br />
<br />
In 2012 about 67% of states, including the District of Columbia, experienced a decrease in probationers, with Georgia, Michigan, New York and North Carolina accounting for 51% of the decrease; 33% of states reported an increase in probationers, with Washington, Ohio, Tennessee, and Idaho accounting for about 50% of the increase. The parolee population decreased slightly in 2012 as well, with the increase and decrease split between the states: Pennsylvania, Texas, and the federal system accounted for 55% of the increase while California alone accounted for 72% of the decrease.11<br />
<br />
A positive trend is that between 2008 and 2012 the rate of incarceration among probationers, regardless of cause, be it a new offense, revocation or another reason, gradually declined from 6% in 2008 to 5.1% in 2012. It is important to note that the decrease from 2011 to 2012 was from 5.5% to the aforementioned 5.1%, so the 5.1% may not hold when the 2013 and 2014 data are analyzed; based on initial data the downward trend has held, but not the rate of decrease.11<br />
<br />
Also, it is unclear what has caused this decrease: budget cuts, more responsible probationer behavior, fewer inherent restrictions to violate, etc. A similar, but smaller, downward trend in reincarceration was seen for parolees from 2007 to 2012 that flattened out in 2013, with a similar lack of a clear reason why, although California again drove the decrease for parolees. However, it is worth noting that most of this decline came from drops in revocation rates versus drops in the commission of new criminal activity.<br />
<br />
Another short-term positive is that only 35% of those who became parolees did so through mandatory release from prison versus 54% in 2008, marking the fourth consecutive year of decline.11 Not surprisingly, discretionary release rose to 41% to account for some of this decrease in mandatory release.11<br />
<br />
While the above information is positive, there is a concern that the trend is more dependent on the global recession that began in 2008, which ravaged state budgets, forcing more releases from prison and more creative "solutions" beyond probation and parole, like fines and community service. Speaking to this concern, although 2012 did see a decrease in community supervision, that decrease was smaller than the decrease in 2011 (i.e. the rate of decline slowed), and the decrease shrank again between 2012 and 2013.6,11 This result may be a blip in the trend or the start of a new trend as state budgets normalize, having generally recovered from the recession. <br />
<br />
Returning briefly to the question of whether fewer people are being punished at all or more people are simply being incarcerated rather than put on probation, some believe it is the latter. This belief is based on the idea that, for some unknown reason, perhaps political, local district attorneys have become more aggressive at charging individuals with crimes that result in longer jail sentences, thereby making probation less likely.2,3<br />
<br />
When looking at all of the issues surrounding the criminal justice system in the United States, one of the easier areas in which to make positive advances is community supervision, especially parole. One of the key areas of parole is a change in mindset with regards to its application: the public must understand that no system is perfect, therefore the goal should not be to completely eliminate the prospect of criminal activity from parolees, but to reduce it through effective decision-making and management. A part of this effective decision-making is to apply appropriate restrictions on parolees based on their previous criminal history and psychological profile, not a broad "one size fits all" mentality. Administering unnecessary and broad restrictions will more than likely produce more harm than good, both for the community and the parolee. Overall, while addressing the issues within community supervision will probably not produce the savings boon that various prison and criminal justice reformers seek for the criminal justice system, it would be an important step toward serving appropriate and fair justice.<br />
<br />
<br />
Citations – <br />
<br />
1. Henrichson, C, and Delaney, R. “The price of prisons: What incarceration costs taxpayers.” Federal Sentencing Reporter. 2012. 25.1: 68-80.<br />
<br />
2. Pfaff, J. “Waylaid by a Metaphor: A Deeply Problematic Account of Prison Growth.” Mich. L. Rev. 2012. 111:1087.<br />
<br />
3. Pfaff, J. “The Myths and Realities of Correctional Severity: Evidence from the National Corrections Reporting Program on Sentencing Practices.” American law and economics review. 2011:ahr010.<br />
<br />
4. Pew Center on the States. “State of recidivism: The revolving door of America's prisons.” 2011:2<br />
<br />
5. Klingele, C. "Rethinking the Use of Community Supervision." J. Crim. L. &amp; Criminology. 2013. 103:1015.<br />
<br />
6. Herberman, E, and Bonczar, T. “Probation and Parole in the United States, 2013.” U.S. Department of Justice. Bureau of Justice Statistics. October 2014. NCJ 248029.<br />
<br />
7. Ball, D. "Normative Elements of Parole Risk." 2011.<br />
<br />
8. Albonetti, C, and Hepburn, J. "Probation revocation: A proportional hazards model of the conditioning effects of social disadvantage." Social Problems. 1997. 44:124-138.<br />
<br />
9. Hawken, A, and Kleiman, M. “Managing Drug Involved Probationers with Swift and Certain Sanctions: Evaluating Hawaii’s HOPE: Executive Summary.” Washington, DC: National Criminal Justice Reference Services. 2009.<br />
<br />
10. Duriez, S, Cullen, F, and Manchak, S. “Is Project HOPE Creating a False Sense of Hope: A Case Study in Correctional Popularity.” Fed. Probation. 2014. 78:57.<br />
<br />
11. Maruschak, L, and Bonczar, T. "Probation and Parole in the United States, 2012." U.S. Department of Justice. Bureau of Justice Statistics. December 2013. NCJ 243826.<br />
<br />
Global Political Questions associated with Building a Space Elevator<br />
<br />
The idea of a space elevator has long captivated various minds since it was both theorized in scientific circles by Konstantin Tsiolkovsky and introduced into popular culture by Arthur C. Clarke. This fascination is divided between the technical difficulties associated with its construction and the optimistic returns from its successful operation. The most prominent benefit of a space elevator is the presumed dramatically lower launch cost, as travel into space would move from expensive single-use launch systems to a multi-use, consistently available space elevator. Some have attempted to dampen the optimism associated with a functioning space elevator by suggesting that a space elevator in general will not significantly affect the overall cost of space travel. <br />
<br />
These suggestions are more than likely incorrect because they commonly fail to appreciate the eventual evolution of a space elevator; the first prototype of any form of new technology is always the most expensive and least efficient. Also, the non-launch elements associated with space travel, which skeptics reference as a significant cost factor unaffected by a space elevator, should also see significant cost drops over time, though not immediately, as industries adjust to space travel being a more common occurrence than less than once per year. Therefore, those industries directly related to space travel, especially those that supply parts and consumables, will create more streamlined procedures to prepare for and supply launches. Costs will also fall through the interaction between the private and public sectors, as lower costs associated with space travel will make governments more willing to fund it, increasing the rate of private funding as well. Finally, a space elevator should be an important achievement for humanity in general if it wishes to actually leave the confines of Earth and colonize other heavenly bodies, be they planets, moons, asteroids, etc., with any level of success.<br />
<br />
While a lot of effort has been spent on the technical issues and on the back and forth over how valuable a space elevator will be, very little time has been spent on the political and secondary economic issues associated with a space elevator. This lack of attention is unfortunate because these issues are very important to the stability of a space elevator, both physically and functionally. Therefore, it is important to understand these issues and how they can be successfully managed in order to effectively influence the positive operation of a space elevator after its construction.<br />
<br />
The most talked about and analyzed secondary issue with a space elevator is protecting it from environmental damage. The list of possible threats to a space elevator is rather extensive including, but not limited to: lightning and high winds, oxygen and other atmospheric (both lower and upper) chemical reactions, radiation and electromagnetic fields, and space debris along with micro-meteors and other low-Earth orbiting objects like satellites. Concerning satellites, because every orbital plane passes through the Earth's center and the elevator sweeps around with the Earth's rotation, the elevator is expected to pass through each orbital plane twice per day, and there will be times when both a satellite and the elevator will occupy the same area at the same time, threatening a collision that would damage both the satellite and the elevator. This problem is not viewed as critical because operating satellites commonly have a means to generate slight course corrections that can be used to avoid these potential collisions. Non-operational satellites and other space debris are more complicated for they cannot make any adjustments. <br />
<br />
Meteoroids, especially micrometeorites, are even worse than space debris for they are much less predictable. Impacts from micrometeorites are almost guaranteed, forcing one of three possible strategies: 1) deploying some form of shielding that could absorb the damage and then somehow regenerate itself; 2) designing a different system for elevator continuity beyond the more conventional ribbon design, one example being the Hoytether, a network of strands in either a cylindrical or planar arrangement with multiple helical strands; 3) creating an autonomous repair system to manage the various points of damage. <br />
<br />
One common and almost universally agreed upon strategy for minimizing the damage potential from orbiting objects is to anchor the space elevator on a mobile, controllable base like a large ship or ocean-going platform. By making the anchor point mobile, it should be easier to avoid negative weather patterns as well as non-controllable orbiting objects. Most want this platform in the Eastern Pacific Ocean due to its relatively calm winds and low probability of lightning. Using non-conductive fibers and small cross-sectional areas that rotate with the wind can provide additional protection. Issues associated with ice formation have been a little more troublesome due to weight considerations. However, all told there may be some meaningful problems with this moving anchor strategy that are not discussed by its proponents, which will be highlighted later.<br />
<br />
There is some question as to whether or not oxygen corrosion in the upper atmosphere will actually be a significant problem. One way to test the problem potential of oxygen corrosion could be to send various potential elevator materials to the International Space Station and expose those materials to the appropriate conditions for extended periods of time. If corrosion is a problem then either the tether must be made from a corrosion resistant material like gold or platinum or be coated with such a material. Finally, actual repairs to the space elevator are somewhat ambiguous, with space elevator supporters simply reporting that there will be special repair climbers that handle this issue. However, it does not lend much confidence when it simply must be assumed that once construction is completed sufficient knowledge will exist to design these repair climbers. <br />
<br />
Overall the previously mentioned issues may be the easiest ones when dealing with a space elevator. Very little work has been done on the political issues associated with the operation of a space elevator. For example, suppose country A builds a space elevator: what would be the procedure for allowing another country, group or individual to launch something into space? Will the only requirement be the ability to pay some monetary sum established by country A? If so, would that allow a group like Hamas or ISIS to launch something into space?<br />
<br />
These are important questions for multiple reasons, but most notably pertaining to the potential weaponization of space. Note that for the purpose of this discussion the term "weaponization of space" will mean: "the placement of a device in orbit that can directly destroy, damage or disrupt the normal functioning of one or more objects within the confines of Earth." Some individuals would argue that space has already been "militarized" due to the use of satellites in military operations, but space has yet to be "weaponized". Also note that this definition of "weaponization of space" does not include attacks against orbiting objects like satellites, for such potential already exists, as demonstrated by the U.S. and China and thought to be possessed by Russia as well.<br />
<br />
International agreements concerning space have been few and far between and are commonly negotiated in the United Nations. The first, and due to actual ratification still governing, agreement regarding international relations within space is the Outer Space Treaty, negotiated in 1966 and officially signed by the United States, United Kingdom and Soviet Union in 1967, followed by all other major space "powers". Unfortunately the Outer Space Treaty only notes broad legalities in association with space, like no national appropriation through claims of sovereignty, state responsibility and liability for actions in space or damage, peaceful intent in interaction with celestial bodies, etc. While placing nuclear weapons in space is explicitly forbidden, there is no explicit prohibition of other types of weapons.<br />
<br />
More extensive and specific attempts at an international agreement regarding the issue of weaponizing space have been put forward, most notably the two versions of the "Treaty on Prevention of the Placement of Weapons in Outer Space and of the Threat or Use of Force against Outer Space Objects" (PPWT) by Russia and China, but the United States has rebuffed these attempts citing security concerns over possible space assets, the lack of a verification regime and the lack of provisions that would directly prohibit possessing, testing and stockpiling weapons that could be placed in outer space. One might question the rationale behind this rejection, especially the issue of space assets, because Article V of the PPWT explicitly places no restriction on the right of self-defense in accordance with Article 51 of the Charter of the United Nations.<br />
<br />
Nevertheless the General Assembly of the United Nations has passed two resolutions regarding the prevention of arms in space. The first resolution called on all States to contribute to the peaceful use of outer space, prevent arms races there and refrain from actions contrary to this major objective; it passed with overwhelming support with only two abstentions (Israel and the United States). The second resolution called for the “no first placement of weapons in outer space” and had less support, despite passing, with 4 countries (Georgia, Israel, Ukraine and the United States) voting against and 46 abstentions (including European Union member states). The use of the United Nations as a go-between may need to end in favor of more direct multi-national treaties due to the general lack of respect various powerful countries show the United Nations when it takes a position opposite to that of a particular powerful country as shown in the voting results on these two resolutions.<br />
<br />
Also, if an agreement is reached, what would be the consequences for violating it, given that all of the countries that could successfully build a functional space elevator have dubious foreign policy histories? What penalties could be levied that could rebuild trust in an attempt to normalize relations if such an agreement were violated? Would the only appropriate penalty be the destruction of the space elevator, or would operational control be transferred to another party? Should the idea of a treaty be scrubbed completely, instead granting operational control of any space elevator to, ironically, the most neutral available body, the United Nations? While such a possibility could manage future problems better, how would funding a space elevator proceed if the government of country A knew that it would not retain operational control despite providing the capital, labor and technology to construct it? <br />
<br />
Apart from the issues of weaponizing space, the country that controls a space elevator will have an insurmountable economic advantage for launching objects into space. What would happen if this country monopolizes the technology, not allowing any other nations access? Can a space elevator simply be treated as any run-of-the-mill commodity? Would anti-trust or global monopoly laws be applied? Should there be an international treaty that sets a firm price for all nations in the event of a space elevator being constructed, or should the constructing country have the ability to set any price? These issues are rarely, if ever, addressed when individuals discuss a future environment with a functional space elevator. The general mindset appears to be a "utopia-esque" societal arrangement where anyone who wants to use the space elevator can use it at cost. Clearly it is difficult to envision this particular environment as one that will develop in reality.<br />
<br />
Managing the problems associated with a privately constructed space elevator could also be complicated. Referencing the previous major question of who would have access to the space elevator, suppose corporation A built a space elevator: what would stop it from allowing groups to use it that held political, economic and/or military beliefs that differed from those held by the host country? Numerous corporations have demonstrated numerous times over the years that, as long as enough money is involved, they have no moral qualms about carrying out business relationships with individuals or groups that commonly engage in violent actions against other parties, even if the reasons are superficial. So what types of laws will manage private space elevators? Should it even be legal for a private corporation to have operating control over a space elevator, given the severity of what could result from "bad behavior"? Once again, should the United Nations take over operating control of the space elevator with all revenue going to corporation A?<br />
<br />
With all of the above issues, if any individual or group wants to take the possible construction of a space elevator seriously then the international community must establish guidelines, rules and agreements that address these issues, especially the issue of access. Access is the most important element because it will establish the general expectations regarding how society will utilize the space elevator to evolve, in either a positive or negative manner. Without a binding and known understanding on these issues, the probability of successfully constructing a space elevator drops dramatically because uncertainty will more than likely drive some party with the capacity to do so to work against the construction process. Basically, if a given country does not know whether or not it will get access to a space elevator, it may resort to violence to ensure the elevator is never completed.<br />
<br />
The issue of potential violence speaks to the location of the space elevator. As noted earlier, one of the more popular strategies for locating a space elevator is placing it on a movable anchor, most likely a ship out in the Pacific Ocean. What type of protection should this ship have to ensure the safety of the elevator? Would this ship need to house and feed a police force? Would this ship need some form of anti-aircraft defense system? What type of no-fly zone and no-sail zone, if any, would encompass the ship? While placing the ship in international waters would eliminate any direct issues of jurisdiction with a single country, it would also eliminate a number of obstacles to launching an attack against the ship, for attacking a ship in sovereign waters may represent an act of war, which would prevent some parties from actually launching an attack. How maneuverable would the ship be if it had to engage in combat, for sharp movements may create shear and tensile stresses on the elevator, causing meaningful damage?<br />
<br />
Another issue that must be addressed in association with a space elevator is how to manage space debris. The successful operation of a space elevator could dramatically increase the number of objects in low Earth orbit (LEO) or even geosynchronous orbit (GEO), which will increase demands on available orbital space as well as provide additional threats of damage to the space elevator. What type of international accord will govern the procedure for managing space debris? <br />
<br />
The most significant authorities regarding space debris are Article VIII of the Outer Space Treaty, which states that all countries retain their ownership rights over all objects they launch into space even if those objects are no longer functioning or are pieces broken off of existing functional objects, and the 1972 Convention on International Liability for Damage Caused by Space Objects. There is no salvage aspect to space objects, unlike oceanic objects, which are covered by maritime law. Thus for any country or agency to interact with non-functional satellite A, it needs legal consent from the launching nation. The biggest problem with this current standing is that small objects that break off of a satellite or other larger space object, with no functionality at all, are still considered owned by the launching nation; thus technically, to remove these objects their origin source would have to be identified, making legal removal difficult. <br />
<br />
One way to deal with this issue is for all "space" nations to reverse the legal standing of space objects. Basically, instead of country A retaining legal standing over all launched material and its resultant components, country A would need to explicitly state which space objects it holds legal standing on; thus if no chain of custody could be established for a given object then no country could have a claim on that object and it could be freely removed by an appropriate party.<br />
<br />
The two most commonly proposed removal methods for space debris are: 1) moving the object to a "graveyard" orbit where it will be unable to interact with functioning satellites; 2) launching a projectile at the object to remove it from orbit and return it to Earth. An operational space elevator would ease the obstacles associated with these two methods as well as possibly provide a third removal method involving attaching the object to a climber and transporting it down to Earth on the elevator itself.<br />
<br />
It is also worth noting that Article VII of the Outer Space Treaty covers liabilities: strict liability standards exist for space objects that cause damage to the surface of the Earth or to aircraft, and fault-based standards are assigned for damage occurring at non-Earth based locations. This liability would have to be transferred to any organization responsible for removing these objects. Unfortunately for those desiring a competitive marketplace for debris removal, the best strategy would actually be limiting all removal activities to the controlling operator of the space elevator, due to this group possessing the most relevant knowledge and access. Competitors would not have access to the elevator and their strategies for removal would typically be more risky. Flat and fair rates should be charged for debris removal.<br />
<br />
Due to the increased ease of removing debris, would it be appropriate to require each country to replace all satellites older than x years (x to be determined by an international agreement), including all associated parts at cost, before allowing the use of the space elevator? Basically, with the development of a space elevator would countries be able to launch as much as they could afford, or would each country have a specific quota based on some factor (size of economy, maybe) that could even be bought/sold/traded? <br />
<br />
Overall there are a number of important political and diplomatic issues that have yet to be discussed, let alone resolved, regarding the construction of a space elevator. One might suggest that discussing these issues is akin to putting the cart before the horse, for the technology to construct a space elevator is still in its basic infancy; however, that fact highlights the necessity of discussing these issues, for if they cannot be successfully managed and resolved then the construction of a space elevator would produce wasted effort and resources. Managing the political issues goes hand in hand with managing the technical issues for successfully operating a space elevator, so it is important that all aspects of a space elevator be discussed in realistic terms rather than through some dreamy utopian ideal.<br />
<br />
The "Cost" of Morality in Society<br />
One of the interesting aspects of how society has developed involves the apparent evolution of morality and its role in society. It would be reasonable to conclude that the formation of an individual’s moral beliefs is mostly derived from two sources. First, as a child, individual morality is heavily influenced by parents along with the culture/traditions of their environment. Second, as the child grows the influence of these initial defining factors can increase or decrease as life experience supports or challenges those original beliefs. Therefore, an individual’s morality is largely defined by the morals of parents/community and how life experiences interact with those initial drivers.<br />
<br />
While some may argue the finer points, humans like to believe that they reside in a society built upon the idea of a meritocracy, in which an individual can become successful regardless of upbringing or circumstance by simply working hard and/or smart. However, for such a belief to represent reality instead of a mere false perception of reality, society must adhere to a specific set of rules to ensure that this ideal is met. Thus, the development and administration of morals for a society as a whole is different from that of the individuals who comprise it because there cannot be variance in their application. Basically, society must have one set of rules that is enforced universally for the idea of a meritocracy-based society to have any level of validity. Note that this condition is not the only element required to establish a legitimate meritocracy, but it is one of the numerous conditions that are required. <br />
<br />
Unfortunately the law itself does not singularly define morality in a society because those who comprise society directly influence the law, both in its development and its enforcement. With this in mind it is important to understand how individuals react to violations of the law, i.e. the moral code of society. This understanding can be difficult to acquire because moral structures are frequently mischaracterized or misinterpreted. For example, one of the most famous "moral" structures is The Golden Rule: do unto others as you would have them do unto you. However, nowhere within The Golden Rule does it actually say that one must or even should be altruistic or fair to others. If an individual does not care about the prospect of being screwed over in his/her relationships and interactions, then that person can screw people over as many times as he/she wants and still be in accordance with The Golden Rule. The quid pro quo nature of The Golden Rule demonstrates a murky issue regarding morality in society.<br />
<br />
Another critical component of The Golden Rule is the idea of reciprocation. Negative actions are only relevant to The Golden Rule if another party can act in response to the pronounced negative action. Basically Person A is free to screw over anyone he/she wants if no one is able to retaliate. This realization is critical to the very notion of justice. For there to be justice an entity must exist that produces a certain morality and has the power to enforce that morality. In a society that entity is society itself, so when society has a fractured morality the ability to execute justice becomes more difficult and less certain. Therefore, it is important to ask how society responds to immorality in society.<br />
<br />
When the public concludes that an individual has committed an immoral act(s), a vast majority of the time that individual responds in one of three ways. First, the individual acknowledges the immoral nature of the action, apologizes for it and commonly professes to be more vigilant in the future regarding these types of issues. Interestingly enough, the public seems amazingly forgiving, especially to those in power, be it benign power like celebrities or real power like politicians. Such forgiveness might be misplaced depending on how aware the offender was of the original immorality of the action, for immoral actions that demand a public apology to society are rarely "mistakes". Sometimes the individual in question really is genuinely sorry and does live up to the vigilance pledge, while other times they are not genuine and are simply attempting to minimize the detriment associated with their malfeasance. <br />
<br />
Second, the individual holds steadfast to the idea that the action is not immoral and either ignores the characterization or tries to explain the action based on his/her analysis of the action and the motivations behind it. This action typically generates polarization between those who agree with the explanation or support the individual in general versus those who do not because they believe that the action is immoral and due to the lack of acknowledgment of its immorality the action will more than likely be repeated. Sadly this decision appears to be the most commonly selected among the three because the individual recognizes this split, which limits the power available to impose consequences on the individual for the action. Basically instead of admitting to doing something wrong the individual claims to have done nothing wrong.<br />
<br />
Third, the individual defends the action by citing similar or worse actions that have been taken by other individuals in the past, making an effort to limit the “severity” of their violation. This strategy is commonly used by politicians and their defenders and sometimes falls under the understanding of “it’s not a big deal because everybody does it”; yet this strategy is inherently counterproductive and foolish. The main problem with this strategy is that the initial action is never actually justified or explained in a moral context; also the action is indirectly confirmed using “hypothetical” preambles like, “even if I did it…” Why would one attempt to lessen the presumed severity of an action if one did not take that action and did not believe its perceived morality to be controversial? <br />
<br />
Furthermore, not only does the individual indirectly admit to committing the questionable action, but a rational bystander observing the situation can only come to one conclusion. That conclusion is not "Oh, that is why that action was taken, I understand now (agreement or disagreement follows)", but instead "Oh, so you are an immoral scumbag, but according to you individual C is also an immoral scumbag". Thus, society is given not a rational explanation for individual A's actions followed by appropriate consequences, but a battle on the scales of immorality. Under rational analysis this strategy is clearly flawed, so how is it that politicians are still able to get away with criticizing the morality of their opponents to explain their own moral shortcomings?<br />
<br />
Avoiding the easy answer that society does not function rationally, one important possible explanation for the lack of consequences for numerous violations of morality is that whether or not society cares about an individual's morality is subjective. There are telling signs that modern society has reached an impasse between morality and success. For example, is there any real advantage to being moral if society views you as a successful individual? <br />
<br />
There appear to be two major advantages that stem from moral behavior and the resultant "moral" characterization given to such an individual: 1) moral individuals tend not to violate social norms and the law, which significantly reduces the probability of criminal and civil action against them; a secondary element to this point is that moral individuals are rarely swindled, speaking to the old adage "you cannot con an honest man"; 2) moral individuals seem to have inherent advantages when cultivating allies for social and economic proposals, largely based on perceived trustworthiness. <br />
<br />
Unfortunately it could be argued that for rich individuals neither one of these advantages is meaningful. Simply looking at numerous examples in the criminal justice system demonstrates that the probability of being successfully prosecuted for a crime is inversely proportional to an individual's net worth; successful individuals typically have more wealth than the average individual and are more difficult to prosecute for their transgressions, thus heavily limiting the first advantage of being moral. Also, with large amounts of money and resources, even if another swindles a successful individual, the losses are typically insignificant.<br />
<br />
Also, due to the fascination and allure most members of the general public have towards success and wealth, rich and successful individuals have far less trouble recruiting allies to their personal crusades, through either their utilization of resources or their perceived charisma. Thus having money and success can achieve the advantages associated with moral behavior via different pathways. However, having money and success also produces other meaningful advantages for individuals that are not associated with moral behavior. Further troubling is that behaving in a moral manner creates obstacles to becoming successful, for it restricts passage along the shorter, less scrupulous paths to acquiring success. It is much easier to swindle someone out of 5,000 dollars, either directly through fraud or indirectly through influencing public policy, than to work 250 hours at 20 dollars an hour to gross 5,000 dollars. <br />
<br />
Therefore, with the simple understanding that morality and success overlap in the same advantages, that additional advantages are associated with success alone, and that there is potential conflict between morality and success, for a number of individuals immoral behavior is justified in the attempt to achieve success. Achieving success is the critical element for the viability of immoral behavior, for while society tends to look the other way regarding the moral transgressions committed by successful individuals, either in the pursuit of success or after achieving success, if an individual fails to become successful then society looks to punish the individual for those transgressions. In some respects modern society views moral behavior under a lens of "the ends justify the means." <br />
<br />
So what drives an individual to commit an action that could be regarded as immoral? For the individual in question an immoral action can be justified in one of two ways: 1) psychological defense mechanisms are applied that allow that individual to perceive the action as moral and/or justified; 2) the individual does not care about the morality of the action and simply takes it to produce some form of advantage that gets him/her closer to becoming successful. Interestingly enough, a number of individuals apply both methods, first using psychological defenses and then qualifying the defense with an "ends justify the means" attitude to support achieving the advantage through the immoral behavior. <br />
<br />
The second "justification" has multiple iterations: some experience a slippery slope evolution, starting with small violations that are more justifiable and slowly increasing their tolerance for justification, whereas others simply invoke the "ends justify the means" attitude from the beginning. To investigate this slippery slope element more, largely because it is the one actually worth investigating (those with a large-scale "ends justify the means" attitude are simply insecure fools), why does an individual speed when driving? <br />
<br />
Clearly moral behavior involves not violating the law, but many people each day elect not to be moral, so how do they justify such a decision? Looking at morals in general, the problem with morality seems to be that people tend not to associate many tangible or even intangible rewards or gains with being a moral person. In addition to the perceived lack of advantage to being moral, individuals will frequently reason that they also give up something to be moral, the gains that would come from not being moral, i.e. the perceived shorter pathway to success.<br />
<br />
Using the speeding example, suppose there are two individuals, John and Smith, who both travel to work approximately 63 miles away, with 60 of those miles on an expressway with a 55 mph speed limit. John elects to follow the speed limit of 55 mph whereas Smith decides to travel at 65 mph. In this example, by being moral and following the law John loses about 10 minutes relative to Smith in extra travel time. Of course there are consequences to being immoral, for if Smith is caught in violation of the law by an appropriate agent, Smith not only loses the time he would have gained by breaking the law, he will also lose additional time and be penalized financially. Also, Smith increases the probability of getting into an accident of some sort. So with these potential consequences, why does Smith elect to be immoral? Smith would more than likely use a cost-benefit analysis with an associated severity and certainty of consequence analysis. Does such a methodology cheapen morality? <br />
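<br />
For reference, the roughly 10 minute figure follows directly from the numbers in the example, since only the 60 expressway miles differ between the two drivers:<br />
<br />
<pre>
# Quick check of the travel-time figures in the example above: 60 expressway
# miles at 55 mph versus 65 mph (the remaining 3 miles are identical for both
# drivers and are ignored).

expressway_miles = 60
john_minutes = expressway_miles / 55 * 60   # ~65.5 minutes at the speed limit
smith_minutes = expressway_miles / 65 * 60  # ~55.4 minutes at 65 mph

print(round(john_minutes - smith_minutes, 1))  # ~10.1 minutes saved by speeding
</pre>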
<br />
In a cost-benefit analysis morality could either be considered a benefit or a cost depending on the overall characterization of the action. If the considered action is in-line with the general character of the actor then morality will be viewed as an intangible benefit because it will help solidify that particular trait. If the considered action is opposed to the general character of the actor then morality can be viewed as an intangible cost because it could challenge any developed morality of the individual. The cost classification of morality can change if the individual changes his/her values, something that may happen with certain immoral actions to compensate for taking those actions. Not surprisingly the comparison between morality as a benefit versus a cost tends not to be equal because typically in human psychology positive elements are overestimated in their importance and negative elements are underestimated in their importance, which applies significant bias to this analysis.<br />
<br />
What rationalization does an individual use to reduce the significance of morality in the decision-making process? One common strategy is the 'white-lie' rationalization. The decision-maker simply isolates everyone else from the consequences of the decision, typically with the reasoning that taking the action will not hurt anyone. For example Smith may elect to speed when traveling alone because he will be the sole receiver of any potential benefits or consequences. Given that highway statistics and basic physics both show that the faster a vehicle is traveling when it collides with another vehicle, the greater the probability of fatalities, this “I am the only one bearing responsibility for speeding” reasoning is clearly flawed. <br />
<br />
However, Smith may hold on to this flawed reasoning because of what he determines to be a small probability of an accident occurring, thus the more probable benefits and consequences still remain reserved for him and him alone. Of course a simple severity argument removes any remaining reason for Smith to speed in a typical situation because, although the probability of an accident is low, the severity of the result more than eclipses any time benefit acquired by speeding in the first place, especially since the use of the saved time will generally be irrelevant. For example the additional 10 minutes of time that Smith saves each day in transit will commonly be squandered on some unnecessary and superficial task; the acquisition of the additional time serves no real benefit, thus prioritizing the severity of the consequence over its certainty because the benefit is meaningless; i.e. there is additional risk for only superficial reward. <br />
<br />
So what can be done to address the waning value of morals in modern society beyond writing analyses of the flaws in the logical processing of advantage over disadvantage similar to that seen above? One option is to increase the rate of punishment for rich individuals based on the presumption that, because the value of immoral action lies largely in increasing the probability that one becomes successful, the more successful an individual already is, the less reason that individual has to behave immorally. Therefore, immoral behavior by wealthy individuals can be viewed as more severe than immoral behavior by poor individuals. Interestingly enough, such a mindset would be almost the opposite of the current popular one, for the transgressions of poor people seem to be amplified more in society than the transgressions of rich people. <br />
<br />
The immediate problem with such a strategy is that executing a more severe punishment against individual A than individual B for the same infraction solely on the basis of income differential is not indicative of a fair and practical criminal justice system. Fortunately, increasing punishment to the rich and successful can be a viable strategy by simply ensuring that lawbreakers are punished justly. Basically, if the criminal justice system actually lived up to the ideal of being fair and practical, successful individuals would have a higher probability of being punished for their transgressions than under the current system, which produces unfair advantages for the rich and successful.<br />
<br />
In addition, crimes associated with avoiding the investigation of the truth behind an action, most notably perjury and obstruction of justice, should have increased penalties versus those currently enforced in society. One of the principal ways individuals avoid prosecution for their crimes is by committing these two offenses in an effort to limit the ability of the criminal justice system to produce sufficient evidence to convict, and rich/successful individuals have a higher probability of executing these strategies due to their additional resources and contacts. Increasing the penalties associated with perjury and obstruction of justice will at least reduce the probability that individuals engage in these tactics and make punishment for such action meaningful against those who still choose to take them.<br />
<br />
Also, society must reduce the allure of and admiration for the rich and “celebrity” in general, for such a change will reduce the tendency to blindly follow the ideas of rich individuals solely because they are rich. Furthermore, society must acknowledge the value of morality by applying associated pressure to wrongdoers. While the adage of “everyone deserves a second chance” is fine and appropriate, the number of chances one seems to get from society is directly proportional to one's level of success: the richer someone is, the more immoral behavior is accepted, both in magnitude and in frequency. Society must change this perception, no more “fourth, fifth, sixth, etc.” chances. <br />
<br />
Finally, the societal attitude regarding success and the allowed lack of morality in its pursuit is interesting in light of the frequent complaints heard regarding the number of individuals incarcerated in this country. It should be of little surprise that there are so many people in jail, because society has created a flippant mindset regarding the law regardless of the magnitude of the crime. When looking at the number of individuals in jail, very few have been convicted of crimes they did not commit, thus they are criminals. This creates an element of hypocrisy, because one cannot complain about the number of individuals in jail and yet not argue against the “succeed at any cost” attitude that society has developed. <br />
<br />
Overall society has two paths to choose from: 1) accept society as it is now and the simple fact that such a society reduces the value of morals and increases the probability of significant divisions between classes and races, which will also inherently result in more criminal activity (whether or not this criminal activity is prosecuted remains to be seen); or 2) reject this aspect of society and seek to eliminate the advantage cross-over between morality and success, thus at least restoring the intangible character value of morality to society, which should have a negative effect on criminality. Unfortunately, as it currently stands, the idea of hoping that morality somehow wins out in the end over the pursuit of success is a pipe dream; society must decide what it values more, and if it wants to view itself as a meritocracy, where success is determined by an individual outperforming others under a consistent set of rules, thus making that success matter in any real psychological sense, then morality must win out.<br />
<br />
The Politics of Money in Politics<br />
The recent announcement that Lawrence Lessig was exploring the idea of running for president raises two interesting issues. First, there is the principal reasoning behind his interest in running for president: he feels that the present system of democracy in the United States has been flawed for some time now and that other methods have not produced the desired results in remedying these flaws. As Mr. Lessig tells it, these flaws are largely born of the Citizens United Supreme Court ruling in 2010, which changed the political environment to basically allow an infinite amount of money to influence the democratic process in every election. Due to this new influx of money a number of individuals, including Mr. Lessig, believe that the principle of power equality that is representative of an indirect democracy has been lost, resulting in the very real possibility that democracy in the United States could transition into an oligarchy.<br />
<br />
Mr. Lessig’s concern about this issue is so significant that it raises the second issue, the very nature of his tenure as President. For Mr. Lessig the importance of maintaining democracy should exceed everything else, but he believes, and justifiably so, that the existing field of presidential hopefuls will be unable to focus exclusively on this issue as they would have a number of other domestic and foreign policy issues to address as well. Thus, Mr. Lessig’s candidacy and resultant presidency would be similar to a referendum. His entire platform is that he will devote the entire focus and power of his presidency to ensuring the maintenance of democracy, which will largely involve eliminating the mass influx of money into the political process either through the repeal of the Citizens United ruling or another method. After accomplishing this goal Mr. Lessig would resign as President, leaving the remainder of his term to his Vice President. <br />
<br />
The more important of these two issues is whether or not Mr. Lessig is correct to view the unlimited influx of money into the political process as a chief threat to democracy. The trademark notion of a democracy is “one person one vote”, implying equal influence from all voting parties regardless of position or standing. There has been no change in this practice regardless of the level of money committed to a given election cycle. However, some would argue that the evolution of the political system in the United States has created an environment where any elected position of significant consequence demands a large amount of money to purchase advertisement and conduct other publicity activities in order to have a reasonable chance at winning. This monetary demand places an additional incentive on potential candidates to abide by the wishes of those who have the ability to donate large sums of money on multiple occasions. Also, a greater influx of money may influence the candidate pool, keeping individuals who might otherwise run for a position from doing so under the belief that they could not raise enough money to be competitive. <br />
<br />
So the question boils down to how much of an influence money has on the ability of an individual to be elected in a given political race. Clearly there have been no significant cases of individuals literally selling their votes, that is, an arrangement made between a voter and a supporter of candidate A whereby said voter will vote for candidate A for 50 dollars. Therefore, if money is not used to directly “purchase” votes, what purpose does it serve in an election? The principal purpose of money in an election is to maximize information distribution for a given candidate. Basically the real advantage of candidate A having more money than candidate B is that it allows candidate A to take advantage of the interest and time limitations possessed by the electorate. <br />
<br />
For example, instead of depending on a potential voter taking the initiative to look up the official position of candidate A on issue Y, spending money allows candidate A and his/her supporters to present the position of candidate A on issue Y directly to the voter, via some form of media advertising, be it television/radio/print/Internet, or via direct interaction with a candidate A supporter. In addition to significantly increasing the odds of potential voters knowing the position of candidate A on issue Y, the fact that candidate A and his/her supporters are creating the delivery mechanism of the information allows them to frame the information in such a way that, if desired, the core message could be prone to misinterpretations or even outright lies that favor candidate A. This tactic can also be used against competitors, framing their positions in a way that makes them less attractive to voters. <br />
<br />
The next question is how important this information capacity is in an election. This issue has two different parts: first, how valuable is information in an election, and second, how much information is available? Starting with the second part, in the Internet era for modern developed countries there is little ability to “bottleneck” information or control the information stream. Gone are the days when someone could simply spend enough money or favors to shut out another candidate’s message altogether. The principal advantage of money with respect to this second part is the ability to saturate information on all forms of delivery systems: television, radio, Internet, hiring people to “spread the word” in public areas, etc. However, money is not the limiting factor controlling the actual ability to distribute information; it simply allows for the more efficient spread of that information.<br />
<br />
Even though money is not a limiting factor controlling the basics of information distribution in a political campaign, is it a critical factor that can dramatically increase the probability of winning? This question is the central question in the first part of the importance of information capacity: how valuable is information? The value of information in a political election is almost exclusively associated with its ability to produce votes for the candidate. Voters will decline to vote for candidate A for one of two central reasons: 1) the voter does not have information pertaining to candidate A either as a person and/or regarding his/her political positions; 2) the voter’s political values and/or social values are significantly different from those of candidate A. <br />
<br />
In the first scenario the value of information is important, for on the most basic level (not taking into consideration the specific characterizations of the candidate and the potential voters) there is a greater likelihood of an individual voting for candidate A if candidate A is known than if candidate A is not known. While it is certainly possible that a voter will not vote for candidate A after learning of candidate A’s political/social values, it is also possible that the voter will vote for candidate A. Therefore, the behavior of the voter changes from a base low value (typically involving whether or not the individual will vote in the first place) to either a slightly lower value (disagreement with the newly understood positions of candidate A) or a significantly higher value (agreement with the newly understood positions of candidate A). Overall, both logically and practically, it makes sense to inform voters regarding the important positions and traits of candidate A.<br />
<br />
However, it must be noted that the importance of dispelling anonymity is inversely proportional to the scope of the election because of the validity of that anonymity. Basically, if candidate A is running for a position on the School Board for Smith County there is a good possibility that candidate A will be unfamiliar to a number of potential voters because the perceived importance and scope of that position is small, thus information about candidate A is important to dispel that lack of knowledge. On the other hand, if candidate A is running for one of the two U.S. Senate positions representing the state of California, it is highly unlikely that potential voters will be unaware of the important elements, both political and social, representing candidate A. Note that social elements must be included when discussing information distribution because a number of voters vote not on the political issues supported by a candidate, but on whether or not they like the candidate, which could have little to do with the candidate’s political positions.<br />
<br />
In the second scenario there is little money can do to produce votes for candidate A. If voter y is aware of the political positions and social standing of candidate A and his/her personal viewpoints are in opposition to candidate A’s positions, then further information distribution is basically a waste of resources. The immediate question regarding the above statement is why the distribution of counter-information has so little influence that it can be so readily considered a waste of resources. <br />
<br />
There are two significant reasons for the above statement: <br />
<br />
1) In recent years, in large part thanks to a loud and more radicalized Conservative movement and, to a lesser extent, a similar Progressive movement, voters in general have become much more polarized on a wide breadth of political issues, creating a hostile environment for ideas that run counter to these opinions, thereby further shrinking an already small middle ground of “convincible” potential voters. In fact there are even more party-line voters and single-issue voters with mindsets so etched in stone that even when valid empirical evidence suggests that mindset is not accurate they ignore that empirical evidence. Basically, in general there are more individuals who are less likely to even listen to a viewpoint that opposes their personal viewpoint, let alone debate the fine points of either viewpoint, than there have been in the past; <br />
<br />
2) Political insidiousness and the desire to retain power have resulted in the gerrymandering of various Congressional districts, which has also been indirectly related to the general breakdown of diversity within a number of established communities, creating more homogenous neighborhoods and leading to the production of group-think single-party voting blocs. Due to the presence of these voting blocs it is very difficult for opposing ideas to establish any meaningful foothold, especially due to the greater polarization of political environments as mentioned in reason one. These areas are a significant reason why winning percentages are so high for incumbents.<br />
<br />
The above discussion produces an interesting question for Mr. Lessig’s position that the potential influence of unlimited money is the principal threat to the equality of democracy (i.e. a representative democracy that represents each person equally). If theoretically money has no direct influence and little indirect influence on acquiring votes and in practice political science studies have produced conflicting results on the total value of money in an election, can the potential influence of unlimited money in elections really be viewed as the principal threat to democracy? <br />
<br />
Another concern with studying the issue of corruption via money is what process is used to determine whether a lawmaker is simply voting on their personal ideals (candidate A voting in favor of tax breaks for corporation W because he (stupidly) believes in the validity of supply-side economics) versus voting against those ideals to fulfill a Faustian bargain with a corporation (corporation W donated 1.5 million dollars to his previous campaign and plans to donate another 1.5 million to his next, so he votes in favor of tax breaks for corporation W). This important issue is rarely addressed when discussing money and its potential corrupting influence in politics.<br />
<br />
Overall one could argue that the genuine problem with money in politics is that the money is being wasted on advertising of minimal advantage instead of being spent on improving the domestic economy through investment or charitable donations. Perhaps the false perception of the advantage of money in politics is the real problem, not the actual influence of money. For example Mr. Lessig and others who share his position have noted that it takes significantly more money to be elected to a given position of government now than it did decades ago, but is this statement actually valid? Typically statements like that do not correct for inflation or for how increases in population have increased the perceived advantage of more money, which would be a “natural” occurrence. Also there have been a number of races where candidate A has defeated candidate B despite candidate B outspending candidate A by 5, 6 or even 10x.<br />
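<br />
As a rough illustration of the inflation point (using approximate Consumer Price Index figures rather than actual campaign data): 10 million dollars raised in 1990 corresponds to roughly 18 million dollars in 2015 dollars, so what looks like an 80% increase in fundraising over that span may reflect no real increase at all.<br />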
<br />
However, for the sake of argument, assume for the moment that Mr. Lessig’s point about the dangers of money is accurate. The next concern for Mr. Lessig is what can be done about it. If elected president Mr. Lessig would only have the power of the Executive branch of government with which to act against the Citizens United ruling, a branch that has little to no real power to produce the type of change that Mr. Lessig desires. One could argue that his election would produce a “mandate” to challenge the Citizens United ruling, but what real power would this challenge have? <br />
<br />
First, the idea of “mandates” is really only political theater anyway, for in the past there was some level of concession by the opposing political party with the acknowledgement that “the will/voice of the people” had spoken and it would be inappropriate to obstruct the plans of the new administration and/or Congress out of petty spite. Of course that was then; the political climate now has certainly revealed that petty spite is fashionable. Mr. Lessig is certainly aware that the Republican Party, which has taken advantage of this new environment more so than the Democratic Party, would be his main legislative opposition to accomplishing his goal. Simply “invoking” the “mandate” of his election will not be sufficient to make them allies or have them “fall in line”.<br />
<br />
Second, even if Congress did act against the Citizens United ruling, what could it do that would not be challenged in the U.S. Supreme Court by the proponents of the ruling? It stands to reason that the current existing U.S. Supreme Court would overturn any legislative action that sought to weaken the “freedoms” granted by the Citizens United ruling. It has already demonstrated this motivation to some extent in American Tradition Partnership, Inc. v. Bullock rejecting a Montana state law that limited corporate campaign contributions even after the Montana State Supreme Court ruled that the law was narrowly tailored enough that it withstood strict scrutiny. <br />
<br />
Realistically it appears that at the moment only two things will allow for the restriction of excessive amounts of money in the political system. First, a change in the political ideology of the U.S. Supreme Court and a re-evaluation of the legal structure of the Citizens United ruling regarding the potential for corruption in the political system due to the influx of money, resulting in this new Supreme Court overturning the Citizens United ruling, similar to how Brown v. Board of Education overturned Plessy v. Ferguson. Second, a new Constitutional Amendment explicitly addressing the issues associated with the Citizens United ruling, with the most popular type of amendment eliminating the ability of a corporation to be considered a “person” in the context of free speech. Outside of these two strategies, what can be done? Mr. Lessig’s emphasis on the advantage of focus, with limiting money being the only issue behind his presidency, has little meaning, for focus is not the limiting factor in accomplishing his goal; the issue cannot be resolved solely by effort and trying hard. The limiting factor is the probability of success associated with the limited number of available strategies.<br />
<br />
Another concern is the idea that a single-minded focused mandate, which the election of Mr. Lessig would represent, can be established solely because polling information reports that 80% - 85% of those polled, with little difference between political affiliations, believe that the potential of unlimited money in the political system is a big problem or “rigs the system”. Unfortunately, something the environmental movement is intimately familiar with is that just because a vast majority thinks a certain way in isolation does not mean that same majority is willing to work to accomplish that viewpoint. Basically, while 80% of those polled consistently want money out of politics, how important is it to them to accomplish that goal, i.e. will they prioritize removing money from politics over various other economic issues, foreign policy issues, environmental issues, etc.? <br />
<br />
As it currently stands, based on previous actions, these respondents and potential voters appear to think the removal of money from politics is not very important; if it were, where are the droves of candidates making the removal of money from politics their number one campaign issue because it is so important to their constituents and would dramatically increase their probability of being elected? Basically, if so many people think that money is rigging the system and that the resultant corruption is of the utmost importance to address, there should be no difficulty finding numerous candidates who will vote to eliminate money from the political process at the most stringent level allowed by law versus tying their ideals to the pocketbook of corporation y or donor z. Clearly, and unfortunately, this is not the case. On its face it appears that Mr. Lessig has fallen into the typical single-issue trap of thinking that because the issue is very important to him, it must also be, guaranteed without question, very important to a lot of other people. <br />
<br />
Some could argue that an important response is to increase the power of transparency in the contribution system by barring individuals from making anonymous donations, producing anonymous pitch material, etc. The general idea behind this belief appears to be that by creating a political environment where individuals who donate large sums of money must make those donations in a completely transparent manner and those who use the money must outline how it was used, the probability of immoral actions will be reduced, significantly limiting the overall negative influence of money in politics. <br />
<br />
The problem with this strategy is that it does not address the saturation mindset. It stands to reason that most people believe that all candidates are taking money from some form of special interest and/or large corporate donors (even the small third party ones regardless of whether or not they actually are), so no candidate is “clean”. Some could counter-argue that if potential voters are made aware of monetary donations and expenditures then they could seek out candidates who have received no money or significantly less money and characterize those candidates as “not beholden to special interests”. The concern with this reasoning is that receipt of donated money becomes a single issue. It is difficult to envision a scenario where an individual votes against a candidate that shares his/her viewpoint on a wide variety of issues if it is revealed that the candidate has taken a lot of money from special interest groups.<br />
<br />
Therefore, ‘taking money from special interest groups’ will be regarded as just one of many issues considered by a voter when deciding which candidate to vote for. Unfortunately, due to the fact that messaging and access are heavily influenced by money, it seems very probable that very few candidates will refrain from taking special interest money when it is available to them, regardless of any transparency requirements. If this scenario comes to pass then, with every viable candidate feeling it necessary to take money, the previous public psychological assertion becomes true: everyone is taking money, everyone is dirty, thus it does not matter who takes money. Certainly establishing transparency should be done because it is a logical and fair idea and will help increase the probability of more complete information profiles on candidates for potential voters; however, without offering an effective way to remove money from the system, it is unlikely that any transparency strategy will have any real positive effect regarding money in the political system.<br />
<br />
Another option put forth by Mr. Lessig, among other parties with other systems, is the idea of Democracy Vouchers, where tax rebates of a certain value (currently $50) are reserved for exclusive donation to a particular political campaign or issue. The belief is that by resorting to a law of scale, volume will be able to cancel out the influence of the high-value, low-volume donor class, which is viewed as the chief problem in the system. Unfortunately this type of plan is flawed in numerous ways. The chief flaws have already been discussed in a previous post <a href="http://www.bastionofreason.blogspot.com/2012/09/analysis-of-brennan-center-democracy-21.html">here</a>. Another potential flaw in Mr. Lessig’s personal idea is that, because the vouchers are tax-based, there would be some question as to whether individuals who do not pay taxes would also receive the $50 or be shut out. If they were shut out then clearly such a program would not be living up to Mr. Lessig’s idea of an equal representational democracy.<br />
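<br />
To put the scale of such a program in perspective (a rough, hypothetical calculation, not a figure from Mr. Lessig’s proposal): if roughly 120 million tax filers each directed a $50 voucher, the program would channel on the order of 6 billion dollars per election cycle into campaigns; that is the “volume” meant to swamp the large donors, and it is also money that would no longer be available for other public purposes.<br />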
<br />
Overall the idea of attempting to defeat “bad money” with “good money” (be it from the public, from “good PACs”, etc.) is rather foolish, both on grounds of sustainability, in that one must ask what government programs get cut each year due to the loss of billions of dollars returned to the public to “invest” in politics, and on grounds of simple practicality, for the polarization of politics has heavily limited the coordinated influence of volume politics. For example, in its initial attempt to influence the political landscape in 2014 Mr. Lessig’s personal Super PAC, Mayday, was a significant failure. Basically plans like Brennan Center-Democracy 21 Federal financing and Democracy Vouchers are more likely to exacerbate the problem of money in politics, not act as a “correcting” force, if they do anything at all.<br />
<br />
On a side note, while the idea of a “referendum president” is somewhat interesting, its general characterization can be looked upon more as a novelty than anything significant, especially because without a definitive timeline for when the resignation would take place, voter decision-making becomes complicated. For example, it is concerning to think about how a “referendum president” would handle a catastrophic domestic or foreign event. Would the Vice-President simply handle those potential events? Who would foreign leaders interact with when addressing foreign policy? Etc.<br />
<br />
Overall the idea of removing money from politics in an effort to ensure a fair democracy and minimize corruption does not appear to be an effective battle strategy for securing those characteristics. The concern is both the ability to remove money and whether or not money is actually a real problem. A fair and effective democracy is served by three essential elements: voting access, informed voters and voting power. In this country none of these elements are at what one could call “full strength”. <br />
<br />
The first element of a fair and effective democracy is ensuring appropriate voting access, where the requirements one must meet to be eligible to vote are fair, universally applied and transparent. Unfortunately this simple requirement is not being met in a number of regions; instead these areas are attempting to circumvent fairness by forcing individuals to acquire some form of government-issued photo identification at personal cost under the false pretense of preventing voter fraud. Such unnecessary and frivolous demands are much more dangerous to a fair and effective democracy than potential unlimited money because they directly influence who can vote.<br />
<br />
The second element of an effective democracy is ensuring an informed and motivated electorate. Recall that the principal role money plays is information exchange. Therefore, the best way to make money irrelevant is to create an informed and committed electorate, invalidating the purpose of money. The point of a representative democracy is that voters who vote for the winner feel that their viewpoints are being presented and fought for in the appropriate governmental body. The influence of money is only negative when that expectation is not met; when those who voted for the winner do not have their elected official arguing in favor of their viewpoints, but instead, at the behest of a wealthy donor minority, that elected official argues in favor of viewpoints that contrast with, or are not important to, those of the majority. <br />
<br />
The best way to expose this betrayal of duty is an informed and committed electorate, one that knows what it wants out of its elected official(s), not one that simply holds on to old ideas and/or votes a single-party ticket solely because the candidate has a certain letter beside their name on the ballot. If the electorate does not choose to inform themselves then it is difficult to judge whether or not money is corrupting the process; however, the electorate must be given tools to access the appropriate information. Therefore, candidates must be obligated to produce information packages regarding important issues and their stances on those issues that can be distributed via mail, posted online or kept as hard copies at government buildings and libraries. A guaranteed information source will allow voters to inform themselves in a manner free of bias or “spin”. <br />
<br />
The third element of a fair and effective democracy is currently the one most lacking of the three. Unfortunately there is a significant lack of honesty and logic in the political process, which significantly hinders the total expression of voting power. For example, a politician can make statement A to the public, but actually support an opposing position, and as long as the public is not able to discover that opposing belief in time, the politician can be elected under false pretenses. This reality is especially relevant when the position of a corporation or large political donor is in direct contrast with the position of the general public. How can voting have any real power when a politician can simply lie about his/her position until elected? <br />
<br />
Some would argue that if an elected official lies about what they would seek to accomplish, the only real response is for the public to take the philosophy of “fool me once shame on you, fool me twice shame on me” and vote him/her out of office at the next election. However, what type of display of power is that? Lie and get some number of years of guaranteed elected office? How is that fair and just? Therefore, what type of process can be used to sort out false statements? Should each candidate be expected to produce a “beliefs” contract that, if deviated from once elected, would provide just cause for removal from that position? If this occurred, what would be the process for the candidate to change his/her opinion on an issue if a mistake in reasoning were discovered? It stands to reason that a new system is needed, for clearly the existing process of recall is not sufficient to ensure the power and wishes of the majority of the electorate. <br />
<br />
Overall the potential candidacy of Mr. Lessig for President of the United States appears inherently questionable because the methodology Mr. Lessig supports for removing money from politics is unclear and the most plausible options are either not viable or are not significantly aided by Mr. Lessig being President. Incidentally, attempting to remove money from politics through a direct “limitation” by neutralizing the Citizens United ruling seems very difficult at this point in time, and without any real probability of success any attempt would result in wasted effort and resources. Instead of attempting to neutralize money through its forced removal or by countering it with even more money, focusing on neutralizing the influence of money through voter empowerment and ensuring voter influence should be a more viable way of facilitating a legitimate, fair and effective democracy.<br />
<br />
Should life in prison really be life in prison?<br />
When one considers controversy in the criminal justice system, one of two issues immediately comes to mind: 1) the death penalty, where effective arguments exist for both the pro and the con sides; 2) racism in the criminal justice system, where debate is typically over-emotional and illogical on both sides, especially from those complaining about the extent of racism; however, the widespread focus on these two issues draws attention away from other meaningful issues. One of these interesting issues that receives less attention is the question of justification for sentencing someone to life in prison without the possibility of parole. <br />
<br />
Not surprisingly there are a number of people who believe the judicial system should not have the capacity to hand down a sentence of “life without parole” (lwop). An aspect of this argument has been bolstered by three separate United States Supreme Court rulings, Roper v. Simmons, Graham v. Florida, and Miller v. Alabama, where it was held that it is not Constitutional to sentence juveniles to the death penalty or to a mandatory life in prison without parole sentence regardless of the type of crime. Emboldened by these rulings a number of individuals have attempted to advance this position further, either by eliminating lwop sentences altogether or at least by expanding the breadth of these rulings to young adults, arguing that a lwop sentence is a de facto death sentence.<br />
<br />
Furthermore, the argument goes that the general nature of a lwop sentence is not based on rehabilitation, because the individual in question is never getting out of prison; rather, it is a mixture of punishment and deterrence for other potential actors. However, the influence of this meaning is less relatable to juveniles and young adults due to their emotional and mental development. Proponents of the above position believe that time is the most relevant factor in “decriminalizing” individuals, for as the frontal lobes mature and, in men, testosterone levels decline, the probability of aggressive and impulsive behavior is reduced. Basically, time is a superior method of reducing crime probability versus hoping young people view individuals similar to themselves incarcerated for the rest of their lives and come to the conclusion “I better not do that”. <br />
<br />
In fact some may simply come to the conclusion “I better not get caught”, suggesting an age-old thought regarding crime: the certainty of punishment matters much more than the severity of punishment when considering the commission of a crime. Therefore, based on this reasoning, these individuals argue that sentencing individuals, especially the young, to life in prison without parole does not serve either society or the individual in question. <br />
<br />
Some have also argued that the deterrence factor does nothing significant to limit the occurrence of crimes of passion, for rarely do individuals calculate the benefits and consequences before engaging in an emotionally driven response. However, this argument is rather weak, for most emotional actions do not typically produce a crime that will result in a lwop sentence upon conviction. Understand that lwop sentences rarely occur outside of homicides, most notably a Murder 1 conviction, which seldom has acute emotional components, even in felony murder cases. The general pre-requisites for charging an individual with Murder 1 involve 1) premeditation; 2) willfulness; and 3) deliberation (typically with malice aforethought).<br />
<br />
The above argument regarding passion and emotion creates concern in that the chief problem with attempting to expand the “lack of maturity” argument to lwop sentences is that the crimes producing lwop sentences typically do not involve lack of maturity or emotional development as a meaningful factor. Basically, regardless of the level of social, mental or emotional development, any individual without some form of brain damage should acknowledge that the elements involved in the crimes that warrant such a sentence (vicious and premeditated homicides or homicides in the course of committing other high level felonies like armed robbery, kidnapping, etc.) are against the law and that the consequences for their commission will be severe. One does not need to be a fully matured and emotionally stable 26 year-old to know that shooting someone in the chest with a .44 is not a good thing and will be harshly punished. One of the chief reasons for a differing stance between juvenile treatment with the death penalty and lwop sentences is that the finality of the death penalty eliminates the ability to overturn mistakes in the judicial process.<br />
<br />
Another aspect of weighing lwop sentences for young single-count offenders is whether the elimination of these sentences would serve the concept of justice. For example, if 20 year-old person A murders 20 year-old person B with all of the necessary elements to justify a Murder 1 conviction, what type of sentence would represent justice? Realistically it can be argued that person B was robbed of at least 40 years of life, if not more, so should person A pay in a year-for-year context? If person A is only incarcerated for 20 years, is that justice? Basically, what type of punishment represents justice when one person blatantly takes the life of another? <br />
<br />
Some would argue that keeping Person A in jail for the rest of his/her life is a miscarriage of justice because ending Person A’s life on de facto grounds does not serve the public interest or the interest of justice; it simply steals an additional life, ruining two lives instead of one. However, the counterargument is that Person A can still have productive and positive experiences despite being in jail, something that Person B can no longer have at all. <br />
<br />
It could be argued that the deterministic aspect of “without parole” is the problem, for individuals who are sentenced to life with the possibility of parole are not guaranteed to be granted parole. Therefore, the elimination of this mandate would allow experts and individuals with intimate knowledge of specific prisoners to judge whether or not an individual remained a threat to society and whether justice had been done. Individuals who favor judicial discretion in general would agree with this position, for the two views are from similar molds. <br />
<br />
Of course the counter-position is that there are a number of individuals who have received parole after committing violent crimes, i.e. been judged no longer a threat to society, and soon after their release committed similar or worse crimes resulting in their re-arrest and incarceration. Therefore, the issue of simply revoking the very idea of life without parole encompasses the idea of certainty. Should a population of prisoners who have “turned their lives around” be denied the possibility of parole to prevent another population of prisoners from manipulating such a system to acquire release and the ability to continue their criminal enterprise?<br />
<br />
Another factor for consideration is how influential the threat of a lwop sentence is in “convincing” an individual to take a plea bargain, thus saving the state or Federal government the money, time and other resources, frequently significant, needed to prosecute a murder case. If this influence is meaningful, then the loss of lwop sentences could result in a greater probability of delayed or even lost justice, for the court system would have to deal with a greater influx of cases, creating a backlog. <br />
<br />
One of the more widely known elements supporting the elimination of “without parole” conditions on sentences is the belief that the prison system can produce sufficient rehabilitation potential. While existing track records are mixed in this regard, evidence does exist that prison provides a means for individuals to “get it” and turn their lives around. Unfortunately for supporters of the various positions surrounding the elimination/reduction of sentences there is another important element in this process, which, while it receives lip service now and again, does not receive any significant level of public or political support: how to reincorporate criminals, especially those who have been incarcerated for a long period of time, back into the economic fabric of society? <br />
<br />
This question is especially troublesome now, for while it has almost always been difficult for criminals to re-acclimate themselves to society on some level, as society currently stands there are a number of individuals without criminal records who have not been effectively incorporated into the economic framework and who will be competing with these newly released criminals. Without the ability to incorporate newly released criminals, especially those serving long sentences for violent crimes, the probability of recidivism is high, regardless of age and emotional/mental maturity. Sadly this is a question that proponents of eliminating lwop sentences largely ignore, kicking the proverbial can to the general “prison reform” crowd. This behavior is questionable, because how can one in good conscience seek to eliminate “without parole” sentences, whether for juveniles only or entirely, without addressing this important question of economic incorporation? Some may argue that it is not fair to leave an individual in jail while this issue is addressed, but is it fair to society to release people that cannot be properly reintegrated?<br />
<br />
The final major question regarding the elimination of “without parole” sentences is how to address the psychological impact of prison on an individual’s ability to live in general society. There is reason to believe that a number of inmates suffer from a form of institutionalization after a sufficient period of time in prison, which will negatively impact their ability to reintegrate themselves successfully back into society.<br />
<br />
One particular change in psychology that could be significantly harmful to reintegration is the increased level of apathy, passivity, and isolation commonly seen with institutionalization.1 One of the more stereotypical, yet still true, “rules” of prison life is to stay invisible unless you are struggling for power; doing so means keeping your head down and your mouth shut. Unfortunately society has moved to a point where it almost exclusively prefers people be loud and expressive; in fact it appears, at least in the manner of public notoriety, that the motor-mouth arrogant frequently incorrect braggart is preferred over the stoic well-meaning fact-giver. Basically what is expected for “success” in prison life versus what is expected for “success” in “normal” life is largely contradictory. So how is this situation resolved? One could require that inmates released after long periods of incarceration receive psychological assistance from trained professionals, but who pays for this service?<br />
<br />
Overall there are some important issues regarding the elimination of “without parole” qualifiers on sentences that go beyond simple age. The most noteworthy and important ones relate to the nature of justice, both in punishment and in how such a change would influence courts; how long-term prisoners can be incorporated economically into a society that is leaving behind non-prisoners at ever increasing rates; and how the potential psychological changes born from institutionalization influence reintegration. Until satisfactory answers can be produced for at least these three questions, notwithstanding other smaller more specific questions, the idea of eliminating “without parole” qualifiers in criminal sentencing seems inappropriate; remember, individuals serving these sentences are not akin to those jailed for punching a guy in a bar for hitting on “his girl” or dealing small quantities of marijuana without a license in a state where it is legal by state law, but instead were convicted of very serious crimes that almost always involved the loss of at least one other life.<br />
<br />
<br />
Citations – <br />
<br />
1. Johnson, M, and Rhodes, R. “Institutionalization: a theory of human behavior and the social environment.” Advances in Social Work. 2007. 8(1). 219-236.<br />
<br />
One Sexual Offense Fits All?<br />
It has been said that it is a “precept of justice that punishment for crime should be graduated and proportioned to [the] offense” (Weems v. United States). However, punishment for a crime is not exclusive to the domain of incarceration. For most criminals there is the social stigma of being a criminal, which significantly limits their economic, political and societal power and influence. In the case of individuals convicted of sexual based offenses this stigma is typically enhanced. While nothing can be done about the subjective stigmas assigned to criminals by other individuals regardless of the type of offense, when one looks at the administrative burdens applied to individuals convicted of sex offenses versus other types of crimes, including murder, one wonders whether or not such exclusive and additional punishment is a violation of the Eighth Amendment of the Constitution.<br />
<br />
After the period of incarceration for a sex offender has concluded, the typical administrative burdens applied to that individual encompass restrictions on residency based on the surrounding area: most notably, they cannot reside within some fixed specified distance of common areas where children congregate, like schools, daycare centers, parks, bus stops, etc.; in some situations, if such an area is constructed after the individual has established residency in a particular location, the individual will be forced to move (some states have grandfather clauses that do not require a move, some do not). In addition, sex offenders must check in with local law enforcement when moving to a new address, changing employment, changing their legal name, etc., and depending on the state have to reaffirm these notifications after a certain period of time. Finally, their names are listed on a public database for a period of time that may not be commensurate with their current relationship with their local environment. Basically, their name could be on this list 8 years after the incident that resulted in their conviction and after moving to an entirely new community in which they have lived without incident.<br />
<br />
To understand these administrative requirements one must attempt to understand their philosophical origins. Most sexually based crimes elicit a visceral and emotional reaction, typically leading to a characterization of repugnance that, strangely enough, at times exceeds the disgust one feels towards murder or other higher level crimes. The original intent of the sex offender registration list appears born, at best, from a psychological compromise to provide a level of deterrence from recidivism by limiting the available opportunities that could lead the individual to repeat such criminal action or, at worst, as an additional punitive measure because it was not legally viable to incarcerate such an individual for the period of time typically demanded/anticipated by the public in reaction to the crime.<br />
<br />
Unfortunately this compromise has evolved into a “one size fits all” punishment, moving beyond the standard judicial review and discretion that once applied. It tends to no longer take the nature of the sexual offense into consideration beyond broad “milestones”. For example, all would agree that there is a significant difference between a 19 year-old male having sex with a consenting 16 year-old female and a 29 year-old male raping a 16 year-old female via a drugged beverage. While these differences are certainly reflected in the incarceration portion of the punishment, they typically are not reflected in the administrative/societal portion of the punishment. <br />
<br />
Basically, while both individuals from the above example are technically sex offenders, the fact is that in most situations there is a tiered structure that is so broad in its administrative penalties that the level of judicial discretion is non-existent. In a sense the application of administrative punishment can be viewed as generally lazy, uninterested in determining the actual threat posed by the individual to the community and instead labeling all as viable and credible threats. <br />
<br />
There are two pertinent court cases pertaining to the issue of sex offenses and the Eighth Amendment. First, in Graham v. Florida the United States Supreme Court adopted the position that non-capital sentences for minors, adding to the capital sentences addressed in Roper v. Simmons, could be found unconstitutional under a proportionality review. This proportionality review can fall within two general classifications: 1) challenges to the length of a sentence dependent on the circumstances surrounding the case in question; 2) cases in which the Court implements the proportionality standard via certain categorical restrictions. The important element of Graham v. Florida with regard to the above topic is that it set the precedent that categorical Eighth Amendment proportionality reviews could be applied to non-capital offenses, moving beyond the idea of “death is different”.1<br />
<br />
Second, in Ohio v. Blankenship the defendant claimed that his classification as a Tier II sex offender, pertaining to the crime of having a sexual relationship as a 21 year-old with a consenting 15 year-old with full knowledge of her age, which resulted in a conviction on a single count of unlawful sexual conduct, was cruel and unusual punishment. This claim was based on the administrative penalties associated with that classification (largely associated with having to register as a sex offender for 25 years) in contrast to the threat he posed as a possible future repeat offender.<br />
<br />
The Ohio Court of Appeals ruled against Blankenship, determining that existing legal remedies were not available because he was an adult when he committed the crime rather than a juvenile, thus a previous ruling (related to C.P., 131, concerning juveniles) was not applicable, and that he was in fact a sex offender, thus the current legal structure in Ohio was applicable. Blankenship appealed to the Ohio Supreme Court, which heard arguments in early March 2015; as of this posting it appears that no ruling has been made regarding this case, but a number of individuals believe that the ruling could go either way. So currently, while it is legally and theoretically possible to find the administrative penalties associated with conviction as a sex offender unlawful via the 8th Amendment, no court has yet done so.<br />
<br />
Some could argue that there is an important distinction in statutory rape cases between an individual who has accurate knowledge of the age of his/her sexual partner and one who has inaccurate knowledge through deception or misinformation. On this issue the point of willing culpability is irrelevant. For example, there is no meaningful difference between a 19 year-old having sex with a 15 year-old where both parties are fully aware of the age of the other and a 19 year-old having sex with a 15 year-old who has lied to the 19 year-old, claiming to be of the age of consent (18 years old). <br />
<br />
Such consideration would be akin to basing punishment on whether or not an individual was aware that he/she was speeding. Whether or not the individual knows he/she is speeding is irrelevant to the fact that the individual was speeding and violating that particular law. Furthermore, the issue is not whether or not an individual who commits statutory rape or a similar low level sex-based crime is a sex offender. By law the individual is a sex offender; the issue is assigning the appropriate punishment for the committed crime in all its aspects, i.e. is it appropriate that an individual convicted of sexting receives the same administrative punishment as an individual convicted of rape?<br />
<br />
An interesting point of fact pertaining to the validity of the administrative penalties associated with non-violent sex offenders is that the general recidivism rate for sex offenders has been demonstrated numerous times to be lower than that of any other crime except murder.2-3 An interesting point of contention could be made regarding this data between parties that agree with broad mandatory classifications and parties that disagree. <br />
<br />
Proponents of the administrative penalties could argue that this lack of recidivism is due to the harsh administrative restrictions placed on sex offenders heavily reducing the temptations and opportunities for recidivism. Opponents of these penalties could counter-argue that this lack of recidivism is because most sex offenders are not sexual predators, but simply do something stupid early in their lives that gets them labeled and convicted as a sex offender through some basic non-violent sex-related crime like sexting a consenting individual or statutory rape with a consenting partner. While the truth is unknown, the opponents are more likely correct than the proponents because, for some of these analyses, the data encompasses a time frame in which the harsher administrative penalties were not yet fully applicable.<br />
<br />
An important element to whether or not the 8th Amendment can be applied on this particular issue, especially with regards to the sex offender registry is whether the registration is viewed as punitive or civil; a characterization as punitive should increase the probability of relevance in applying the 8th Amendment versus a civil characterization. In most cases it is difficult to argue that the registry is not punitive in nature with the administrative hurdles that are assigned to those on the list, especially concerning the living restrictions. It stands to reason that if the only demand of the list was public access and an accurate name and address then it would be more civil in nature; however that is currently not the case.<br />
<br />
Based on existing information it is difficult to argue that the sex offender registry serves an important role in protecting society, because a large number of individuals convicted of sex offenses are not a threat to society. Furthermore the additional elements of societal stigma and restrictions of freedom produced through association with the list could constitute a disproportionate punitive response to the crime, especially when that association is not subject to judicial review, but mandated by a state or the Federal government. For example it could be argued successfully that for a vast majority of individuals who are convicted for the first time on a single count of a non-violent sexual-based crime, registration as a sex offender is not appropriate, and therefore could be appropriately challenged as a violation of the 8th Amendment. <br />
<br />
An interesting side note is that defining mandatory registration as a sex offender as a violation of the 8th Amendment may be necessary to properly apply justice even if it is not legally appropriate. In short, associating this scale of punishment with the 8th Amendment may be the only way to give politicians the political cover they need to continue to publicly assert their “tough stance” against sex offenders of all shapes and sizes, while also applying punitive punishment proportional to the type of sexual offense. Basically while applying an analytical system of judgment regarding the threat potential of a sexual offender to “relapse” is logical and compliant with justice, forcing such a system on states through association with the 8th Amendment may be necessary due to political concerns. <br />
<br />
However, while the courts have almost always been at the forefront of social change, would it be appropriate to make this association even if it were not valid? What type of slippery slope would that produce? On an even larger scale, what can be done in a democracy when the majority is not interested in changing its opinion regardless of any arguments counter to that opinion? Overall, when thinking from a non-emotional, logical perspective, mandatory registration for most single-count sex offenders appears inappropriate; not surprisingly, producing a legal path to vindicate that viewpoint is the more difficult problem. <br />
<br />
Citations – <br />
<br />
1. Shepard, R. “Does the punishment fit the crime? Applying eighth amendment proportionality analysis to Georgia’s sex offender registration statute and residency and employment restrictions for juvenile offenders”. Georgia State University Law Review. 2011. 28(2) Article 7. 529-557.<br />
<br />
2. Bureau of Justice Statistics. “Recidivism of Sex Offenders Released from Prison in 1994.” November 2003. http://bjs.ojp.usdoj.gov/content/pub/pdf/rsorp94.pdf<br />
<br />
3. U.S. Department of Justice Criminal Offenders Statistics: Recidivism, statistical information from the late 1990s and very early 2000s.13Emethhttp://www.blogger.com/profile/15788112561637572273noreply@blogger.com0tag:blogger.com,1999:blog-57719692398152598.post-70717772974131878862015-06-23T10:07:00.001-07:002015-06-23T10:07:32.496-07:00The Legitimacy of Holistic Admissions at U.S. Universities<br />
With the competition for landing a quality job increasing with every passing year, acceptance into a high quality university is viewed as essential to maximizing the probability of landing one of these jobs. However, in lockstep with the competition for quality jobs, the competition to gain entrance into those universities widely regarded as high quality has also increased. This competition has produced controversy surrounding the procedures by which applicants are admitted, creating a tug-of-war of sorts between various parties and their interests. One of the chief points of controversy is the validity of the “holistic” review process. In fact a lawsuit filed against Harvard University by the Students for Fair Admissions contends that holistic admission processes are inappropriately discriminatory and that their evaluation metrics should be significantly clarified beyond “whole person analysis”. Obviously a reading of the official complaint by the Students for Fair Admissions divulges a harsher conclusion than that above, but the sentiment above is better suited to producing a fairer admissions environment.<br />
<br />
Proponents of the holistic method champion its multi-faceted analysis approach, in which a larger spectrum of an applicant’s qualifications for admission is considered beyond the traditional metrics (standardized test scores, grades and certain extracurricular activities), producing a fairer and more accurate admissions process. Opponents of the holistic method believe that it is commonly used at best to hide the admissions process behind a veil of ambiguity, allowing universities to justify perplexing and arbitrary decisions, and at worst to legitimize a quota system where more qualified candidates are rejected in favor of under-qualified candidates to achieve diversity demographics and evade public scorn. Clearly, based on the perceived stakes, where getting into university A can set a person up for life while university B would create unnecessary hardships, the emotional aspect of this debate is high. Unfortunately this emotional aspect has produced an environment that has abandoned a critical philosophical base for understanding why a holistic approach is or is not appropriate. <br />
<br />
First it is important to address that the holistic process has been attacked by some as a demonstration of “reverse racism” through the process of affirmative action. The term “reverse racism” is a misnomer and is not properly used in this descriptive context. Racism is giving differing treatment, either in a positive or negative manner, to an individual based on their ethnicity or race. Based on this definition, reverse racism would be akin to not giving differing treatment to an individual based on their ethnicity or race. However, when individuals invoke the term “reverse racism” that is not what they intend to convey; instead they simply mean a different type of racism. Unfortunately some parts of society have come to associate the term racism with only one particular form of racial bias instead of all forms of racial bias, which is inappropriate. Therefore, the term “reverse racism” should be eliminated from conversation in this context and replaced with the appropriate term – racism. <br />
<br />
Second, it must be noted that the original intention of affirmative action was not to give “bonus points” to an individual based on their race, but to assess how race may have influenced the acquisition of certain opportunities and thereby influenced the development of an individual through their performance when engaging in those opportunities. It should not be surprising that an individual with rich, committed and connected parents will have more opportunities, and more ability to prepare for those opportunities when they are presented, than an individual without wealthy or even present parents. <br />
<br />
For example it is expected that SAT scores would be higher for children of richer families both because of increased opportunity to prepare and increased opportunity to retest if the performance is not deemed acceptable. Also there is a higher probability that individuals from rich families will be better nourished than individuals from poor families, which will directly influence academic performance and the ability to participate in other valuable non-academic opportunities. Such environmental effectors are simple elements that can skew the value and analytical ability of “raw” metrics like standardized tests. Basically affirmative action is akin to judging the vault in gymnastics. Not all vaults have the same difficulty level; a non-perfect vault with a 10.0 difficulty will consistently beat a perfect vault with a 7.0 difficulty.<br />
<br />
A quick side note: while the idea of affirmative action was originally based on the premise of race in an attempt to combat direct and indirect forms of racism, in the present the idea of affirmative action has shifted more toward addressing differences in economic circumstance over race/ethnicity. The idea that rich individuals of race A will somehow be significantly excluded from opportunity A versus rich individuals of race B is no longer realistic in modern society. It is important to identify that more minorities will be assisted by affirmative action not directly because of race, but instead because of past racism that reduced the ability of these minority families to build inter-generational wealth, thereby making them poorer than white families.<br />
<br />
Based on the “potential judgment” aspect of affirmative action, some individuals may object to the idea that it is appropriate to penalize an individual for having access to opportunities that others may not, claiming that this behavior is a form of bias. This point creates the first significant philosophical question that must be addressed in the admissions process: is it justifiable that an above-average individual in an advanced difficulty pool should find favor in an opportunity over a high-performing individual in a lesser difficulty pool? <br />
<br />
An apt example of this notion is seen in the disparity between the “Big 5” college conferences (ACC, Big 10, Big 12, PAC 12 and SEC) and the mid-major conferences when selecting basketball teams for the NCAA Championship Tournament. While the committee tends to give preference to teams from the Big 5, the question is should they? A Big 5 power team, “Big Team A”, with a 55.6% conference winning percentage at 10-8 and an overall record of 21-13 has clearly demonstrated itself as slightly above average among its peers, whereas a mid-major team, “Medium Team B”, with an 88.9% conference winning percentage at 16-2 and an overall record of 26-7 did not have the same opportunities to compete against the level of competition Big Team A faced, but has demonstrated itself to be a quality team with a greater unknown ceiling. Basically, should someone slightly above the middle of the pack in an environment that could be viewed as more competitive be passed over for someone at the top of a tier 2 environment?<br />
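<br />
To make the arithmetic behind these hypothetical records explicit, here is a minimal sketch that simply computes the winning percentages quoted above; the team names and records are the post’s hypotheticals, nothing more.<br />
<br />
```python
# Compute conference and overall winning percentages for the two hypothetical teams.
def win_pct(wins, losses):
    return wins / (wins + losses)

teams = {
    "Big Team A (Big 5)":        {"conference": (10, 8), "overall": (21, 13)},
    "Medium Team B (mid-major)": {"conference": (16, 2), "overall": (26, 7)},
}

for name, records in teams.items():
    conf = win_pct(*records["conference"])
    overall = win_pct(*records["overall"])
    print(f"{name}: conference {conf:.1%}, overall {overall:.1%}")
# Big Team A: conference 55.6%, overall 61.8%
# Medium Team B: conference 88.9%, overall 78.8%
```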
<br />
In the arena of applicants the question of quality could boil down to: should the 100th best “area” A applicant be accepted over the 10th best “area” B applicant? Think about it this way: should applicant C from city y, who scores significantly above average for that area on standardized tests and also has quality grades, be accepted over applicant E from city x, who scores slightly above average for that area on standardized tests and has quality grades, even if applicant E’s scores are slightly higher? Note that city x has a higher student average for standardized tests than city y.<br />
<br />
Those who say yes to the above question based on the importance of fostering a racially/ethnically diverse environment must be careful not to fall into the trap of needless diversity, which is its own type of bias. With regards to fostering a diverse environment, its establishment must be based on thought and behavior, not on elements beyond an individual’s control. <br />
<br />
There is an advantage to diversity of experience, for it ensures a greater level of perspective and ability to produce understanding, leading to more, and potentially valid, strategies for solving problems. However, this advantage comes from experience, not from different skin color, religious beliefs, etc. For example the inclusion of person A just because he/she has a certain skin color or is of a certain ethnicity is not appropriate. Their inclusion should demand a meaningful and distinctive viewpoint. Cosmetic diversity for the sake of diversity serves no positive purpose and is inherently foolish and unfair/biased. Based on this point the crux of the issue regarding admissions is how to identify individuals with distinctive and valuable viewpoints in order to validate selecting a high achiever from a less difficult environment. <br />
<br />
Most would argue that the standard analysis metrics are not appropriate for this task. For example grades are significantly arbitrary based on numerous uncontrollable environmental and academic circumstances; i.e. an A at high school x does not always carry the same weight as an A at high school y, and some high schools allow students greater amounts of extra credit, which conceals their actual knowledge of the subject through grade inflation. Standardized tests can be heavily prepared for and taken multiple times depending on time and financial resources. Also they may not present an accurate representation of ability, for almost no “real-world” task requires an individual to sit in one place in a time-sensitive environment answering various questions without access to any outside resources beyond what is in their brain. At one point the “college essay” could have filled this role, but now it appears the essay has devolved into an ambiguous farce demanding only unoriginal “extraordinary” experiences and/or teaching moments, where sadly it has become difficult to determine whether the student even means what they say or is simply writing what they think the admissions officers want to hear.<br />
<br />
However, while these flaws with the standard metrics exist, it is important to understand that abandoning the standard metrics entirely would be in error, for abandoning these metrics would be akin to replacing one “bias” with another. The standard metrics are an important puzzle piece, but they do not make up the entire puzzle.<br />
<br />
For some the college interview has been thought of as a panacea for bridging the gap between holistic and standard admission judgment, but interviews do have caveats that must be monitored. Supporters of the interview process believe that it gives applicants an ability to demonstrate that he/she is more than just test scores, extracurricular activities and grades, as well as allows both the university and the applicant to more specifically define the level of “fit” between the two beyond the mass generic questions utilized in the application process. Finally interviews can be a good deciding factor between borderline applicants.<br />
<br />
Unfortunately interviews have some flaws that must be properly managed to ensure their legitimacy. First, individuals involved in the interview must be properly trained to avoid first-impression bias, as most interviews establish the tenor of the relationship between the interviewer and the interviewee very early, which threatens the objectivity of the rest of the interview. Also interviews must have a standard operating procedure, especially when it comes to the questions. Applicants must be asked the same questions, for if different questions are asked of different applicants the probability of subjectivity in the procedure increases, which hurts the interview as a comparative evaluation metric. It is fine to ask different questions if interviews are not going to be used when choosing one applicant over another, but most do not view the interview in such a casual light. <br />
<br />
Another concern about interviews is that they are unable to judge growth potential, i.e. how the university may positively or negatively influence the development of the applicant if he/she actually attends. Also if interviews do not have significant weight in the decision-making process then they may cause more harm than good; lacking specific feedback, they place more stress on an individual than relief as applicants wonder how the interview went, leading to over-embellishment of small errors into negatives. Finally if interviews are deemed important it would be helpful if more universities offered travel vouchers to more financially needy applicants, so if these individuals want to tour the campus and participate in the interview process they have an opportunity to do so that is not negatively impacted by their existing financial situation. Such a voucher may be important especially if interviews are used in “borderline” judgment.<br />
<br />
A separate strategy may be the use of static philosophical probing questions in the application process. This strategy could better manage the difference in outside environmental influencing factors by gauging the general mindset of an applicant when it comes to solving problems. For example one question could be: if the individual were presented with a large jar full of chocolates and a single sample chocolate, how would the individual calculate the number of chocolates in the jar? Note that this question demands both creativity and deterministic logic; creativity will produce more available options, but logic will be required to reason out the best option from the list. <br />
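<br />
As an illustration of the “deterministic logic” half of that question, here is a minimal sketch of one possible answer: estimate by volume ratio with a packing-fraction correction. The jar dimensions, chocolate volume and packing fraction are purely hypothetical numbers chosen for the example.<br />
<br />
```python
import math

def estimate_count(jar_volume_cm3, chocolate_volume_cm3, packing_fraction=0.6):
    """Estimate how many chocolates fit: usable jar volume divided by one chocolate's volume."""
    return int(jar_volume_cm3 * packing_fraction / chocolate_volume_cm3)

# Hypothetical cylindrical jar: radius 6 cm, height 20 cm; sample chocolate about 8 cm^3.
jar_volume = math.pi * 6**2 * 20
print(estimate_count(jar_volume, chocolate_volume_cm3=8.0))  # roughly 170
```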
<br />
Another interesting question would be to ask what is the greatest invention in human history? Such a question would inspect whether an individual believes it is more important to build a foundation or whether importance comes from what expands from that foundation. A third question could be what one opportunity would the applicant like to have had that they did not receive or was not available, and why? These questions are superior to the generic, banal, analytically irrelevant questions that most universities ask on their admission forms.<br />
<br />
Overall, regardless of what methodology a university uses to accept or reject applicants, the most important element is that this methodology is transparent. Universities must exhibit what attributes and credentials validate an individual’s merit for acceptance and then produce valid qualitative and quantitative reasons for why certain individuals gain admission and others do not. Transparency is the key element for a university to conduct its specific type of admission methodology without complaint. Returning to the original question of whether a university elects to accept above-average individuals from high “difficulty” environments or top performers from lower “difficulty” environments, either method is defensible as long as legitimate reasoning is available. However, therein lies the problem with the holistic method: universities are not transparent in its application, thus such behavior must change if a holistic method is to have any significant credibility.13Emethhttp://www.blogger.com/profile/15788112561637572273noreply@blogger.com0tag:blogger.com,1999:blog-57719692398152598.post-55266171613921526682015-06-10T10:08:00.000-07:002015-06-10T10:08:27.664-07:00Exploring the Biological Nature of Brown and Beige FatOver two years ago this blog <a href="http://www.bastionofreason.blogspot.com/2013/01/tapping-into-brown-fat.html">discussed</a> the possibility of incorporating a specialized preparation routine before exercise in an attempt to stimulate both brown and beige adipose tissue in order to increase the efficiency and overall calorie and fat burning potential of standard exercise. However, that post did not seek to fully understand or discuss the specific biological mechanisms that govern the behavior of brown or beige adipose tissue. This lack of knowledge limits the efficiency of exercise programs, as individuals could either be consuming certain foods or performing certain warm-up tasks to increase exercise potential in addition to those suggested in the past blog post. Increasing exercise efficiency could be an easy means to increase the overall health of society without having to devote more precious time to exercise; therefore it would prove useful to better understand the processes that activate these types of fat.<br />
<br />
At the most basic level there are two key elements to the fat burning capacity of brown fat. First, brown fat cells have multiple mitochondria versus the single mitochondrion possessed by white fat cells; these additional mitochondria allow for greater rates of metabolism along with an increased lipid concentration. Also brown fat releases norepinephrine which reacts with lipases to break down fat into triglycerides and later to glycerol and non-esterified fatty acids, finally producing CO2 and water, which can lead to a positive feedback mechanism.1,2 Second, brown fat contains significant expression rates of uncoupling protein 1 (UCP-1).1 UCP-1 is responsible for dissipating energy, which leads to the decoupling of ATP production and mitochondrial respiration.1 Basically UCP-1 returns protons after they have been pumped out of the mitochondrial matrix by the electron transport chain, and the energy of these protons is released as heat instead of producing ATP (i.e. proton leaking).<br />
<br />
It is important to understand that there are two types of brown fat: natural brown fat and intermediate brown fat, commonly known as beige fat. Natural brown fat is typically exemplified by the fat located in the interscapular region and contains cells from the muscle-like myf5+ and pax7+ lineage.3 Natural brown fat is typically isolated from white fat and almost entirely synthesized in the prenatal stage of development as a means to produce heat apart from shivering.4 Beige fat is commonly interspersed within white fat, does not contain these muscle-like cells (although Myh11 could be involved),5 and can be activated by the thermogenic pathway and the strain of exercise. Beige fat also has the potential to influence the conversion of white fat to beige fat through a process commonly called “browning”.6,7 <br />
<br />
Natural brown fat is thought to have larger concentrations of UCP-1 expression because it constitutively expresses the protein after differentiation, whereas beige fat expresses large amounts of UCP-1 only in response to thermogenic or exercise cues.1,5 Therefore, natural brown fat is more effective at energy expenditure. However, it may not be possible to develop more natural brown fat after development; therefore, any positive progression in brown fat development will come from beige fat. <br />
<br />
Early understanding of brown fat activation involved non-discriminate increases in the activity of the sympathetic nervous system (SNS). The standard pathway governing brown fat activation uses a thermogenic response involving the release of norepinephrine, which initiates cAMP-dependent protein kinase (PKA) and p38-MAPK signaling, leading to lipolysis and the production of free fatty acids (FFA) that drive UCP-1 induced proton uncoupling.4 UCP-1 concentrations are further increased through secondary pathways involving the phosphorylation of PPAR-gamma co-activator 1alpha (PGC1alpha), cAMP response element binding protein (CREB) and activating transcription factor 2 (ATF2).8 Among these three elements PGC1alpha appears to be the most important, co-activating many transcription factors and playing an important role in linking oxidative metabolism and mitochondrial action.9 <br />
<br />
However, due to the complicated nature of SNS activation and its other downstream activators, attempts to replicate it in the form of weight loss drugs like Fenfluramine or Ephedra resulted in severe negative cardiovascular side effects like elevated blood pressure and heart rate.10 While some argue that either increasing the sensitivity or the rate of stimulation of the SNS can improve upon these results, the underlying elements associated with downstream activation of the SNS make facilitating direct influence too complicated. Therefore, from a biological perspective it makes more sense to focus on a downstream element that interacts with brown fat at a more localized level. <br />
<br />
Just a side note based on the differing interactivity between brown/beige and white fat and the SNS: white fat appears to represent long-term energy storage and brown fat shorter-term energy, an unsurprising conclusion. However, frequent energy expenditure, like exercise, may condition the body to produce more beige fat versus white fat, viewing short-term energy needs as more valuable than long-term energy needs. Basically if the above point is accurate then it stands to reason that a person would see more benefit from 20 minutes of exercise 6 days a week versus 40 minutes of exercise 3 days a week, despite both amounting to the same 120 minutes of total weekly exercise. <br />
<br />
Moving away from direct SNS stimulation, perhaps the appropriate method of increasing browning involves increasing transcription and translation of UCP-1. Interestingly enough empirical evidence exists to support the idea that retinoic acid could be an effective inducer of UCP-1 gene transcription in mice and operates through a non-adrenergic pathway.11,12 However, a more focused study using loss-of-function techniques involving retinaldehyde dehydrogenase, which is responsible for converting retinal to retinoic acid, determined that retinal, not retinoic acid, is the major inducer of brown fat activity.13 Unfortunately there is no direct understanding regarding the proportional response of brown fat to retinal or retinoic acid. Therefore, the general fat-soluble nature of vitamin A will probably make it difficult to utilize its derivatives as biological stimulants for brown fat activation or browning.<br />
<br />
Another possible strategy to stimulate browning is through activated (type 2/M2) macrophages induced by eosinophils, which are commonly triggered by IL-4 and IL-13 signaling. When activated this way these macrophages are recruited around subcutaneous white fat and secrete catecholamines to facilitate browning in mice.14,15 A secondary means by which both IL-4 and IL-13 may influence fat conversion is their direct interaction with Th2 cytokines.16 Unfortunately while on its face this strategy looks promising, in a similar vein to vitamin A it might not be effective due to unknown long-term side effects associated with IL-4 and IL-13 activation. Due to this lack of knowledge, if IL-4 or IL-13 is thought to be a viable biochemical strategy for inducing weight loss, proper long-term time lines for effects and dosages must be explored in humans, not just short-term studies in mice.<br />
<br />
A more controversial agent in browning is fibronectin type III domain-containing protein 5, more frequently known as irisin. Due to its significantly increased rate of secretion from muscle under the strain of exercise, some individuals believe that irisin is a key mediator in browning, acting as a myokine;17 if this characterization is accurate then irisin could be a significant player in the biological benefits produced by exercise including weight loss, white fat conversion and reduced levels of inflammation.18,19 However, other parties believe that because human studies with irisin have produced results that do not demonstrate benefits similar to those seen in mice, irisin is another molecule that cannot scale up its effectiveness when faced with the added biological complexity of humans versus a mouse.20-22<br />
<br />
The key element within this controversy could be that irisin expression is augmented by the increased expression of PGC1alpha, but PGC1alpha increases the expression of many different proteins and other molecules, so the expression of irisin may not be relevant to the positive changes associated with exercise. Another factor may be a key difference between mice and humans: a mutation in the start codon of the human gene involved in the production of irisin, which significantly reduces irisin availability.23 This mutation could be the limiting factor explaining why, despite a very conserved genetic sequence, humans do not see anywhere near the benefit mice do. If this explanation is correct it does potentially still leave the door open to directly injecting irisin into the body to increase concentrations in an attempt to aid exercise-derived results, but if PGC1alpha is the key, then this increased concentration of irisin could be of minimal consequence. <br />
<br />
Another potential element that demonstrates a significant concentration increase in accordance with increased PGC1alpha is a hormone known as meteorin-like (Metrnl).24 The concentration of this hormone increases in both skeletal muscle and adipose tissue during exercise and exposure to cold temperatures, in accordance with increases in PGC1alpha concentrations. When Metrnl circulates in the blood it seems to produce a widespread effect that induces browning, resulting in a significant increase in energy expenditure.24 The influence of Metrnl on white fat does not appear to be due to direct interaction with the fat, but instead indirect action on various immune cells, most notably M2 macrophages via the eosinophil pathway, which then interact with the fat through activation of various pro-thermogenic actions.24 As discussed above this interaction with eosinophils appears to function through IL-4 and IL-13 signaling, indicating a common pathway purpose between IL-4/IL-13 and the original SNS pathway. Not surprisingly blocking Metrnl has a negative effect on the biological thermogenic response.24<br />
<br />
Another potential strategy for browning may be targeting appropriate receptors instead of specific molecules; with this strategy in mind one potential target could be transient receptor potential vanilloid-4 (TRPV4). TRPV4 acts as a negative regulator of browning through its negative action against PGC1alpha and the thermogenic pathway in general.25 In addition TRPV4 appears to activate various pro-inflammatory genes that interact with white adipose tissue, making it more difficult to facilitate browning even if the appropriate signals are present. TRPV4 inhibition and genetic ablation in mice significantly increase resistance to obesity and insulin resistance.25 The link between inflammation and thermogenesis is highlighted by the activity of TRPV4, which is one of the early triggers for immune cell chemoattraction.25<br />
<br />
Obesity may also produce a positive feedback effect through TRPV4 by increasing cellular swelling and stretching through the ERK1/2 pathway, which increases the rate of TRPV4 activation.26,27 However, the validity of TRPV4 as a therapeutic target remains questionable, for TRPV4 expression not only influences fat/energy expenditure, but also osmotic regulation and bone formation, and plays some role in brain function.25,28,29 Fortunately a number of the issues with TRPV4 mutations/malfunction appear to be developmental in influence versus post-development, thus TRPV4 therapies could still be valid.<br />
<br />
Natriuretic peptides (NPs) are hormones typically produced in the heart in two different operational capacities: atrial and ventricular. Both of these hormones appear to play a role in browning through association with the adrenergic pathway.30 The most compelling evidence supporting this behavior is that mice lacking the NP clearance receptor demonstrated significantly enhanced thermogenic gene expression in both white and brown adipose tissue.30 Also direct application of ventricular NP in mice increased energy expenditure.30 In addition to the above results, NPs are an inherently attractive therapeutic possibility because appropriate receptors are located in the white and brown fat of both rats and humans31,32 and the clearance receptor goes through periods of significant decline in expression when exposed to fasting,33 which may account for some of the benefits seen from low calorie diets.<br />
<br />
Atrial NPs increase lipolysis in human adipocytes in a manner similar to catecholamines (which increase cAMP levels and activate PKA), although whether or not this increase is induced through interaction with beta-adrenergic receptors is unclear.34 Some believe that NPs activate the guanylyl cyclase-containing receptor NPRA, producing the second messenger cGMP, which activates cGMP-dependent protein kinase (PKG).35,36 PKA and PKG have similar mechanisms for substrate phosphorylation, including similar targets in adipocytes,36 thus this interaction may explain why atrial NPs act similarly to catecholamines.<br />
<br />
Recall from above that one of the means of inducing browning, especially for tissues that are distant from SNS-based neurons, is macrophage recruitment. This recruitment appears to be initiated by CCR2 and IL-4, for when either is eliminated from mouse models the conversion no longer occurs.15 Tyrosine hydroxylase (Th) is also important in this process, facilitating the biosynthesis of catecholamines and, later, PKA activation. <br />
<br />
With respect to producing a biomedical agent to enhance browning there appear to be three major pathways in play: 1) the SNS pathway, producing a direct activation response; 2) the macrophage recruitment pathway potentially involving Metrnl, which activates IL-4 and IL-13, eventually leading to PKA activation and an indirect activation response; 3) the NP activation pathway, which eventually leads to PKG activation and an indirect activation response. As mentioned earlier SNS pathway enhancement has already been attempted by at least two drugs and failed miserably, so that method is probably out. In addition the SNS pathway does not appear to have as much browning potential as the PKA or PKG pathways due to the reliance on the location of certain nerve fibers. <br />
<br />
Enhancing macrophage recruitment could be a good strategy, but there appears to be little information regarding negative effects associated with short-term high frequency enhancement of IL-4 or IL-13 concentrations. Some reports have suggested an increase in allergic symptoms, but any more severe consequences are unknown. This is not to say that enhancing IL-4 or IL-13 is not a valid therapeutic strategy, but its overall value is unknown. In contrast enhancement of NPs appears to be a more stable choice due to positive results in initial exploration of both the application and the expected negative side effects. First, NPs can be administered via the nose-brain pathway, enabling access to the brain while avoiding some potential systemic side effects.37 Second, there appear to be few, if any, significant side effects to intranasal NP application, at least in the short-term.38 <br />
<br />
Overall the above discussion has merely identified some of the more promising candidates for enhancing the browning of white fat. One could argue that resorting to drugs to enhance the overall health of an individual versus simple diet and exercise is a regretful strategy. Unfortunately the reality of modern society is that more and more people seem to have less available time to exercise or eat right. In combination with a mounting weight-promoting external environment (increased pollution and industrial chemicals like BPA), this drug enhancement strategy may be the most time and economically efficient means to ensure proper weight control and overall health for the future.<br />
<br />
Citations – <br />
<br />
1. van Marken Lichtenbelt, W, et Al. “Cold-activated brown adipose tissue in healthy men.” The New England Journal of Medicine. 2009. 360:1500-08.<br />
<br />
2. Lowell, B, and Spiegelman, B. “Towards a molecular understanding of adaptive thermogenesis.” Nature. 2000. 404:652-60.<br />
<br />
3. Seale, P, et Al. “PRDM16 controls a brown fat/skeletal muscle switch.” Nature. 2008. 454:961–967.<br />
<br />
4. Sidossis, L and Kajimura, S. “Brown and beige fat in humans: thermogenic adipocytes that control energy and glucose homeostasis.” J. Clin. Invest. 2015. 125(2):478-486.<br />
<br />
5. Long, J, et Al. “A smooth muscle-like origin for beige adipocytes.” Cell Metab. 2014. 19(5):810–820.<br />
<br />
6. Kajimura, S, and Saito, M. “A new era in brown adipose tissue biology: molecular control of brown fat development and energy homeostasis.” Annu Rev Physiol. 2014. 76:225–249.<br />
<br />
7. Harms, M, and Seale, P. “Brown and beige fat: development, function and therapeutic potential.” Nat Med. 2013. 19(10):1252–1263.<br />
<br />
8. Collins, S. “β-Adrenoceptor signaling networks in adipocytes for recruiting stored fat and energy expenditure.” Front Endocrinol (Lausanne). 2011. 2:102.<br />
<br />
9. Handschin, C, and Spiegelman, B. “Peroxisome proliferatoractivated receptor gamma coactivator 1 coactivators, energy homeostasis, and metabolism.” Endocr. Rev. 2006. 27:728–735. <br />
<br />
10. Yen, M, and Ewald, M. “Toxicity of weight loss agents.” J. Med. Toxicol. 2012. 8:145–152.<br />
<br />
11. Alvarez, R, et Al. “A novel regulatory pathway of brown fat thermogenesis: retinoic acid is a transcriptional activator of the mitochondrial uncoupling protein gene.” J. Biol. Chem. 1995. 270:5666-5673.<br />
<br />
12. Mercader, J, et Al. “Remodeling of white adipose tissue after retinoic acid administration in mice.” Endocrinology. 2006. 147:5325–5332.<br />
<br />
13. Kiefer, F, et Al. “Retinaldehyde dehydrogenase 1 regulates a thermogenic program in white adipose tissue.” Nat. Med. 2012. 18:918–925.<br />
<br />
14. Nguyen, K, et Al. “Alternatively activated macrophages produce catecholamines to sustain adaptive thermogenesis.” Nature. 2011. 480(7375):104–108.<br />
<br />
15. Qiu, Y, et Al. “Eosinophils and type 2 cytokine signaling in macrophages orchestrate development of functional beige fat.” Cell. 2014. 157(6):1292–1308.<br />
<br />
16. Stanya, K, et Al. “Direct control of hepatic glucose production by interleukins-13 in mice.” The Journal of Clinical Investigation. 2013. 123(1):261-271.<br />
<br />
17. Pedersen, B, and Febbraio, M “Muscle as an endocrine organ: focus on muscle-derived interleukin-6.” Physiological Reviews. 2008. 88(4):1379–406.<br />
<br />
18. Bostrom, P, et Al. “A PGC1-α-dependent myokine that drives brown-fat-like development of white fat and thermogenesis.” Nature. 2012. 481(7382):463–468.<br />
<br />
19. Lee, P, et Al. “Irisin and FGF21 are cold-induced endocrine activators of brown fat function in humans.” Cell Metab. 2014. 19(2):302–309.<br />
<br />
20. Erickson, H. “Irisin and FNDC5 in retrospect: An exercise hormone or a transmembrane receptor?” Adipocyte. 2013. 2(4):289-293.<br />
<br />
21. Timmons, J, et Al. “Is irisin a human exercise gene?” Nature. 2012. 488(7413):E9-11.<br />
<br />
22. Albrecht, E, et Al. “Irisin - a myth rather than an exercise-inducible myokine.” Scientific Reports. 2015. 5:8889.<br />
<br />
23. Ivanov, I, et Al. “Identification of evolutionarily conserved non-AUG-initiated N-terminal extensions in human coding sequences.” Nucleic Acids Research. 2011. 39(10):4220-4234.<br />
<br />
24. Rao, R, et Al. “Meteorin-like is a hormone that regulates immune-adipose interactions to increase beige fat thermogenesis.” Cell. 2014. 157:1279-1291.<br />
<br />
25. Ye, L, et Al. “TRPV4 is a regulator of adipose oxidative metabolism, inflammation, and energy homeostasis.” Cell. 2012. 151:96-110.<br />
<br />
26. Gao, X, Wu, L, and O’Neil, R. “Temperature-modulated diversity of TRPV4 channel gating: activation by physical stresses and phorbol ester derivatives through protein kinase C-dependent and -independent pathways.” J. Biol. Chem. 2003. 278:27129–27137.<br />
<br />
27. Thodeti, C, et Al. “TRPV4 channels mediate cyclic strain-induced endothelial cell reorientation through integrin-to-integrin signaling.” Circ. Res. 2009. 104:1123–1130.<br />
<br />
28. Masuyama, R, et Al. “TRPV4-mediated calcium influx regulates terminal differentiation of osteoclasts.” Cell Metab. 2008. 8:257–265.<br />
<br />
29. Phelps, C, et Al. “Differential regulation of TRPV1, TRPV3, and TRPV4 sensitivity through a conserved binding site on the ankyrin repeat domain.” J. Biol. Chem. 2010. 285:731–740.<br />
<br />
30. Bordicchia, M, et Al. “Cardiac natriuretic peptides act via p38 MAPK to induce the brown fat thermogenic program in mouse and human adipocytes.” The Journal of Clinical Investigation. 2012. 122(3):1022-1036.<br />
<br />
31. Sarzani, R, et Al. “Comparative analysis of atrial natriuretic peptide receptor expression in rat tissues.” J Hypertens Suppl. 1993. 11(5):S214–215.<br />
<br />
32. Sarzani, R, et Al. “Expression of natriuretic peptide receptors in human adipose and other tissues.” J Endocrinol Invest. 1996. 19(9):581–585.<br />
<br />
33. Sarzani, R, et Al. “Fasting inhibits natriuretic peptides clearance receptor expression in rat adipose tissue.” J Hypertens. 1995. 13(11):1241–1246.<br />
<br />
34. Sengenes, C, et Al. “Natriuretic peptides: a new lipolytic pathway in human adipocytes.” FASEB J. 2000. 14(10):1345–1351.<br />
<br />
35. Potter, L, and Hunter, T. “Guanylyl cyclase-linked natriuretic peptide receptors: structure and regulation.” J Biol Chem. 2001. 276(9):6057–6060.<br />
<br />
36. Sengenes, C, et Al. “Involvement of a cGMP-dependent pathway in the natriuretic peptide-mediated hormone-sensitive lipase phosphorylation in human adipocytes.” J Biol Chem. 2003. 278(49):48617–48626.<br />
<br />
37. Illum, L. “Transport of drugs from the nasal cavity to the central nervous system.” Eur. J. Pharm. Sci. 2000. 11:1-18. <br />
<br />
38. Koopmann, A, et Al. “The impact of atrial natriuretic peptide on anxiety, stress and craving in patients with alcohol dependence.” Alcohol and Alcoholism. 2014. 49(3):282-286.13Emethhttp://www.blogger.com/profile/15788112561637572273noreply@blogger.com0tag:blogger.com,1999:blog-57719692398152598.post-45538758455239615872015-05-27T10:09:00.000-07:002015-05-27T10:09:10.267-07:00Where is my Solar and Wind Only City?<br />
Two years ago this blog proposed a <a href="http://www.bastionofreason.blogspot.com/2013/05/solar-and-wind-need-to-step-up-to-plate.html">challenge</a> to solar and wind supporters: if solar and wind were indeed the energy mediums of the future and did not require the assistance of other energy mediums (most notably fossil fuels like coal and natural gas), then they should empirically demonstrate this potential by transitioning a single medium-sized city (10,000 – 15,000 individuals) to a grid where at least 70% of the electricity, not even all energy, was produced by solar and/or wind sources. Unfortunately, despite the passage of two years and the so-called further expansion of solar and wind technology, no such experiment has been conducted. <br />
<br />
This lack of attention to producing a model city that would empirically demonstrate, beyond simple hype, the actual ability of solar and wind to produce the bulk of electricity and even possibly all energy in the future is troubling. Are solar and wind proponents so irresponsible that they are willing to gamble the future of society on merely their hopes, dreams and personal preferences rather than raw data? Do they think that incorporation of solar and wind into a grid steadily advancing from 10% to 20%, then 30%, then 40%, then 50%, etc. will run perfectly with no significant problems? If so, then the solar and wind supporters who believe these things should be stripped of all of their credibility and influence; those who do not believe in such a perfect transition should begin immediately petitioning to accept the challenge. <br />
<br />
To the solar and wind proponents who object to the above characterization on the grounds that in March Georgetown, Texas (population approximately 48,000) proposed a plan to get all of its electricity from solar and wind sources, in essence meeting this challenge: hold your horses. While it is true that there has been an initial arrangement between Georgetown Utility Systems and Spinning Spur Wind Farm (owned by EDF Renewable Energy) and SunEdison to purchase 294 MW (144 MW wind and 150 MW solar) from their installations, this is only an initial arrangement; no actual testing or application has occurred yet. <br />
<br />
A more pertinent issue regarding the use of Georgetown as an example is that there is no specific information pertaining to the details of how Georgetown Utility Systems will manage this change in supplier. Basically the only public reporting on this strategy has been puff-hype pieces with no real substance or details. Both Spinning Spur Wind Farm and the yet-to-be-identified SunEdison site have not been fully constructed, are not operational and do not have any secondary storage capacity; thus any electricity produced by these installations will be live, and when those installations are not producing electricity there will be no electricity to provide to Georgetown. <br />
<br />
Initially there are at least three major questions that must be addressed to legitimize Georgetown as a model for a solar/wind-only powered city. First, where is the detailed analysis of how electricity, and possibly even energy flows, would be properly compensated to avoid brownouts in times when there is insufficient electricity being produced by solar and wind sources? Simply saying “the sun shines in the day and the wind blows when the sun is not shining” is laughable and severely damages credibility. Anyone who thinks that there will not be periods of intermittence from both Spinning Spur and the SunEdison site is harboring an inaccurate belief. Basically show that 100% renewable can be done using math, not flowery words and misplaced hype; note that it is important to also include any transmission and inverter losses in the calculation and to separate nameplate capacity from actual operational capacity.<br />
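<br />
As a starting point for that math, here is a minimal back-of-the-envelope sketch. The 144 MW wind and 150 MW solar nameplate figures come from the arrangement described above; the capacity factors, loss fraction and city demand are purely illustrative assumptions, not reported values, and even a favorable annual balance says nothing about the hour-by-hour matching problem that intermittence creates.<br />
<br />
```python
HOURS_PER_YEAR = 8760

def delivered_gwh(nameplate_mw, capacity_factor, loss_fraction):
    """Expected delivered energy per year (GWh) after transmission/inverter losses."""
    return nameplate_mw * capacity_factor * (1 - loss_fraction) * HOURS_PER_YEAR / 1000

wind_gwh = delivered_gwh(144, capacity_factor=0.35, loss_fraction=0.08)   # capacity factor and losses assumed
solar_gwh = delivered_gwh(150, capacity_factor=0.20, loss_fraction=0.10)  # capacity factor and losses assumed

assumed_city_demand_gwh = 575  # hypothetical annual demand for a city of ~48,000 people

print(f"Wind:  {wind_gwh:.0f} GWh/yr")
print(f"Solar: {solar_gwh:.0f} GWh/yr")
print(f"Total: {wind_gwh + solar_gwh:.0f} GWh/yr vs assumed demand {assumed_city_demand_gwh} GWh/yr")
```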
<br />
Second, it stands to reason that proponents of a solar/wind only city will not allow the use of natural gas or coal to act in a backup capacity during these periods of intermittence; therefore, during periods of excess solar and wind, electricity must be stored in a battery for use at a future time. So what type of battery structure(s) is going to be utilized to store that excess energy and what is the economic feasibility of using this structure? If no battery infrastructure is believed to be feasible or economical then what type of energy medium will be tapped to act as backup in lieu of a fossil fuel medium and how will it be properly incorporated? <br />
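<br />
To make the storage question concrete, a rough sizing sketch is shown below. Every number in it (average city load, the length of the calm, overcast window to ride through, battery cost, depth of discharge and round-trip efficiency) is an illustrative assumption; none comes from any actual Georgetown plan.<br />
<br />
```python
def battery_size_mwh(avg_load_mw, backup_hours, depth_of_discharge=0.8, round_trip_eff=0.85):
    """Nameplate storage (MWh) needed to serve avg_load_mw for backup_hours."""
    usable_mwh = avg_load_mw * backup_hours
    return usable_mwh / (depth_of_discharge * round_trip_eff)

avg_load_mw = 65      # assumed average city load
backup_hours = 12     # assumed low-sun, low-wind window to ride through
cost_per_kwh = 350    # assumed installed battery cost in $/kWh

size_mwh = battery_size_mwh(avg_load_mw, backup_hours)
rough_cost_millions = size_mwh * 1000 * cost_per_kwh / 1e6
print(f"Required storage: {size_mwh:.0f} MWh, rough cost ${rough_cost_millions:.0f} million")
```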
<br />
Third, how will consumer costs for energy change from the transition away from fossil fuels over time, i.e. what will costs be in year 1, what will costs be in year 10…? To simply say it will cost less is not sufficient. It must be demonstrated that it will cost less both now and in the future and if it will not cost less in the future what forms of compensation, if any, will be provided to the residents of Georgetown?<br />
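<br />
One way to answer that question with numbers rather than assertions is a simple year-by-year rate projection like the sketch below. The starting retail rates and escalation rates here are hypothetical placeholders, not Georgetown Utility Systems data; the point is only that the comparison has to be laid out explicitly for each year, not asserted.<br />
<br />
```python
def project_rates(start_cents_per_kwh, annual_escalation, years=10):
    """Project a retail electricity rate forward under a constant annual escalation rate."""
    return [start_cents_per_kwh * (1 + annual_escalation) ** y for y in range(years)]

fossil_mix = project_rates(start_cents_per_kwh=10.5, annual_escalation=0.03)    # assumed baseline
wind_solar = project_rates(start_cents_per_kwh=11.0, annual_escalation=0.005)   # assumed near-fixed PPA pricing

for year, (fossil, renewable) in enumerate(zip(fossil_mix, wind_solar), start=1):
    print(f"Year {year:2d}: fossil mix {fossil:5.2f} c/kWh vs wind+solar {renewable:5.2f} c/kWh")
```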
<br />
Overall these are just the three most basic questions that must be addressed before anyone should accept the idea of Georgetown, Texas being a legitimate 100% solar/wind powered city when its plan is put into place a few years from now. If these questions are not answered with accurate specifics that are later properly executed over time then Georgetown loses all significance as both a legitimate and symbolic experiment for the validity of a solar and wind “future”. <br />
<br />
Of course it must be understood that the results in Georgetown are only an initial step; success only provides support for the possibility, not any guarantee of national eventuality. So how about it, solar and wind supporters: are you actually ready to put your theories to the test or are you simply content with the unscientific and irrational belief that everything will magically work out without the need for essential specifics, realistic assumptions, honest economics (which is incredibly lacking in most pro-solar and wind papers) and valid proof of concepts?13Emethhttp://www.blogger.com/profile/15788112561637572273noreply@blogger.com0tag:blogger.com,1999:blog-57719692398152598.post-58785736664642666142015-05-06T10:09:00.000-07:002015-05-06T10:09:55.064-07:00A Theory Behind the Relationship Between Processed Foods and Obesity<br />
While there has been a general slowing in the progression of global obesity, especially in the developed world, there has yet to be a reversal of this detrimental trend. A recent study has suggested that one aspect of influence regarding obesity progression lies with the consumption of foods that incorporate emulsifiers and how these emulsifiers interact with intestinal bacteria, including increasing the probability of developing negative metabolic syndromes in mice.1 Based on this result, understanding the digestive process may be an important element to understanding how emulsifiers and emulsions may influence weight outcomes. <br />
<br />
An emulsion is a mixture of at least two liquids where multiple components are immiscible, a characteristic commonly seen when oil is added to water, resulting in a two-layer system where the oil floats on the surface of the water before it is mixed to form the emulsion. However, due to this immiscible aspect most emulsions are inherently unstable as “similar” droplets join together, once again creating two distinct layers. Within an emulsion the liquids are divided into two separate elements: a continuous phase and a droplet phase, depending on the concentrations of the present liquids. Due to their inherent instability most emulsions are stabilized with the addition of an emulsifier. These agents are commonly used in many food products including various breads, pastas/noodles, and milk/ice cream. <br />
<br />
Emulsifier-based stabilization occurs by reducing interfacial tension between immiscible phases and by increasing the repulsion effect between the dispersed phases, through either increasing steric repulsion or electrostatic repulsion. Emulsifiers can produce these effects because they are amphiphiles (have two different ends): a hydrophilic end that is able to interact with the water layer, but not the oil layer, and a hydrophobic end that is able to interact with the oil layer, but not the water layer. Steric repulsion is born from volume restrictions due to direct physical barriers, while electrostatic repulsion is exactly what its name suggests: electrically charged surfaces repelling each other as they approach. As previously mentioned, some recent research has suggested that the consumption of certain emulsifiers by mice has produced negative health outcomes relative to controls. Why would such an outcome occur?<br />
<br />
A typical dietary starch, found in many of the common foods that utilize emulsifiers, is composed of long chains of glucose called amylose, a polysaccharide.2 These polysaccharides are first broken down in the mouth by chewing and saliva, converting the food structure from a cohesive macro state to scattered smaller chains of glucose. Other more complex sugars like lactose and sucrose are broken down into their glucose and secondary sugar (galactose, fructose, etc.) components.<br />
<br />
Absorption and complete degradation begin in earnest through hydrolysis by salivary and pancreatic amylase in the upper small intestine, with little hydrolysis occurring in the stomach.3 There is little contact or membrane digestion through absorption on brush border membranes.4 Polysaccharides break down into oligosaccharides that are then broken down into monosaccharides by surface enzymes on the brush borders of enterocytes.5 Microvilli in the enterocytes then direct the newly formed monosaccharides to the appropriate transport site.5 Disaccharidases in the brush border ensure that only monosaccharides are properly transported, not lingering disaccharides. This process differs from protein digestion, which largely involves degradation in gastric juices comprised of hydrochloric acid and pepsin and later transfer to the duodenum.<br />
<br />
Within the small intestine free fatty acid concentration increases significantly as oils and fats are hydrolyzed at a faster rate than in the stomach due to the increased presence of bile salts and pancreatic lipase.3 It is thought that the droplet size of emulsified lipids influences digestion and absorption, where smaller droplet sizes allow gastric lipase digestion to contribute to duodenal lipolysis.6,7 The smaller the droplet size, the finer the emulsion in the duodenum, leading to a higher degree of lipolysis.8 Not surprisingly gastric lipase activity is also greater in thoroughly mixed emulsions versus coarse ones. <br />
<br />
Typically hydrophobic interactions are responsible for the self-assembly of amphiphiles: water molecules are freed into a more disordered state, gaining entropy, as the hydrophobes of the amphiphilic molecules are buried in the cores of micelles, away from the surrounding water.9 However, in emulsions the presence of oils produces a low-polarity environment that can facilitate reverse self-assembly,10,11 with a driving force born from the attraction of hydrogen bonding. For example lecithin is a zwitterionic phospholipid with two hydrocarbon tails that forms reverse spherical or ellipsoidal micelles when exposed to oil.21 Basically emulsions could have the potential to significantly increase the hydrogen concentration of the stomach. <br />
<br />
This potential increase in free hydrogen could be an important aspect of why emulsions produce negative health outcomes in model organisms.1 One of the significant interactions that govern the concentrations and types of intestinal bacteria is the rate of interspecies hydrogen transfer from hydrogen-producing bacteria to hydrogen-consuming methanogens. Note that non-obese individuals have small methanogen-based intestinal populations whereas obese individuals have larger populations, and it is thought that the population of methanogens expands first, before one gains significant weight.13,14 The importance of this relationship is best demonstrated by understanding the biochemical process involved in the formation of fatty acids in the body.<br />
<br />
Methanogens like Methanobrevibacter smithii enhance fermentation efficiency by removing excess free hydrogen and formate in the colon. A reduced concentration of hydrogen leads to an increased rate of conversion of insoluble fibers into short-chain fatty acids (SCFAs).13 Propionate, acetate, butyrate and formate are the most common SCFAs formed and absorbed across the intestinal epithelium, providing a significant portion of the energy for intestinal epithelial cells and promoting survival, differentiation and proliferation, ensuring an effective intestinal lining.13,15,16 Butyric acid is also utilized by the colonocytes.17 Formate can also be directly used by hydrogenotrophic methanogens, and propionate and lactate can be fermented to acetate and H2.13<br />
<br />
Overall the population of archaea in the gut, largely attributable to Methanobrevibacter smithii, is tied to obesity, with the key factor being the availability of free hydrogen. If there is a lot of free hydrogen then there is a higher probability of a large archaeal population; otherwise the population remains very low because there is a limited ‘food source’. Therefore, the consumption of food products with emulsions or emulsion-like characteristics or components could increase available free hydrogen concentrations, which would change the intestinal bacteria composition in a negative manner and increase the probability that an individual becomes obese. This hypothesis coincides with existing evidence from model organisms that emulsion consumption has potential negative intestinal bacteria outcomes. One possible mechanism governing this negative influence is how the change in bacteria concentration influences the available concentration of SCFAs, which could change the stability of the intestinal lining.<br />
<br />
In addition to influencing hydrogen concentrations in the gut, emulsions also appear to have a significant influence on cholecystokinin (CCK) concentrations. CCK plays a meaningful role in both digestion and satiety, two components of food consumption that significantly influence both body weight and intestinal bacteria composition. Most of these concentration changes occur in the small intestine, most notably in the duodenum and jejunum.18 The largest influencing element for CCK release is the amount of fatty acid present in the chyme.18 CCK is responsible for inhibiting gastric emptying, decreasing gastric acid secretion and increasing the production of specific digestive secretions like hepatic bile and other bile salts, which form amphipathic lipids that emulsify fats. <br />
<br />
When compared against non-emulsified fats, emulsion consumption appears to reduce the feedback effect that suppresses hunger after food intake. This effect is principally the result of changes in CCK concentrations rather than other signaling molecules like GLP-1.19 Emulsion digestion begins when lipases bind to the surface of the emulsion droplets, and the effectiveness of lipase binding increases with decreasing droplet size. Smaller emulsion droplets also tend to have more complex microstructures, which produce more surface area and allow for more effective digestion, as illustrated below. <br />
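<br />
To make the droplet-size argument concrete, the short sketch below (Python, purely illustrative; the droplet diameters and the 10 mL oil volume are hypothetical values, not figures from the cited studies) computes the total oil-water interfacial area available for lipase binding, using the geometric fact that the surface area of monodisperse spherical droplets per unit oil volume scales as 6/d.<br />
<br />
<pre>
# Illustrative sketch: oil-water interfacial area available to lipases for a
# fixed oil volume, as a function of emulsion droplet diameter.
# Droplet sizes below are hypothetical; the point is only the 6/d scaling.

def total_interfacial_area(oil_volume_ml: float, droplet_diameter_um: float) -> float:
    """Return total droplet surface area (cm^2) for a given oil volume.

    For monodisperse spherical droplets, surface area per unit volume = 6/d,
    so total area = 6 * V / d.
    """
    d_cm = droplet_diameter_um * 1e-4   # um -> cm
    volume_cm3 = oil_volume_ml          # 1 mL = 1 cm^3
    return 6.0 * volume_cm3 / d_cm

if __name__ == "__main__":
    for d_um in (0.5, 5.0, 50.0):       # hypothetical droplet diameters
        area = total_interfacial_area(oil_volume_ml=10.0, droplet_diameter_um=d_um)
        print(f"{d_um:5.1f} um droplets -> {area:,.0f} cm^2 of oil-water interface")
</pre>
<br />
A hundredfold reduction in droplet diameter yields a hundredfold increase in interfacial area under these assumptions, which is consistent with the faster digestion and stronger CCK response described here.<br />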
<br />
This higher rate of breakdown produces a more rapid release of fatty acids, and the presence of free fatty acids in the small intestinal lumen is critical for slowing gastric emptying and triggering CCK release.20 The accelerated breakdown creates a relationship between CCK concentration and emulsion droplet size: the larger the droplet size, the lower the released CCK concentration.21 One of the main reasons why larger droplets produce less hunger satisfaction is that with less CCK release and slower emulsion breakdown there is less feedback slowing of intestinal transit. Basically food travels through the intestine at a faster rate because there are fewer digestion-derived cues (feedback) to slow transit. <br />
<br />
As alluded to above, the type of emulsifier used to produce the emulsion appears to be the most important element in how an emulsion influences digestion. For example the lipid and fatty acid concentrations produced from digestion of a yolk lecithin emulsion were up to 50% lower than those from an emulsion using polysorbate 20 (i.e. Tween 20) or caseinate.7 Basically if certain emulsifiers are used the rate of emulsion digestion can be reduced, potentially increasing the concentration of bile salts in the small intestine, which could produce a higher probability of negative intestinal events. <br />
<br />
Furthermore studies using low-molecular-mass emulsifiers (two non-ionic, two anionic and one cationic) demonstrated three tiers of TG lipolysis governed by the emulsifier-to-bile salt ratio.3 At low emulsifier-to-bile ratios (< 0.2 mM) there was no change in the solubilization capacity of micelles, whereas at ratios between 0.2 mM and 2 mM solubilization capacity significantly increased, which limited interactions between the oil and destabilization reaction products, reducing oil degradation.3 At higher ratios (> 2 mM) emulsifier molecules remain in the adsorption layer, heavily limiting lipase activity, which significantly reduces digestion and oil degradation.3 A rough restatement of these three tiers is sketched below.<br />
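<br />
To keep those three tiers straight, here is a minimal sketch (Python, illustrative only) mapping an emulsifier-to-bile salt value onto the regimes just described; the 0.2 mM and 2 mM breakpoints are the ones quoted from the study, while the function name, regime labels and example values are my own shorthand rather than anything defined in the cited paper.<br />
<br />
<pre>
# Rough classifier for the three lipolysis regimes described above.
# The 0.2 mM and 2 mM breakpoints come from the summarized study (ref 3);
# the labels and function are illustrative shorthand only.

def lipolysis_regime(emulsifier_to_bile_mM: float) -> str:
    if emulsifier_to_bile_mM < 0.2:
        # Low ratio: micelle solubilization capacity essentially unchanged.
        return "low: no change in micelle solubilization capacity"
    elif emulsifier_to_bile_mM <= 2.0:
        # Intermediate ratio: solubilization capacity rises, shielding the oil
        # from destabilization products and slowing oil degradation.
        return "intermediate: increased solubilization, reduced oil degradation"
    else:
        # High ratio: emulsifier dominates the adsorption layer and blocks
        # lipase, strongly suppressing digestion.
        return "high: adsorption layer blocks lipase, digestion suppressed"

for ratio in (0.05, 1.0, 5.0):   # hypothetical example values
    print(f"{ratio} mM -> {lipolysis_regime(ratio)}")
</pre>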
<br />
Another possible influencing factor could be a change in glucagon concentrations. There is evidence suggesting that increasing glucagon concentrations in already-fed rats can produce hypersecretory activity in both the jejunum and ileum.22-24 It stands to reason that, given the activation potential of glucagon-like peptide-1 (GLP-1) in concert with CCK, glucagon plays some role. However, there are no specifics regarding how glucagon directly interacts with intestinal bacteria or with the changes in digestion rate associated with emulsions. <br />
<br />
The mechanism behind why emulsions and their associated emulsifiers produce negative health outcomes in mice is unknown, but it stands to reason that both how emulsions change the rate of digestion and the available free hydrogen concentration play significant roles. These two factors have sufficient influence on the composition and concentration of intestinal bacteria, which in turn influence a large number of digestive properties including nutrient extraction and SCFA concentration management. SCFA management may be the most pertinent issue regarding the metabolic syndrome outcomes seen in mice exposed to emulsifiers. <br />
<br />
It appears that creating emulsions with smaller droplet sizes could mitigate negative outcomes, which can be achieved by using lecithin over other types of emulsifiers. Overall, while emulsifiers may be a necessary element of modern food production to ensure quality, instructing companies on the proper emulsifier to use at the appropriate ratios should have a positive effect on managing any detrimental interaction between emulsions and gut bacteria.<br />
<br />
<br />
<br />
Citations – <br />
<br />
1. Chassaing, B, et Al. “Dietary emulsifiers impact the mouse gut microbiota promoting colitis and metabolic syndrome.” Nature. 2015. 519(7541):92-96.<br />
<br />
2. Choy, A, et Al. “The effects of microbial transglutaminase, sodium stearoyl lactylate and water on the quality of instant fried noodles.” Food Chemistry. 2010. 122:957-964.<br />
<br />
3. Vinarov, Z, et Al. “Effects of emulsifiers charge and concentration on pancreatic lipolysis: 2. interplay of emulsifiers and biles.” Langmuir. 2012. 28:12140-12150. <br />
<br />
4. Ugolev, A, and Delaey, P. “Membrane digestion – a concept of enzymic hydrolysis on cell membranes.” Biochim Biophys Acta. 1973. 300:105-128.<br />
<br />
5. Levin, R. “Digestion and absorption of carbohydrates from molecules and membranes to humans.” Am. J. Clin. Nutr. 1994. 59:690S-85.<br />
<br />
6. Mu, H, and Hoy, C. “The digestion of dietary triacylglycerols.” Progress in Lipid Research. 2004. 43:105-133.<br />
<br />
7. Hur, S, et Al. “Effect of emulsifiers on microstructural changes and digestion of lipids in instant noodle during in vitro human digestion.” LWT – Food Science and Technology. 2015. 60:630-636.<br />
<br />
8. Armand, M, et Al. “Digestion and absorption of 2 fat emulsions with different droplet sizes in the human digestive tract.” American Journal of Clinical Nutrition. 1999. 70:1096-1106.<br />
<br />
9. Njauw, C-W, et Al. “Molecular interactions between lecithin and bile salts/acids in oils and their effects on reverse micellization.” Langmuir. 2013. 29:3879-3888.<br />
<br />
10. Israelachvili, J. “Intermolecular and surface forces.” 3rd ed. Academic Press: San Diego. 2011.<br />
<br />
11. Evans, D, and Wennerstrom, H. “The colloidal domain: where physics, chemistry biology, and technology meet.” Wiley-VCH: New York. 2001.<br />
<br />
12. Tung, S, et Al. “A new reverse wormlike micellar system: mixtures of bile salt and lecithin in organic liquids.” J. Am. Chem. Soc. 2006. 128:5751-5756.<br />
<br />
13. Zhang, H, et Al. “Human gut microbiota in obesity and after gastric bypass.” PNAS. 2009. 106(7):2365-2370.<br />
<br />
14. Turnbaugh, P, et Al. “An obesity-associated gut microbiome with increased capacity for energy harvest.” Nature. 2006. 444(7122):1027-31.<br />
<br />
15. Son, G, Kremer, M, Hines, I. “Contribution of Gut Bacteria to Liver Pathobiology.” Gastroenterology Research and Practice. 2010. doi:10.1155/2010/453563.<br />
<br />
16. Luciano, L, et Al. “Withdrawal of butyrate from the colonic mucosa triggers ‘mass apoptosis’ primarily in the G0/G1 phase of the cell cycle.” Cell and Tissue Research. 1996. 286(1):81–92.<br />
<br />
17. Cummings, J, and Macfarlane, G. “The control and consequences of bacterial fermentation in the human colon.” Journal of Applied Bacteriology. 1991. 70:443-459.<br />
<br />
18. Rasoamanana, R, et Al. “Dietary fibers solubilized in water or an oil emulsion induce satiation through CCK-mediated vagal signaling in mice.” J. Nutr. 2012. 142:2033-2039. <br />
<br />
19. Adam, T, and Westerterp-Plantenga, M. “Glucagon-like peptide-1 release and satiety after a nutrient challenge in normal-weight and obese subjects.” Br J Nutr. 2005. 93:845–51.<br />
<br />
20. Little, T, et Al. “Free fatty acids have more potent effects on gastric emptying, gut hormones, and appetite than triacylglycerides.” Gastroenterology. 2007. 133:1124–31.<br />
<br />
21. Seimon, R, et Al. “The droplet size of intraduodenal fat emulsions influences antropyloroduodenal motility, hormone release, and appetite in healthy males.” Am. J. Clin. Nutr. 2009. 89:1729-1736.<br />
<br />
22. Young, A, and Levin, R. “Diarrhoea of famine and malnutrition: investigations using a rat model. 1. Jejunal hypersecretion induced by starvation.” Gut. 1990. 31:43-53.<br />
<br />
23. Young, A, and Levin, R. “Diarrhoea of famine and malnutrition: investigations using a rat model. 2. Ileal hypersecretion induced by starvation.” Gut. 1990. 31:162-169.<br />
<br />
24. Lane, A, and Levin, R. “Enhanced electrogenic secretion in vitro by small intestine from glucagon treated rats: implications for the diarrhoea of starvation.” Exp. Physiol. 1992. 77:645-648.<br />
<br />
Augmenting rainfall probability to ward off long-term drought?<br />
Despite the ridiculous pseudo-controversy surrounding global warming in public discourse, the reality is that it has already begun to significantly influence the global climate. One of the most important factors in judging the range and impact of global warming, as well as how society should respond, is also one of the more perplexing: cloud formation. Not only do clouds influence the cycle of heat escape and retention, they also drive precipitation probability. Precipitation plays an important role in maintaining effective hydrological cycles and heat budgets, and it will change significantly in response to future warming, largely toward more extreme outcomes: some areas will receive large increases that produce flash flooding, whereas other areas will be deprived of rainfall, producing longer-term droughts similar to those now seen in California. <br />
<br />
At its core precipitation is influenced by numerous factors like solar heating and terrestrial radiation.1,2 Among these factors various aerosol particles are thought to hold an important influence. Both organic and inorganic aerosols are plentiful in the atmosphere, helping to cool the surface of the Earth by scattering sunlight or serving as nuclei for the formation of water droplets and ice crystals.3 Not surprisingly, information regarding the means by which the properties of these aerosols influence cloud formation and precipitation is still limited, which creates significant uncertainties in climate modeling and planning. Therefore, increasing knowledge of how aerosols influence precipitation will provide valuable information for managing the various changes that will occur and possibly even mitigating those changes. <br />
<br />
The formation of precipitation within clouds is heavily influenced by ice nucleation, the induction of crystallization in supercooled water (supercooled = a meta-stable state where water remains liquid below typical freezing temperatures). The process of ice nucleation typically occurs through one of two pathways: homogeneous or heterogeneous. Homogeneous nucleation entails spontaneous nucleation within a sufficiently cooled solution (usually a supersaturated solution at a relative humidity of 150-180% and a temperature of around –38 degrees C) requiring only liquid water or aqueous solution droplets.4-6 Due to its relative simplicity homogeneous nucleation is better understood than heterogeneous nucleation. However, because of its temperature requirements homogeneous nucleation typically only takes place in the upper troposphere, and with a warming atmosphere its probability of occurrence should be expected to decrease.<br />
<br />
Heterogeneous nucleation is more complicated because of the multiple pathways that can be taken, i.e. depositional freezing, condensation, contact, and immersion freezing.7,8 These different pathways allow for more flexibility in nucleation, with generic initiation conditions beginning just below 0 degrees C at a relative humidity of 100%. Nucleation can proceed at these much warmer temperatures because of the presence of a catalyst, a non-water substance commonly referred to as an ice-forming nucleus (IN). Heterogeneous nucleation in a mixed-phase cloud can also involve diffusive growth that consumes liquid droplets at a faster rate (the Wegener–Bergeron–Findeisen process) than riming of supercooled droplets or snow/graupel aggregation.9 A rough restatement of the two nucleation regimes is sketched below.<br />
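<br />
The two pathways can be loosely summarized in the sketch below (Python, illustrative only); the –38 degrees C / 150-180% relative humidity figures for homogeneous nucleation and the just-below-0 degrees C, ~100% relative humidity onset for IN-assisted heterogeneous nucleation are the values quoted above, and the function is merely a restatement of those thresholds, not a cloud microphysics parameterization.<br />
<br />
<pre>
# Rough restatement of the nucleation conditions described above.
# Thresholds (-38 C and >=150% RH; below 0 C and >=100% RH with an IN present)
# are the figures quoted in the text; this is not a real microphysics scheme.

def likely_nucleation_pathway(temp_c: float, rel_humidity_pct: float,
                              ice_nuclei_present: bool) -> str:
    if temp_c <= -38.0 and rel_humidity_pct >= 150.0:
        return "homogeneous nucleation plausible (upper troposphere conditions)"
    if ice_nuclei_present and temp_c < 0.0 and rel_humidity_pct >= 100.0:
        return "heterogeneous nucleation plausible (IN acting as catalyst)"
    return "ice nucleation unlikely under the quoted thresholds"

print(likely_nucleation_pathway(-40.0, 160.0, ice_nuclei_present=False))
print(likely_nucleation_pathway(-5.0, 101.0, ice_nuclei_present=True))
</pre>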
<br />
Laboratory experiments have demonstrated support for many different materials acting as IN: various metallic particles, biological materials, certain glasses, mineral dust, anhydrous salts, etc.8,10,11 These laboratory experiments involve wind tunnels, electrodynamic levitation, scanning calorimetry, cloud chambers, and optical microscopy.12,13 However, not surprisingly, there appears to be a significant difference between nucleation ability in the lab and in nature.8,10<br />
<br />
Also, while homogeneous ice nucleation is essentially a single, uniform process, heterogeneous nucleation is not.8 Temperature variations within a cloud can produce differing modes of heterogeneous nucleation versus homogeneous nucleation, producing significant differences in efficiency. Some forms of nucleation in cloud formations are correspondingly difficult to understand, like the rapid development of high ice concentrations in warm precipitating cumulus clouds, i.e. particle concentrations increasing from 0.01 L-1 to 100 L-1 in a few minutes at temperatures warmer than –10 degrees C, outpacing existing ice nucleus measurements.14 One explanation for this phenomenon is the Hallett-Mossop (H-M) process, which is thought to achieve this rapid freezing through interaction with a narrow band of supercooled raindrops producing rimers.15<br />
<br />
The H-M process requires cloud temperatures between approximately –1 and –10 degrees C with the availability of large rain droplets (diameters > 24 um) at roughly a 0.1 ratio relative to smaller (< 13 um) droplets.16,17 When the riming process begins ice splinters are ejected and grow through water vapor deposition, producing a positive feedback effect that increases riming and produces more ice splinters. Basically a feedback loop develops between ice splinter formation and small drop freezing. Unfortunately there are some questions about whether this mechanism can properly explain the characteristics of secondary ice particles and the formation of ice crystal bursts under certain time constraints.18 However, these concerns may not be warranted due to improper assumptions regarding how water droplets form relative to existing water concentrations.15 The quoted H-M window is restated in the short sketch below.<br />
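<br />
A minimal sketch of that H-M window, assuming only the cutoffs quoted above (–1 to –10 degrees C, ‘large’ droplets > 24 um, a large-to-small droplet ratio near 0.1); the tolerance placed around the 0.1 ratio and the example droplet counts are arbitrary illustrative choices.<br />
<br />
<pre>
# Check whether the quoted Hallett-Mossop (rime-splintering) conditions hold.
# Cutoffs (-1 to -10 C, >24 um "large" droplets, ~0.1 large-to-small ratio)
# come from the text above; the +/-50% tolerance on the ratio is arbitrary.

def hallett_mossop_window(cloud_temp_c: float,
                          large_droplets_per_L: float,
                          small_droplets_per_L: float) -> bool:
    """large/small droplet counts refer to diameters >24 um and <13 um."""
    in_temp_band = -10.0 <= cloud_temp_c <= -1.0
    if small_droplets_per_L <= 0:
        return False
    ratio = large_droplets_per_L / small_droplets_per_L
    ratio_ok = 0.05 <= ratio <= 0.15   # "about 0.1", loosely bounded
    return in_temp_band and large_droplets_per_L > 0 and ratio_ok

# Hypothetical droplet counts, purely for illustration
print(hallett_mossop_window(cloud_temp_c=-5.0,
                            large_droplets_per_L=10.0,
                            small_droplets_per_L=100.0))   # True
</pre>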
<br />
One of the more important elements of rain formation in warm precipitating cumulus clouds, as in other cloud formations, appears to be the location of ice particle concentrations at the top of the cloud, where there is a higher probability of large droplet formation (500-2000 um diameters).15 In this regard cloud depth/area is a more important influence than cloud temperature.19 In addition, the apparent continued formation of ice crystals proceeding from the top downwards can produce raindrop freezing that catalyzes further ice formation, creating a positive feedback and ice bursts.20<br />
<br />
This process suggests that there is sufficient replenishment of small droplets at the cloud top to sustain riming. It is thought that the time variation governing the rate of ice multiplication, and how cloud temperature changes accordingly, is determined by dry adiabatic cooling at the cloud top, condensational warming, and evaporational cooling at the cloud base.15 Bacteria also appear to play a meaningful role in both nucleating primary ice crystals and scavenging secondary crystals.7 Even if bacteria concentrations are low (< 0.05 L-1) the catalytic effect of nucleating bacteria produces a much more “H-M” friendly environment.<br />
<br />
The most prominent inorganic aerosol acting as an IN is dust, commonly from deserts, that is pushed into the upper atmosphere by storms.21,22 The principal source of this dust is the Sahara Desert, whose dust is lofted year round, unlike dust from other origin points like the Gobi or Siberia. While the ability of this dust to produce rain is powerful, it can also have a counteracting effect as a cloud condensation nucleus (CCN). In most situations when CCN concentration is increased, raindrop conversion becomes less efficient, especially for low-level clouds (in part due to higher temperatures), largely by reducing riming efficiency.<br />
<br />
The probability of dust acting as a CCN is influenced by the presence of anthropogenic pollution, which typically acts as a CCN on its own.23,24 In some situations the presence of pollution could also increase the overall rate of rainfall, as it can suppress premature rainfall, allowing more rain droplets to crystallize, increasing riming and potential rainfall. However, this aspect of pollution is only valid in the presence of dust or other INs, for if there is a dearth of IN, localized pollution will decrease precipitation.25 Soot can also influence nucleation and resultant rainfall, but only under certain circumstances. For example if the surface of the soot contains molecules able to form hydrogen bonds with liquid water (typically via available hydroxyl and carbonyl groups), nucleation is enhanced.26 Overall it seems appropriate to label dust as a strong IN and anthropogenic pollution as a significant CCN. <br />
<br />
In mineral collection studies and global simulations of aerosol particle concentrations, both deposition and immersion heterogeneous nucleation appear dominated by dust concentrations acting as INs, especially in cirrus clouds.10,27,28 Aerosols also modify certain cloud properties like droplet size and water phase. Most other inorganic atmospheric aerosols behave like CCNs, which assist the condensation of water vapor into cloud droplets at a certain level of supersaturation.25 Typically this condensation produces a large number of small droplets, which can reduce the probability of warm rain (rain formed above the freezing point).29,30 <br />
<br />
Recall that altitude is important for precipitation, thus it is not surprising that one of the key factors in how aerosols influence precipitation type and probability appears to involve the elevation and temperature at which they interact. For example in mixed-phase clouds the cloud-top area increases with increasing CCN concentrations, versus a smaller change at lower altitudes and no change in pure liquid clouds.15,31 Also, CCNs only significantly influence temperatures when both the cloud top and cloud base temperatures are below freezing.31 In short it appears that CCN influence is reduced relative to IN influence at higher altitudes and lower temperatures.<br />
<br />
Also cloud drop concentration and size distribution at the base and top of a cloud determine the efficiency of the CCN and are dictated by the chemical structure and size of an aerosol. For example larger aerosols have a higher probability of becoming CCN rather than IN due to their coarse structure. Finally, and not surprisingly, overall precipitation frequency increases with high water content and decreases with low water content when exposed to CCNs.31 This behavior creates a positive feedback structure that increases aerosol concentration, so for arid regions the probability of drought increases and in wet regions the probability of flooding increases.<br />
<br />
While dust from natural sources and general pollution are the two most common aerosols, an interesting secondary source may be soil dust produced by land use changes such as deforestation or large-scale construction projects.32-34 These actions create anthropogenic dust emissions that can catalyze a feedback loop producing greater precipitation extremes; thus in certain developing economic regions already struggling with droughts, continued construction in an effort to improve the economy could exacerbate those droughts. Therefore, developing regions may need specific methodologies to govern their development to ensure proper levels of rainfall for the future.<br />
<br />
While the role of dust has not been fully identified on a mechanistic level, its importance is not debatable. The role of biological particles, like bacteria, is more controversial and could be critical to identifying a method to enhance rainfall probability. It is important to identify the capacity of bacteria to catalyze rainfall because some laboratory studies have demonstrated that inorganic INs only have significant activity below –15 degrees C.10,35 For example in samples of snowfall collected globally that originated at temperatures of –7 degrees C or warmer, the vast majority of the active IN, up to 85%, were lysozyme-sensitive (i.e. probably bacteria).36,37 Also rain tends to have higher proportions of active IN bacteria than air in the same region.38 With further global warming on the horizon air temperatures will continue to increase, narrowing the window for inorganic IN activity and thus lowering the probability of rainfall in general (not considering any other changes born from global warming). <br />
<br />
Laboratory and field studies have demonstrated approximately twelve species of bacteria with significant IN ability spread across three orders of the Gammaproteobacteria, with the two most notable/frequent agents being Pseudomonas syringae and P. fluorescens, and to a lesser extent Xanthomonas.39,40 In the presence of an IN bacterium, nucleation can occur at temperatures as warm as –1.5 to –2 degrees C.41,42 These bacteria appear to act as IN due to a single gene that codes for a specific membrane protein that catalyzes crystal formation by acting as a template for water molecule arrangement.43 These bacteria originate mostly from surface vegetation. <br />
<br />
Supporting the importance of this membrane scaffolding, an acidic environment can significantly reduce the effectiveness of bacteria-based nucleation.45,46 The protein complexes responsible for nucleation are larger in warmer-temperature nucleating bacteria and thus more prone to breakdown in more acidic environments.44,46 Therefore, low-lying areas with significant acidic pollution, such as sulfur compounds, could see a reduction in precipitation probability over time. It also seems that this protein complex, rather than the biological processes of the living bacterium, is the critical element of bacteria-based nucleation, as nucleation was still enhanced even when the bacteria themselves were no longer viable.46<br />
<br />
Despite laboratory and theoretical evidence supporting the role of bacteria in precipitation, as stated above what occurs in the laboratory serves little purpose if it does not translate to nature. This translation is where a controversy arises. It can be difficult to separate the various particles within clouds from residue collection due to widespread internal mixing, but empirical evidence demonstrates the presence of biological material in orographic clouds.47 Also ice nucleation bacteria are present over all continents as well as in various specific locations like the Amazon basin.37,48,49<br />
<br />
Some estimates have suggested that 10^24 bacteria enter the atmosphere each year and remain in circulation between 2 and 10 days, theoretically allowing bacteria to travel thousands of miles.50,51 However, there is a lack of evidence for bacteria in the upper troposphere, and their concentrations are dramatically lower than those of inorganic materials like dust and soot.28,35,52 Based on these low concentrations, questions exist regarding how efficiently these bacteria are aerosolized and sustained over their atmospheric lifetimes. One study suggests that IN-active bacteria are much more efficiently precipitated than non-IN-active bacteria, which may explain the disparity between observations in the air, clouds and precipitation.53 A back-of-envelope reading of these emission and residence-time figures is given below.<br />
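<br />
As that back-of-envelope reading, the arithmetic below combines the quoted ~10^24 bacteria per year with the 2-10 day residence time to estimate a steady-state airborne population; the ~5 x 10^18 m^3 tropospheric volume is an assumed round number used only to show why even this enormous emission figure translates into sparse concentrations.<br />
<br />
<pre>
# Order-of-magnitude reading of the quoted figures: ~1e24 bacteria emitted per
# year with a 2-10 day atmospheric residence time. The troposphere volume is
# an assumed round number (roughly 10 km depth over Earth's surface), used
# only to illustrate the resulting dilution.
EMISSION_PER_YEAR = 1e24        # bacteria entering the atmosphere per year (quoted)
TROPOSPHERE_VOLUME_M3 = 5e18    # assumed round figure, not a quoted value

for residence_days in (2, 10):
    airborne = EMISSION_PER_YEAR * residence_days / 365.0
    per_m3 = airborne / TROPOSPHERE_VOLUME_M3
    print(f"{residence_days:2d} day residence: ~{airborne:.1e} airborne, "
          f"~{per_m3:.0f} per m^3 if spread through the troposphere")
</pre>
<br />
Even at the upper end this works out, under these assumptions, to only a few thousand cells per cubic meter, consistent with the observation that airborne bacterial concentrations are dwarfed by those of dust and soot.<br />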
<br />
Another possible explanation for this disparity is that most biological particles are generated at the surface and are carried by updrafts and currents into the atmosphere. While the methods of transport are similar to those of inorganic particles, biological particles have a higher removal potential through dry or wet deposition owing to their typically greater size. Therefore, from a natural standpoint bacteria reside in orographic clouds because they are able to participate in their formation, but are not able to reach higher cloud formations, so most upper tropospheric rain is born from dust rather than bacteria. <br />
<br />
Some individuals feel that the current drop freezing assays, which are used to identify the types of bacteria and other agents in a collected sample, can be improved upon to produce a higher level of discrimination between the various classes of IN-active bacteria that may be present. One possible idea is to store the sample at low temperatures and observe the growth and type of IN bacteria that occur in a community versus individual samples.54 Perhaps new identification techniques would increase the ability to discern the role of bacteria in cloud formation and precipitation.<br />
<br />
Among the other atmospheric agents and their potential influence on precipitation, potassium appears to have a meaningful role. Some biogenic emissions of potassium, especially around the Amazon, can act as catalysts for the beginning of organic material condensation.55 However, this role seems to ebb, as the potassium mass fraction drops when the condensation rate increases.55 This secondary role of potassium, as well as the role of bacteria, may signal an important reason why past cloud seeding experiments have not achieved the hypothesized expectations. <br />
<br />
The lack of natural bacterial input into higher cloud formations leads to an interesting question: what would happen if IN-active bacteria like P. syringae were released via plane or another high-altitude delivery method, resulting in a higher concentration of bacteria in these higher-altitude cloud formations? While typical cloud formation involves vapor saturation due to air cooling and/or increased vapor concentration, an increased IN-active bacteria concentration could also speed cloud formation as well as precipitation probability. <br />
<br />
Interestingly, in past cloud seeding experiments orographic clouds appear to be more sensitive to purposeful seeding than other cloud formations, largely because of the shorter residence times of cloud droplets.56,57 One of the positive elements of seeding appears to be that increased precipitation in the target area does not reduce the level of precipitation in surrounding areas, including those beyond the target area. In fact there appears to be a net increase (5-15%) among all areas regardless of the location of seeding.58 The previous presumption that there was loss appears to be based on randomized and not properly controlled seeding experiments.58<br />
<br />
The idea of introducing increased concentrations of IN-active bacteria is an interesting one if it can increase the probability of precipitation. Of course possible negatives must be considered before such an introduction. The chief negative associated with increasing a bacterium like P. syringae would be the possibility of more infection of certain types of plants. The frost mechanism of P. syringae is a minor concern because most of the seeding would be carried out between late spring and early fall, when night-time temperatures should not be cold enough to induce freezing. Sabotaging the type III secretion system in P. syringae via some form of genetic manipulation should reduce, if not eliminate, the plant invasion potential. Obviously controlled laboratory tests should be conducted to ensure a high probability of invasion neutralization success before any controlled and limited field tests are conducted. If the use of living bacteria proves to be too costly, exploration of simply using the key membrane protein is another possible avenue of study.<br />
<br />
Overall the simple fact is that due to global warming global precipitation patterns will change dramatically. The forerunner of these changes can already be seen in the state of California, with no reasonable expectation of significant new rainfall in sight. While other potable water options are available, like desalination, the level of infrastructure required to move these new sources from origin points to usage points will be costly, and these processes have significant detrimental byproducts. If precipitation probabilities can be safely increased through new cloud seeding strategies, like the inclusion of IN-active bacteria, it could go a long way toward combating some of the negative effects of global warming while the causes of global warming itself are mitigated.<br />
<br />
<br />
<br />
Citations – <br />
<br />
1. Zuberi, B, et Al. “Heterogeneous nucleation of ice in (NH4)2SO4-H2O particles with mineral dust immersions.” Geophys. Res. Lett. 2002. 29(10). 1504.<br />
<br />
2. Hung, H, Malinowski, A, and Martin, S. “Kinetics of heterogeneous ice nucleation on the surfaces of mineral dust cores inserted into aqueous ammonium sulfate particles.” J. Phys. Chem. 2003. 107(9):1296-1306. <br />
<br />
3. Lohmann, U. “Aerosol effects on clouds and climate.” Space Sci. Rev. 2006. 125:129-137.<br />
<br />
4. Hartmann, S, et Al. “Homogeneous and heterogeneous ice nucleation at LACIS: operating principle and theoretical studies.” Atmos. Chem. Phys. 2011. 11:1753-1767.<br />
<br />
5. Cantrell, W, and Heymsfield, A. “Production of ice in tropospheric clouds. A review.” American Meteorological Society. 2005. 86(6):795-807.<br />
<br />
6. Riechers, B, et Al. “The homogeneous ice nucleation rate of water droplets produced in a microfluidic device and the role of temperature uncertainty.” Physical Chemistry Chemical Physics. 2013. 15(16):5873-5887.<br />
<br />
7. Cziczo, D, et Al. “Clarifying the dominant sources and mechanisms of cirrus cloud formation.” Science. 2013. 340(6138):1320-1324.<br />
<br />
8. Pruppacher, H, and Klett, J. “Microphysics of clouds and precipitation.” (Kluwer Academic, Dordrecht. Ed. 2, 1997). pp. 309-354.<br />
<br />
9. Lance, S, et Al. “Cloud condensation nuclei as a modulator of ice processes in Arctic mixed-phase clouds.” Atmos. Chem. Phys. 2011. 11:8003-8015.<br />
<br />
10. Hoose, C, and Mohler, O. “Heterogeneous ice nucleation on atmospheric aerosols: a review of results from laboratory experiments.” Atmos. Chem. Phys. 2012. 12:9817-9854. <br />
<br />
11. Abbatt, J, et Al. “Solid ammonium sulfate aerosols as ice nuclei: A pathway for cirrus cloud formation.” Science. 2006. 313:1770-1773.<br />
<br />
12. Murray, B, et Al. “Kinetics of the homogeneous freezing of water.” Phys. Chem. 2010. 12:10380-10387.<br />
<br />
13. Chang, H, et Al. “Phase transitions in emulsified HNO3/H2O and HNO3/H2SO4/H2O solutions.” J. Phys. Chem. 1999. 103:2673-2679.<br />
<br />
14. Hobbs, P, and Rangno, A. “Rapid development of ice particle concentrations in small, polar maritime cumuliform clouds.” J. Atmos. Sci. 1990. 47:2710–2722.<br />
<br />
15. Sun, J, et Al. “Mystery of ice multiplication in warm-based precipitating shallow cumulus clouds.” Geophysical Research Letters. 2010. 37:L10802.<br />
<br />
16. Hallett, J, and Mossop, S. “Production of secondary ice particles during the riming process.” Nature. 1974. 249:26-28.<br />
<br />
17. Mossop, S. “Secondary ice particle production during rime growth: The effect of drop size distribution and rimer velocity.” Q. J. R. Meteorol. Soc. 1985. 111:1113-3324.<br />
<br />
18. Mason, B. “The rapid glaciation of slightly supercooled cumulus clouds.” Q. J. R. Meteorol. Soc. 1996. 122:357-365.<br />
<br />
19. Rangno, A, and Hobbs, P. “Microstructures and precipitation development in cumulus and small cumulonimbus clouds over the warm pool of the tropical Pacific Ocean.” Q. J. R. Meteorol. Soc. 2005. 131:639-673.<br />
<br />
20. Phillips, V, et Al. “The glaciation of a cumulus cloud over New Mexico.” Q. J. R. Meteorol. Soc. 2001. 127:1513-1534.<br />
<br />
21. Karydis, V, et Al. “On the effect of dust particles on global cloud condensation nuclei and cloud droplet number.” J. Geophys. Res. 2011. 166:D23204. <br />
<br />
22. Connolly, P, et Al. “Studies of heterogeneous freezing by three different desert dust samples.” Atmos. Chem. Phys. 2009. 9:2805-2824. <br />
<br />
23. Lynn, B, et Al. “Effects of aerosols on precipitation from orographic clouds.” J. Geophys. Res. 2007. 112:D10225.<br />
<br />
24. Jirak, I, and Cotton, W. “Effect of air pollution on precipitation along the Front Range of the Rocky Mountain.” J. Appl. Meteor. Climatol. 2006. 45:236-245.<br />
<br />
25. Fan, J, et Al. “Aerosol impacts on California winter clouds and precipitation during CalWater 2011: local pollution versus long-range transported dust.” Atmos. Chem. Phys. 2014. 14:81-101.<br />
<br />
26. Gorbunov, B, et Al. “Ice nucleation on soot particles.” J. Aerosol Sci. 2001. 32(2):199-215.<br />
<br />
27. Kirkevag, A, et Al. “Aerosol-climate interactions in the Norwegian Earth System Model – NorESM.” Geosci. Model Dev. 2013. 6:207-244.<br />
<br />
28. Hoose, C, Kristjansson, J, Burrows, S. “How important is biological ice nucleation in clouds on a global scale?” Environ. Res. Lett. 2010. 5:024009.<br />
<br />
29. Lohmann, U. “A glaciation indirect aerosol effect caused by soot aerosols.” Geophys. Res. Lett. 2002. 29:11.1-4. <br />
<br />
30. Koop, T, et Al. “Water activity as the determinant for homogeneous ice nucleation in aqueous solutions.” Nature. 2000. 406:611-614.<br />
<br />
31. Li, Z, et Al. “Long-term impacts of aerosols on the vertical development of clouds and precipitation.” Nature Geoscience. 2011. DOI: 10.1038/NGEO1313<br />
<br />
32. Zender, C, Miller, R, and Tegen, I. “Quantifying mineral dust mass budgets: Terminology, constraints, and current estimates.” Eos. Trans. Am. Geophys. Union. 2004. 85:509-512.<br />
<br />
33. Forster, P, et Al. “Changes in atmospheric constituents and in radiative forcing.” In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. <br />
<br />
34. O’Sullivan, D, et Al. “Ice nucleation by fertile soil dusts: relative importance of mineral and biogenic components.” Atmos. Chem. Phys. 2014. 14:1853-1867.<br />
<br />
35. Murray, B, et Al. “Ice nucleation by particles immersed in supercooled cloud droplets.” Chem. Soc. Rev. 2012. 41:6519-6554.<br />
<br />
36. Christner, B, et Al. “Geographic, seasonal, and precipitation chemistry influence on the abundance and activity of biological ice nucleators in rain and snow.” PNAS. 2008. 105:18854. doi:10.1073/pnas.0809816105.<br />
<br />
37. Christner, B, et Al. “Ubiquity of biological ice nucleators in snowfall.” Science. 2008. 319:1214.<br />
<br />
38. Stephanie, D, and Waturangi, D. “Distribution of ice nucleation-active (INA) bacteria from rainwater and air.” HAYATI Journal of Biosciences. 2011. 18:108-112.<br />
<br />
39. Vaitilingom, M, et Al. “Long-term features of cloud microbiology at the puy de Dome (France).” Atmos. Environ. 2012. 56:88-100.<br />
<br />
40. Cochet, N and Widehem, P. “Ice crystallization by Pseudomonas syringae.” Appl. Microbiol. Biotechnol. 2000. 54:153-161.<br />
<br />
41. Heymsfield, A, et Al. “Upper-tropospheric relative humidity observations and implications for cirrus ice nucleation.” Geophys. Res. Lett. 1998. 25:1343-1346.<br />
<br />
42. Twohy, C, and Poellot, M. “Chemical characteristics of ice residual nuclei in anvil cirrus clouds: implications for ice formation processes.” Atmos. Chem. Phys. 2005. 5:2289-2297. <br />
<br />
43. Joly, M, et Al. “Ice nucleation activity of bacteria isolated from cloud water.” Atmos. Environ. 2013. 70:392-400.<br />
<br />
44. Attard, E, et Al. “Effects of atmospheric conditions on ice nucleation activity of Pseudomonas.” Atmos. Chem. Phys. 2012. 12:10667-10677.<br />
<br />
45. Kawahara, H, Tanaka, Y, and Obata H. “Isolation and characterization of a novel ice-nucleating bacterium, Pseudomonas, which has stable activity in acidic solution.” Biosci. Biotechnol. Biochem. 1995. 59:1528-1532.<br />
<br />
46. Kozloff, L, Turner, M, and Arellano, F. “Formation of bacterial membrane ice-nucleating lipoglycoprotein complexes.” J. Bacteriol. 1991. 173:6528-6536.<br />
<br />
47. Pratt, K, et Al. “In-situ detection of biological particles in high altitude dust-influenced ice clouds.” Nature Geoscience. 2009. 2: doi:10.1038/ngeo521.<br />
<br />
48. Prenni, A, et Al. “Relative roles of biogenic emissions and Saharan dust as ice nuclei in the Amazon basin.” Nat. Geosci. 2009. 2:402-405.<br />
<br />
49. Phillips, V, et Al. “Potential impacts from biological aerosols on ensembles of continental clouds simulated numerically.” Biogeosciences. 2009. 6:987-1014. <br />
<br />
50. Burrows, S, et Al. “Bacteria in the global atmosphere – Part 1: review and synthesis of literature data for different ecosystems.” Atmos. Chem. Phys. 2009. 9:9263-9280.<br />
<br />
51. Burrows, S, et Al. “Bacteria in the global atmosphere – Part 2: modeling of emissions and transport between different ecosystems.” Atmos. Chem. Phys. 2009. 9:9281-9297.<br />
<br />
52. Despres, V, et Al. “Primary biological aerosol particles in the atmosphere: a review.” Tellus B. 2012. 64:349-384.<br />
<br />
53. Amato, P, et Al. “Survival and ice nucleation activity of bacteria as aerosols in a cloud simulation chamber.” Atmos. Chem. Phys. Discuss. 2015. 15:4055-4082.<br />
<br />
54. Stopelli, E, et Al. “Freezing nucleation apparatus puts new slant on study of biological ice nucleators in precipitation.” Atmos. Meas. Tech. 2014. 7:129-134.<br />
<br />
55. Pohlker, C, et Al. “Biogenic potassium salt particles as seeds for secondary organic aerosol in the Amazon.” Science. 2012. 337(31):1075-1078.<br />
<br />
56. Givati, A, and Rosenfeld, D. “Separation between cloud-seeding and air-pollution effects.” J. Appl.Meteorol. 2005. 44:1298-1314.<br />
<br />
57. Givati, A, et Al. “The Precipitation Enhancement Project: Israel-4 Experiment.” The Water Authority, State of Israel. 2013. pp. 55.<br />
<br />
58. DeFelice, T, et Al. “Extra area effects of cloud seeding – An updated assessment.” Atmospheric Research. 2014. 135-136:193-203.