Tuesday, June 27, 2017

The Necessity of Carbon Remediation and its Application

Over five years ago I discussed how addressing global warming would require both reducing new human-caused releases of carbon dioxide (CO2) into the atmosphere (carbon mitigation) and increasing the rate of removal of CO2 already in the atmosphere, whether through natural and/or technological means (carbon remediation). This dual requirement stems from nature's current inability to manage existing and future CO2 levels well enough to maintain a viable environment for both the existing global human population and any near-future growth.

For both carbon mitigation and remediation two elements take precedence: effectiveness and speed. Effectiveness is rather self-explanatory: if the applied strategies cannot reduce the release of new CO2 and remove more CO2 from the air than is added over the life-cycle of the remediation processes, then such strategies are not worth pursuing. Speed is necessary because there is already a dangerous amount of CO2 in the atmosphere and carbon mitigation is not proceeding nearly fast enough relative to the capacity of natural sinks to remove CO2. Basically, with each passing year the total concentration of CO2 in the atmosphere is increasing, not decreasing, and based on current mitigation patterns this reality is not going to change in the near future. Note that while both mitigation and remediation are important, the remainder of this discussion will focus on remediation.

With the idea of speed in mind, while there are more cost-effective (i.e. more economically attractive) remediation strategies available, largely those involving planting trees or synthesizing bio-char, these methods are significantly slower than various technological methods. In addition to the issue of speed, the efficiency of natural methods like planting trees is questionable, for natural sinks could decline in overall CO2 capacity: trees may absorb less CO2, a more acidic ocean may begin to out-gas due to changes in the concentration gradient, and rates of mineral weathering may decrease.

Even if there were no threat of lost absorption capacity from natural sinks, it is difficult to conclude that natural sinks could remove enough CO2 from the atmosphere before serious negative environmental outcomes occur, even in a scenario of rapid emission reduction, given the concentration that already exists. Therefore, while it may not be a popular notion for some environmentalists and some economists, the simple reality is that technology will have to be at the forefront of removing existing CO2 from the atmosphere, leaving nature to play more of an auxiliary role.

Of the two major strategies for large-scale carbon remediation, direct air capture and ocean fertilization, initial tests with ocean fertilization have not been positive. While the underlying theory is solid, in practice the increased phytoplankton concentrations have been unable to demonstrate any real gains in CO2 removal, largely due to increased predation from zooplankton.1 These complications have undermined the chief advantage of ocean fertilization, simplicity, leaving direct air capture as the best theoretical strategy for carbon remediation.

To ensure clarity, the term “direct air capture” is being interpreted as: the technological removal of atmospheric CO2 from a non-point source (versus a point source such as a power plant or automobile) by reacting atmospheric CO2 with a sorbent (usually an alkaline NaOH solution). This reaction with the sorbent typically forms sodium carbonate and water. The carbonate then reacts with calcium hydroxide (Ca(OH)2), generating calcite (CaCO3) and regenerating the sodium hydroxide. This causticization step transfers the vast majority of the carbonate ions (≈94-95%) from the sodium to the calcium cation. The final step thermally decomposes the calcite in the presence of oxygen, releasing the previously absorbed CO2 as a concentrated gas stream, while the resulting lime (CaO) is hydrated to recycle the calcium hydroxide.2,3 Some of the details can differ depending on the type of sorbent utilized and other side elements of the process, but the above description captures the general chemical operation of direct air capture.
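To make the cycle concrete, the chemistry described above can be summarized in four reactions (a standard summary of the NaOH/Ca(OH)2 loop; the calcination temperature shown is approximate):

```latex
% NaOH-based direct air capture loop
\begin{align*}
\text{Absorption:}     &\quad \mathrm{CO_2} + 2\,\mathrm{NaOH} \rightarrow \mathrm{Na_2CO_3} + \mathrm{H_2O} \\
\text{Causticization:} &\quad \mathrm{Na_2CO_3} + \mathrm{Ca(OH)_2} \rightarrow \mathrm{CaCO_3} + 2\,\mathrm{NaOH} \\
\text{Calcination:}    &\quad \mathrm{CaCO_3} \xrightarrow{\;\sim 900\,^{\circ}\mathrm{C}\;} \mathrm{CaO} + \mathrm{CO_2} \\
\text{Slaking:}        &\quad \mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2}
\end{align*}
```

The net effect of one pass through the loop is the transfer of dilute atmospheric CO2 into a concentrated stream ready for storage or use, with the sodium and calcium species recycled.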

Obviously direct air capture is not without its own challenges, mostly due to the incredibly small concentration of CO2 in the atmosphere: while 400+ parts per million (ppm) is very significant from an environmental standpoint, it is clearly not a large amount from a chemical reaction standpoint. This CO2 “deficiency” is largely responsible for the significant costs associated with CO2 removal via direct air capture, which have been estimated at a cost floor of $300 per ton of CO2 (which is optimistic in isolation) to a ceiling of $1200+ per ton of CO2 (which is rather pessimistic).4 However, whatever the actual cost turns out to be, it is one that humanity will have to foot if it wants to maximize its probability of surviving, from a societal standpoint, into the near future.
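To give a rough sense of scale, the following sketch multiplies that cost range by hypothetical annual removal targets (the gigaton figures are illustrative assumptions, not values from the cited assessment):

```python
# Back-of-envelope annual cost of direct air capture at scale.
# The gigaton removal targets are hypothetical illustrations; the
# $300-$1200/ton range is the floor/ceiling cited in the text.
COST_FLOOR = 300     # USD per ton of CO2 (optimistic)
COST_CEILING = 1200  # USD per ton of CO2 (pessimistic)

for gigatons in (1, 5, 10):  # hypothetical annual removal targets
    tons = gigatons * 1e9
    low, high = tons * COST_FLOOR, tons * COST_CEILING
    print(f"{gigatons:>2} Gt/yr: ${low / 1e12:.1f}-{high / 1e12:.1f} trillion/yr")
```

Even the optimistic floor implies expenditures in the hundreds of billions of dollars per year at gigaton scale, which frames the funding discussion later in this post.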

There are three major issues surrounding the proper functionality of direct air capture beyond the capture chemistry itself: power use, water use, and the end destination of the captured CO2. Not surprisingly, each of these issues must be addressed to optimize the overall process of CO2 removal from the atmosphere and maximize its overall economics.

The power source matters for both the speed and the efficiency of the total net CO2 captured and removed from the atmosphere. For example, if a trace-emission source is utilized (nuclear, geothermal, wind or solar) then the process can be reasonably estimated as 90-99% efficient (10-100 tons of CO2 will be captured and removed for every 1 ton of CO2 emitted to power the process). With this estimate the net cost per ton will be about 1.01-1.11 times the gross cost relative to the power use component. However, if a fossil fuel source is utilized then, largely dependent on the exact fuel mix, the process will be 50-70% efficient and the net cost will be roughly 1.4-2 times the gross estimated cost for the power use component.
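A minimal sketch of the arithmetic behind these multipliers, assuming the net cost simply scales as the gross cost divided by the process efficiency:

```python
# Net-cost multiplier for direct air capture as a function of the CO2
# emitted by the power source. efficiency = (captured - emitted) / captured,
# and the gross cost is spread over fewer net tons, so multiplier = 1/efficiency.
def net_cost_multiplier(tons_captured_per_ton_emitted: float) -> float:
    efficiency = 1 - 1 / tons_captured_per_ton_emitted
    return 1 / efficiency

# Trace-emission sources (nuclear, geothermal, wind, solar): 10-100 tons
# captured per ton emitted -> ~1.11x down to ~1.01x the gross cost.
for ratio in (10, 100):
    print(f"capture ratio {ratio:>3}:1 -> {net_cost_multiplier(ratio):.2f}x gross cost")

# Fossil sources at 50-70% efficiency imply 1/0.7 = 1.43x up to 1/0.5 = 2.0x.
print(f"70% efficient -> {1 / 0.7:.2f}x, 50% efficient -> {1 / 0.5:.2f}x")
```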

Obviously, due to this significant efficiency disparity, utilizing a trace-emission source for the process is imperative, but which source is most appropriate? Speed is the most important element in the removal process because of the existing and future damage to the environment, something that money really cannot replace, so the process must operate as close to 24 hours a day, 7 days a week as possible. This requirement heavily limits the viability of using wind or solar as the energy medium, leaving two principal contenders: geothermal and nuclear.

Now while one could attempt to argue that wind or solar could work with the appropriate level of storage as backup, such an argument does not sit on solid ground given the existing lack of storage options and the empirical track record of such designs. While small pilot plants exist and have received flashy headlines and hype, the output of these plants is basically irrelevant to any expected energy requirements for air capture. Also recall that energy can only be stored when it is in excess, which will not be true most of the time, because the solar and/or wind elements are already providing energy to the various elements of the capture process. Pumped hydro shares the same problem, as well as limiting the location for the process because of its required topography.

In the past geothermal was thought to be the better choice over nuclear, largely due to the potential waste issues associated with nuclear power, with enhanced geothermal systems (EGS) being the preferred geothermal methodology. Note that safety has long been a foolish reason to oppose nuclear power, for safety issues only arise when the operator (be it government or corporation) is allowed to cut corners and/or does not adhere to proper and standard safety operating procedures.

Unfortunately, there have been few rigorous studies concerning EGS, especially relative to any expansion of seismic activity pertaining to its application. In short, the EGS process can produce an environment that increases the frequency of low Richter-scale earthquakes (the occurrence of magnitude 2 to 3 quakes appears to increase in probability). However, unlike fracking, which increases both earthquake probability and severity, little is known regarding whether EGS will increase earthquake severity (from 2 or 3 to 4+). This uncertainty, which could have been and should have been studied in earnest years ago, makes it difficult to support going forward with EGS. Thus, nuclear becomes the better choice, with at least a later-generation design as the standard in order to limit resultant waste, or a small modular reactor design.

Water utilization is also an important issue because, regardless of the system, the chemical reaction involved in absorbing CO2 from the atmosphere requires water, commonly as a catalyst. However, despite the general nature of a catalyst (lack of consumption at the conclusion of the reaction), the open-air nature of the reaction system means a significant percentage of the utilized water is lost to the atmosphere as water vapor, making inherent water recovery within the process itself more difficult. Therefore, there are two important questions involving water use in the process: 1) How will the initial amount of water for beginning the process be procured? 2) How will atmospheric water losses be minimized?

The best solution for obtaining the required starting water is desalination, which is suitable because direct air capture units can be built almost anywhere; the natural mixing of the atmosphere maintains relatively constant global CO2 concentrations over the long-term. Regarding how atmospheric water losses will be minimized, there are two potential strategies. First, properly placed atmospheric condensers could recover a significant portion of the lost water and recycle it back into the beginning of the process. Second, depending on the economic and environmental efficiency of the desalination process, there may be no need for any type of recycling; instead, all required water, including that which is lost, could be drawn from desalination.

However, the second method is inherently risky because of the potential detriments associated with desalination and any potential disruption to the hydrological cycle from the new levels of water evaporation produced by the direct air capture process. Overall, the better option appears to be to provide the initial water via desalination and allow further desalination to fill in any gaps in recycling missed by the water condensers. Fortunately, either option seems valid from an energy standpoint with a nearby nuclear reactor powering the direct air capture devices.
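A minimal mass-balance sketch of how condenser recovery shifts the desalination burden; every parameter value here is a hypothetical placeholder, not an engineering estimate:

```python
# Make-up water a direct air capture unit must draw from desalination,
# as a function of how much evaporative loss the condensers recover.
# All figures are hypothetical placeholders, not engineering estimates.
DAILY_WATER_USE = 1_000.0        # m^3/day circulated through the unit
EVAPORATIVE_LOSS_FRACTION = 0.2  # share of circulated water lost as vapor

for condenser_recovery in (0.0, 0.5, 0.9):  # fraction of vapor recaptured
    lost = DAILY_WATER_USE * EVAPORATIVE_LOSS_FRACTION
    makeup = lost * (1 - condenser_recovery)
    print(f"recovery {condenser_recovery:.0%}: desalination supplies "
          f"{makeup:.0f} m^3/day of make-up water")
```

The same relationship drives the transport question discussed next: the more the condensers recover, the smaller and less permanent the supply infrastructure can be.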

The infrastructure to transport water needs to be considered from both an efficiency and an economic standpoint. The two most viable methods for the initial water application would be constructing a piping infrastructure to transport the desalinated water to the direct air capture units or simply using transport vehicles, like large trucks, to move the water. An important element in determining which method is best is the rate of recycling from any atmospheric water collectors near the direct air capture units. The more water recycled, the more attractive a less permanent infrastructure (trucks) appears, due to the lower overall capital and maintenance costs. However, while the theory is fine, the overall scale of the operation may require a more permanent source of water due to the sheer amount of water required regardless of recycling.

Another important consideration is what to do with desalination byproducts: mostly the removed salt, some of the chemicals used in the desalination process, and possible contaminants from pipe and process breakdown (copper, iron, zinc, etc.). At the moment many desalination plants dispose of the brine in the ocean or a closed watercourse through a direct disposal strategy, sometimes reducing the salinity concentration by discharging the brine with wastewater or a cooling stream from a power plant.

Obviously there is concern about releasing a stream of heavily concentrated brine into the ocean, for it can produce both eutrophication and significant pH changes, creating problems for the local flora and fauna.5 Other common management strategies include minimization or direct reuse.5 Minimization commonly involves membrane or thermal methods, whereas reuse involves recovering salts from the waste brine via crystallization or evaporative cooling and utilizing that salt for other processes or goods.5

While some are high on the idea of selling salt to offset the operation of a desalination plant, such an idea seems optimistic given the overall expected scale of the operation. Some have proposed ammoniating the brine and using it to increase the volume of CO2 capture.5 The concern with that strategy is providing the necessary ammonia to react with the brine to create a consistent and worthwhile process. Another option that has been floated is incorporating the brine into a set of molten salts that would be used in either nuclear power reactors or batteries. However, the viability of such an idea is still questionable.

Desalination is not the only aspect of the process that produces a byproduct. The more important environmental byproduct is obviously the CO2 extracted from the atmosphere, and the most important question is what process will be utilized to ensure that the newly captured CO2 is not reintroduced into the environment. Some of the more hopeful solutions involve using the captured CO2 as an economic product within enhanced oil recovery processes, as a means of producing a methane or hydrocarbon based fuel for vehicles, or as a marketed product in a commercial industry (soda, etc.).

Unfortunately, those first two options return some percentage of the captured CO2 back to the atmosphere, which limits the overall efficiency of the CO2 absorption, increasing overall costs and decreasing the speed of net removal. Also, the commercial option will not provide sufficient funds to sustain the operation of the process. While this reality eliminates the idea that commercial product distribution can carry the finances of the process, tapping into commercial processes should still be worthwhile as a means to dispose of a very minor portion of the captured CO2.

Another method of removing atmospheric carbon that is gaining in popularity is the use of bio-char. In essence bio-char is black carbon synthesized through pyrolysis of biomass. Bio-char is effective because it is believed to be a very stable means of retaining carbon, sequestering it for hundreds to thousands of years. Feeding the captured CO2 into one-sided greenhouses and then turning the grown flora into bio-char could dispose of some of the captured CO2. While a possibility, again the scale of absorbed CO2 limits the total value of this process.

A new method for potentially removing CO2 is utilizing it in an electrolytic conversion to create molten carbonates and later converting those carbonates into carbon nanofibers and potentially even carbon nanotubes.6 While this process has yet to be scaled to what would be classified as commercial levels, it does demonstrate some promise. The versatility and usefulness of carbon nanotubes or nanofibers give them more commercial value than pure CO2 as a commercial product. However, similar to the other potential options listed above, it is difficult to presume that most of the captured CO2 will be eliminated via this process.

Mineral sequestration via olivine, serpentine or wollastonite has drawn attention as a possible avenue for CO2 “storage”. However, this strategy does not appear economically or practically viable, for natural weathering is too slow and technologically induced weathering, grinding down these materials to dramatically increase available surface area, is emission-inefficient and costly. So despite some of these flashier or more “economic” choices, overall it is reasonable to suggest that a majority of the captured CO2 will be stored long-term in underground rock formations.
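For context, the weathering reaction this strategy tries to accelerate, shown here for olivine (forsterite), is typically written as:

```latex
% Carbonation of forsterite olivine
\mathrm{Mg_2SiO_4} + 2\,\mathrm{CO_2} \rightarrow 2\,\mathrm{MgCO_3} + \mathrm{SiO_2}
```

The reaction is thermodynamically favorable, which is why the approach is appealing; the problem noted above is kinetics, which grinding only partially solves at a significant energy cost.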

With all of these additional considerations to take into account it does not appear wise to simply build these air capture units at random. These units clearly need to be constructed in an orderly and cohesive manner, perhaps even in a localized autonomous network. This network needs to contain a water source, a power source and a means of utilizing the captured CO2 in addition to having recycling pathways for all necessary materials used in the selected air capture reactions.

Overall it is also important to understand that one should not attempt to portray such a complex, or even direct air capture in general, as some new budding industry that will produce a profit. While certain elements will provide some form of revenue, envisioning a new profitable industry does not appear appropriate at this time. So if profitability is not viable, what is the economic argument for direct air capture? The answer lies in adjusting how one looks at the economic issue. The economics of direct air capture and any resulting complex is not profitability, but prevention and, to some extent, survivability.

For example, Person A does not eat broccoli on a regular basis because he is paid a sum of money by Person B to do so, but instead consumes broccoli because it is a healthy food and there is reason to believe that the consistent consumption of broccoli will result in a reduced probability of various diseases and ailments in the future relative to a person who does not consume broccoli (all other elements being accounted for). Therefore, the economic benefit for consuming broccoli is derived from lower future costs associated with healthcare and perhaps a reduction in lost wages due to less work missed versus immediate short-term incentive/reward.

No reasonable person disputes the fact that global warming will increase the probability and severity of future extreme weather events in addition to producing detrimental changes in general climate and weather patterns. These changes will produce significant levels of environmental and economic damage and will eventually threaten the very viability of human society. Therefore, a reasonable person would come to the conclusion that it is important to lessen the detrimental impacts of global warming as much as possible. Such a reduction would also result in the savings of billions of dollars in the short-term (10-20 years from now) and trillions of dollars in the long-term (20-50 years from now). Therefore, similar to the broccoli example, the prevention model is how people should look at direct air capture versus attempting to inappropriately sell it as some form of short-term “money-making” venture. The “profitability” comes from the money saved in the future by reducing the probability of detrimental outcomes associated with global warming.

With this mindset, how would such projects be funded? It is difficult to see venture capitalists getting involved because most only have a nose for eventual profits and as discussed above, this project will not produce profits in that manner. Ironically the only venture capitalists that might get involved are those who are very young and/or have large stock holdings in insurance companies. In a just world every major corporation in the world would have to pay into some form of “carbon remediation and mitigation” fund as a form of restitution for championing a carbon heavy global economy. Money from this fund would then be used to fund direct air capture in addition to other direct CO2 mitigation projects. One could argue that the funds procured from a carbon tax would also serve this purpose.

Unfortunately, such a program where corporations foot much of the bill is unlikely, for it is difficult to envision most multi-national corporations agreeing to fund it; most companies typically do not do something unless profit is available, which here it is not, or government is footing the bill. Therefore, it appears that various world governments will have to pay. With that said, which governments should go first, so to speak? The United States is definitely a candidate, as it is responsible for the most cumulative CO2 emissions of any country. China is a very close second, being responsible for the most CO2 in the last few decades in addition to choosing coal and oil to grow its economy without taking into consideration the environmental realities of that choice when nuclear, wind, solar and/or geothermal were also valid, albeit slower, choices. However, in the end such funding would have to be worked out by international treaty, which does not lend much confidence when considering the success of past international environmental treaties.

In the end, it is understandable that if the economic cost of developing an air capture complex were quantitatively calculated it would be high; however, all of these elements will be required in the future based on the current path humans have embarked upon with regard to expelling CO2 into the atmosphere, thus the cost is based not on luxury, but necessity. The idea behind such a complex for direct air capture is to lower overall net costs by tying many of the air capture units into the same required operational elements, thus making the direct air capture strategy more economical at scale and saving money for investment in other environmentally necessary avenues like emission reduction. Overall, while the manifestation of such a complex may not be exactly as described in this blog post, the reality is that, as it currently stands, such a complex will be needed in one form or another.



Citations –

1. "Lohafex project provides new insights on plankton ecology: Only small amounts of atmospheric carbon dioxide fixed." International Polar Year. March 23, 2009.

2. Zeman, Frank. “Energy and Material Balance of CO2 Capture from Ambient Air.” Environ. Sci. Technol. 2007. 41(21): 7558-7563.

3. Perez, E, et Al. “Direct Capture of CO2 from Ambient Air.” Chem. Rev. 2016. 116:11840-11876

4. American Physical Society. Direct Air Capture of CO2 with Chemicals: A Technology Assesment for the APS Panel on Public A?airs; APS: 2011.

5. Giwa, A, et Al. “Brine Management Methods: Recent Innovations and Current Status.” Desalination. 2017. 407:1-23.

6. Ren, J, et Al. “One-Pot Synthesis of Carbon Nanofibers from CO2.” Nano Lett. 2015. 15:6142-6148.

Wednesday, October 26, 2016

A Magic Bullet in Pain Relief?


Medicine has advanced on numerous fronts; however, one of the slower areas of improvement involves addressing and managing pain. Significant instances of pain, in both acute and chronic forms, afflict hundreds of millions of people worldwide, but most modern treatments struggle to demonstrate meaningful improvement over past treatments. In fact, it is estimated that at least half of surgical patients do not receive effective pain control after their treatments.1,2 Also, addiction to pain medication has become a mounting problem in recent years, making long-term pain management strategies more difficult.

One potential strategy for managing pain that has gained popularity in recent years focuses on analgesic targets like the sodium channels Nav1.7, Nav1.8 and Nav1.9. These channels belong to a larger family of voltage-gated sodium channels (Nav1.1-1.9), each with specific locations and functional roles in the body. Among the aforementioned three, Nav1.7 is viewed as the most important; its function was first identified through conditional knockout studies in mice expressing Nav1.8, after suspicions were raised when a small family appeared to have significant pain insensitivity via a recessive loss-of-function mutation in Nav1.7.3,4 The resulting work identified Nav1.7 as playing a significant role in inflammatory pain, and conditional deletion of Nav1.7, not surprisingly, reduced that pain to an almost non-registered symptomatic level.3,5,6

Nav1.7 and its 1.8 and 1.9 cohorts are present near the synapses of neurons that are commonly thought to be responsible for sending and receiving pain signals. Overall Nav1.7 appears to transmit action potentials via neurotransmitter release through a threshold managed by Nav1.9, which receives input from Nav1.8.7-10 However, it does not appear that Nav1.7 activation is exclusively reliant on Nav1.8 or 1.9.7

While one means of addressing pain in the past was the utilization of global sodium channel blockers, developing a drug with strong specificity for Nav1.7 is thought to be a principal strategy for more effective pain management: localizing treatment increases selectivity and reduces negative side effects, especially those involving the heart, since Nav1.7 is not expressed there. While not all forms of pain involve Nav1.7, which should surprise no one, a significant number of pain processes appear to incorporate it, which has produced the aforementioned enthusiasm for a targeted therapy.4,7

Since the major discovery associated with Nav1.7 occurred in 2006,4 various drug development programs have been underway to produce an appropriate and effective treatment. Unfortunately, despite the creation of numerous specific stable antagonists, the general results have been disappointing, ranging from non-replicated results to unexpected negative side effects.11 One piece of information from these studies highlights an apparent contradiction: the more selective the antagonist for Nav1.7, the less effective the pain reduction, whereas less selective molecules like lidocaine are more effective.6

The major reason behind this result is thought to be a relationship between Nav1.7 and enhanced natural opioid signaling, born from studies involving Nav1.7-null congenital insensitivity to pain (CIP).4 Basically, in null mutants an unknown biological relationship develops, producing a dramatic change in steady-state opioid concentrations that is responsible for blocking pain. This belief is supported by the ability of naloxone, an inverse agonist for the μ-opioid receptor (MOR) and antagonist for the κ- and δ-opioid receptors, to frequently reverse the pain insensitivity born from the Nav1.7 null.7,12 However, oddly enough, while knocking out SCN9A, the gene encoding Nav1.7, produces this enhanced opioid state, simply reducing the activation efficiency of Nav1.7 after development does not seem to produce anywhere near the same enhancement of opioids. Basically, there is no proportional response.

One explanation for this result is to look at how the null animal compensates for the loss of Nav1.7 during development. Loss of Nav1.7 expression commonly results in transcriptional up-regulation of Penk, the precursor of met-enkephalin, but Penk was not up-regulated in Nav1.8 or 1.9 nulls.7,13 This result suggests that the neurotransmitter release associated with Nav1.7 is the critical step. Complete channel block of dorsal root ganglion (DRG) neurons via high concentrations of tetrodotoxin (relevant because a number of the neurons at this location have Nav1.7 channels) also creates a state of enhanced opioid expression.7 However, without a complete channel block there does not appear to be a significant increase in opioid or enkephalin expression.7 Overall, the increased opioid concentrations in null mice, and probably humans, target nociceptive input, consistent with the expression of opioid receptors on small nociceptive afferents.7,14

This result seems to suggest that there is no middle ground in blocking Nav1.7; either the treatment produces a 100% channel block or there is no significant increase in pain insensitivity/pain relief.15,16 This is a problem because, while some agents attempt to improve selectivity by binding to regions outside the pore-forming region that are less conserved across channel subtypes, producing inhibitory action independent of the channel's functional state,6 it is highly unlikely that even these strategies will yield a molecule that creates a 100% selective block without significant negative side effects. This challenge has led researchers to focus on biologics, like venom toxins, over small molecules due to increased selectivity, even incorporating techniques like saturation mutagenesis;17-19 however, at this moment success appears improbable.

This result regarding full channel block raises two points. First, the behavior of Nav1.7 suggests that sodium can function as a second messenger with respect to the expression of enkephalin, through the alteration of Penk mRNA expression levels. Such a belief is supported by the behavior of the sodium ionophore monensin, which results in decreased expression of Penk, whereas blocking the channel up-regulates Penk mRNA.13

If this is the case, then the importance of Nav1.7 over Nav1.8 and 1.9 may be directly attributable to the amount of sodium that passes through Nav1.7, which has a greater effect on overall intracellular sodium concentrations than other sodium channels. For example, HEK293 cell lines stably expressing Nav1.7 establish a resting intracellular sodium concentration around double the level of control cells.7

Second, Nav1.7 activity could produce some level of natural opioid inhibition, or at least a form of negative feedback. This mindset seems to be supported by gain-of-function mutations in Nav1.7 typically producing primary erythromelalgia (PE), which is characterized by episodes of symmetrical burning pain of the feet, lower legs, and even hands, and is tied to increased Nav1.7 channel activity.6 However, if this is the case, it raises an interesting question as to why Nav1.7 nulls seem to suffer no inherent negatives from the additional concentrations of opioids, i.e. no addiction or sensitivity. Perhaps in null cases other pathways form to provide a level of opioid feedback inhibition or “saturation” management.

Based on the above information, it does not appear that a molecule interfering with Nav1.7 activity can be effectively used to treat pain, because full blockage is seemingly required to produce the conditions associated with pain insensitivity and general pain treatment. Also, blocking Nav1.7 over long and consistent periods of time may damage other important sensory processes. The reason Nav1.7 demonstrates success in knockouts, both cultured and natural, may be that the knockout mutation forces the body to develop other pathways to manage the systems Nav1.7 would normally interact with. However, that does not exclude using information about Nav1.7 activity to identify a better pain management treatment.

A better approach may be to pursue strategies that expand or mimic concentrations of met-enkephalin, which is directly influenced by Nav1.7 activity. Met-enkephalin is a strong agonist for the δ-opioid receptor, has some influence on the μ-opioid receptor, and has almost no effect on the κ-opioid receptor.7 However, despite its meaningful opioid influence, met-enkephalin has a low residence time in the body due to rapid metabolism.20 Thus, simply injecting met-enkephalin into a person would serve little purpose in addressing pain because it would have to be done at large doses and too frequently. However, a synthetic enkephalin, [D-Ala2]-Met-enkephalinamide (DALA), has shown some positive attributes in managing pain owing to its altered rate of metabolism.

In the end, despite the clear understanding that pain relief can be achieved by blocking a channel like Nav1.7, no compounds have been developed to effectively and easily take advantage of that reality. Due to the requirement of full channel block, it is highly unlikely that a treatment involving small molecules will ever be successful, leaving the door open only for modified biologics. However, even with a successful “in lab” molecule, the higher concentrations of Nav1.7 behind the blood-brain barrier may make meaningful treatment difficult without some level of increased blood-brain barrier penetration. Overall, the allure of channel-block pain therapy involving a specific target like Nav1.7 may need to be supplemented by further focus on the more downstream products associated with channel activation or inactivation, like met-enkephalin, to complement pain relief strategies.


Citations –

1. Chapman, R, et al. “Postoperative pain trajectories in cardiac surgery patients.” Pain Research and Treatment. 2012. Article ID 608359. doi:10.1155/2012/608359

2. Wheeler, M, et al. “Adverse events associated with postoperative opioid analgesia: a systematic review.” Journal of Pain. 2002. 3(3):159-180.

3. Nassar, M, et al. “Nociceptor-specific gene deletion reveals a major role for Nav1.7 (PN1) in acute and inflammatory pain.” PNAS. 2004. 101(34):12706-11.

4. Cox, J, et al. “An SCN9A channelopathy causes congenital inability to experience pain.” Nature. 2006. 444(7121):894-8.

5. Abrahamsen, B, et al. “The cell and molecular basis of mechanical, cold, and inflammatory pain.” Science. 2008. 321(5889):702-5.

6. Emery, E, Paula Luiz, A, and Wood, J. “Nav1.7 and other voltage-gated sodium channels as drug targets for pain relief.” Expert Opinion on Therapeutic Targets. doi:10.1517/14728222.2016.1162295

7. Minett, M, et al. “Endogenous opioids contribute to insensitivity to pain in humans and mice lacking sodium channel Nav1.7.” Nature Communications. 6:8967. doi:10.1038/ncomms9967

8. Eijkelkamp, N, et al. “Neurological perspectives on voltage-gated sodium channels.” Brain. 2012. 135:2585-2612.

9. Akopian, A, et al. “The tetrodotoxin-resistant sodium channel SNS has a specialized function in pain pathways.” Nat. Neurosci. 1999. 2:541-548.

10. Baker, M, et al. “GTP-induced tetrodotoxin-resistant Na+ current regulates excitability in mouse and rat small diameter sensory neurones.” J. Physiol. 2003. 548:373-382.

11. Lee, J, et al. “A monoclonal antibody that targets a Nav1.7 channel voltage sensor for pain and itch relief.” Cell. 2014. 157(6):1393-404.

12. Dehen, H, et al. “Congenital insensitivity to pain and the ‘morphine-like’ analgesic system.” Pain. 1978. 5(4):351-8.

13. Popov, S, et al. “Increases in intracellular sodium activate transcription and gene expression via the salt-inducible kinase 1 network in an atrial myocyte cell line.” Am. J. Physiol. Heart Circ. Physiol. 2012. 303:H57-H65.

14. Usoskin, D, et al. “Unbiased classification of sensory neuron types by large-scale single-cell RNA sequencing.” Nat. Neurosci. 2015. 18:145-153.

15. Minett, M, Eijkelkamp, N, and Wood, J. “Significant determinants of mouse pain behaviour.” PLoS One. 2014. 9(8):e104458.

16. Minett, M, et al. “Pain without nociceptors? Nav1.7-independent pain mechanisms.” Cell Rep. 2014. 6(2):301-12.

17. Shcherbatko, A, et al. “Engineering highly potent and selective microproteins against Nav1.7 sodium channel for treatment of pain.” J. Biol. Chem. doi:10.1074/jbc.M116.725978

18. Harvey, A. “Toxins and drug discovery.” Toxicon. 2014. 92:193-200.

19. Yang, S, et al. “Discovery of a selective Nav1.7 inhibitor from centipede venom with analgesic efficacy exceeding morphine in rodent pain models.” PNAS. 2013. 110:17534-17539.

20. Minett, M, et al. “Distinct Nav1.7-dependent pain sensations require different sets of sensory and sympathetic neurons.” Nature Communications. 2012. 3(4):791-799.

Tuesday, September 27, 2016

The Nature of Protesting


As long as opinions exist, human beings will engage in protests against those things with which they disagree. Unfortunately for protesters, the general rate of success is rather dismal because most protesters have seemingly forgotten the purpose of protesting and its inherent limitations, especially in modern society. How can protesting become a useful tool for establishing change rather than simply a mobile echo chamber of time-wasting annoyance and/or criminal behavior?

The major purpose of protesting is to cast attention on a given issue and inform either those who have the power to influence change or those who are also affected by the issue but may not already be aware of it. In modern society, especially a republic or democracy, the secondary goal of a protest is to act as a persuasion tool, convincing others that the issue is meaningful and worthy of attention. This attention will hopefully lead to a stronger and more unified front for change on the particular issue, increasing the probability that change occurs.

One of the chief problems with modern protesting is that it is imbued with too much emotion and not enough logic. It is understandable that there is an emotional element to protesting, for either the acute severity of a singular event or the chronic weight of numerous smaller events typically produces the emotional driver that moves individuals to take the time and effort to publicly air their grievances. However, this emotional aspect has led protesters to make disadvantageous decisions and take disadvantageous actions in the process and/or administration of the protest.

Emotional responses and drivers lead to the illogical conclusion that protests should be more frequent, which relative to the purpose of protesting is commonly detrimental. Basically, protesters protest action/policy “y” at greater frequency than they should because the cause is so emotionally important to them. However, when major protest events occur in close temporal proximity, the impact of those protests on those not already in support of the “cause” is lessened and even potentially damaging to the success of the cause. For example, the group known as “Black Lives Matter” has fallen into this pitfall in its recent activity.

Part of the problem with multiple protest events over a short period of time is that it portrays the organization as more interested in personal attention or notoriety than in actively seeking change. Most major protests, especially those that spawn organizations to manage the desired change, focus on a meaningful, large-scale issue that requires time, resources and effort to change. However, multiple protests over a short period lead those who do not immediately agree with the protests to conclude, somewhat correctly, that the protesters are not serious about their so-called desire to produce change because they do not understand the process by which that change will occur, if it occurs at all. This attitude will lead individuals to conclude that the organization, and perhaps even the cause itself, is not worth focusing on, especially in a world where there are already so many other “meaningful” problems.

Some may counter that protests do not just serve as a means to cast attention on a given issue, rally like-minded individuals and convince “on the fence” individuals, but also provide an avenue for a frustrated demographic to vent… so to speak. While this argument has some merit, its value holds only so long as the protests do not significantly interfere with the lives of others in society, for example by stopping/blocking traffic or reducing the effectiveness of economic activity. One may like to punch the air to vent; however, it is not appropriate to punch air that another person’s face is filling. Using violations of the law as a means to “burn off steam” is clearly inappropriate and heavily limits the credibility of any protest and the individuals and/or organizations responsible for it. Therefore, the argument that mass-scale protests can be used as a means to vent is an invalid one that is simply used as a flimsy excuse.

These types of protests that block traffic and/or generally inconvenience others are also rather foolish from a cost-benefit standpoint. By inconveniencing others, especially numerous times over a short period, the protesters significantly increase the probability of producing more enemies of their cause. This matters because an individual who may have remained on the proverbial sidelines may now, thanks to the slight by the protesters, whether direct or indirect, work against the motives of the protesters, perhaps out of spite alone. Some could counter that “you can’t make an omelet without breaking a few eggs” (i.e. disruption of the status quo is necessary for change), but there is definitely a difference between intelligent disruption and needless/foolish disruption, and most protest organizations seem not to understand the difference, limiting the validity of that argument in relation to their activities.

Overall, mass-scale public protesting is only step 1 in the process of producing change: demonstrating that something is a problem and creating a mindset among the populace that the problem must be addressed with haste. However, the real work to change the problem occurs after step 1, for step 1 does not actually achieve any change. Not surprisingly, the steps beyond public protesting are much more difficult, both in their initiation and in determining and demonstrating any actual progress towards the goal/change in question.

Unfortunately, these challenges appear to trip up most organizations that materialize in the space of step 1. Either these organizations are not capable of transitioning beyond step 1 or they do not care about what lies beyond it. This lack of skill, ability, influence, etc. traps most organizations in step 1, where through the act of public protesting they can continue to demonstrate their so-called relevance, for public protesting is easy, especially with access to the Internet and a non-authoritarian government. However, as time goes by these organizations are simply lying to their supporters about their relevance, because continued public protests on their own will not produce the change these protests claim to desire. Prominent recent examples of this trap are Black Lives Matter and Occupy Wall Street.

Perhaps that is one of the more unfortunate problems with these organizations: the “leaders” realize that the organization is ill-equipped to accomplish the change, yet cannot acknowledge that it is time to disband or evolve the organization, under the belief that such action would be regarded as failure by supporters. Recall that it is much more difficult to demonstrate success from meetings in a boardroom than from holding up traffic on the street. Therefore, these leaders instead aim to maintain their positions, and any benefits that come with them, by simply continuing to focus on step 1 in an attempt to obfuscate their own lack of ability and competency by turning the attention of their supporters to the “evil” of the so-called opponent.

While the above position is rather cynical, it is also true that certain organizations function under such a mindset. However, the transition beyond step 1 has also proven difficult for non-self-aggrandizing organizations. These organizations must focus not only on pointing out the problem(s), but also on proposing detailed and valid solutions, which unfortunately is not the case in a vast majority of situations. In a sense the step 1 attitude of most of these organizations can be viewed as similar to Homer Simpson’s campaign slogan in “The Simpsons” when he ran for Springfield sanitation commissioner: “Can’t Someone Else Do It”. Basically the organizations state that they have done the “hard” work of pointing out that the problem exists; now someone else can actually fix the problem, for which the organization will take credit.

Even when organizations propose solutions, those solutions typically contain a variety of holes, usually on the details end and in probability of application, due to a general lack of information and/or bias. For example, the Urban League proposed a “10-Point Justice Plan” to address the negative relationship between the black populace and law enforcement. Unfortunately this “solution” was heavily lacking in detail, largely regarding general application. It promoted a lot of “universally applied” ideas merely by citing either one program in one particular city or one un-passed piece of Federal legislation. It was also rather biased and generally naïve. A number of elements of the “solution” could be viewed merely as quasi-demands rather than genuine attempts to solve the problem.

However, for all of the problems of the “10-Point Justice Plan”, at least the Urban League produced a starting point from which to develop solutions. Unfortunately, the fact that organizations like Black Lives Matter continue to reside in step 1, protest, draws resources and attention away from that starting point, thereby heavily reducing the probability that a long-term solution materializes at all. This type of behavior demonstrates the disconnect between organizations in step 1 and organizations that have moved beyond it but claim to be “working” towards a solution to the same concern/problem.

Another concern with most protests is the tone and lack of awareness of the existing problem. For example, the negative relationship between the black populace and police officers is, in the eyes of the black populace, thought to be entirely the fault of the police. Of course this is not correct, for the black populace certainly does not treat the police with the appropriate level of respect and decorum expected for the position, which not surprisingly exacerbates problems in the relationship. Part of the problem is that a number of individuals in the black populace fall into the same pitfall they claim the police do: stereotyping all police as racists out to get them, just as they believe police view all blacks as criminals up to no good. Until the black populace acknowledges and corrects this behavior of stereotyping police officers as racists, among other things, the relationship between the black populace and the police will remain strained, for it is not a one-sided problem.

Furthermore, some may believe that protesting works because they look to the past and see the fruits and successes of protests. Unfortunately, in looking upon days long gone there is a lack of understanding of how society has evolved. Past protest movements were able to effectively influence society because of the protesters' integral role in it. For example, the Montgomery Bus Boycott was built entirely around the fact that the general economic survival of the bus company depended on its black customers.

Unfortunately for protesters, over the last few decades economic development and technology have significantly altered the way the economy functions. Globalization and the Internet have generally decoupled major businesses from their local surroundings and consumers. Therefore, local protests tend to only impact local businesses, which frequently damages local infrastructure and can cause more harm overall than what the protesters are protesting against. So while in the past protests could apply more direct pressure, the manner in which society has changed now mitigates a lot of that direct influence and power. In some respects it can be argued that there are simply too many people for protests and boycotts to have any significant economic influence. Now such activity is regarded as anything from mere annoyance to outright criminal behavior, which does not win allies.

In a democracy, change demands voting and placing individuals in power who will produce that change. Unfortunately, while step 1 attempts to create the necessary attention to get prospective voters to care about the issue, it does nothing beyond this element. A lack of voting is definitely one of the major reasons why, despite all of the protesting in the world, so little genuine and meaningful change has actually occurred on most issues.

This voting issue has been widely noted in minority communities with reference to local governing bodies, via claims that minority demographic x makes up 72% of the voting-eligible population while the local government is 80% white, and that this is wrong. However, this point is rather devious and inappropriate. It is important to note that it is biased behavior for an individual with demographic characteristic x to vote for a candidate solely because he/she shares that characteristic (i.e. a black person voting for a black candidate solely because he/she is black, or a Jewish person voting for a Jewish candidate solely because he/she is Jewish, etc.).

This demographic point is rather idiotic to make because a democracy is not structured so that government officials proportionally represent the electorate's demographics; the point of a democracy is that government officials should pass policies and govern in a manner approved by the majority of voters. However, the above statement commonly made by minority “activists”, that certain communities are 72% x yet 80% of government/civil servant positions are white, portrays a racist/biased mindset: that x should be represented in more government positions solely because the electorate is some percentage x. Therefore, it is important that individuals vote and that they are informed enough to vote for officials who will best represent their interests, regardless of whether those officials share certain characteristics.

In the end, individuals/organizations who seek to produce change by initiating protests must understand that protesting can only cast attention on a given issue. Gone are the days when protesting alone could produce valid and meaningful solutions. These solutions are produced later, through honest, detailed analysis of the problem to produce an appropriate guideline and outline of a solution, and then hard work and commitment to turning that guideline into a functioning solution. Protesters must be wary, though, of alienating both potential allies and adversaries through excessive protesting, especially the latter. Excessive protesting can spur the passions of potential adversaries to work harder to defeat the protesters, not necessarily because they passionately disagree with the idea/object of the protest, but because of scorn directed towards the protesters themselves. Overall, protesters must focus on advancing detailed and thorough solutions to the issues they view as problems rather than simply protesting those problems with no solutions or only piecemeal superficial ones.

Wednesday, August 17, 2016

Does the Future of Polling Require a Trip to the Past?


One of the hotter somewhat “nerd” topics in politics of late is the rather significant inaccuracies demonstrated in public polls by numerous credible polling agencies over the last few years. These inaccuracies range from prediction failures in a number of presidential primaries and Senate elections in the United States to parliamentary elections and the British exit from the EU in Europe, notwithstanding inaccurate polling results in other countries as well. While laymen may not be overly concerned about these inaccuracies, those in the business as well as a number of political scientists are concerned, for they view polls as an important element in understanding how people view the state of their country and how their values can influence its path. So what are the major problems creating this inaccuracy and what can be done to address them?

One of the fortunate things about this problem in modern polling is that the authorities on the matter are not only aware that there is a problem, but seem to have a general idea of the causes. Two of the biggest trends creating difficulties for accurate polling are: 1) the increased use of cell phones and the resultant decrease in the use of landlines, making it more difficult and expensive to reach people; 2) people being less inclined to actually answer surveys even when they can be reached. These two reasons are rather interesting and almost ironic in a sense.

The expansion of technology was thought to make polling more convenient and cheaper, yet the opposite seems to have occurred. The transition from landlines to cell phones has made polling more difficult in multiple respects. First, the general mobility of cell phones creates a problem in that the area code assigned to the phone may not match the area code of where the owner now lives. Obviously, asking someone who lives in Maryland about a state Senate election in Washington just because their phone has a 206 area code will not produce an accurate or meaningful result.

Second, increased cell phone use has significantly increased the costs associated with the common random means of creating a polling sample. While dual sampling frames have addressed the problem of finding cell phone users, Federal law reduces general polling efficiency. In the past, automatic dialers were utilized to speed through numbers that were disconnected or unanswered, only passing the call to a live interviewer when it was answered.

However, the FCC has ruled that the 1991 Telephone Consumer Protection Act prohibits calling cell phones through automatic dialers. With call ratios commonly exceeding at least 10 times the desired end result (i.e. for a survey response of 1,000 people, at least 10,000 numbers are commonly dialed), having these calls made by live interviewers significantly increases costs relative to auto dialers. Furthermore, survey participants must be compensated for the call resources used (commonly cell phone minutes); in a landline-dominant world any required compensation was much cheaper than in a cell-phone-dominant world.
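A minimal sketch of why this ruling matters economically; the per-attempt costs are hypothetical placeholders, with only the 10:1 dial-to-complete ratio taken from the text:

```python
# Rough cost comparison of autodialing vs. live dialing a survey sample.
# Per-attempt costs are hypothetical; the 10:1 dial-to-complete ratio is
# the figure given in the text.
COMPLETES_NEEDED = 1_000
DIAL_RATIO = 10              # ~10 numbers dialed per completed survey
COST_PER_AUTODIAL = 0.05     # hypothetical USD per machine-dialed attempt
COST_PER_LIVE_DIAL = 1.00    # hypothetical USD per interviewer-dialed attempt

dials = COMPLETES_NEEDED * DIAL_RATIO
print(f"autodialer:   ${dials * COST_PER_AUTODIAL:,.0f}")
print(f"live dialers: ${dials * COST_PER_LIVE_DIAL:,.0f}")
```

Whatever the true per-attempt figures, the dial ratio multiplies any live-dialing premium across the entire sample, which is why the ruling raised costs so sharply.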

Making matters worse, the transition to “cell phone only” individuals has followed the typical rapid adoption path of proven technology: in the U.S., the National Health Interview Survey identified only 6% of the public as using only cell phones (no landlines) in 2004, increasing to 48.3% by 2015, with an additional 18% almost never using a landline. So, in a sense, almost two-thirds (66.3%) of the U.S. population were more than likely not reachable via landline in 2015.1

Obviously, even if a pollster is able to reach an individual, that is only step one in the process, for the respondent must be willing to answer the questions asked. Unfortunately for pollsters, general response rates have collapsed in a continuous trend from about 90% in 1930 to 36% in 1997 to 9% in 2012.2,3 Not surprisingly, there is a concern that this lack of success produces an environment where those who do respond do not comprise an accurate representation of the demographic pertinent to the poll. While some studies have demonstrated that, so far, fancy statistical footwork (so to speak) has been able to neutralize these possible holes, most believe it is only a matter of time before these problems can no longer be marginalized.3

This dramatic reduction is somewhat ironic, especially in an Internet era; while a number of people are more than content to spill their guts on various social media sites about the intricate details of their lives and their day-to-day events, including mundane things like pictures of the lunch they’re about to eat, they are less willing to participate in public polling. Some theorize that Americans as a whole are too busy to answer polling questions, but this explanation does nothing but paint most of those Americans as shallow, for it would be easy for most of them to make time if they so desired.

Another theory is that the digital age has made actual social interaction more awkward (less comfortable); people are easily able to post various types of information on social networks because the interaction is indirect, with a time gap, and typically with somewhat known individuals, online "friends", whereas polls are direct, real-time interactions with a stranger. This theory holds much more water than the "not enough time" theory, but is also more problematic because it demands a significant personality shift away from how society seems to be trending.

For example, cell phones offer a more effective means to screen calls, and a number of individuals are unwilling to answer calls from unknown numbers unless one is expected (like the results from a job interview). This behavior may also explain why older individuals, those born before the digital age, are much more likely to answer pollsters' questions; they live outside this digital bubble and have not had their personalities influenced by it.

A third theory is that people before the digital age were more likely to respond to pollsters because of the psychological belief that answering those questions granted validity and even importance to their opinions due to the nature of the medium, especially relative to those who were not polled. Now, in a digital age where anyone can have a Facebook page or a blog to post their opinion to the world, there is less psychological value in polling as a medium for expressing opinions. Tie this to the fact that the information-saturated environment of the Internet has also muddied the waters, so to speak, regarding what information is important and what is meaningless. Overall it could be effectively argued that most people no longer see an ego boost from participating in polls, so little to no value is assigned to that participation; combined with greater social awkwardness about participating, this further drives down participation probabilities.

What can be done about these issues? The most obvious suggestion is that just as polling moved from face-to-face interviews to the telephone thanks to the advancement of technology, polling must once again evolve from telephones to online. While it is the most obvious suggestion, there are numerous problems with such a strategy. The first and most pressing concern is that Internet polls on meaningful political issues run by reputable companies have response rates similar to telephone polls. However, the level of respondent bias switches from older individuals to younger individuals, for a vast majority of Internet use is performed by younger individuals. Also, drawing a statistically random sample through the Internet seems incredibly difficult in general, and without a random sample, bias is almost guaranteed.

Polling can be conducted on either a probability or non-probability basis. Probability polling involves creating a sample frame: a randomized selection from a population via a certain type of procedure, with a specific method of contact and medium for the questions (data collection method). At times this is easy, like using an employee roster at company A to ask about working conditions; other times it is difficult, especially for larger state/national questions, because the sample population is larger and more disorganized, creating problems in devising an appropriate sample frame both logistically and financially.

Non-probability samples for polling are drawn simply from a suitable collection of respondents with only loose similarity to the target population, largely involving a convenience sample (i.e. those who can most easily be recruited to complete the survey). Internet polling is largely non-probability based. This structure has problems because, with respondents self-selecting, it is more difficult to statistically project the opinions of those polled onto the general population within the typical margin of error. There are also problems in comparing the survey population to any target population, creating unknown bias. The inherent age and ethnicity bias of online polling also persists. Some services attempt to overcome bias via weighting, pop-up recruitment and statistical modeling.

Weighting is commonly used when a sample has a small portion of a particular demographic relative to the total target population (e.g. for a national poll only 17% of the respondents are women). With women hovering around 51% of the national population, the preferences of the women in the sample would be "weighted" three times as much. The most immediate concern with this method is that with a smaller number of respondents the weighting system can "conclude" that more extreme/uncommon views are more widely held if such views happen to be present in the survey. Weighting can also lead to herding and other statistical manipulation, especially when compared against other similar polls. One of the biggest problems with weighting is that it is rarely reported directly to the public in the polls presented by media outlets.
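
A minimal sketch of how such demographic weighting works, using the 17%-versus-51% example above; the per-group support numbers are hypothetical, and real pollsters use considerably more elaborate schemes.

```python
# Post-stratification weighting for the example above: women are 17% of
# the sample but about 51% of the population. Support numbers are made up.

sample_shares = {"women": 0.17, "men": 0.83}
population_shares = {"women": 0.51, "men": 0.49}

# Each group's weight is its population share over its sample share:
weights = {g: population_shares[g] / sample_shares[g] for g in sample_shares}
print(weights)  # women: 3.0, men: ~0.59

raw_support = {"women": 0.60, "men": 0.45}  # assumed per-group support
weighted = sum(raw_support[g] * population_shares[g] for g in raw_support)
print(f"weighted support: {weighted:.1%}")  # 52.6% vs unweighted ~47.6%
```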

Pop-up recruitment attempts to create a more demographically appropriate sample by placing advertisements for a particular poll across a variety of different websites, where some of those websites are primarily visited by young black men, others by middle-aged white women, others by gay Hispanic men, etc., hoping to pull in enough diversity to represent all parties. These pop-ups also attempt to reduce "busy work" for the participants (i.e. filling out personal information forms) by using proxy demographics based on browser visitation histories. While such a strategy is viable, its consistency and long-term accuracy are questionable. A meaningful problem is that the tools made to smooth out the accuracy of these methods do not appear universally applicable. Another problem is that only more politically engaged individuals bother to take note of pop-up recruitments, and they may have certain characteristics that skew accuracy.

Finally, some organizations like RealClearPolitics.com and FiveThirtyEight.com use poll averaging, including weighting for historical accuracy and specific characteristics associated with certain demographics, to create election models and "more complete" polls. While some champion these methods as the future, there is the concern that if most polls become Internet based then the feedstock for these aggregate polls will carry the same general flaws, and the aggregates will inherit those flaws, resulting in no meaningful improvement in value or accuracy.
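
For illustration, a toy aggregation sketch in the spirit of such poll averaging; the polls, the square-root sample-size weighting and the seven-day half-life are all assumptions, not any aggregator's actual formula.

```python
# Toy poll average: weight each poll by sample size and recency.

import math

polls = [  # (candidate_share, sample_size, days_old) - all hypothetical
    (0.48, 1200, 2),
    (0.51,  800, 5),
    (0.47, 1500, 9),
]

HALF_LIFE_DAYS = 7  # assumed: a poll's weight halves every 7 days

def weight(sample_size, days_old):
    recency = 0.5 ** (days_old / HALF_LIFE_DAYS)
    return math.sqrt(sample_size) * recency  # sqrt: diminishing returns

total_w = sum(weight(n, d) for _, n, d in polls)
average = sum(share * weight(n, d) for share, n, d in polls) / total_w
print(f"aggregate estimate: {average:.1%}")
```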

It is interesting to note that the age bias associated with Internet polling is naturally self-correcting. Similar to how telephone polling's bias toward wealthier households in the 1940s and 50s self-corrected as telephones became more widespread, Internet polling will also self-correct, though in a somewhat more grisly fashion. The problem in Internet polling is not a lack of availability, but a lack of usage. As older individuals who have little interest in using the Internet die and their age group is replaced by individuals who became familiar with the Internet in their late 20s, age bias should significantly decrease. However, it is unlikely that polling can wait the two-plus decades for this "natural" self-correction, and even then there is no guarantee that the inherent issues with Internet polling will be solved.

While producing an accurate and meaningful sample is becoming more difficult and expensive, it certainly is not impossible, and various polls do have sufficient size and representation. So what could lead to inaccuracies in these polls outside of sampling issues?

The two most common problems in polling accuracy are the inability to predict how a voter will change his/her mind before actually voting and inaccurate conclusions regarding who will actually vote. Not surprisingly, the former is less the fault of the polling organization than the latter. While they can certainly attempt it, it really is not the responsibility of the polling organization to accurately forecast the probability that voter A, who reports a desire to vote for candidate A, will change that desire and vote for candidate B two weeks later. However, polling organizations can do a better job of determining the likelihood of a particular individual voting and weighting that probability into their polling conclusions.

For example, this "probability of voting" factor is another significant problem with Internet polling: while 95% of all 18-29 year-olds use the Internet, they made up only 13% of the total 2014 electorate. Conversely, while only 60% of those 65 and older use the Internet, and a significant percentage of those use it only for email, individuals 65 and older made up 28% of the 2014 electorate.2,4 Therefore, Internet polls completely missed a portion of the electorate and heavily overvalued the opinions of another portion. That is not the only problem; a Pew study suggested that non-probability surveys, i.e. Internet surveys, struggle to represent certain demographics, with results for Hispanic and Black adults carrying average estimated biases of 15.1% and 11.3% respectively.2

It is important to note that voters reporting a higher-than-actualized probability of voting is nothing new. Over the years it has been common for 25% to 40% of those who say they will vote to end up failing to do so.2 To combat this behavior, polling organizations attempt to predict voting probability through the creation of a "likely voter" scale.

One method polling organizations utilize to estimate the likelihood of voting is to review past turnout levels in previous elections, while applying appropriate adjustments regarding voter interest due to the type of candidates, the type of prominent issues, the competitiveness of the races, ease of voting and level of voter mobilization in the polling area.2 These estimates produce a range for voting probability (a floor and a ceiling), which is used to create a cutoff region.

A pool of possible voters to compare to the voting range is created based on answers to a separate set of questions. For example, a recent Pew analysis utilized the following questions to determine voting probability:2

- How much thought have you given to the coming November election? Quite a lot, some, only a little, none
- Have you ever voted in your precinct or election district? Yes, no
- Would you say you follow what’s going on in government and public affairs most of the time, some of the time, only now and then, hardly at all?
- How often would you say you vote? Always, nearly always, part of the time, seldom
- How likely are you to vote in the general election this November? Definitely will vote, probably will vote, probably will not vote, definitely will not vote
- In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote? Yes, voted; no
- Please rate your chance of voting in November on a scale of 10 to 1. Responses grouped as 0-8, 9, 10

From these questions, statistical models are created that assign a probability of voting to each participant based on their answers and the weighting of each question. Sometimes these models are also used in other current elections or even future elections, but when this occurs one must be careful to ensure the assumptions remain appropriate for accuracy considerations. This modeling method is viewed as more accurate because it incorporates all of the questions instead of focusing on one or two, like the last one: "Please rate your chance of voting in November on a scale of 10 to 1." This method also still allows respondents who answer low on one particular question, such as not having voted in the last election, to be counted as possible voters.
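
A minimal sketch of the kind of scoring model described above, mapping answers to the seven questions onto a probability of voting through a logistic function; the per-answer weights and the intercept are invented for illustration and are not Pew's actual coefficients.

```python
# Toy likely-voter model: weighted answer points through a logistic link.

import math

# Per-answer points (assumed, not Pew's coefficients), keyed by question:
WEIGHTS = {
    "thought": {"quite a lot": 1.0, "some": 0.5, "only a little": 0.2, "none": 0.0},
    "voted_precinct": {"yes": 0.8, "no": 0.0},
    "follows_gov": {"most of the time": 0.7, "some of the time": 0.4,
                    "only now and then": 0.2, "hardly at all": 0.0},
    "vote_frequency": {"always": 1.0, "nearly always": 0.7,
                       "part of the time": 0.3, "seldom": 0.0},
    "likely_nov": {"definitely": 1.2, "probably": 0.6,
                   "probably not": -0.6, "definitely not": -1.2},
    "voted_2012": {"yes, voted": 0.8, "no": 0.0},
    "self_rating": {"10": 1.0, "9": 0.6, "0-8": 0.0},
}
INTERCEPT = -3.0  # assumed baseline log-odds of voting

def vote_probability(answers):
    """Sum the weighted answers and squeeze through a logistic link."""
    score = INTERCEPT + sum(WEIGHTS[q][a] for q, a in answers.items())
    return 1 / (1 + math.exp(-score))

respondent = {  # hypothetical answers to the seven questions above
    "thought": "quite a lot", "voted_precinct": "yes",
    "follows_gov": "most of the time", "vote_frequency": "always",
    "likely_nov": "definitely", "voted_2012": "yes, voted",
    "self_rating": "10",
}
print(f"P(votes) = {vote_probability(respondent):.0%}")  # ~97%
```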

While asking these types of questions is appropriate, polling organizations may hurt themselves because, while there is no single silver-bullet question to determine whether or not person A votes, different organizations use different questions to produce their probability results. This lack of standardization can create inefficiencies; it seems to make more sense for all organizations to use the same questions to determine voting probability, which would better identify the questions that are good predictors.

While past voting history is not the only meaningful factor, it has been demonstrated to be a rather effective means of predicting future turnout.2 However, there is a concern that poll participants may misremember their voting history, especially because voting takes place so rarely and is a rather unmemorable event for most. Therefore, pollsters also attempt to measure voting probability by incorporating voter history from voter registration files, but this method is somewhat inconsistent between polling organizations. The reason for this inconsistency is that most surveys still rely on random phone dialing or Internet recruitment, and it is difficult to acquire the names and addresses needed to tie the poll roster back to the voter file, due to the increased workload or a lack of willingness by the respondents.

Another way that voter registration files could be useful is in eliminating some of the randomness when using the phone to produce a poll roster. For example, matching telephone numbers to a voter file can produce information that narrows the number of calls needed to fill a poll roster for a certain demographic. Some organizations have claimed to reduce the number of calls required to fill poll rosters by up to 70% using this type of method.5 Such a method is also thought to reduce problems associated with sampling error.
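
A minimal sketch of the idea, assuming a voter file keyed by phone number; the records, the dial list and the past-vote threshold are all hypothetical.

```python
# Prune a random dial list against a voter file before dialing.

voter_file = {  # phone -> (age_bracket, past_votes), made-up records
    "202-555-0101": ("65+", 4),
    "202-555-0102": ("18-29", 0),
    "202-555-0103": ("45-64", 3),
}

random_dial_list = ["202-555-0101", "202-555-0199", "202-555-0103"]

def worth_dialing(phone, min_past_votes=1):
    record = voter_file.get(phone)
    return record is not None and record[1] >= min_past_votes

filtered = [p for p in random_dial_list if worth_dialing(p)]
print(filtered)  # unmatched or low-turnout numbers are skipped
```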

Interestingly enough, the general response of the polling community to the issues of inaccuracy, smaller sample sizes and increased costs is to depend more on technology, data mining and statistical analysis, which have only demonstrated the ability to "hold off" worse results and do not appear to offer any direct means of improving the situation.

However, one wonders why polling organizations do not simply return to their roots, in a sense. Instead of resorting to more technology and more statistics, why not simply "go out among the people"? What are the negative issues with larger organizations producing branch offices of sorts, where they can set up polling stations in high-traffic areas to directly engage individuals, instead of calling at awkward times or hoping to get proper sample sizes from various politically motivated Internet users while the rest ignore those pop-ups advertising a poll?

To facilitate better interaction with possible poll respondents, instead of an individual standing in a general location with a survey and clipboard, which can put a number of people immediately on guard (some purposely alter their paths to avoid the clipboard individual), the polling agents should set up a table clearly labeling their intent. Also, to compensate individuals for their time, the polling agents should offer small items in exchange for answered questions: Frisbees, lighters, little Nerf footballs, etc. It would surprise a number of people how many passersby on other business would be willing to spend 5-10 minutes answering questions for a free little Nerf football. It would be easy to set up such an environment rather seamlessly at a farmer's market or in a shopping mall.

The results could then be reported to a main "data center" for the polling organization and pooled into a single poll relative to a national issue. Such a method should more than likely reduce overall costs while producing more accurate information. Of course this is only one possible means of addressing the problem without hoping that technology can "magically" fix it.

In the end the "crisis" in polling might simply be an internal one of little relevance. For example, is polling even important anymore with regards to elections? Suppose candidate A supports ideas A, B and C and opposes ideas D, E and F. If polling demonstrates that candidate A's constituency values ideas A, C and F, doesn't candidate A look bad changing his position on idea F from con to pro based on that data? The change would be based on public opinion, not an actual change in the facts surrounding idea F. Typically governance by political polling leads to poor governance.

Another important question is why it is important that the public have polling information available. Are polls only useful as a measuring stick for the level of value that the rest of society places on a particular issue or the popularity of a particular candidate? If so, what is the value of John Q. Public having this information? Certainly person A will not change their value system just because a public poll seems to produce a differing opinion.

The reality of the situation is that, for the most part, the polling information available to candidates for a particular office is more accurate and advanced than the information given to the public. Also, only those who work for a particular issue or candidate seem to have enough motivation to be influenced by a poll result to work harder for their cause. Overall, is media-reported polling just another something for the media to talk about, a time filler? Maybe the real issue with public polling is not how its accuracy can be improved or maintained, but what role it really serves in society. Perhaps changing the nature of polling back from an indirect activity on a computer screen or telephone to a direct face-to-face exchange between people can help answer that more important question.


--

Citations –

1. Blumberg, S, and Luke, J. “Wireless substitution: early release of estimates from the national health interview survey, July – December 2015.” National Health Interview Survey. May 2016.

2. Keeter, S, Igielnik, R, Weisel, R. "Can likely voter models be improved?" Pew Research Center. January 2016.

3. DeSilver, Drew and Keeter, Scott. “The challenges of polling when fewer people are available to be polled.” Pew Research. July 21, 2015. http://www.pewresearch.org/fact-tank/2015/07/21/the-challenges-of-polling-when-fewer-people-are-available-to-be-polled/

4. File, T. "Who Votes? Congressional Elections and the American Electorate: 1978–2014." US Census Bureau. July 2015. Accessed October 7, 2015.

5. Graff, Garrett. "The polls are all wrong. A startup called Civis is our best hope to fix them." Wired. June 6, 2016. http://www.wired.com/2016/06/civis-election-polling-clinton-sanders-trump/

Wednesday, July 13, 2016

Forming the Battle Plan for Addressing Teaching Reform in the 21st Century

The notion of education reform is certainly not a new concept, but it seems to accomplish less and less meaningful and appropriate change as the years advance. One of the major reasons various reform movements appear to produce little success is too much focus on specific "pet" methods without critically analyzing their applicability in large-scale environments. Instead of focusing on how to better fire teachers, lauding some trendy non-scalable niche example as the solution, and looking to divert money to charter schools that perform no better (or even worse) than their public school competition, reformists should systematically look at the system, identify the flaws and then act to remove those flaws with scale-appropriate solutions. So what are the important elements of advancing education that reformers tend to get wrong?

An important element that must be addressed in education is facilitating student motivation by connecting education to career prospects at an early age to ensure appropriate enthusiasm. Unfortunately, not all students appreciate and understand the underlying benefits of education, the acquisition of information in general, and thus they can reject its importance. If a student does not possess the drive to learn through some form of motivation, then any teacher, regardless of overall quality, will struggle to transmit knowledge to that individual. Most reformists incorrectly believe that it is the sole responsibility of the teacher to nurture and cultivate any motivational potential in a student. The idea that it is the responsibility of teachers to motivate their students is ridiculous due solely, but not limited, to the vast diversity in the psychological make-up of their students. Expecting teachers to juggle numerous different strategies to ensure student motivation is asking for something completely unreasonable and untenable.

Most of the time motivation for learning comes from engaged and caring parents, for it is standard psychology that most children want to receive praise from their parents by acting in a manner that will be received positively. Even for those who do not fit this profile, an educationally engaged parent can use his/her position as parent to command the child to "care" somewhat about education via either carrot- or stick-type motivators. If the parent is not engaged in the value of education, the student needs to find motivation elsewhere, either through competition with other students or through their own desires, and not expect such a void to be filled by the teacher. Can a teacher fill it? Yes, but it should not be expected. Overall, though, none of these motivating factors are relevant if not directed towards a meaningful conclusion.

Therefore, the entire process of education must be more cooperative between the home environment and the school environment in identifying the passions and interests of students and applying those interests to the education process, largely by demonstrating how even so-called "mundane" topics like math and the various sciences tie into those passions. With this methodology, education becomes an amplifying positive force for that particular passion rather than a negative, detracting and distracting force. Not only will this process provide internal motivational fuel for the student (i.e. "I want to be an astronaut"), but it will also provide a road map of sorts for achieving that passion, for in the past there have been plenty of educationally motivated students who have fallen short because they were ignorant of the prerequisites and other requirements demanded by their passion.

Achieving this methodology will highlight the importance of guidance counselors, whose role has waned in modern times. Early in a student's academic career (1st/2nd grade), guidance counselors should be the principal actors in identifying the student's passions and deducing the best career path for that student to exercise those passions. Every two years there should be a "check-in" period to reassess passions and interests and formulate a new path if needed. This method allows guidance counselors to actually perform their assigned role and no longer burdens teachers with a task outside of their intended role, motivating the student. Teachers can instead focus on providing an optimized educational environment in which to instruct the students, an actually appropriate expectation, rather than playing cheerleader to the individual tastes of their students.

Proper management of student expectations is also important for increasing the effectiveness of education. Course syllabi must be presented early (day 1 or 2) and be transparent about how grades will be produced, what type of class behavior is expected, what students are expected to learn, the schedule of events and special projects, etc. Setting expectations regarding instruction is also essential, for despite what some critics would like the public to believe, education cannot be exciting and entertaining all the time, or heck, even most of the time. Certainly quality teachers can add dynamic elements to lectures to produce a more "inspirational" product, but no one can make teaching something like a literature review for a research paper, needed to ensure proper background and sourcing, fun. Such a task is one of drudgery that demonstrates the importance of gumption and focus in the educational process.

Tied to the above point, another important element is to psychologically prepare students to embrace the discomfort of learning. Some argue that learning is not fun and education needs to reflect that, but it can be countered that such an environment has already been attained for a number of students; this is a major problem, for if students regard learning and education as painful and frustrating then they will be less interested in engaging in the process and will look for shortcuts (i.e. cheating) just as readily as if they think learning should always be fun and exciting.

Instead one must frame the discomfort of learning in the context that it is frustrating when one does not know something one wants to know, but proper instruction and hard/smart work make that frustration ephemeral. Basically, learning is only "not fun" when no progress is being made. If progress is made (i.e. some knowledge is acquired piece by piece) then learning produces a noticeable sense of accomplishment, and pain/frustration is limited and short-term. Therefore, one of the chief strategies in the educational process is to focus on why someone is not making progress and rectify it. This is not to say that education and learning are always effortless, but there is always a purpose to the effort.

One of the more hotly debated elements of education is the structure of how information is transmitted from the teacher to the students. Many modern "educational reformists" lament and criticize the continued dominance of traditional education involving a teacher lecturing students on a given topic. These individuals frequently cite the advantages of engaging in teamwork-based activities and focusing on the Socratic Method (SM) of teacher-student engagement in lieu of basic lecturing.

The most significant advantage of the SM is that the interaction between the teacher and the individual through direct question-and-answer sessions increases the probability of understanding due to active rather than passive learning. During "traditional" lectures students must rely on self-motivation to ensure dynamic learning rather than hoping for learning through osmosis (in a sense). The SM takes some of the motivation burden off of the student through direct discussion of the topic with the teacher.

Unfortunately most "educational reformists" lack classroom experience and seemingly fail to realize that most public schools have large class sizes (25+ students, usually more) that make administration of the SM rather difficult without a scattershot strategy (randomly engaging certain individuals, not everyone). A meaningful concern with the SM in large groups is that direct one-on-one engagement can cause other students to lapse in their attention, limiting the effectiveness of the current learning experience. One thing that lectures are not given credit for is that they provide a meaningful focal point for all students that direct one-on-one discussion can lack. Also, too much interaction can lead to time crunches when it comes to instructing on all of the requisite information.

This misinterpretation of the "universal applicability" of the SM in public institutions largely exists because "reformists" focus on the practices of schools with small overall enrollment and class sizes, typically heavily privately funded charter schools, as the basis for determining "what works in the classroom" and what should be applied in public education. This mindset does nothing but make real and appropriate reform more difficult. Overall, as noted above, the appropriate way to instruct in the modern "educational environment" appears to be a combination of the SM and lecture: periodically and consistently engaging random students in a brief 1-2 question session that captures the individual's attention, but does not expend enough time to significantly threaten the loss of attention from the rest of the class.

The matter of teamwork is a little more interesting because the advantages of teaching to teams are significant. For example, working in a team can provide a less stressful environment for certain individuals, eliminating the detriments of working alone that could negatively impact the educational process. It can aid interpersonal relationship development by giving individuals experience working through problems with others in low-stress/low-stakes environments. It also provides growth and intellectual development by exposing individuals to additional and different viewpoints and interpretations of the lessons from other team members, which may help augment understanding of the information.

However, there are some disadvantages to working in a team. The most pressing issue, which most either do not want to talk about or are not aware of, is that most of the above advantages are born from motivated students who want to learn and want to actively interact with their fellow classmates. Without this motivation, weaker and/or less enthusiastic students can hide behind stronger students, letting those individuals do the work for the team and not focusing on learning the material themselves. This strategy of "let the smarter kids who care about their grades do the work because they don't want to fail" has always been a problem with teamwork-related elements in primary and secondary education, especially for large, long-duration projects.

This behavior is manageable in the scope of small assignments, for while homework and in-class work could be performed in groups, quizzes and tests would still be individualized, forcing students to limit the practice of this strategy since a vast majority of the grade is still based on their own accumulation and practice of course knowledge. However, for large projects this behavior can be significantly detrimental to the team as well as to individuals, because it is difficult for the teacher to dissect how important each student's contribution was to the success or failure of the project.

One means of addressing this problem has been to have students evaluate the performance of their teammates at the conclusion of any big project, but such a method always draws concerns of bias between teammates. An alternative option for big projects may be weekly evaluations of performance on a 1-10 scale over 3-4 different categories, with explanation areas for why each numeric score was given. The teacher can keep these evaluations and then use them as a metric for how the dynamic of the team may have changed and as a more accurate assessment of how the students felt the workload was divided, instead of relying on a single evaluation at the end of the project when emotions and tensions can influence the product, and spotty memory can interfere with accuracy.
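
A minimal sketch of such a weekly evaluation ledger; the category names and scores are illustrative, and a real implementation would also store the written explanations.

```python
# Track weekly peer-evaluation scores so the teacher can spot drift.

from collections import defaultdict

CATEGORIES = ("effort", "communication", "quality", "reliability")  # assumed

ledger = defaultdict(list)  # student -> list of weekly category scores

def record_week(student, scores):
    assert set(scores) == set(CATEGORIES)
    ledger[student].append(scores)

record_week("Avery", {"effort": 8, "communication": 7, "quality": 9, "reliability": 8})
record_week("Avery", {"effort": 5, "communication": 6, "quality": 8, "reliability": 4})

def weekly_means(student):
    return [sum(week.values()) / len(week) for week in ledger[student]]

print(weekly_means("Avery"))  # a drop flags a change in team dynamics
```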

Another concern with teaching teams is that weaker-voiced/low-confidence individuals can have their opinions overshadowed by stronger-voiced individuals, which can lead to a reduction in their already wavering confidence. Handling this problem can be tricky because dominating personalities are not necessarily malicious, and teachers cannot proctor each group to ensure all opinions are being heard and given a fair evaluation. There are two direct ways of lessening problems stemming from this type of personality clash. First, the teacher can periodically poll the group when asking for an answer, inquiring how each student views the problem. Fortunately such a strategy does not appear too time consuming, because once per class should be enough for shyer students to have their voices heard. Second, allow the students to form their own teams.

This issue of team formation creates a third, smaller problem. Clearly, allowing students to form their own groups can eliminate a large amount of potential interpersonal conflict within the team; however, allowing students to only associate with what is already familiar mitigates a lot of the advantages born from teams: the ability to work with the unfamiliar and understand different types of thought. Overall a middle solution appears most appropriate; before selecting the teams, the teacher asks each student to indicate on a piece of paper the 3 classmates he/she would not like to be teamed with, and then seeks to accommodate as many of these wishes as possible, as in the sketch below. This strategy limits the amount of interpersonal conflict in a team by separating individuals who might have outside conflicts while retaining enough differentiation to ensure value from working in the team. Note it is not the responsibility of the teacher to resolve these conflicts, thus they are best avoided in the classroom.
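
A greedy sketch of this compromise, assuming each student submits a short avoid list; the names, the lists and the team size are hypothetical, and a real assignment might need several passes to honor every wish.

```python
# Greedy team assignment that honors mutual avoid-wishes where possible.

avoid = {  # student -> classmates they asked not to be teamed with
    "Ana": {"Ben"}, "Ben": {"Ana", "Cruz"}, "Cruz": set(),
    "Dee": {"Ben"}, "Eli": set(), "Fay": {"Dee"},
}
TEAM_SIZE = 3

def form_teams(students):
    teams = []
    for s in students:
        for team in teams:
            # place s only where no avoid-wish is violated, either way
            conflict = any(s in avoid[m] or m in avoid[s] for m in team)
            if len(team) < TEAM_SIZE and not conflict:
                team.append(s)
                break
        else:
            teams.append([s])  # no compatible team: start a new one
    return teams

print(form_teams(list(avoid)))  # [['Ana', 'Cruz', 'Dee'], ['Ben', 'Eli', 'Fay']]
```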

Overall, with regards to teaching to teams: when possible, teams should be used for basic lecture and in-class work, including work with a level of interactivity, but tests should be individually based to ensure a strong motivating "carrot" for individual learning. Team interactivity and creation should follow the above suggestions to maximize learning potential and effectiveness.

Another element widely touted as the "wave of the future" with regards to education is not only in-class teamwork, but also large team projects where the team engages in a multi-week, even multi-month, task. Clearly the motivation behind this idea is that learning by doing is one of the best ways to acquire knowledge, especially for practicing critical thinking and creativity; in addition such projects can provide a venue to evaluate the depth of that acquired knowledge by applying theoretical concepts in empirical practice.

Unfortunately, while the sentiment is understandable, a number of supporters of this methodology fail to acknowledge that such projects are very time consuming and expensive from the school's perspective, thus such an instructional strategy is an almost guaranteed non-starter for most inner-city and rural schools. Also, initial project design is important to ensure students stay on task and have organized benchmarks to document progress, making the introduction of such a program difficult as well, because to test the theory one must put it into practice, which takes time and resources, and redundant projects may not be valuable depending on the subject matter.

Proponents will counter that such projects have succeeded before, citing various group projects involving building robots, devising responses to various natural disasters or culturing different types of cells to determine how they interact with various types of bacteria. While there are certainly a number of success stories regarding this method, the failures are less known because they are not made public, so it is difficult to deduce the effectiveness of such programs. Overall it is reasonable for a high school to explore a single elective class that focuses on the completion of a large-scale project and to introduce smaller two-to-three-week projects in some other classes, but any expectation that such a methodology will become the norm is foolhardy until the public school system is funded at a much higher level than at present.

The structure of grading is also an interesting issue with regards to the future of education. One of the more prominent discussions over the years has been the amount of homework that should be assigned to students. Before discussing the amount of homework, it is important to establish its purpose. For the course of this discussion the role of homework will be defined as: a tool that gives a student a means to genuinely increase the probability of understanding particular concepts in a low-stress environment, versus proctored on-site examinations. Also, for homework to be relevant it must be designed in a way that maximizes its practicality and usefulness. Rarely will reality simply hand a person a single equation or thought process that will solve the problem. For example, a common math problem may read: "21 divided by 4 = ???"; this is clearly not how problems are encountered in reality, with 90%+ of the work already done. Instead such a problem should be presented to the students as:

John and Suzie want to bake some apple pies for their school’s bake sale. John has collected 10 apples from the trees around his house and Suzie has collected 11 apples from the trees around her house. If it takes 4 apples to bake 1 pie how many pies can John and Suzie bake and how many apples will they have left over after all the baking is done?

From this structure, which is much more akin to reality, a student should create the equation 21 divided by 4 = ???. So step 1 with regards to the homework aspect of knowledge evaluation is to make sure homework problems properly represent real-life experience.
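
For completeness, the arithmetic the word problem reduces to, checked in a couple of lines of Python:

```python
apples = 10 + 11                    # John's and Suzie's apples combined
pies, left_over = divmod(apples, 4) # 4 apples per pie
print(pies, left_over)              # 5 pies, 1 apple left over
```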

Step 2 is to ask how homework should play into the evaluation process. One could question the fairness of homework being a significant portion, or even any portion, of the grade if its central role is that of a low-stress practice tool for understanding the general overarching concepts. What if the student does not need to do the homework to understand the material, and the lecture period is enough to achieve understanding? Should that student be, in essence, forced to do the homework when he/she could use that time for other activities, either family-oriented or pleasure-based? For example, some students may not have a sufficient amount of time to do homework that is unnecessary, due to already achieving understanding, because of an imperfect family life where brothers/sisters have to take care of younger siblings, go to night work to earn extra money to help support the family, etc.

One argument for a high evaluation metric for homework is that it provides another avenue for students who struggle with communicating acquired knowledge in a testing environment. It cannot be disputed that a test in a classroom environment inherently provides more pressure than homework assignments completed in an environment of the student's choosing. Some students do not have the ability to effectively manage this increased pressure, thus their ability to demonstrate their knowledge suffers accordingly. The principal characteristic of the grade for a course is to conveniently measure how well a student acquired knowledge in that course, not how well a student can manage a high-pressure situation. Therefore, a high evaluation metric allows the grades of a student who "does not test well" to more accurately reflect the knowledge acquired within the course.

Opponents could argue back that while addressing students who "do not test well" is a positive element of a high evaluation metric, it is more probable that highly evaluated homework conceals poor performance. Students can use homework to bolster overall grades that are detrimentally marred by poor examination results; poor results not due to mishandled stress, but simply due to lack of knowledge. Thus, this evaluation structure misrepresents a student's knowledge of a particular topic, portraying that student as more competent than they otherwise are, a disservice to colleges, future employers and the students themselves. However, this analysis only seems valid if the assigned homework is of substandard quality and/or design. If the homework is properly designed to reflect the concepts of the class, then using homework grades as a countermeasure to examination grades is reasonable.

It must be remembered that the bounds of time do not only impact students. Teachers, especially those with more dynamic topics like history, find themselves having to impart more and more information over the same fixed time period. Unfortunately, the total amount of information that needs to be discussed limits the available instruction time for each specific topic. Without the ability to rigorously cover a particular topic to the point where students have been exposed to it enough to reasonably understand it, the probability that the students understand the topic decreases. Homework substitutes for this lack of class time to increase learning and retention probabilities. This supplementary aspect of homework hurts those who argue for no/little homework.

It can be argued that there is a typical perceived-knowledge vs. actual-knowledge gap for most students. There are a number of instances in school, and life in general, where an individual may think he/she has sufficient knowledge of a given subject, but when actually tested on that topic quickly realizes that he/she does not have as much knowledge as previously thought. Homework provides a means to address this perception/reality gap before it becomes exposed on a test to the greater academic detriment of the student. Overall, is there a strategy that can provide a motivational aspect to doing homework while not burdening those who do not need to take advantage of its practice characteristics? The strategy below seems to be one way to address this issue.

• Homework is given out on a weekly basis; every Monday an assignment is given out covering all of the scheduled material that will be discussed in class over that same week; the assignment is expected to be turned in at the beginning of class on the next Monday (for example, homework assigned on Oct. 13 would be turned in on Oct. 20 at the beginning of class); answers for the previous week's homework would then be posted or handed out at the end of class on Monday.

• Homework will count for 0% of the grade. The reason is that homework, as previously discussed, is designed to give the student multiple opportunities to practice learning the given material. Taking a grade from material that is supposed to be practice is not very fair. Therefore, because homework does not count for any percentage of the grade, students do not have to do it or turn it in if they do not want to.

• Grades will be determined by 4 tests: 3 section tests each worth 25% of the grade and 1 cumulative final worth 25%. As a partial motivator to do homework, students may retake one of the section tests if they turned in at least 75% of the assigned homework within the corresponding section and demonstrated a legitimate effort to learn from it (a sketch of the resulting grade computation follows below).
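
A minimal sketch of this grading scheme; the function names and example scores are illustrative.

```python
# Four tests at 25% each; one section-test retake earned via homework.

def earned_retake(homework_turned_in, homework_assigned):
    return homework_turned_in / homework_assigned >= 0.75

def final_grade(section_tests, final_exam, retakes=None):
    """section_tests: 3 scores (0-100); retakes: {test_index: new_score}."""
    scores = list(section_tests)
    for i, new_score in (retakes or {}).items():
        scores[i] = max(scores[i], new_score)  # a retake can only help
    return 0.25 * sum(scores) + 0.25 * final_exam

if earned_retake(homework_turned_in=7, homework_assigned=8):
    print(final_grade([62, 85, 90], final_exam=88, retakes={0: 80}))  # 85.75
```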

Overall, while the above suggestion is merely that, a suggestion, the discussion has focused on two important principal issues in the homework debate. First is the issue of homework motivation vs. maintaining the practice characteristic of homework designed to enhance learning. Second is the issue of the opportunity cost of doing homework vs. undertaking other activities. The chief element of this second issue boils down to the immediacy of the opportunity cost. The time crunch created by homework, which is frequently associated with increased stress, typically develops through two mechanisms. First, most students, especially as they advance in grade, have to deal with multiple subjects demanding multiple solution methodologies. Second, homework frequently functions through daily turnover. While individual assignments may not count for much, having to sacrifice enough of them due to more important tasks (like the job that helps your family) can add up quickly, damaging the overall grade when a high evaluation metric (commonly suggested for motivational purposes) is used.

Unfortunately there does not appear to be a single magic bullet for both issues, but expanding the homework turnover scope could certainly help. As suggested above, assigning homework at one particular time to account for the entire week gives students more flexibility in addressing it. If their time is demanded by a particular activity on a given night, time can be budgeted later in the week to complete the homework that would otherwise have been missed. Another potential advantage of assigning homework in greater than day-by-day quantities is that it may be easier for students to make connections between building-block concepts when doing "three days' worth" of homework in one sitting instead of doing the work over a three-day period with multiple interruptions. Such a system could also encourage more ambitious students to "read ahead" in an attempt to do the homework before the class lesson addresses the material.

One question that comes to mind for such a system is how it changes the grading burden on teachers. Under a more expanded turnover system with a firm homework hand-in date, teachers may have more homework to grade at once, but by providing a universal answer key after the homework is turned in, the teacher has more flexibility in the time allotted to grade it and return it to the students. This increased time flexibility is important, for grading homework is one of the most daunting and potentially frustrating tasks for a teacher, one commonly overlooked by most education reformers when considering teacher workload. Also, teachers have lives outside of the educational environment, just like students, and may want to devote certain periods of that time to other tasks.

Another useful change to improve the educational experience would be more cooperation among teachers within a given field of instruction. For example, synchronizing the free/prep period for all teachers of the same general subject matter (e.g. all English teachers) would provide opportunities for teachers to converse regarding the instruction of certain subjects within the field. In fact, it would be appropriate for teachers to have a weekly meeting during one of these prep periods to maximize problem solving and instruction capacity.

Obviously one of the most critical elements of improving the educational system is to create an environment where the profession of teaching is respected once again. One aspect of this change would require teachers having more power in the classroom to control improper behavior. One means of accomplishing this is to allow teachers to negatively influence a student's grade when that student is a disruptive influence on the learning environment. A good pilot program would give the teacher the authority to deduct up to a maximum of 10% from an individual's grade for misbehavior at certain predetermined intervals.

Some might immediately object to such a system, arguing that behavior should have nothing to do with determining the class grade because the grade should be exclusively contingent on demonstrated acquisition of knowledge through prescribed evaluation metrics like homework, quizzes and tests. While on its face this objection may seem appropriate and fair, the problem is that it views the behavior in a vacuum. Basically it assumes that negative behavior only produces a detriment to the misbehaving individual, and that if the individual can perform at a certain level on the evaluation metrics without showing respect or paying attention in class then there should be no punishment. However, such logic is clearly incorrect because in the classroom environment a vast majority of negative behavior is detrimental to the overall environment, disrupting the ability of all parties to learn the information. The behavior commonly produces a detriment to multiple parties even when undesired or unwarranted by those parties.

For those who attempt to retain the purist assumption from above, it is important to acknowledge that tolerance for such negative behavior is typically not extended in the professional workplace, and if one of the chief elements of education is to prepare an individual for a career on some level, then such behavior should not be allowed in the classroom without consequence either. For example, if an individual performs his/her job well but facilitates such a negative environment that it affects the performance of others to the point where the company as a whole suffers, that individual will typically either be told to change their behavior or be fired. Legal barriers prevent students from "being fired", both from the classroom and from the education system in general, thus the best secondary option is to affect grades.

Another possible argument against this strategy is that the individuals with the highest probabilities of misbehavior are those who care the least about grades and school in general; how, then, will this punishment system act as a meaningful deterrent? Well, if the suggestions above about linking various aspects of education to the successful advancement of one's passions are applied, then a vast majority of individuals should care about their grades to the point where behavior can be reasonably managed through such a punishment. Even for those who do not accept the link between their passions and education, to simply attach no consequence to disruptive behavior is irrational. For example, it is widely acknowledged that various people will exceed legal speed limits over the course of their driving careers; with this reality in mind, should there be no punishment for violating these laws? Certainly not, for it makes no sense to eliminate a valid and appropriate punishment for the violation of a valid social norm or law. Understand that grade reduction would be only one tool in the toolbox for teachers to address bad behavior.

Another important issue in improving education in modern society is managing the integration of technology into the classroom environment. This point is certainly not unique; however, most individuals who sing the praises of technology as a "revolutionary" force in education are not teachers. Instead they are business people, entrepreneurs, educational commentators, etc., who see only the positive elements of technology in education, frequently commenting with annoyance that technology is not more widespread.

Interestingly enough, if these commentators had teaching experience they would quickly realize that technology has already penetrated almost all classrooms in the form of smartphones. Unfortunately these elements are not positive, but a net negative, producing significant distractions and emboldening those who wish to cheat on quizzes and tests. It is true that technology can provide a significant boon to education, but it can also be a significant detriment, and it is important that all parties acknowledge this reality. So what can be done to neutralize the detrimental aspects that technology can bring to education?

The main aspect of this issue is how to manage technological distractions. The best solution is to put instruction in place where there is no legitimate need to utilize the technology, and then ban its use for the duration of class time. Now, it stands to reason that technophiles would cry foul at this type of strategy, once again citing the importance of technology in the classroom, especially in sparking student interest given how deeply technology is incorporated into student life outside of the classroom. This objection highlights a problem in the arguments of those who support technology in the classroom: the general drive to force technology into all aspects of the classroom. The simple fact is that most classroom activities do not benefit from the incorporation of individualistic technological action. Yes, teachers can typically instruct more effectively using programs like PowerPoint versus transparent slides or a chalkboard, but students are not significantly benefited by following along with the lecture on their smartphones or laptops.

In essence there needs to be a dividing point between when students can use technology and when they cannot, and the "cannot" would occur during the lecture portion of the class. Clearly there are very small and specific exceptions to this principle; for example, when lecturing about computer programming it would make sense for students, if applicable, to be at computers applying the elements of the lecture to increase familiarity with the operation of the concepts. However, despite the erroneous beliefs of technophiles, most topics do not lend themselves to this type of interaction, thus the utilization of technology by students during the lecture will result in a reduced probability of comprehension, not an increased one.

What would possible penalties be for student-driven technological distractions? This question leads to two schools of thought relative to the expectation of respect for the instructor. Clearly one can argue that a student who does not pay attention in class, after accounting for outside psychological factors, is not showing proper respect to the instructor. However, if this lack of attention does not create a distracting environment for others (for example, the student is doodling in a notebook but not making enough noise to draw attention to the fact), should such behavior matter?

The answer boils down to two issues: what is the obligation of the student to demonstrate respect for the teacher, and what is the obligation of the teacher to ensure the student pays attention to the instructed material? The simplest philosophy is that the student is chastised for the lack of attention, told to correct the behavior, and the lecture does not continue until the student complies. The general goal of this practice is to reestablish the authority of the teacher in the classroom setting and ensure the student receives some benefit from the lecture.

A more interesting strategy is that if the student is not actively disrupting class and the behavior is on a limited scale (only 1-2 individuals in a class of 30 are not paying attention), then the teacher should not care about the behavior, leaving the student to understand the instructed material him/herself. If the individual cannot understand the material then he/she should score poorly on the evaluation metric(s) that cover the particular material, which would be the fault of the student. Again, it is not part of the teacher's job to ensure that all students pay attention. If the individual can understand the material without the assistance of the lecture, why should the student be forced to pay attention to the lecture instead of engaging in an alternative activity that does not distract the class?

A more interesting question is what the teacher does when a number of individuals demonstrate a lack of attention, which could be viewed as a lack of respect for the authority of the teacher. As above, the teacher has two options: 1) stop lecturing until the class ceases its inattention; 2) continue to lecture, placing the individuals who are not paying attention at a possible disadvantage on later evaluation metrics. A traditional, and even modern, viewpoint of teaching would instantly dismiss the latter option and criticize the teacher for not being able to keep the attention of the students. Of course, almost all who hold this opinion have never taught a day in their lives in an educational environment, so the significance of their opinion is heavily marginalized. The problem with the first option is that a student's lack of attention is rarely acute, but typically habitual, thus correcting the behavior is more difficult than simply telling the student to pay attention. This reality is what makes the second option interesting when combined with the career affinity option discussed earlier.

One could argue that most habitual and "disrespectful" inattention can be addressed by applying the above strategy of tying the passions of individuals to the subject matter taught in various classes. Thus, once again after accounting for outside factors, the chief motivation behind a student not paying attention in class would be the internal perception of redundant knowledge. Basically, the student already believes that he/she has a grasp of the knowledge presented in the lecture and elects to do something else.

This perception is not a significant problem because either the student is correct and should be spending the classroom time doing something else that does not distract others, which only arrogant teachers would find fault with (all students should pay attention to me, etc.), or the student is incorrect, and this perception and the resultant behavior will be corrected after a poor performance on the next evaluation metric.

The above discussion demonstrates that the important concern is not an individual distracting him/herself, but an individual distracting others. It is at this point that individually utilized technology becomes the problem. All rational people will agree that there is a significant difference in noise generation between an individual doodling in a notebook or working on math homework for next period versus an individual incessantly tapping on keys/screens or periodically laughing in response to a piece of video. Basically, using technology as the element of distraction dramatically increases the probability that the distraction reaches others who do not want to be pulled from the content of the lecture. Therefore, individual technology must be appropriately managed through penalties similar to those discussed above for behavior infractions.

Overall the administration of technology in the classroom is the prerogative of the teacher despite complaints from non-teachers. A problem technophiles have with this strategy is the incorrect belief that only technology can make a modern lecture innovative, dynamic and impactful. A quality teacher can give a lecture these characteristics with just a piece of chalk and a chalkboard; if these non-teacher commentators had any real experience in education they would have a better understanding of this reality.

One of the improvements that must be made to establish better teachers is changing the means by which training experience is acquired. Overall there is too much single-experience watching/observing versus actual multi-experience hands-on training. For example, a number of training programs involve a prospective teacher sitting in and observing the behavior, style and actions of a veteran teacher. However, these prospective teachers rarely teach the class while receiving feedback from the veteran teacher; they do little prep-work/grading/discussion and do not interact with other veteran teachers either.

Instead of this old method, new prospective teachers during their “observation” period should act as teaching assistants, doing a significant amount of the grading and preparation work for the veteran instructor and teaching for a set period of time (maybe once per week). Then the prospective teacher should move to another teacher in the same subject to experience a potentially different viewpoint on how to manage a class and/or teach the subject matter. Of course the logistics associated with such a new design would require work.

Another important change to positively advance teaching is to hold charter schools to actual academic standards or disallow their public funding. Some love to make the utopian argument that money does not really matter with regard to improving public education, but such arguments are incorrect and self-serving. It makes no sense that charter schools can receive public funds but have no accountability to those who provide those funds. Therefore, charter schools must either be removed from public funding or be held accountable to the same standards as public schools.

Similarly, the return of respect to the teaching profession can never be achieved as long as organizations like Teach for America are allowed to continue to undermine the profession by introducing unprepared individuals into it. Teach for America and similar organizations produce negative propaganda regarding teaching under the motto “it’s so easy anyone can do it”, but refuse to accept responsibility for the reality that over half of their “qualified” candidates exit the profession after only two years.

Similar to the general propaganda spread by Teach for America and other similar organizations, one must abandon the idea that teaching is an occupation undeserving of respect due to its perceived hours of operation. A common refrain in public discourse is that teaching is not difficult because “teachers get the summers off”. What these false criticisms fail to acknowledge is total hours worked versus days worked. Good teachers who care about ensuring a proper learning environment work more hours than average over the course of the week and also work over the summer. Overall quality teachers, those whom the public claims to want in schools, do not fit this “not real work” profile and are negatively impacted by its continued propagation.

It is appropriate to briefly touch on a couple of indirect methods that could improve the educational experience. First, it makes sense to follow scientific research regarding the way lighting and room color influence performance and behavior. For example, it has been reported that “warm” yellowish white light supports a more relaxing environment that promotes play and probably material engagement, standard school lighting (neutral white) supports quiet contemplative activities like reading, and “cool” bluish white light supports performance during intellectually intensive events like tests.1 Thus equipping classrooms with LED lights that can be switched between these different lighting tiers should provide useful advantages to both teachers and students.
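To make the tier concept concrete, the idea reduces to a simple activity-to-color-temperature lookup. The sketch below is a minimal illustration only; the specific correlated color temperature (CCT) values are assumed from typical warm/neutral/cool ranges rather than taken from the cited study, and the fixture interface is hypothetical.

```python
# A minimal sketch of the lighting-tier idea, assuming a classroom LED
# fixture whose correlated color temperature (CCT) can be set in software.
# CCT values below are illustrative assumptions, not figures from the
# cited study; the mapping itself is the point.

# Illustrative tiers: "warm" yellowish white for play/engagement,
# "neutral" standard white for quiet activities, "cool" bluish white
# for intellectually intensive events like tests.
LIGHTING_TIERS = {
    "play/engagement": 3000,   # warm (assumed ~2700-3500 K range)
    "reading": 4000,           # neutral (assumed ~4000-4500 K range)
    "testing": 6000,           # cool (assumed ~5000-6500 K range)
}

def select_lighting(activity: str) -> int:
    """Return an assumed CCT in kelvin for the given classroom activity."""
    # Fall back to the neutral tier for activities not in the table.
    return LIGHTING_TIERS.get(activity, 4000)

if __name__ == "__main__":
    for activity in ("play/engagement", "reading", "testing"):
        print(f"{activity}: ~{select_lighting(activity)} K")
```

In practice the teacher would trigger the tier change rather than any automated system, which keeps the administration of classroom technology where it belongs: with the teacher.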

Second, there is sufficient evidence to suggest that early start times in high schools and some middle schools (7:30 am and earlier) have a negative educational influence on students.2-4 While this issue has received attention in the past and still receives some attention here and there, unfortunately it is not as cut-and-dried as simply starting school 30 minutes later, for there are significant logistical hurdles to the successful administration of a “later school day” policy.

One of the major problems is how to manage bus transit, for a single fleet of buses tends to service one school district or region. Tiered start times for the different schools (high school, middle school/junior high, elementary school) are typically necessary for transit efficiency, allowing this single fleet to manage all schools; the fleet arithmetic is sketched below. Change the start time for high school and the efficiency of bus service collapses unless start times for middle schools and elementary schools are also changed.
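As a back-of-the-envelope illustration of why the stagger matters, consider the sketch below; the route time and bus counts are assumptions chosen only to show the scale of the problem, not figures from any actual district.

```python
# A back-of-the-envelope sketch of the single-fleet arithmetic behind
# tiered start times. All numbers (route time, buses per tier) are
# assumptions for illustration only.

ROUTE_MINUTES = 45        # assumed time for one bus to complete one run
TIERS = ["high school", "middle/junior high", "elementary school"]
BUSES_PER_TIER = 40       # assumed buses needed to cover one tier at a time

# Staggered starts: each bus reruns its route once per tier, so one
# tier's worth of buses covers all three tiers back-to-back.
staggered_fleet = BUSES_PER_TIER
staggered_window = ROUTE_MINUTES * len(TIERS)   # 135-minute morning window

# Overlapping starts: every tier needs its own buses simultaneously,
# so the fleet requirement roughly triples.
overlapping_fleet = BUSES_PER_TIER * len(TIERS)

print(f"Staggered starts: ~{staggered_fleet} buses over {staggered_window} min")
print(f"Overlapping starts: ~{overlapping_fleet} buses")
```

Under these assumed numbers, removing the stagger roughly triples the required fleet, which is why shifting the high school start time alone breaks the system.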

However, changing start times for these schools is not beneficial to younger students because they already start later than 8:00 am, and starting even later may be detrimental because of the much later release times (4:00 pm or later). Not surprisingly the solution of “get more buses” is a non-starter because most school districts are already rather cash-strapped due to tax funding dependencies, with charter schools taking money from that pie as well. This transit problem and the resultant potential detriment for younger students is exactly what Montgomery County in Maryland experienced when it changed school hours in 2015.

Another meaningful logistical hurdle involves the administration of after-school extra-curricular activities and how they could disrupt home life due to students arriving home at 5:30 or 6:00 pm, especially during the late fall and winter months when daylight becomes limited. Also there may be increased heating and cooling costs for the school, especially cooling for districts in high-temperature regions, for starting later in the day means hotter average temperatures during school hours. This issue is tough because the costs could be prohibitive for some districts and meaningless for others. Of course one significant problem is that studies involving the incorporation of later school hours seem to focus only on health and/or possible changes in academic achievement and do not address obstacles to applying later school hours, which is rather ridiculous.

In the end one of the most pressing problems in education is the misrepresentation of the overall goal of education. Some reformers seem to think that the most important role for education is to foster a level of knowledge that allows an individual to gain employment in some particular field. While such a role is important, it is not so important that it should displace other important elements of education like:

1) Produce citizens that can make rational decisions, which will allow them to make positive contributions to society.
2) Produce citizens that can effectively form solutions to both qualitative and quantitative problems.
3) Produce citizens that can use both spoken and written word to effectively communicate their ideas and feelings to other individuals as well as understand and analyze the validity of the ideas and feelings of others.
4) Produce citizens that neither tolerate individuals who attempt to manipulate or deceive society for their own ends, nor tolerate those who practice and/or preach ignorance or idiocy for the sole purpose of satisfying their own personal beliefs and ends.

Overall, blind devotion to test scores and technology will not help achieve these goals, and without the ability to produce these types of individuals society becomes vulnerable to manipulators and opportunists who would produce net harms. It is the responsibility of education to produce a society that is not only productive but also able to protect itself from these unscrupulous individuals; thus it is the responsibility of society to ensure an educational environment that accomplishes these goals. Current reformers are not offering solutions that will produce such an accomplishment, thus something must change.

==
Citations –

1. Suk, H, and Choi, K. “Dynamic lighting system for the learning environment: performance of elementary students.” Optics Express. 2016. 24(10):A907-A916.

2. Eaton, D, et al. “Prevalence of insufficient, borderline, and optimal hours of sleep among high school students – United States, 2007.” Journal of Adolescent Health. 2010. 46(4):399-401.

3. Wahlstrom, K, et al. “Examining the impact of later high school start times on the health and academic performance of high school students: A multi-site study.” Center for Applied Research and Educational Improvement, University of Minnesota. 2014.

4. Au, R, et al. “School start times for adolescents.” Pediatrics. 2014. 134(3):642-649.