Diseases come in many different shapes and sizes, inflicting multiple tiers of damage on the sufferer and on family and friends. Those diseases that significantly and rapidly shorten an individual's life span are obviously among the worst; however, some would argue that non-fatal neurological diseases like Alzheimer's and autism are insidiously devastating, taking away the enjoyment of life in a way that cannot be alleviated by simple painkillers or anti-psychotics. Among detrimental neurological conditions, autism can be argued to be one of the worst: most neurological conditions manifest at advanced ages, at least giving the individual a "normal" life for a considerable period, whereas autism typically arises during the first few years of life, negatively impacting it from the beginning.
The symptoms of autism are heterogeneous and broadly detrimental, including various levels of mental delay, an increased probability of epilepsy, varying degrees of attention deficit, language and learning impairments, obsessive-compulsive behaviors and difficulty interacting with others.1-3 Also like Alzheimer's, the incidence of autism and similar disorders (autism spectrum disorders (ASDs)) among the general population continues to expand, increasing from 4 per 10,000 in the 1960s to 30-60 per 10,000 (depending on the strictness of the definition applied) in the 2000s.4 While the life expectancies of autistic individuals are only slightly reduced, the level of supplemental care these individuals require demands significant resource expenditure.
A number of individuals believe that this expansion in autism is driven by genuinely new cases. To support this position they cite the lack of corresponding significant decreases in similar disorders that would compensate for the increase in autism;5 basically, individuals are not being diagnosed with condition x and then later having that diagnosis re-evaluated as autism. However, while there is little evidence of such substitution, there is also little evidence to contest the position of past under-diagnosis, in which individuals with autism were not properly diagnosed because of the specific diagnostic criteria used at the time or because of the limited experience or knowledge of the person performing the diagnosis. Under this view, better therapeutic and clinical diagnostic experience and ability mean that this previously undiagnosed segment of the population is now being properly identified, so there is no genuine increase. While this reasoning is logical, the rate of the increase seems too great to be accounted for by under-diagnosis alone. The rate of increase also seems too large to be explained by genetic mutations alone; thus it stands to reason that if the increase is legitimate then environmental causes are also influencing the change.
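To put the prevalence figures cited above in rough quantitative terms, the short sketch below (a back-of-the-envelope calculation, not a result from the cited studies) computes the fold increase they imply; the exact magnitude depends on which modern estimate is used.

```python
# Rough fold-change implied by the prevalence figures quoted in the text (per 10,000).
# These are the numbers cited above, not new data.
prevalence_1960s = 4.0
prevalence_2000s_low, prevalence_2000s_high = 30.0, 60.0

fold_low = prevalence_2000s_low / prevalence_1960s    # 7.5x
fold_high = prevalence_2000s_high / prevalence_1960s  # 15.0x
print(f"Implied increase: {fold_low:.1f}x to {fold_high:.1f}x")
```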
The cause(s) of autism, genetic or environmental, is currently unknown, notwithstanding an incredible display of stupidity from certain individuals foolishly attempting to link vaccination to an increased probability of developing autism despite zero valid scientific evidence supporting such a position. Incidentally, these people must also believe that little green men dance the samba on Mars every day at 4:45 EST, because there is an equal amount of empirical evidence supporting either position. Currently no specific biomarkers have been identified that reliably diagnose autism,6-8 although a number of biomarkers have been found to differ in concentration between autistic and non-autistic individuals. However, these differences are of limited diagnostic value because they are not specific to autism versus other metabolic and neurological conditions.
Most modern study of autism has focused on searching for a genetic cause, and some progress has been made; the prevailing belief is that a majority of autism cases result from probabilistic interactions among multiple common gene variants, each contributing some small influence on the overall condition.9,10 Another 10-20% of autism cases are believed to arise through known genetic effects, including Synapsin 1, SynGAP1, SHANK3, NLGN4, CNTNAP2, retinoic acid-related orphan receptor-alpha (RORA), MET and cytochrome P450 genes, some of which may have significant heritability rates.11-20
However, with the hundreds of genes implicated in autism, focusing on genetic research and any potential genetically engineered treatment appears to be an inefficient strategy. Instead it seems reasonable that more attention should be paid, by both researchers and the media (especially the media), to the mechanisms associated with the development and maintenance of ASD, for addressing the mechanism provides a higher probability of a general treatment than genetic strategies do. Two common biological symptoms of autism are an increase in brain size and weight and an increased probability of suffering epileptic seizures. One mechanism that could explain these symptoms is a deficiency in the synaptic elimination process during development (a.k.a. synaptic pruning).
Soon after birth an overabundance of synapses is formed as neurons produce multiple synapses between a single post-synaptic and pre-synaptic neuron. During development important synapses, those that fire more frequently and/or have greater firing duration, are strengthened while neighboring synapses are weakened and non-selectively eliminated until there is a one-to-one relationship between axon and cell body.21,22 This process occurs because, unlike mature neuron-axon relationships where electrical impulses either facilitate an action potential or do nothing, immature neuron-axon relationships, especially in Purkinje cells (PCs), operate on an intensity mechanism.23,24 This intensity mechanism exists because there is initially no molecular method for the brain to identify which synapses are important and which are not, thus importance is determined by firing duration and rate. Most notably this firing is driven by visual stimuli, especially before a recognized conscious understanding develops in the individual.
In abnormal pruning conditions the process does not create the typical one-to-one ratio; instead it creates a neurological environment where numerous synapses from a single pre-synaptic neuron feed into a single post-synaptic neuron. These additional synapses create additional brain volume, offering an explanation for the autistic symptomology in which very young children with autism (ranging from 18 months to 4 years) have 5-10% more total brain volume.25-27 The additional weight can be explained by additional myelination, because the myelination process is based on proximity; all axons in a given proximity will express the necessary oligodendritic targets to drive myelination. MRI studies have demonstrated growth abnormalities in both gray and white matter, supporting the belief that myelination accounts for the weight variances.28,29
A lack of effective pruning could also explain the typical autistic pattern of early overgrowth followed by abnormally slowed growth.26,30 The excess synapses create the overgrowth, but because of conflicting excitatory and inhibitory signaling arising from unnecessary action potentials, incomplete synchronization, incomplete action potentials and/or disrupted neuronal strengthening processes like long-term potentiation, autistic individuals face disadvantages hindering learning and other neuronal expansion. Disruption of these processes results in a developmental regression in 20-40% of autistic children between 15 and 24 months of age,31 with higher probabilities corresponding to higher autism severity.32
An interesting and ironic side effect of a lack of pruning in the development of autism could be that, over time, if signal interaction is continuously disrupted by conflict between the competing synapses, then all of the synapses can be negatively affected to the point where the neuron itself dies from inactivity. This result may explain some of the findings in post-mortem autistic brains, which are rife with stunted neurons, especially PCs.33,34
Overall there are four major chronological phases in the creation of a normal functional synapse: 1) functional differentiation among multiple climbing fiber inputs; 2) evolution of a superior synapse resulting in dendritic translocation; 3) early phase synapse elimination; 4) late phase synapse elimination.24 For the purposes of autism the third and fourth phases appear to be the significant ones. Between the two elimination phases, the late phase appears to be more important because of its parallel fiber (PF)-PC synapse requirements and its longer period of operation (P9-P17 versus P7-P8 for the early phase in mice), with a presumably correspondingly longer period in humans.21
In general, early elimination removes synapses that do not form properly and late elimination removes synapses that do not "win" the synapse augmentation competition. Therefore, there is a higher probability that a faulty late elimination process will create a more detrimental outcome than a faulty early elimination process, because the late process has a higher probability of eliminating functional synapses or of failing to eliminate competing synapses. However, while early elimination generally occurs prior to late elimination, the exact timing is not homogeneous throughout the brain (different areas go through the elimination process at different times during development).35 Pruning is thought to occur first in primary sensory and motor areas, followed by temporal and parietal areas, and to end with the frontal cortex.35,36 Interestingly, if this pruning order is accurate then sensory and motor abnormalities, rather than emotional or social-interaction abnormalities, should be the first autistic indicators.
A key molecule in late stage synaptic elimination is glutamate. The principal reason for its importance is that glutamate drives neuronal excitation probabilities and magnitudes in both immature and mature neuronal networks. Given the importance of glutamate, the late stage pruning process could find its roots late in stage one, during functional differentiation, when climbing fibers synthesize type-2 vesicular glutamate transporters during the formation of their initial synaptic boutons.24 These transporters are important because they play a key role in determining the amount of glutamate released into the synaptic cleft after electrical stimulation of the given neuron. Synapses that express more vesicular glutamate transporters have an advantage in the synapse competition that determines which synapses are eliminated in stage four.
The strength of a particular synapse is largely dictated by the transient rise of glutamate within the cleft at any point in time. Clearly the availability of more synaptic vesicles will increase glutamate concentration and indicate a stronger connection, but that is only one element. Another important element is expression of metabotropic glutamate receptor 1 (mGluR1). As a metabotropic receptor, mGluR1, when bound, activates an intracellular G-protein to initiate a signal cascade. mGluR1 is a class C G-protein-coupled receptor, as opposed to the more common class A, and its activation typically results in the activation of calcium channels and protein kinase C (PKC), producing a single calcium concentration spike.37 Additional, less common downstream effectors include phospholipase D, casein kinase 1, cyclin-dependent protein kinase 5, Jun kinase, the mitogen-activated protein kinase/extracellular receptor kinase (MAPK/ERK) pathway and the mammalian target of rapamycin (mTOR)/p70 S6 kinase pathway, which influence, among other things, synaptic plasticity.38-41
mGluRs have a ligand binding site on their N-terminal domain, which is formed by two hinged globular domains and is commonly known as the Venus Flytrap Domain because of its structure and action.42 These Venus Flytrap domains also have allosteric binding sites for agonist and antagonist molecules. When glutamate binds, the hinged domains close, initiating activation of the associated G-protein. There is some question as to whether glutamate binding to a single site will lead to activation (no activation versus negative cooperativity),43 but agonist binding to one protomer appears to induce both cis- and trans-activation.44 This cis- and trans-activation may provide an interesting association with autism development.
mGluR1 is found on both post-synaptic and pre-synaptic membranes in the cerebellum, thalamus and hippocampus.42 Pre-synaptically located mGluR1 could drive feedback on the pre-synaptic cell resulting in a longer duration of glutamate release; in such a situation greater expression of pre-synaptic mGluR1 will increase the probability of strengthening that particular synapse. Other feedback activity can also be mediated by depolarization of the post-synaptic cell and the release of endocannabinoids.45-47 Post-synaptic receptor localization is highly specific, regionalized around the post-synaptic density but not within it, and its activation induces post-synaptic depolarization.42 This localization is thought to be achieved through interaction with Homer family regulatory proteins.48
Deficiencies in mGluR1 operation result in cerebellar gait problems, deficits in long-term depression and long-term potentiation, and abnormal regression of climbing fibers from cerebellar Purkinje cells,49,50 basically reducing the probability that any fiber creates a strong one-to-one connection to an individual PC. In addition, certain mGluR1 mutations generate conditions that facilitate spontaneous ataxia through changes in the ligand-binding domain.51 Other important elements in pruning are GluRdelta2 and Cbln1, which bind to each other and are essential for the formation, organization and maintenance of PF-PC synapses52,53 and for the proper execution of late phase synapse elimination, as well as PLCbeta4 and PKCgamma, which are part of the mGluR1 signaling cascade.24,54
Once the "winning" synapse has been identified, the climbing fibers that make up that synapse finish translocating to the dendrites after early stage elimination, while the "losing" synapses remain around the soma. This proximity association raises two questions: 1) does the "winning" synapse produce some molecule that drives translocation toward the dendrites, or do the "losing" synapses produce some molecule that inhibits translocation; 2) are somas "cluster pruned," or does each "losing" synapse produce a signal of sorts that targets it for elimination?
For the first question, studies of the neuromuscular junction produced the "punishment model" of synapse competition, in which the strong synapse, created through the process discussed above, "punishes" other synapses by inducing two post-synaptic signals, one that protects itself and one that increases the probability of pruning for other synapses in the area.55,56 However, while this model works at the neuromuscular junction there is no direct evidence of its accuracy for the central nervous system, although it has been suggested that complement proteins act as the protectors and punishers and induce microglia-based pruning.57,58
Recently microglia have become a leading candidate in the pruning process, including pruning during development. The chief evidence supporting this candidacy is electron microscopy imaging demonstrating microglia in the general proximity of retinal ganglion cell (RGC) pre-synaptic inputs in the developing dorsal lateral geniculate nucleus (dLGN) during the period when pruning is most likely to occur.59,60 Further support comes from experiments in which disruption of microglia during early development leads to incomplete and abnormal synaptic pruning.58 Based on this information, one theory of synaptic pruning involves microglial CR3 receptors binding to C3 complement proteins produced in the pre-synaptic compartments because of a lack of glutamate release and appropriate feedback, which leads to synaptic engulfment.
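A minimal rule-based sketch of this hypothesized sequence is shown below. The activity threshold, the boolean treatment of C3 tagging and CR3-mediated engulfment, and the example activity values are all illustrative assumptions for the sake of the argument, not parameters taken from the cited studies.

```python
# Toy sketch of the hypothesized complement-based pruning rule:
# weak pre-synaptic inputs (low glutamate release / feedback) get tagged with C3,
# microglial CR3 binds the tag, and the synapse is engulfed.
# The threshold and values below are illustrative assumptions, not measured quantities.

ACTIVITY_THRESHOLD = 0.4  # hypothetical "enough glutamate release" cutoff (arbitrary units)

def tag_with_c3(synapse_activity: float) -> bool:
    """A synapse releasing too little glutamate gets a C3 tag."""
    return synapse_activity < ACTIVITY_THRESHOLD

def microglial_pruning(synapses: dict[str, float]) -> list[str]:
    """Return the synapses a CR3-bearing microglial cell would engulf under this rule."""
    return [name for name, activity in synapses.items() if tag_with_c3(activity)]

# Example: one strong input and three weaker ones onto the same post-synaptic cell.
inputs = {"syn_A": 0.9, "syn_B": 0.3, "syn_C": 0.25, "syn_D": 0.1}
print(microglial_pruning(inputs))  # ['syn_B', 'syn_C', 'syn_D']
```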
However, this potential role for microglia in developmental pruning raises an immediate question. Suppose the pruning system fails and autism develops as a result. If microglia were involved in that failed process, how could microglia still be available to eliminate damaged neuronal connections later in life? The general life span of individuals with autism, absent another detrimental condition like epilepsy, implies that their brains are able to neutralize the detrimental effects of damaged neurons; otherwise one would expect their life spans to be significantly shorter. How is this neutralization accomplished if the pruning process is damaged? The most plausible explanation is that a different mechanism governs pruning during development versus during normal, non-developmental operation. If so, specific experiments would need to be designed to distinguish the developmental and mature pruning mechanisms.
Epilepsy frequency in autistic individuals is much higher than in non-autistic individuals, with occurrence rates ranging from 5% to 39-44% versus around 0.63%.61,62 The driving feature of epilepsy is excessive and abnormally consistent excitation of various neurons leading to abnormal synchronization, resulting in spontaneous seizures and movements and potential brain damage. This additional excitation is thought to occur through one (or both) of two principal pathways: 1) excess excitatory neurotransmitter, usually glutamate, in synaptic clefts, which increases synaptic residence times and thus the probability of post-synaptic depolarization; 2) a disrupted inhibitory pathway, usually due to a lack of GABA release, which also increases the probability of post-synaptic depolarization. Autistic individuals are thought to have an increased probability of developing epilepsy because of the loss of GABAergic interneurons in cortical mini-columns and other areas.63,64 The general thought is that because these mini-columns are less compact but greater in number, there is more extensive innervation driving increased activation but diminished lateral inhibition.64
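The two excitation pathways described above can be caricatured with a toy firing-probability model: excess cleft glutamate raises the excitatory drive, while lost GABAergic inhibition lowers the inhibitory drive, and either change pushes the post-synaptic cell toward depolarization. The sigmoid form and every parameter value below are illustrative assumptions, not a fitted model of any real network.

```python
import math

def firing_probability(excitation: float, inhibition: float, gain: float = 4.0) -> float:
    """Toy sigmoid model: probability of post-synaptic depolarization
    rises with net drive (excitation - inhibition). Purely illustrative."""
    return 1.0 / (1.0 + math.exp(-gain * (excitation - inhibition)))

baseline = firing_probability(excitation=0.5, inhibition=0.5)          # balanced network
excess_glutamate = firing_probability(excitation=0.9, inhibition=0.5)  # pathway 1: more cleft glutamate
lost_gaba = firing_probability(excitation=0.5, inhibition=0.1)         # pathway 2: weakened GABAergic inhibition

print(f"balanced: {baseline:.2f}, excess glutamate: {excess_glutamate:.2f}, lost GABA: {lost_gaba:.2f}")
```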
It is thought that the relationship between autism and epilepsy is one in which the emergence of either increases the probability of developing the other. There is evidence that autism develops prior to the clinical seizures that characterize epilepsy, and there is evidence for the development of autism in individuals with more severe cases of epilepsy due to excessive neuronal damage.61,65 The development of epilepsy in autistic individuals could explain the level of neuronal death seen in some post-mortem autistic brains. Finally, there appear to be two peak periods for the development of epilepsy in autistic individuals: early childhood and early adolescence.3 Not surprisingly, younger preadolescent autistic children have an epilepsy probability of slightly under 10%,66-68 due to limited compounded neuronal damage, whereas adolescents and adults have an epilepsy probability of slightly over 39%.69,70 This result makes sense because, as discussed above, neuronal death and under-/over-excited neuronal networks take time to develop a detrimental influence outside of severe cases of autism.
One of the interesting elements of autism is that the disorder shows a bias toward males by a ratio of around four to one.71 Numerous studies have supported this ratio by associating increased concentrations of testosterone with autism, but there is no clear understanding of any molecular mechanism by which testosterone augments autism.71-74 A recent study identified retinoic acid-related orphan receptor-alpha (RORA) as a possible candidate gene influencing autism development.16 RORA regulates aromatase, an enzyme that converts testosterone to estrogen; thus a lack of aromatase will result in an increased concentration of testosterone.
There may be at least two different modes of testosterone action within the pruning mechanism that could explain its role in increasing the probability of autism. First, testosterone may act directly to promote synaptic pruning, creating a situation where, instead of multiple synapses surviving through a "masking" process, no strong synapse is created between certain pre-synaptic and post-synaptic cells.75 This action could be driven by testosterone augmenting the expression of complement protein C3, which may be the signal marker that induces microglia-based synapse elimination. This type of process would seem to compete with mGluR1 processes; thus if a given synapse were producing enough glutamate release and mGluR1 interaction it could ward off the effect of testosterone.
Successful application of such a response could create gaps in neuronal signaling, disrupting synchronization and creating the under-connectivity environment associated with autism. Why such an outcome is not fatal could be explained by the existence of two different pruning systems, as mentioned earlier: one that operates during development and one that operates after development. Over time, thanks to brain plasticity, the pre-synaptic cell may be able to partially extend an axon, creating a smaller synaptic cleft and thus increasing the probability of synchronization. Whether more potent hormones like 5-alpha-dihydrotestosterone (DHT) have a similar influence is unknown.
Second, changes in the concentration of estradiol, which is produced from cholesterol and as an active metabolic product of testosterone, may act as a neuroprotectant through two different mechanisms. The first mechanism involves enhancing RORA expression, which in turn reduces testosterone concentration, limiting testosterone's possible augmentation of synapse elimination or any other mechanism by which testosterone seems to enhance autism.16 The second mechanism involves estradiol down-regulation of mGluR1 and mGluR5.76 Down-regulation of mGluR1 could reduce the probability of abnormal mGluR1 enhancement through environmental or mutation-derived agonists, which would otherwise increase the probability of multiple synapses through masking, while also reducing the probability of excessive competition from testosterone. However, it should be acknowledged that too much estradiol may push mGluR1 down-regulation to the point where it becomes more detrimental than beneficial.
One might ask how these testosterone influences could occur without genetic mutations. Recall that even without mutations, autism, and most detrimental neurological conditions for that matter, does not develop the same way in all individuals. There is an element of random local variation in which testosterone concentrations may shift minutely to a point that exerts an effect that would not occur at a slightly smaller concentration. For example, even without mutations, testosterone has a negative feedback effect on RORA, which under certain conditions can create a small cascade that greatly enhances testosterone concentrations. Basically there is an element of randomness as well as other direct environmental elements.
While a defective pruning mechanism may not explain all autism cases (obviously some cases are dictated by genetic mutations that act outside the pruning mechanism), it appears to be a plausible explanation for most cases. One of the major lingering questions regarding the failure of the pruning mechanism is whether it prunes too many synapses or not enough. While one of the hallmarks of autism appears to be functional underconnectivity that leads to neuronal hypo-synchronization and a general lack of neuronal firing/excitation, there is also a level of overconnectivity.77-80 For example, autistic individuals exhibit underconnectivity when processing dynamic events like facial movements and auditory elements, but exhibit overconnectivity when processing static images.81
This general neuronal trend in autism has been expanded through fMRI work demonstrating reduced long-range functional connectivity, which would affect synchronization, relative to short-range connectivity.82-84 This reduced long-range connectivity could center on an imbalance between short-range excitatory and long-range inhibitory connections, involve the death of PCs,85 and negatively influence the interaction between cortical and limbic systems, producing extreme negative emotional responses to change.86 So how does this over-connectivity/under-connectivity contrast develop from the pruning process?
The overall mechanism of pruning can be viewed in the following manner. Numerous synapses form, but none are dominant. The multiple synapses can be thought of as being in competition with each other, so when neuronal input arrives from the cell body and dendrites, the total potential of that input is divided among the synapses, with the weighting of that division initially unknown. Suppose, for the purpose of this example, there are five synapses receiving inputs of 15%, 15%, 20%, 20% and 30%. Over time, due to a higher weighted percentage, more glutamate vesicles, greater feedback from the post-synaptic response, etc., one synapse starts to exert dominance, receiving more of the initial excitation while the weaker synapses receive less. Returning to the breakdown above, after a short period the inputs change to something like 7%, 8%, 14%, 17% and 54%. Eventually the weaker synapses are eliminated and the single dominant synapse receives 100% of the dendritic input.
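The worked example above can be written as a short simulation. The rich-get-richer update rule and its gain constant below are assumptions chosen only to reproduce the qualitative behavior described (the shares drift toward a single dominant synapse that is eventually the only survivor); they are not derived from the cited literature.

```python
# Toy winner-take-all competition among synapses sharing one pre-synaptic input.
# Each synapse's share of the input grows in proportion to its current share
# (a stand-in for more vesicles, stronger post-synaptic feedback, etc.).
def compete(shares: list[float], steps: int = 50, gain: float = 1.2) -> list[float]:
    for _ in range(steps):
        boosted = [s ** gain for s in shares]   # stronger synapses gain disproportionately
        total = sum(boosted)
        shares = [b / total for b in boosted]   # renormalize so the shares sum to 1
    return shares

initial = [0.15, 0.15, 0.20, 0.20, 0.30]        # the five synapses from the example
final = compete(initial)
print([round(s, 3) for s in final])             # one share approaches 1.0; the rest approach 0
```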
In scenarios where pruning is not functioning properly, one of two situations can arise: 1) pruning fails to eliminate all but one of the existing synapses, leaving more than one synapse feeding into the post-synaptic dendrites; 2) pruning removes all synapses, leaving no synapse feeding into the post-synaptic dendrites. It is reasonable to assume that the first situation is more frequent than the second, because in the second the post-synaptic neuron would have a higher probability of future apoptosis and potential brain damage. The first mode of failure (multiple synapses surviving) can be further broken down into two sub-failures: 1) a breakdown in the pruning system itself during development, where a dominant synapse is created but the weaker synapses are not pruned because of a deficiency in the pruning pathway; 2) weakness masking, where weaker synapses are masked and interpreted by the pruning pathway as strong and unworthy of pruning.
The first mode of failure is rather self-explanatory: one or more of the agents responsible for pruning are not functioning properly. The second is a little more complicated because of multiple potential targets. The most obvious means is some external environmental agent acting as an agonist for mGluR1. Most likely the system that evaluates the winning synapse does not function on a relative scale (synapse 1 is releasing 70% of the glutamate versus synapse 2 releasing 30%, so synapse 2 should be pruned), but instead on an absolute scale with a competition threshold (synapse 1 is releasing 70 mmols of glutamate versus synapse 2 releasing 30 mmols, so synapse 2 should be pruned because it is not releasing enough "don't prune me" compounds stemming from that glutamate release). Thus mGluR1 agonist binding could increase overall glutamate release from all synapses, "masking" synapse weakness and resulting in less overall pruning. Such an additional agonist from an environmental agent could also help explain the increased rate of autism if the agent is a toxin of some sort.
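The distinction between a relative evaluation and an absolute (threshold) evaluation, and the masking effect of a hypothetical mGluR1 agonist, can be sketched as follows. The threshold value, the release figures (reused from the example above) and the uniform boost applied by the agonist are illustrative assumptions only.

```python
# Two candidate pruning rules, using the glutamate-release figures from the example above
# (arbitrary units; the threshold is a hypothetical "don't prune me" cutoff).
PRUNE_THRESHOLD = 50.0

def prune_relative(releases: list[float]) -> list[int]:
    """Relative rule: prune everything except the single strongest synapse."""
    winner = releases.index(max(releases))
    return [i for i in range(len(releases)) if i != winner]

def prune_absolute(releases: list[float]) -> list[int]:
    """Absolute rule: prune any synapse releasing less than the threshold."""
    return [i for i, r in enumerate(releases) if r < PRUNE_THRESHOLD]

normal = [70.0, 30.0]                  # synapse 1 vs synapse 2 from the text
masked = [r + 30.0 for r in normal]    # hypothetical mGluR1 agonist boosts release everywhere

print(prune_absolute(normal))   # [1] -> the weak synapse is pruned
print(prune_absolute(masked))   # []  -> both clear the threshold, so nothing is pruned ("masking")
print(prune_relative(masked))   # [1] -> a relative rule would still prune the weaker synapse
```

The contrast in the last two lines is the point of the argument: an agonist that uniformly raises release only defeats an absolute, threshold-based rule, which is why the text favors that interpretation of the masking failure.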
Again, it is difficult to reconcile the larger brain seen in most autistic patients with the reduced connectivity that excessive pruning would produce, so assume for the moment that the above dynamic is correct: there are regions of the brain with multiple synapses instead of just one. How does an environment of multiple synapses create a reduced level of connectivity and activation for dynamic visual and/or audio processes? One possible explanation is that these synapses function differently than normal synapses.
In a multiple synapse situation none of the synapses may extend as far as a single dominant synapse would, thus there is a larger than normal synaptic cleft. This scenario could make sense because translocation involves signals from both the pre- and post-synaptic regions, and one could expect a limited amount of signal from the post-synaptic region; if that signal is spread among multiple targets rather than a single target, shorter translocation is likely. This larger synaptic cleft reduces the total amount of neurotransmitter able to bind to receptors on the post-synaptic membrane. In such a scenario, even if the multiple synapses were able to provide more total neurotransmitter due to similar firing patterns and rates, the additional space the neurotransmitter has to cover should limit the total binding potential. For example, if one were to standardize the one-to-one situation at a binding potential of 1, a larger synaptic cleft in a multiple synapse scenario would have a binding potential less than 1. This lower binding potential would reduce the activation probability of the post-synaptic cell, resulting in slower and/or less neuronal synchronization, which is required for optimal processing of dynamic stimulation where multiple isolated firings of single neurons are of limited use.
Two possible synaptic explanations for the over-excitation that occurs when an autistic individual experiences a static stimulus could involve a lack of axonal evolution and a lack of feedback. First, as previously mentioned, developing synapses function on a graduated release system rather than the all-or-nothing system that later develops in more mature neuronal networks. Perhaps this evolution does not occur in the presence of multiple synapses and the system maintains the graduated release response. With this response it would be easier for the system to induce multiple firings during periods of intent focus (static stimulus), thus releasing more neurotransmitter than normal synaptic areas and creating a state of over-excitation or over-synchronization. Second, non-pruned synaptic areas could have abnormal feedback systems, which could result in multiple firings and excess neurotransmitter in the cleft. Taking the binding potential example from above, instead of being less than 1, in the situation of static stimuli autistic individuals would have a binding potential at these multiple synapses of greater than 1.
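The binding-potential bookkeeping used in the last two paragraphs can be made explicit with a toy formula: normalize the normal one-to-one synapse to 1, scale downward with a wider cleft (dynamic stimuli, under-activation) and upward with repeated graduated firings (static stimuli, over-activation). The functional form and all numbers below are illustrative assumptions, not measurements.

```python
# Toy "binding potential" relative to a normal one-to-one synapse (defined as 1.0).
def binding_potential(release: float, cleft_factor: float, firings: int = 1) -> float:
    """release: total neurotransmitter relative to a normal synapse (1.0 = normal).
    cleft_factor: >1 means a wider-than-normal cleft, which dilutes binding.
    firings: repeated graduated releases during sustained (static) focus."""
    return (release * firings) / cleft_factor

normal = binding_potential(release=1.0, cleft_factor=1.0)                   # 1.0 by construction
dynamic_case = binding_potential(release=1.2, cleft_factor=1.8)             # multiple weak synapses, wide cleft -> < 1
static_case = binding_potential(release=1.2, cleft_factor=1.8, firings=3)   # repeated firings under static focus -> > 1

print(f"normal: {normal:.2f}, dynamic: {dynamic_case:.2f}, static: {static_case:.2f}")
```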
As mentioned above, there is a question as to how the prevalence of autism is increasing at the current rate. It stands to reason that natural genetic mutations are not the cause of this increase, given the mutation rate that would be required. Therefore, it makes sense to assume that some environmental catalyst is increasing autism prevalence rates. Assuming for a moment that these environmental factors exist, how do they influence autism rates? There appear to be two valid rationales: 1) these factors influence the rate of genetic mutation in one or more key genetic elements of neuronal development; 2) these factors influence the rate of activity, either positively or negatively, of neuronal development processes. For example, one plausible rationale could be different factors acting as agonists or antagonists on mGluR1, which could influence pruning rates, resulting in the increase in autism rates.
From the standpoint of treatment, if abnormal pruning is the chief problem then addressing the pruning would be a little tricky because of the multiple elements involved in the pruning mechanism. The principal target would be mGluR1, and based on the initial assumptions above one would have to neutralize multiple synapse masking, thus one would pharmaceutically introduce an mGluR1 antagonist. However, the appropriate dosage and the effect this antagonist would have on other parts of the brain are unclear. Autism diagnosis could perhaps be improved by early observation of visual and motor function rather than emotional and communication abnormalities, based on the assumed order of abnormal pruning alterations. Overall, while the pruning theory of autism has been proposed at points in the past,87 it seems to have been neglected in the midst of more gene-focused research, which is unfortunate and more than likely detrimental to the future treatment of autism.
Citations –
1. Gepner, B, and Tardif, C. “Autism, movement, time and thought. E-motion mis-sight and
other temporo-spatial processing disorders in autism.” in: M. Vanchevsky (ed). Frontiers
in Cognitive Psychology. New York: Nova Science Publishers. 2006. 1-30.
2. Volkmar, R and Pauls, D. “Autism”. Lancet. 2003. 362:1133-1141.
3. Lecavalier, L. “Behavioral and emotional problems in young people with pervasive
developmental disorders: relative prevalence, effects of subject characteristics, and
empirical classification.” J Autism Dev Disord. 2006. 36:1101-1114.
4. Rutter, M. “Incidence of autism spectrum disorders: changes over time and their
meaning.” Acta Paediatr. 2005. 94:2-15.
5. Sullivan, M. “Autism increase not a result of reclassification.” Clin. Psychiat. News. 2005. May:68.
6. Posey, D, et al. "Antipsychotics in the treatment of autism." J. Clin. Invest. 2008. 118:6–14.
7. Ecker, C, et al. "Describing the brain in autism in five dimensions–magnetic resonance imaging-assisted diagnosis of autism spectrum disorder using a multiparameter classification approach." J. Neurosci. 2010. 30:10612–10623.
8. Gepner, B, and Féron, F. “Autism: a world changing too fast for a mis-wired brain?”
Neuroscience and Biobehavioral Reviews. 2009. 33:1227-1242.
9. Abrahams, B, and Geschwind, D. “Advances in autism genetics: on the threshold of a
new neurobiology.” Nat Rev Genet. 2008. 9:341-355.
10. Persico, A, and Bourgeron, T. “Searching for ways out of the autism maze: genetic,
epigenetic and environmental cues.” Trends Neurosci. 2006. 29:349-358.
11. Fassio, A, et al. "SYN1 loss-of-function mutations in autism and partial epilepsy cause impaired synaptic function." Human Molecular Genetics. 2011. 20:2297–2307.
12. Hamdan, F, et al. "De novo SYNGAP1 mutations in nonsyndromic intellectual disability and autism." Biological Psychiatry. 2011. 69:898–901.
13. Bozdagi, O, et al. "Haploinsufficiency of the autism-associated Shank3 gene leads to deficits in synaptic function, social interaction, and social communication." Molecular Autism. 2010. 1:15.
14. Durand, C, et al. "Mutations in the gene encoding the synaptic scaffolding protein SHANK3 are associated with autism spectrum disorders." Nature Genetics. 2006. 39:25–27.
15. Laumonnier, F, et al. "X-linked mental retardation and autism are associated with a mutation in the NLGN4 gene, a member of the neuroligin family." American Journal of Human Genetics. 2004. 74:552–557.
16. Sarachana, T, et al. "Sex Hormones in Autism: Androgens and Estrogens Differentially and Reciprocally Regulate RORA, a Novel Candidate Gene for Autism." PLoS ONE. 2011. 6(2):e17116.
17. Campbell, D, et al. "A genetic variant that disrupts MET transcription is associated with autism." PNAS. 2006. 103(45):16834–9.
18. Hu, V, et al. "Gene Expression Profiling of Lymphoblasts from Autistic and Nonaffected Sib Pairs: Altered Pathways in Neuronal Development and Steroid Biosynthesis." PLoS ONE. 2009. 4(6):e5775.
19. Awadalla, P, et al. "Direct measure of the de novo mutation rate in autism and schizophrenia cohorts." American Journal of Human Genetics. 2010. 87:316–324.
20. O’Roak, B, et al. "Exome sequencing in sporadic autism spectrum disorders identifies severe de novo mutations." Nature Genetics. 2011. 43:585–589.
21. Crepel, F. “Maturation of climbing fiber responses in the rat.” Brain Res. 1971. 35:272-276.
22. Mariani, J, and Changeux, J. “Ontogenesis of olivocerebellar relationships. I Studies by intracellular recordings of the multiple innervation of Purkinje cells by climbing fibers in the developing rat cerebellum.” J. Neurosci. 1981. 1:696-702.
23. Crepel, F, Mariani, J, and Delhaye-Bouchaud, N. “Evidence for a multiple innervation of Purkinje cells by climbing fibers in the immature rat cerebellum.” J. Neurobiol. 1976. 7:567-578.
24. Watanabe, M, and Kano, M. “Climbing fiber synapse elimination in cerebellar Purkinje cells.” European Journal of Neuroscience. 2011. 34(10):1697-1710.
25. Hazlett, H, et al. "Magnetic resonance imaging and head circumference study of brain size in autism: birth through age 2 years." Arch. Gen. Psychiatry. 2005. 62:1366–1376.
26. Courchesne, E, et al. "Unusual brain growth patterns in early life in patients with autistic disorder: an MRI study." Neurology. 2001. 57:245–254.
27. Sparks, B, et al. "Brain structural abnormalities in young children with autism spectrum disorder." Neurology. 2002. 59:184–192.
28. Courchesne, E. "Autism at the beginning: microstructural and growth abnormalities underlying the cognitive and behavioral phenotype of autism." Dev Psychopathol. 2005. 17:577-597.
29. Courchesne, E, et al. "Mapping early brain development in autism." Neuron. 2007. 56:399-413.
30. Courchesne, E, Carper, R, and Akshoomoff, N. "Evidence of brain overgrowth in the first year of life in autism." JAMA. 2003. 290:337–344.
31. Richler, J, et al. "Is there a "regressive phenotype" of autism spectrum disorder associated with the measles-mumps-rubella vaccine? A CPEA study." Journal of Autism and Developmental Disorders. 2006. 36:299–316.
32. Baird, G, et al. "Regression, developmental trajectory and association problems in disorders in the autism spectrum: The SNAP study." Journal of Autism and Developmental Disorders. 2008. 38:1827–1836.
33. Bauman, M, and Kemper, T. “Neuroanatomic observations of the brain in autism: a
review and future directions.” Int J Dev Neurosci. 2005. 23:183-187.
34. Whitney, E, et al. "Density of cerebellar basket and stellate cells in autism: Evidence for a late developmental loss of Purkinje cells." J. Neurosci Res. 2009. 87(10):2245–2254.
35. Huttenlocher, P, and Dabholkar, A. “Regional differences in synaptogenesis in human cerebral cortex.” Journal of Comparative Neurology. 1997. 387:167–178.
36. Huttenlocher, P. “Neural plasticity: The effects of environment on the development of the cerebral cortex.” 2002. Cambridge, MA: Harvard University Press.
37. Kawabata, S, et al. "Control of calcium oscillations by phosphorylation of metabotropic glutamate receptors." 1996. 89:92.
38. Li, X, et al. "JNK1 contributes to metabotropic glutamate receptor-dependent long-term depression and short-term synaptic plasticity in the mice area hippocampal CA1." Eur. J. Neurosci. 2007. 25(2):391–96.
39. Bordi, F, and Ugolini, A. “Group I metabotropic glutamate receptors: implications for brain diseases.” Progress in neurobiology. 1999. 59(1):55.
40. Page, G, et al. "Group I metabotropic glutamate receptors activate the p70S6 kinase via both mammalian target of rapamycin (mTOR) and extracellular signal-regulated kinase (ERK 1/2) signaling pathways in rat striatal and hippocampal synaptoneurosomes." Neurochem. Int. 2006. 49(4):413–21.
41. Hou, L, and Klann, E. "Activation of the phosphoinositide 3-kinase-Akt-mammalian target of rapamycin signaling pathway is required for metabotropic glutamate receptor-dependent long-term depression." J. Neurosci. 2004. 24(28):6352–61.
42. Niswender, C, and Conn, J. “Metabotropic Glutamate Receptors: Physiology, Pharmacology, and Disease.” Annu Rev Pharmacol Toxicol. 2010. 50:295–322.
43. Suzuki, Y, et al. "Negative cooperativity of glutamate binding in the dimeric metabotropic glutamate receptor subtype 1." J. Biol. Chem. 2004. 279(34):35526–34.
44. Brock, C, et al. "Activation of a dimeric metabotropic glutamate receptor by intersubunit rearrangement." J. Biol Chem. 2007. 282:33000–33008.
45. Anwyl, R. “Metabotropic glutamate receptors: electrophysiological properties and role in plasticity.” Brain Res. Rev. 1999. 29(1):83–120.
46. Bellone, C, Luscher, C, and Mameli, M. “Mechanisms of synaptic depression triggered by metabotropic glutamate receptors.” Cell. Mol. Life Sci. 2008. 65(18):2913–23.
47. Pinheiro, P, and Mulle, C. “Presynaptic glutamate receptors: physiological functions and mechanisms of action.” Nat. Rev. Neurosci. 2008. 9(6):423–36.
48. Ronesi, J, and Huber, K. “Homer interactions are necessary for metabotropic glutamate receptor induced long-term depression and translational activation.” J. Neurosci. 2008. 28(2):543–47.
49. Aiba, A, et al. "Deficient cerebellar long-term depression and impaired motor learning in mGluR1 mutant mice." Cell. 1994. 79(2):377–88.
50. Levenes, C, et al. "Incomplete regression of multiple climbing fibre innervation of cerebellar Purkinje cells in mGluR1 mutant mice." Neuroreport. 1997. 8(2):571–74.
51. Sachs, A, et al. "The mouse mutants recoil wobbler and nmf373 represent a series of Grm1 mutations." Mamm. Genome. 2007. 18(11):749–56.
52. Matsuda, K, et al. "Cbln1 is a ligand for an orphan glutamate receptor delta2, a bidirectional synapse organizer." Science. 2010. 328:363-368.
53. Takeuchi, T, et al. "Control of synaptic connection by glutamate receptor delta2 in the adult cerebellum." J. Neurosci. 2005. 25:2146-2156.
54. Nakamura, M, et al. "Signaling complex formation of phospholipase Cbeta4 with mGluR1alpha and IP3R1 at the perisynapse and endoplasmic reticulum in the mouse brain." Eur. J. Neurosci. 2004. 20:2929-2944.
55. Balice-Gordon, R, and Lichtman, J. “Long-term synapse loss induced by focal blockade of postsynaptic receptors.” Nature. 1994. 372:519–24.
56. Jennings, C. “Developmental neurobiology. Death of a synapse.” Nature. 1994. 372:498–99.
57. Huberman, A, Feller, M, and Chapman, B. “Mechanisms underlying development of visual maps and receptive fields.” Annu. Rev. Neurosci. 2008. 31:479–509
58. Stephan, A, Barres, B, and Stevens, B. “The Complement System: An Unexpected Role in Synaptic Pruning During Development and Disease.” Annu. Rev. Neurosci. 2012. 35:369–89.
59. Hughes, V. “The Constant Gardener.” Nature. 2012. 485(31): 570-572.
60. Schafer, D, et al. "Microglia sculpt postnatal neural circuits in an activity and complement-dependent manner." Neuron. 2012. 74(4):691-705.
61. Tuchman, R, and Rapin, I. “Epilepsy in autism.” Lancet Neurol. 2002. 1:352–358
62. Moshe, S, and Tuchman, R. “Convulsing toward the pathophysiology of autism.” Brain Dev. 2009. 31(2):95–103.
63. Casanova, M, et al. "Minicolumnar pathology in autism." Neurology. 2002. 58:428–432.
64. Casanova, M, Buxhoeveden, D, and Brown, C. “Clinical and macroscopic correlates of minicolumnar pathology in autism.” J. Child Neurol. 2002. 17:692–695.
65. Giedd, J, et al. "Brain development during childhood and adolescence: a longitudinal MRI study." Nat. Neurosci. 1999. 2:861–863.
66. Voigt, R, et al. "Laboratory evaluation of children with autistic spectrum disorders: a guide for primary care pediatricians." Clin Pediatr (Phila). 2000. 39:669–671.
67. Fattal-Valevski, A, et al. "Characterization and comparison of autistic subgroups: 10 years’ experience with autistic children." Dev Med Child Neurol. 1999. 41:21–25.
68. Hoshino, Y, et al. "Clinical features of autistic children with setback course in their infancy." Jpn J Psychiatry Neurol. 1987. 41:237–245.
69. Giovanardi-Rossi, P, Posar, A, and Parmeggiani, A. “Epilepsy in adolescents and young adults with autistic disorder.” Brain Dev. 2000. 22:102–106.
70. Kawasaki, Y, et al. "Brief report: electroencephalographic paroxysmal activities in the frontal area emerged in middle childhood and during adolescence in a follow-up study of autism." J. Autism Dev Disord. 1997. 27:605–620.
71. Baron-Cohen, S, Knickmeyer, R, and Belmonte, M. “Sex differences in the brain: Implications for explaining autism.” Science. 2005. 310(5749):819–23.
72. Auyeung, B, et al. "Foetal testosterone and autistic traits in 18 to 24-month-old children." Molecular Autism. 2010. 1(1):11.
73. Chura, L, et al. "Organizational effects of fetal testosterone on human corpus callosum size and asymmetry." Psychoneuroendocrinology. 2010. 35(1):122–132.
74. Just, M, et al. "Functional and anatomical cortical underconnectivity in autism: Evidence from an FMRI study of an executive function task and corpus callosum morphometry." Cereb. Cortex. 2007. 17(4):951–61.
75. Zehr, J, et al. "Adolescent Development of Neuron Structure in Dentate Gyrus Granule Cells of Male Syrian Hamsters." Develop Neurobiol. 2008. 68:1517–1526.
76. Hilton, G, et al. "Glutamate-mediated excitotoxicity in neonatal hippocampal neurons is mediated by mGluR-induced release of Ca++ from intracellular stores and is prevented by estradiol." Eur J Neurosci. 2006. 24:3008–16.
77. Nishitani, N, Avikainen, S, and Hari, R. “Abnormal imitation-related cortical activation sequences in Asperger’s syndrome.” 2004. Ann Neurol. 55:558-562.
78. Wickelgren, I. “Autistic brains out of synch?” Science. 2005. 308:1856-1858.
79. Geschwind, D, and Levitt, P. “Autism spectrum disorders: developmental disconnection syndromes.” Curr Opin Neurobiol. 2007. 17:103-111.
80. Minshew, N, and Williams, D. “The new neurobiology of autism: cortex, connectivity and neuronal organization.” Arch Neurol. 2007. 64:945-950.
81. Bertone, A, et al. "Motion perception in autism: a ‘complex’ issue." J. Cogn Neurosci. 2003. 15:218-225.
82. Belmonte, M, et al. "Autism and abnormal development of brain connectivity." J. Neurosci. 2004. 24:9228-9231.
83. Williams, E, and Casanova, M. “Autism and dyslexia: A spectrum of cognitive styles as defined by minicolumnar morphometry.” Medical Hypotheses. 2010. 74:59–62.
84. Koshino, H, et al. "fMRI investigation of working memory for faces in autism: Visual coding and underconnectivity with frontal areas." Cerebral Cortex. 2008. 18:289–300.
85. Gustafsson, L. “Inadequate cortical feature maps: A neural circuit theory of autism.” Biological Psychiatry. 1997. 42:1138–1147.
86. Grossberg, S, and Seidman, D. “Neural dynamics of autistic behaviors: Cognitive, emotional, and timing substrates.” Psychological Review. 2006. 113:483–525.
87. Frith, C. “What do imaging studies tell us about the neural basis of autism?” In G. Bock & J. Goode (Eds.), Autism: Neural basis and treatment possibilities: Novartis Foundation Symposium. 2003. 251:149–176. Chichester, England: Wiley.
Returning to the Old Days of the News Media
An unfortunate evolution in the modern news media has been the shift from providing truthful information with little to no bias to providing whatever information the public is willing to watch, regardless of quality, in an effort to acquire the highest ratings and attract the most advertising dollars. The ease with which the Internet distributes information has catalyzed this devolution by increasing the number of competitors. Unfortunately, the zest to compete with the Internet and other less reputable news sources, along with the glut of new opinions and information, has created an environment of niche radicalization leading to segregated demographics, greater polarization of the population and the hastened demise of quality in these very news organizations.
One missing element that could staunch the speed of this characteristic conversion of news reporting is the existence of popular, authoritative and trustworthy anchors. In the past, major news organizations had anchors like Walter Cronkite, Edward Murrow and Dan Rather, whom the viewing public respected and from whom they received quality, professional performances. There was still bias at some levels of the news media in the past, but the existence of these strong and professional anchor personalities was able to mitigate significant elements of that bias. Reintroducing high quality distributors of information at a general level, over the sea of specificity that currently exists, should go a long way toward addressing the radicalization of the public through the elimination of ideas that cannot be substantiated in reality. Basically there are not enough effective mediators of existing information, and most of the public appears too invested or too lazy to conduct the evaluation themselves. Restoring these mediators should be an important step for news organizations in revitalizing their decaying role in society.
General Concept Idea:
To find prospective new young anchor talent in order to hasten the restoration of credibility and a movement away from the polarized sniping that currently exists on television and the Internet. The search will be conducted through a television show. The show will have a reality feel in part because it will document and demonstrate what actually goes into producing news broadcasts and training anchors on a legitimate level, not in a Hollywood-derived way, which could help people understand the work that goes into producing the news and lead the public to a new respect for the news media itself.
Network to Air the Show:
All major networks would be appropriate for such a show with the exception of Fox, due to their corruption of news reporting, replacing it with propaganda that is obviously more concerned with presenting a specific ideology than with an accurate presentation of facts.
Advancement/Judging:
Judgment will be split into two different formats – 1. A 3-judge panel (first-choice selections – David Brooks/Shepard Smith, Dan Rather and Keith Olbermann) will critique performance; 2. A randomly selected panel of 20 individuals will watch performances and rate each candidate on a scale of 1-5, with the two lowest and two highest scores discarded. The candidate with the lowest combined score will be eliminated, with the three judges breaking any ties. The show will be taped, so each "episode" will have a new randomized panel of 20 individuals; the point of the "focus group judgment" is that one of the central premises of the show is to have a news anchor that people want to watch and trust versus simply reading faceless Associated Press stories or opinion columns on the Internet, a characteristic that cannot be fully deduced from the intuition of the judges alone. However, to those who would instead favor an open vote, similar to most reality shows where the viewing public votes on who should stay and who should go, such an idea is not appropriate. Opening the decision making to the entire public may seem like the right idea, but another central premise of this show is to develop an individual(s) who will return credibility to the news media. An open public vote has too much potential to digress into a simple popularity and/or ideology contest, which would sully this aspect of the show, defeating its purpose.
Initial Candidate Screening:
The reality portion of this show is a component of its production, not the show itself, thus casting will not utilize abstract personality elements or "hero/villain" dynamics. The general age demographic will run from college junior at the floor to post-doctoral student at the ceiling (20-26), with either a degree in journalism or broadcasting or a desire for a career in the news media as on-air talent as a requirement. Candidate selection can proceed in one of two ways:
1. One central place of audition somewhere in the country (probably LA) where candidates are interviewed and the 50 top selections move on to the final elimination selection sequence (not broadcast, but could be shown in a DVD-like extra);
2. Various interviews are held on college campuses and a select quota taken from each campus (say 10 different campuses with 5 individuals from each campus); some could argue that such a quota system is not fair to certain candidates that may be 7th best at Northwestern, but 3rd best at Columbia and such an argument would be valid. However, another point of contention is what is the probability that the 7th best at Northwestern would end up winning the show? Overall one could execute a multiple venue tryout session without quotas to address this concern.
Candidates will be reduced from 50 to 20 in an elimination round that will require a live-camera interview of an individual (an actor) after introducing the story pertaining to the interview. Candidates will be sequestered from the shoot until after its completion. Judges will then decide which candidates demonstrate sufficient potential to achieve the goals of the show; those candidates will move on to the televised portion of the show.
Show Format:
While on-air communication ability is a critical element in restoring credibility and popularity to media organizations, there are other important elements that occur off-camera. Most reality competition programs are divided into two competition elements. The first element is a quiz or small task format where the competition is small scale and the winner receives an advantage for the second element, the elimination challenge; upon the conclusion of the elimination challenge, evaluation criteria are used to remove some number of competitors from the competition. The initial part of that methodology will be applied to this show as well.
The first portion of the competition will involve testing a candidate's off-camera skills such as fact-checking stories, proposing different angles from which to present the story, editing a written copy of the on-air story, etc. As long as at least 10 candidates remain in the competition, the judges will divide the candidates into three tiers based on performance: the 1st tier will be the best performers, with the remaining competitors divided into 2nd and 3rd tiers. The 1st tier will have the option of making three different attempts during the elimination competition, 2nd tier individuals will have the option of making two attempts and 3rd tier individuals will be allowed one attempt. When the number of candidates drops below 10, the division will consist of only two tiers, with those in the better tier allowed a second attempt versus only one attempt for those in the lower tier. Exact tier division is yet to be determined, but it stands to reason that the higher the tier, the fewer individuals will be placed in it. For example, when placing 20 candidates after the first challenge, 4 would reside in the 1st tier, 6 in the 2nd tier and 10 in the 3rd tier.
This tiered structure is appropriate because it makes sense to include some small reward for doing well in the smaller challenge, but including a crass reward such as immunity or cash prizes for the completion of sub-challenges belittles the idea of the show and allows unqualified individuals to continue to progress at the expense of more worthy candidates. However, one lingering issue is whether individuals will be able to select their best performance or will have to use their most recent performance as the material judged by the focus group. Basically, will a candidate in the 1st tier be allowed to perform the task three times and select the best, or will the third performance be the one seen by the focus group regardless of how the candidate performed?
Challenge Topics:
The topics addressed during the competition must reflect the diversity of topics facing not only a seasoned anchor, but one that will capture the nation’s attention. Each challenge will consist of an overall topic broken down into two structures: a written or preparation section, which will act as the “miniature” challenge and an on-air or conference section, which will act as the elimination challenge. The topics that will be addressed include, but are not limited to:
- Investigative Journalism;
- Reporting an Emergency Situation/Event;
- General News Summary;
- Arranging Interviews;
- Conducting Exclusive Interviews in a non-softball manner;
- Appropriate storytelling and narration for sustained delivery;
- Identifying inaccurate elements in a given position and effective presentation of those inaccuracies;
- Fact checking and following up with appropriate sources;
- Editorializing;
- Experiencing the relationship between the legal system and journalism (unfortunately necessary given the current relationship between government and the fourth estate);
The state of television journalism, as well as print journalism, is a cause for concern. While an argument could be made that some existing anchors are effective and passionate, the public fervor to acknowledge these traits is lacking. In addition, of the anchors one could identify as fitting the above description, none are young, thus their time as anchors is drawing to a close. Conducting the above competition should not only provide a new medium for creating talent better prepared to produce high quality news pieces, but should also, hopefully, re-establish positive public sentiment for the professionalism that proper journalism demands.
Eliminating the flash and glitter that has overwhelmed the substance of presenting and analyzing important events in the public domain is one of the most important steps to creating the appropriate respect for reporting and establishing a trusted mouthpiece. One could make the argument that attempting to eliminate flash and glitter through a reality show medium is hypocritical, but that is why such a program must focus on the elements of reporting not on manufactured drama between the contestants. Overall if the United States is to have a chance at maintaining its 1st tier position in the global community, journalism must become more effective at reporting the news and establishing accurate and appropriate positions in the public forum and to do this it must have new and impressive talent able to effectively articulate these characteristics; otherwise the continued polarization and maintenance of untenable positions in politics and other areas of discourse will erode the foundation of the United States resulting in a steady fall from that 1st tier.
One missing element that could slow this characteristic conversion of news reporting is the existence of popular, authoritative and trustworthy anchors. In the past major news organizations had anchors like Walter Cronkite, Edward Murrow and Dan Rather, whom the viewing public respected and from whom they received professional, high-quality performances. There was still bias at some levels in the news media, but the presence of these strong and professional anchor personalities was able to mitigate significant elements of it. Reintroducing high-quality, general-interest distributors of information over the sea of narrowcast specificity that currently exists should go a long way toward countering the radicalization of the public by filtering out ideas that cannot be substantiated in reality. Basically there are not enough effective mediators of existing information, and most of the public appears too invested or too lazy to conduct that evaluation themselves. Restoring these mediators should be an important step for news organizations seeking to revitalize their decaying role in society.
General Concept Idea:
Find prospective young anchor talent in order to hasten a return to credibility and a movement away from the polarized sniping that currently exists on television and the Internet. The search will be conducted through a television show. The show will have a reality feel in part because it will document what actually goes into producing news broadcasts and training anchors in a legitimate, rather than Hollywood-derived, way; helping people understand the work that goes into producing the news should lead the public to a new respect for the news media itself.
Network to Air the Show:
All major networks would be appropriate for such a show with the exception of Fox, due to its corruption of news reporting into propaganda that favors the presentation of a specific ideology over an accurate presentation of facts.
Advancement/Judging:
Judgment will be split into two formats: 1. a three-judge panel (first-choice selections: David Brooks/Shepard Smith, Dan Rather and Keith Olbermann) will critique performance; 2. a randomly selected panel of 20 individuals will watch the performances and rate each candidate on a scale of 1-5, with the two lowest and two highest scores for each candidate eliminated. The candidate with the lowest combined score will be eliminated, with the three judges breaking any ties (a minimal sketch of this tallying appears below). Because the show will be taped, each “episode” will have a new randomized panel of 20 individuals; the point of the “focus group judgment” is that one of the central premises of the show is to produce a news anchor that people want to watch and trust, rather than simply reading faceless Associated Press copy or opinion columns on the Internet, a characteristic that cannot be fully deduced from the intuition of the judges alone. Some would instead favor an open vote, similar to most reality shows where the viewing public votes on who should stay and who should go, but such an idea is not appropriate here. Opening the decision-making to the entire public may seem like the right idea, but another central premise of this show is to develop an individual(s) who will return credibility to the news media. An open public vote has too much potential to digress into a simple popularity and/or ideology contest, which would sully this aspect of the show and defeat its purpose.
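As a minimal sketch of how the focus-group tally described above could be computed (the function names and the tie-handling return value are hypothetical; the only rules taken from the description are the 1-5 ratings, dropping each candidate's two lowest and two highest scores, and eliminating the lowest combined score with the judges breaking ties):

def trimmed_score(ratings):
    # ratings: the twenty 1-5 scores a candidate receives from the focus-group panel
    kept = sorted(ratings)[2:-2]   # drop the two lowest and the two highest scores
    return sum(kept)               # combined score built from the 16 remaining ratings

def lowest_scorers(panel_ratings):
    # panel_ratings: dict mapping candidate name -> list of twenty ratings
    totals = {name: trimmed_score(scores) for name, scores in panel_ratings.items()}
    low = min(totals.values())
    return [name for name, total in totals.items() if total == low]

Under this scheme each combined score falls between 16 and 80; if lowest_scorers returns more than one name, the three-judge panel breaks the tie as described above.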
Initial Candidate Screening:
The reality portion of this show is a component of its production, not the show itself; thus casting will not rely on abstract personality types or “hero/villain” dynamics. The general age demographic will run from college junior at the floor to post-doctoral student at the ceiling (20-26), with either a degree in journalism or broadcasting, or a demonstrated desire for an on-air career in the news media, as a requirement. Candidate selection can proceed in one of two ways:
1. One central audition location somewhere in the country (probably LA) where candidates are interviewed and the top 50 selections move on to the final elimination sequence (not broadcast, but possibly shown as a DVD-like extra);
2. Interviews are held on various college campuses and a set quota is taken from each campus (say 10 different campuses with 5 individuals from each); some could argue that such a quota system is unfair to a candidate who is 7th best at Northwestern but would have been 3rd best at Columbia, and such an argument would be valid. However, a fair counter-question is: what is the probability that the 7th best candidate at Northwestern would end up winning the show? Overall, one could run a multiple-venue tryout without quotas to address this concern.
Candidates will be reduced from 50 to 20 in an elimination round that requires a live-camera interview of an individual (an actor) after introducing the story pertaining to the interview. Candidates will be kept apart from the shoot until after its completion. The judges will then decide which candidates demonstrate sufficient potential to achieve the goals of the show; those candidates will move on to the televised portion.
Show Format:
While on-air communication ability is a critical element in restoring credibility and popularity to media organizations, other important elements occur off-camera. Most reality competition programs are divided into two competition elements. The first is a quiz or small-task format where the competition is small in scale and the winner receives an advantage in the second element, the elimination challenge; upon the conclusion of the elimination challenge, some evaluation criterion is used to remove a number of competitors from the competition. That basic methodology will be applied to this show as well.
The first portion of the competition will test a candidate’s off-camera skills such as fact-checking stories, proposing different angles from which to present a story, editing a written copy of the on-air story, etc. As long as at least 10 candidates remain in the competition, the judges will divide the candidates into three tiers based on this performance: the 1st tier will hold the best performers, with the remaining competitors divided into the 2nd and 3rd tiers. The 1st tier will have the option of making three different attempts during the elimination competition, 2nd-tier individuals will have the option of making two attempts, and 3rd-tier individuals will be allowed one attempt. When the number of candidates drops below 10, the division will consist of only two tiers, with those in the better tier allowed a second attempt versus the single attempt made by those in the lower tier. Exact tier sizes are yet to be determined, but it stands to reason that the higher the tier, the fewer individuals will be placed in it. For example, when placing 20 candidates after the first challenge, 4 would reside in the 1st tier, 6 in the 2nd tier and 10 in the 3rd tier.
This tiered structure is appropriate because it makes sense to include some small reward for doing well in the smaller challenge; a crass reward such as immunity or cash prizes for completing sub-challenges would belittle the idea of the show and allow unqualified individuals to progress at the expense of more worthy candidates. One lingering issue is whether individuals will be able to select their best performance or will have to use their most recent performance as the material judged by the focus group. Basically, will a candidate in the 1st tier be allowed to perform the task three times and select the best, or, if the task is performed three times, will the third performance be the one seen by the focus group regardless of how the candidate performed? (A rough sketch of the tier assignment itself follows below.)
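As an illustration of the tiering rules described above, here is a minimal sketch under stated assumptions: the exact cut-offs are explicitly left undetermined, so the splits below simply generalize the 4/6/10 example for 20 candidates, and the function name is hypothetical.

def assign_tiers(ranked_candidates):
    # ranked_candidates: names ordered best-to-worst on the off-camera challenge
    n = len(ranked_candidates)
    if n >= 10:
        # three tiers, smaller at the top; for 20 candidates this yields 4/6/10
        first = ranked_candidates[: n // 5]
        second = ranked_candidates[n // 5 : n // 2]
        third = ranked_candidates[n // 2 :]
        return {3: first, 2: second, 1: third}   # key = attempts allowed in the elimination challenge
    # fewer than 10 candidates: two tiers, the better half gets two attempts, the rest one
    upper = ranked_candidates[: n // 2]
    lower = ranked_candidates[n // 2 :]
    return {2: upper, 1: lower}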
Challenge Topics:
The topics addressed during the competition must reflect the diversity of material facing a seasoned anchor, and in particular an anchor meant to capture the nation’s attention. Each challenge will consist of an overall topic broken into two parts: a written or preparation section, which will act as the “miniature” challenge, and an on-air or conference section, which will act as the elimination challenge. The topics addressed include, but are not limited to:
- Investigative journalism;
- Reporting on an emergency situation/event;
- General news summary;
- Arranging interviews;
- Conducting exclusive interviews in a non-softball manner;
- Appropriate storytelling and narration for sustained delivery;
- Identifying inaccurate elements in a given position and effectively presenting those inaccuracies;
- Fact-checking and following up with appropriate sources;
- Editorializing;
- Experiencing the relationship between the legal system and journalism (unfortunately necessary given the current relationship between government and the 4th estate).
The state of television journalism, as well as print journalism, is a cause for concern. While an argument could be made that some existing anchors are effective and passionate, the public enthusiasm to acknowledge these traits is lacking. In addition, of the anchors one could argue fit that description, none are young, so their time as anchors is drawing to a close. Conducting the above competition should not only provide a new medium for developing talent better prepared to produce high-quality news pieces, but should also reestablish positive public sentiment toward the professionalism that proper journalism demands.
Eliminating the flash and glitter that has overwhelmed the substance of presenting and analyzing important public events is one of the most important steps toward creating appropriate respect for reporting and establishing a trusted voice. One could argue that attempting to eliminate flash and glitter through a reality-show medium is hypocritical, but that is precisely why such a program must focus on the elements of reporting, not on manufactured drama between the contestants. Overall, if the United States is to have a chance at maintaining its 1st-tier position in the global community, journalism must become more effective at reporting the news and establishing accurate and appropriate positions in the public forum, and to do this it must have new and impressive talent able to articulate these characteristics effectively. Otherwise the continued polarization and maintenance of untenable positions in politics and other areas of discourse will erode the foundation of the United States, resulting in a steady fall from that 1st tier.
Saturday, May 18, 2013
Solar and Wind Need to Step up to the Plate
Over the last few years there has been a steady back and forth between various individuals and groups about the viability of wind and solar power accounting for the vast majority of the future energy infrastructure. Despite legitimate concerns about the effectiveness and consistency of such a system, wind and solar proponents continue to place their faith in its viability with no at-scale evidence validating that faith. In addition, proponents of these technologies believe that the public must simply be made more aware of the “adaptability” of solar and wind versus fossil fuel generators. But why does the solar and wind manufacturing and deployment community allow this uncertainty to linger, opting instead for a new, more aggressive marketing plan to “sell” the public on solar and wind? Providing evidence to support one position or the other is quite possible; it must be, since numerous wind and solar supporters continue to claim that the limiting factor to “greening” the energy infrastructure is merely the deployment rate of wind and solar plants relative to existing technology. The obvious way to settle the question is a real-world demonstration at meaningful scale.
Note that this experiment must go beyond the testing irrelevancies of electricity aggregation or renewable energy credits (RECs), which do nothing significant to demonstrate the reliability of wind and solar energy providers. Wind and solar proponents envision an energy infrastructure in which wind and solar comprise at least 70% of the entire electricity-providing system (maybe even 70%+ of all energy eventually), thus that number seems a good threshold. However, this vision should only be pursued if these technologies are actually able to accomplish the goal; therefore, as stated above, the legitimacy of such a system must be tested and its strengths and weaknesses evaluated. So how would such an experiment be conducted?
A simple starting methodology must include, but not be limited to, the following boundary conditions/rules:
- Approach a small city (approximately 10,000-15,000 population) and receive permission to change its electricity provision from the current mix to 70% solar/wind, with the remaining 30% from other sources;
- At no time could electricity provided to the city from non-solar/wind sources exceed 30%, or the experiment would be considered a failure (this condition includes all electricity drawn from storage sources like batteries); the condition could be relaxed depending on how many brownouts/blackouts the selected city was willing to accept;
- All participating companies will have a year to prepare for the switch from the existing mix to the solar/wind-dominated mix; a reasonable schedule would be for a coalition of solar/wind companies to select a city by July 31, 2013 and then start the experiment on July 31, 2014;
- An independent auditor will track electricity use, the costs associated with that use and any additional construction related to the electricity infrastructure;
- The above point must consider net electricity use, not gross electricity production. For example, if 1000 MW are produced but only 500 MW are used due to a lack of storage, then 500 MW, not 1000 MW, must be counted in the percentage calculations so as not to overstate the utilized production of renewable sources; i.e. utilized generation is what matters, not nameplate capacity (a simple accounting sketch follows this list);
- Note that the city does not have to remain static once the test has begun; it can still add or subtract electricity infrastructure. However, any such changes must be incorporated into the metric used to evaluate the efficiency and validity of the renewable sources tested.
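The accounting sketch referenced in the net-use rule above, as one minimal interpretation (the source names, numbers and function names are illustrative assumptions; the only requirements taken from the rules are that shares be computed from electricity actually delivered to the city and that storage discharge counts toward the non-solar/wind share; the instantaneous "at no time above 30%" cap would be checked per interval, whereas this sketch aggregates over the audit period):

def renewable_share(delivered_mwh_by_source):
    # delivered_mwh_by_source: energy actually consumed by the city over the audit
    # period, keyed by source, e.g. {"solar": 400, "wind": 300, "gas": 250, "battery": 50}
    total = sum(delivered_mwh_by_source.values())
    # per the rules above, battery discharge is not counted as solar/wind
    solar_wind = delivered_mwh_by_source.get("solar", 0) + delivered_mwh_by_source.get("wind", 0)
    return solar_wind / total

def passes_threshold(delivered_mwh_by_source, threshold=0.70):
    return renewable_share(delivered_mwh_by_source) >= threshold

# Example from the net-use rule: 1000 MWh of solar generated but only 500 MWh delivered
# (the rest curtailed for lack of storage) is recorded as 500 MWh.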
If the above methodology is as simple and viable as it appears, what is stopping one of the various solar and wind companies (or a coalition of them) from administering it? Realistically there seem to be only two reasons. First, wind and solar proponents are wrong in their assessment that these technologies are advanced enough to substitute effectively for fossil fuel technologies, and wind and solar companies know it but do not want to admit it. Second, solar and wind companies are not sure whether such an experiment would succeed and fear that, if it is implemented and fails, the negative publicity surrounding the failure would significantly handicap their future growth.
This potential “fear of failure” attitude is interesting because most intelligent people realize that failure is an integral part of technological growth, so why pass up an opportunity to explore the strengths and weaknesses of what numerous people hope will be the future energy infrastructure in a real experimental environment, rather than through the worthless, piecemeal rooftop-style “experiments” currently conducted? One possibility is that these companies believe a failure in this type of experiment would be regarded by society not as a learning experience but as evidence of an inherent flaw in the technology itself, raising the probability of abandonment and the loss of millions. This is a fear that solar and wind companies need to get over if they want to serve society better both now and in the future, because if the technology has flaws that are not corrected before large-scale application, society loses more than just money.
Despite the above concerns, the apparent trepidation of solar and wind energy companies toward this type of experiment is disconcerting. If they were confident in the maturity of the technology and its ability to provide consistent electricity to a populace, such an experiment should already have been conducted. Success would produce a valuable and powerful piece of evidence supporting the rapid expansion and deployment of solar and wind energy technology, and would further demonstrate which strengths would benefit society and which weaknesses exist and could be improved upon over time.
Two final notes. First, some may point out that the city of Lancaster, California has recently spearheaded a large build-out of solar power, but understand that it is nowhere near the capacity necessary to power the city, and Lancaster is probably too large a city for this initial test (population 157,000+).
Second, some may argue that Portugal’s generation of 70% of its power from renewables in the 1st quarter of 2013 demonstrates the validity of renewables as the future energy infrastructure. However, on looking at the details of the renewable breakdown, this success becomes far less impressive and less repeatable in the short term. While Portugal is an above-average producer of energy from non-fossil-fuel sources, favorable weather is more responsible for the higher renewable percentage than anything else. A large spike in output from existing hydroelectric facilities (a 312% increase) drove most of the change, with no significant new project builds. In fact, Portugal’s solar photovoltaic generation made up only approximately 0.7% of energy use (in 2012), which stands in direct contrast to the massive solar deployment envisioned by solar enthusiasts and to the challenge presented in this blog. Therefore, Portugal’s success proves nothing about the validity of a solar/wind energy infrastructure, only the usefulness of hydroelectric power.
Overall, if the reward for success is a significant increase in the industry’s growth rate, why has no one in the solar/wind industry attempted such an experiment? This lack of experimentation recalls the blind faith some have in electric vehicles acting as mobile storage batteries to augment solar and wind power, even though no one has demonstrated the viability of such a strategy in a community of 5,000 people, let alone a country of 300+ million. If the solar and wind industry wants its technology to be taken seriously by all parties as a substitute for existing fossil fuels, it has to demonstrate the ability to do the heavy lifting with minimal assistance from other energy providers. Otherwise, why waste time playing with expensive toys in lieu of proven fossil fuel substitutes like nuclear and geothermal?
Monday, April 29, 2013
Education and Experience in the Job Market
The U.S. economy continues to sputter along in its recovery from the Great Recession, but while most turn their gaze to the unemployment percentage or how many jobs government and private corporations added or subtracted over a given month or quarter, this focus is misdirected. Unfortunately, most people do not consider the type of jobs driving the “recovery”. Most of the gains are lower-paying service and healthcare jobs, which sit near minimum wage and are often part time. For example, 19.1 percent of employees now work part time (fewer than 35 hours a week) versus 16.9 percent when the Great Recession started.1 While some may dismiss the absolute percentage change as small, that percentage applies to a labor force of over 150 million people, so the change involves millions of workers.
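As a quick back-of-the-envelope check of the scale claim above (the labor force figure is the approximate 150 million cited in the paragraph; the variable names are illustrative):

labor_force = 150_000_000          # approximate number of employees referenced above
part_time_now = 0.191              # share working part time (fewer than 35 hours/week)
part_time_at_recession_start = 0.169

additional_part_time = (part_time_now - part_time_at_recession_start) * labor_force
print(round(additional_part_time))  # roughly 3.3 million additional part-time workers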
Most commentators believe that these lower-paying, near-minimum-wage service jobs and other part-time work are a byproduct of a general lack of education among U.S. citizens. The common proposed solution is rhetoric extolling the need to invest in education: if only the general citizenry were more educated, most of the problems surrounding under- and unemployment would be significantly alleviated, so the key to re-energizing the economy is a widespread focus on and reform of education. Believers support this position by citing the numerous unfilled jobs that various companies have left open for months, if not years, due to a lack of appropriate candidates. Unfortunately, this evidence damages the simplistic education argument because it exposes more of the complexity of the modern relationship between the work force and quality jobs.
The principal reason most high-quality jobs remain unfilled for years is a lack of experience, not a lack of education. Excluding this element from the economic discussion is especially troubling because some pundits seem to assume that if a person has a college degree they will get a job, a premise that is illusory. The quality-job issue boils down to education and experience together. When some comment on the lack of qualified individuals to fill open positions in science, engineering and manufacturing, they neglect the required experience and focus only on the education requirement. Most of these job openings read something like: company x is looking for an engineer with a Master’s in electrical engineering and 10 years of experience in MEMS, not simply an engineer with a Master’s in electrical engineering. There are plenty of individuals with a Master’s in electrical engineering, but few with that education and 10 years of experience in MEMS. The principal reason for the lack of quality candidates is that in recent years most companies have cut back significantly on the number of entry-level positions they hire.
Unfortunately, this reduction has created a self-reinforcing cycle: if companies are not hiring for entry-level positions, then the individuals who would take those jobs are not acquiring the relevant experience and are instead gaining experience in something else, or nothing at all. This “redirection” of experience reduces the number of individuals with the requisite experience, shrinking the pool of candidates companies can draw from when they need someone with that background. At present this experience pool for science, engineering and other high-quality non-financial jobs is so diminished that most companies have trouble filling “experience required” positions without poaching qualified individuals from other companies, which does not solve the overall problem; it merely solves company A’s problem while creating a new one for company B (the company that loses the employee). Instead, companies need to train for these positions by increasing entry-level hires rather than looking for someone who already has the experience.
Filling these higher-quality jobs is important because they have a higher probability than most of being both high paying and long lasting. The long-lasting element is one commonly missed by those championing the Green Job revolution, for most of those jobs are transient, being temporary construction and manufacturing work. Supporters of the Green Job revolution also forget the skill mismatch involved: constructing a wind turbine is not equivalent to fixing a roof, so individuals without proper experience in construction and manufacturing will not be able to take significant advantage of any Green Job-type revolution. Even if the millions of jobs promised by supporters of the Green Job revolution were an accurate appraisal, those jobs would not go to millions of different people, just to the same general pool of hundreds of thousands of appropriately qualified individuals.
Another problem is that education needs to be more focused, because not all education is created equal when it comes to employment. Pharmaceutical majors and philosophy majors may both receive an excellent education, but what they can do with that education after college differs vastly. In addition, corporations need to restore confidence that the name of the university plays only a nominal role in the selection process. Clearly there is a significant difference in the rigor and education one is expected to receive from John Smith State University versus Stanford, but the difference between a university like Purdue and Stanford is far smaller. Unfortunately, employers do not honor the notion that education and other intangible characteristics (leadership, etc.) are what matter, instead allowing nepotism, favoritism and connections to win out in the end, elements that are more easily acquired at Ivy League institutions than at other high-ranking schools.
It is also worth noting that Ivy League graduates earn higher salaries than graduates of other universities over the initial stages of their careers, regardless of career choice and despite a lack of demonstrated superior ability.2 If this problem persists, then no amount of “education reform” and/or additional access to education will change the quality-job picture, because these “top-tier” schools have had declining acceptance rates over the last few years; the strategy would simply leave more people with an education, no job adequate for it and an unnecessary amount of debt.
Two key steps toward addressing this experience/quality-job shortfall are for corporations to start expanding entry-level hiring and to develop more advanced and extensive training methods. Basically, corporations have to stop waiting for someone else to solve their problem (company a hires and trains individual A, who is later poached by company b) and start solving it themselves (remember, corporations are legally people now) by spending some of the billions in profit they have accumulated in the years since the Great Recession. Corporations should also form more cooperative relationships with universities, establishing more paid internships linked to graduation credit. While rather obvious, this relationship would build early experience and enlighten students about how actual career work matches their interests.
Overall there has been a shift toward part-time work over full-time work in recent years, driven by two elements beyond the circumstances of the Great Recession (which should not be used as an excuse anymore). First, creating an efficient schedule that organizes appropriate time for all full-time workers requires sophisticated math or scheduling software versus the much simpler scheme possible with part-time workers; in some respects laziness is carrying the day and augmenting the “advantages” of part-time workers. Second, there is the concern of increased health care costs borne from the healthcare reform legislation (the Affordable Care Act, enacted in 2010), which demands that corporations employing at least 50 full-time employees in the previous calendar year provide health insurance for them or face penalties. Hire a lot of part-time employees instead of a smaller number of full-time employees and the mandate is avoided.
Society must determine whether it is more appropriate for various industries to have numerous part-time workers or a slightly smaller number of full-time workers. If the latter wins out, yet corporations are unwilling to honor that preference, then government will have to pass legislation limiting the percentage of part-time workers a corporation can hire, for most part-time staffing strategies are executed solely for the company’s bottom line and not for the health of society or the employees.
--
Citations –
1. Rampell, C. “Part-Time Work Becomes Full-Time Wait for Better Job”. NY Times. April 19, 2013. http://www.nytimes.com/2013/04/20/business/part-time-work-becomes-full-time-wait-for-better-job.html?pagewanted=1&_r=1&hp&.
2. 2012-2013 Payscale College Salary Report. Payscale.com http://www.payscale.com/college-salary-report-2013.
Thursday, March 28, 2013
3 Key Issues for the Future Global Energy Infrastructure
One of the concerns with the environmental movement is the myopic view that establishing a solar and wind energy infrastructure is the correct path, without actually proving that it is the correct path. The lack of evidence for the viability of a solar and wind energy infrastructure should concern environmental groups, but it is an issue they typically ignore. Instead, most think that the chief problem facing the environment is the current lack of will and sense of sacrifice in society, and that getting people to realize the severity of the situation is the only genuine issue of importance, with everything else falling into place afterwards. This post is a challenge to the environmental community to PROVE that the common strategy points they champion are the correct ones. The three key points that must be addressed by environmentalists are as follows:
1. Argue that neither component of geo-engineering (solar radiation management and carbon remediation) will be needed for decades; as it stands, the more vocal environmentalists claim that geo-engineering is a non-starter no matter what, due to uncertainty fears, with the popular opinion being that planting trees and creating some small amount of bio-char will be sufficient.
Note – Remember that the elimination of coal as an energy source will also eliminate a percentage of the negative-radiative-forcing aerosols released into the atmosphere through coal combustion. So is there confidence that society can manage an additional 0.2-0.7 degrees C of warming while global emission profiles are still around 60-70% of current levels? If so, what is the origin of this confidence?
2. Argue that building a solar and wind energy infrastructure (65-80% penetration is the range most environmentalists envision) is more economically and structurally viable than building an energy infrastructure based on small modular reactors and generation III nuclear plants.
Note – Be wary of citing anything from Mark Jacobson on this issue. Jacobson’s solar and wind analyses (along with those of basically everyone else) fail on three major levels:
A. They do not use detailed, real numbers when calculating the actual energy required and its integration into a future grid; instead they invoke broad concepts like the smart grid, widespread multi-geographical integration, demand-response management, etc. to “magically” eliminate future energy concerns such as network congestion and other grid-level system costs. Without actual numbers in an in-depth analysis of costs and methods, it is difficult to view the claims as legitimate on scale grounds alone, let alone the other problems. The few analyses that do use numbers are plagued by the optimistic assumptions addressed in the next two points.
B. They fail to address anything remotely specific about the energy storage required for a solar/wind energy infrastructure, simply stating that some storage will be required (hypothesized amounts are not mentioned). There is also little mention of what that storage will be (beyond pumped hydro, which is in short supply), how much it will cost, how it materializes and how its construction will affect other industries. The failure to account for intermittency when calculating cost is especially prevalent when wind and solar supporters claim that wind and/or solar costs in country x are now lower than coal costs. Wind/solar proponents seem not to understand that 1 GW of solar is not equivalent to 1 GW of coal once operational capacity factors, rather than nameplate capacities, are considered: in the real world roughly 10-35% of nameplate for solar or wind (depending on the country and time of year) versus 75-90% for coal (see the sketch after this list).
C. They fail to address the supply shortages that would be created when constructing a solar/wind infrastructure, especially for rare earths (most notably heavy rare earths like dysprosium), concrete, steel and aluminum. These shortages will significantly increase construction costs, largely because of the low generation-per-unit-of-material ratios that solar and wind installations have due to their capacity and intermittency limitations.
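A short sketch of the nameplate-versus-operational point in item B, as referenced above (the capacity factors are the ranges given in that item; the midpoints and the function name are assumptions for illustration only):

HOURS_PER_YEAR = 8760

def expected_annual_gwh(nameplate_gw, capacity_factor):
    # converts nameplate capacity into expected delivered energy over a year
    return nameplate_gw * capacity_factor * HOURS_PER_YEAR

# Rough midpoints of the ranges above: ~22% for solar/wind, ~82% for coal.
solar_wind_gwh = expected_annual_gwh(1.0, 0.22)   # about 1,900 GWh from 1 GW nameplate
coal_gwh = expected_annual_gwh(1.0, 0.82)         # about 7,200 GWh from 1 GW nameplate
print(coal_gwh / solar_wind_gwh)                  # roughly 3-4x more delivered energy per GW of coal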
Due to these problems, citing only Jacobson demonstrates a lack of concern for these critical issues and thus places the future of the planet in the “hope and pray” column, basically a mindset similar to that of a global warming denier.
3. How will a large (40%+) electric vehicle fleet and a large (40%+) wind energy infrastructure both be created at economic cost when both rely on the same rare earths (dysprosium, neodymium and praseodymium) to keep costs down? As of now it is most probable that one will have to be sacrificed for the other, so if electric vehicles and wind are both deemed necessary for the future, how will this be achieved at feasible cost?
Answers to these questions need to be as specific as possible, because simply stating broad concepts like “smart grid” or “rare earth substitutes” does little good without the specifics of how those concepts would actually solve the problems within the core issues. For those who would suggest that this blog do the work: this blog has already posted a good portion of discussion on these issues, coming to conclusions that oppose the common belief that pursuing a generic solar and wind energy infrastructure is viable and appropriate given the time remaining to confront global warming. Also, because these concerns are critical to the viability of the common environmentalist argument for the future global energy infrastructure, the burden of proof is on the environmentalists who support that infrastructure to demonstrate that it has a significant probability of success. It is imperative that these issues be resolved as soon as possible and with a significant degree of certainty, because the time to act is now, and acting with the wrong strategy is just as useless as not acting at all.
Labels: Environment, global warming, Nuclear Power, Solar Power, Wind Power
Saturday, March 16, 2013
Placebos in Clinical Trials
The placebo effect has been an important consideration in medical treatment and clinical drug trials since its formal identification in 1955.1 Placebos are typically regarded as inert agents or procedures administered as if they were a treatment for a given detrimental condition, regardless of whether they can have any direct positive effect on that condition. Some quibble with the use of the word ‘inert’ because the placebo effect itself is an effect derived from the placebo; however, that reading takes the word ‘inert’ to an unreasonable extreme. For the purposes of the definition, ‘inert’ means the placebo has no direct biological action against the targeted condition. Thus the placebo effect is regarded as any improvement of a detrimental condition without the aid of direct treatment, under the presumption that such a treatment is being given.
One of the most troublesome areas of drug research complicated by the placebo effect is pain management, or placebo analgesia. For analgesia, the biochemical methodology that governs the placebo effect is thought to operate through the release of endorphins stemming from signals in the prefrontal cortex, with additional activity in the orbitofrontal cortex (OFC), dorsolateral prefrontal cortex (DLPFC), rostral anterior cingulate cortex (rACC) and midbrain periaqueductal gray (PAG).2 This belief is further supported by observations in Alzheimer’s patients, in whom neuronal degeneration of the DLPFC, OFC and rACC results in a loss of the placebo effect.3 The endorphin component of placebo analgesia was identified when such analgesia was neutralized by naloxone, an opioid antagonist.4 Follow-up research confirmed the relationship between endorphins and placebo analgesia through the use of cholecystokinin (CCK), an anti-opioid peptide.5-7 Direct demonstration of opioid release occurred in 2005 through PET observation of increased μ-opioid receptor neurotransmission in the ACC, OFC, the insula and the nucleus accumbens.8
Despite a somewhat better understanding of the psychological and physiological mechanisms behind the placebo response, there are still questions regarding its lack of universal application. For example, in randomized controlled trials the drug-placebo difference has been reported at 40% for functional disorders,9 29% in depression, 31% in bipolar mania, and 21% for treatment of migraines.10,11 Some conclude that these variable placebo rates stem from differences in sample size, study duration, patient recruitment and design characteristics, with design characteristics appearing to be the most influential factor.12,13
Design characteristics are important because the placebo effect also has a conditioning/expectation component, which exists in two separate parts beyond pure biological activation. The first component is the inherent anticipatory psychological reaction to a new stimulus and its influence. Basically, if one is given an unknown drug by an individual of expertise and/or authority and is told that it will have a certain effect, one expects that effect to occur regardless of whether it actually will; there is a value association with the treatment. This value impact extends to price, in that more expensive drugs are thought to have more positive influence.14
The second component is a conditioned physiological/psychological reaction derived from multiple exposures to a single given element. In one of the chief examples of this response, individuals received numerous doses of a morphine-like compound (buprenorphine), which has the side effect of reduced respiration, at a specific time and in a specific manner; when later given an inert compound in the same manner at the same time, their bodies still responded by reducing respiration.15,16 Thus, there appears to be a Pavlovian biological response that can induce a placebo effect beyond conscious psychological expectations. This response is somewhat mysterious in that it overcomes even an opioid inhibitory treatment, like naloxone, when a non-opioid primer is used.16 Overall this behavior implies a ‘mimicry’ effect, perhaps associated with long-term potentiation (LTP).
One significant element that dictates the strength of these components is the concentration and interaction of two separate neurotransmitters: dopamine and norepinephrine. In essence, while dopamine is given chief credit as the ‘reward’ molecule, its influence is augmented or dampened, depending on the situation, by norepinephrine. Basically, dopamine determines which elements/actions are characterized as rewarding, and norepinephrine grants the required focus, both consciously and seemingly unconsciously, to achieve and recognize the rewarding action. Therefore, if norepinephrine is not at a sufficient concentration, the placebo effect ceases to operate. If this is the case, then the enzymes catechol-O-methyltransferase (COMT) and monoamine oxidase A (MAO-A) are also important because they are responsible for the degradation of dopamine and norepinephrine respectively.17
Degradation of dopamine is important because the placebo effect is tied to expectation. Expectation, either conscious or unconscious, creates an acute and significant increase in dopamine concentration; however, if dopamine concentrations are already high, the additional increase provided by the placebo effect will not induce significant biochemical change because the percentage change will be small. For the placebo effect the percentage change is more important than the absolute change. Thus, in the context of treating pain and also depression, the ‘ideal’ placebo effect candidate has high COMT activity (low dopamine concentration) and low MAO-A activity (high norepinephrine concentration).
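To make the percentage-versus-absolute point concrete, here is a minimal numerical sketch in Python; the baseline concentrations and the size of the placebo-evoked release are invented, illustrative values, not measurements.

# Minimal numerical sketch of why the percentage change in dopamine matters
# more than the absolute change. All concentrations are illustrative,
# arbitrary-unit values, not measurements.

PLACEBO_EVOKED_RELEASE = 10.0  # hypothetical fixed dopamine release from expectation

def percent_change(baseline, release=PLACEBO_EVOKED_RELEASE):
    # Relative change produced by the same absolute release.
    return 100.0 * release / baseline

# High COMT activity -> low baseline dopamine; low COMT activity -> high baseline.
for label, baseline in [("high COMT activity (low baseline of 20)", 20.0),
                        ("low COMT activity (high baseline of 100)", 100.0)]:
    print(f"{label}: +{percent_change(baseline):.0f}%")

# The same absolute release is a 50% jump on the low baseline but only a 10%
# jump on the high baseline, so the low-baseline individual is the stronger
# placebo candidate under this reasoning.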
Identifying genetic agents that enhance or weaken the placebo response suggests a useful secondary treatment strategy. For example, individuals who are ideal placebo candidates have an advantage in that they could be treated for pain through the placebo effect if direct pain mitigation methods are unavailable due to bad reactions and/or excessive side effects. Also, with this new information about how the placebo effect works in pain management, one could design diets to augment this aspect of the placebo effect. Clearly there are ethical concerns with giving a known placebo to a patient in lieu of actual medication, but there are numerous situations where the side effects associated with standard pain medication reduce the patient’s quality of life to a level where taking the medication is not viewed as significantly beneficial. Therefore, individuals who have mid to high COMT activity and low to mid MAO-A activity could be candidates for an enhanced placebo effect, either using direct placebos or even simple changes in diet that would augment dopamine and norepinephrine levels.
Another important consideration of the placebo effect beyond actual treatment is how it can influence results in clinical trials. Most pharmaceutical companies know that the clinical trial is the most frustrating part of creating a new drug because, after spending hundreds of thousands to millions of dollars on research and development, ambiguous or inconclusive trial results leave the drug in a state of limbo that can be even worse than an outright failure to demonstrate improvement.
Part of the problem with identifying placebo effects is that the distinction between responders and non-responders (those who exhibit a placebo effect and those who do not) is typically ignored. Instead, in both clinical studies and placebo studies, differences between group averages are analyzed rather than individual responses. Such practice creates situations where the identical mean change between a placebo group and a non-treatment group can be produced either by a large number of individuals showing a moderate response or by a small number of individuals showing a large response while the others show no response. This is one of numerous statistical situations where averages without standard deviations are dangerous.
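As a minimal sketch of that last point, the following Python snippet builds two hypothetical placebo groups with identical mean improvement but very different responder structure; the group sizes and effect sizes are invented purely for illustration.

# Sketch of how two placebo groups can show the same mean improvement with
# very different individual behavior. Group sizes and effect sizes are
# invented for illustration only.
import statistics

# Group A: everyone improves moderately (2 points each).
group_a = [2.0] * 20

# Group B: a handful of "responders" improve a lot, everyone else not at all.
group_b = [10.0] * 4 + [0.0] * 16

for name, group in [("A (uniform response)", group_a),
                    ("B (few large responders)", group_b)]:
    print(name,
          "mean:", statistics.mean(group),
          "stdev:", round(statistics.stdev(group), 2))

# Both means are 2.0, but the standard deviations (0.0 vs roughly 4.1) reveal
# that the underlying responder structure is completely different.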
When individual results are examined, one of the big problems with confirming the placebo effect is the nature of pain progression relative to regression to the mean. Extremes in self-reported pain intensity, either high or low, eventually shift towards the average precisely because they are extreme and more than likely short-lived. With regards to the placebo effect, the high extreme is clearly more relevant than the low because individuals involved in these trials are in pain. However, the pain will abate over time from a self-reporting standpoint as the body adapts, regardless of whether the placebo effect activates. Therefore, in such a situation it is difficult to differentiate between the placebo effect reducing pain and a regression-to-the-mean adaptation to the pain.
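Here is a toy Python simulation of that pattern, assuming an invented stable pain level with day-to-day noise and an invented enrollment cutoff; none of the parameters correspond to a real trial.

# Toy simulation of regression to the mean in self-reported pain scores,
# with no treatment effect at all. Distribution parameters are assumptions
# chosen only to make the pattern visible.
import random

random.seed(1)
N = 10_000
TRUE_MEAN, NOISE = 5.0, 2.0  # stable underlying pain plus day-to-day noise

def report():
    return TRUE_MEAN + random.gauss(0, NOISE)

# Enroll only people who report extreme pain (>= 8) at screening, as a trial
# of severe pain would, then re-measure them later with no intervention.
baseline = [report() for _ in range(N)]
enrolled = [i for i, score in enumerate(baseline) if score >= 8.0]
followup = [report() for _ in enrolled]

print("mean at screening:", round(sum(baseline[i] for i in enrolled) / len(enrolled), 2))
print("mean at follow-up:", round(sum(followup) / len(followup), 2))

# Screening scores average well above 8 by construction, while follow-up
# scores drift back toward 5 even though nothing was treated -- exactly the
# drop a naive analysis could mistake for a placebo (or drug) effect.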
There are a number of strategies designed to “neutralize” the placebo effect in clinical trials. The crossover design has individuals serve as their own controls, reducing inter-subject variability. Another strategy focuses on eliminating expectations by concealing the psychological component, either by initiating the medical treatment without the knowledge of the patient or by introducing uncertainty in its administration: hiding the injection or infusion from the patient so he/she has no knowledge of its administration, or telling the patient that the treatment could help but comes with no guarantees.
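A rough Python sketch of the crossover idea, using invented baselines and an assumed one-point drug benefit, shows how within-subject contrasts strip out between-subject variability.

# Sketch of why a crossover design damps inter-subject variability: each
# subject's own baseline cancels out of the within-subject difference.
# All numbers are invented for illustration.
import random, statistics

random.seed(2)
subjects = 30
true_drug_benefit = 1.0  # assumed pain reduction attributable to the drug

baselines = [random.gauss(6.0, 2.0) for _ in range(subjects)]  # very different people
placebo_period = [b + random.gauss(0, 0.5) for b in baselines]
drug_period = [b - true_drug_benefit + random.gauss(0, 0.5) for b in baselines]

within = [p - d for p, d in zip(placebo_period, drug_period)]  # same-subject contrast

print("between-subject spread of raw scores:", round(statistics.stdev(placebo_period), 2))
print("spread of within-subject differences:", round(statistics.stdev(within), 2))
print("estimated drug benefit:", round(statistics.mean(within), 2))

# The subject-to-subject baseline spread (about 2 points) disappears from the
# paired differences (about 0.7), so the 1-point drug effect is far easier to
# detect than it would be in a simple two-group comparison.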
However, as mentioned above, despite strategies to reduce psychological triggering of the placebo effect, the one thing that cannot be concealed is that patients do receive some form of treatment. Therefore, genetic elements like COMT and MAO-A will still play a significant role in driving the placebo effect. Identifying individuals who possess these genetic characteristics would allow for better post-experiment analysis of potential placebo effects in pain treatment and in other treatments governed by the particular genetic agent. There are two immediate requirements regarding the collection of this genetic information: first, the screenings would have to cover only the above agents, not a complete genetic analysis, to ensure the privacy of the research subject. Second, the research must remain a double-blind study, thus the genetic information should only be used in post-analysis and not revealed to anyone prior to or during the experiment.
Finally, there has been some question about the legitimacy of the placebo effect.18 The problem with this critical analysis is that it is too expansive, covering too many different diseases and treatment methodologies. Such a method makes it more difficult to see the statistical nuances in each separate study, which could muddle results and conclusions; it would be akin to judging the quality of an orange while eating it in a fruit salad. Therefore, it is difficult to view conclusions questioning the placebo effect without skepticism.
Overall, the use of genetic mapping and better statistical analysis techniques should allow researchers to better separate real differences between drugs from artificial changes like the placebo effect. Analysis accuracy is obviously important because if analysis remains suspect then society wastes time, money and resources on statistically insignificant drugs. The most important element of a more robust analysis methodology would be to standardize it across all laboratories. Such standardization would ensure effective analysis checking to maximize the probability of correct conclusions and effective drug action.
Citations –
1. Beecher, H. “The powerful placebo.” JAMA. 1955. 159(17):1602-1606.
2. Miller, E and Cohen, J. “An integrative theory of prefrontal cortex function.” Annu. Rev. Neurosci. 2001. 24:167-202.
3. Thompson, P, et al. “Dynamics of gray matter loss in Alzheimer’s disease.” J. Neurosci. 2003. 23:994-1005.
4. Levine, J, Gordon, N, and Fields, H. “The mechanisms of placebo analgesia.” Lancet. 1978. 2:654-657.
5. Levine, J and Gordon, N. “Influence of the method of drug administration on analgesic response.” Nature. 1984. 312:755-756.
6. Benedetti, F. “The opposite effects of the opiate antagonist naloxone and the cholecystokinin antagonist proglumide on placebo analgesia.” Pain. 1996. 64:535-543.
7. Benedetti, F and Amanzio M. “The neurobiology of placebo analgesia: from endogenous opioids to cholecystokinin.” Prog. Neurobiol. 1997. 52:109-125.
8. Zubieta, J, et al. “Placebo effects mediated by endogenous opioid activity on μ-opioid receptors.” J. Neurosci. 2005. 25:7754-7762.
9. Enck, P and Klosterhalfen, S. “The placebo response in functional bowel disorders: perspectives and putative mechanisms.” Neurogastroenterol Motil. 2005. 17:325-331.
10. Sysko, R and Walsh, B. “A systematic review of placebo response in studies of bipolar mania.” J. Clin. Psychiatry. 2007. 68:1213-1270.
11. Macedo, A, Banos, J, and Farre, M. “Placebo response in the prophylaxis of migraine: A meta-analysis.” Eur. J. Pain. 2008. 12:68-75.
12. Walsh, B, et al. “Placebo response in studies of major depression – variable, substantial and growing.” JAMA. 2002. 287:1840-1847.
13. Kobak, K, et al. “Why do clinical trials fail? The problem of measurement error in clinical trials: time to test new paradigms?” J. Clin. Psychopharmacol. 2007. 27:1-5.
14. Waber, R, et al. “Commercial features of placebo and therapeutic efficacy.” JAMA. 2008. 299(9):1016-1017.
15. Benedetti, F, et al. “The specific effects of prior opioid exposure on placebo analgesia and placebo respiratory depression.” Pain. 1998. 75:313-319.
16. Benedetti, F, et al. “Inducing placebo respiratory depressant responses in humans via opioid receptors.” Eur. J. Neurosci. 1999. 11:625-631.
17. Leuchter, A, et al. “Monoamine oxidase A and catechol-O-methyltransferase functional polymorphisms and the placebo response in major depressive disorder.” J. Clin. Psychopharmacol. 2009. 29(4):372-377.
18. Hrobjartsson, A and Gotzsche, P. “Placebo interventions for all clinical conditions.” Cochrane Database Syst. Rev. 2010. 106(1).
Thursday, March 14, 2013
A Brief Discussion of Election Voting Reform
The aftermath of any election season brings numerous complaints about the inefficiencies and/or unfairness of the voting process: voters wait too long, voter ID laws are too stringent, there is bias against certain groups of voters, the early voting process is inconsistent, etc. Unfortunately, while most critics are correct that there is some gross inefficiency in the system, most do not actually consider the entire system and how it bears on the problem; they simply focus on their specific complaint. Also, critics do not seem to provide many solutions to the identified problems beyond “Fix it!”. An important element that must be considered is that because voting is one of the few ways the electorate can actually wield power, hundreds of millions of people will vote during a major election season. Therefore, there will be significant wait times in most situations regardless of the designed system; the goal must be to manage those wait times.
A pathetic element of the current state of the U.S. voting system is how overcomplicated it has become. Voter ID requirements are a principal example, for they are largely irrelevant and actually exacerbate the problem due to the difficulties some have in acquiring appropriate IDs. The purported purpose of voter IDs is to eliminate voter fraud, yet voter fraud is hardly rampant, and it is economically inefficient to demand IDs to combat such an incredibly small problem. However, if combating voter fraud is still regarded as a moral issue, then there is a more effective way to combat it; a method that would also significantly reduce any potential inequality associated with the ability to acquire an appropriate ID.
Instead of using voter IDs, a simpler and less expensive solution would be to add one additional element to voter registration. When an individual registers to vote, he/she simply declares a 4-digit voter PIN. Then, when this individual goes to vote, the identification procedure for acquiring a ballot simply requires giving the correct name and voter PIN. The election worker then checks the number and, if correct, hands over a ballot and crosses the name off the rolls.
If the name and voter PIN do not match up, the potential voter has two options. First, the voter can display a valid state or federal ID confirming his/her identity. Second, each voter is allowed to answer a “security question”, similar to those asked when individuals forget their computer passwords, which was filled out on the new voter registration form. Inability to satisfy either of these steps will result in the election worker asking the potential voter to leave. Note that forgetting a voter PIN will be very difficult because it will be listed on the individual’s voter registration card. For absentee voting, the voter PIN would be required just below the signature and dating portion of the ballot. There is already a voter ID number given upon registration, but it is longer and not chosen by the voter, making it harder to manage; however, the issued voter ID number could work as well.
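A minimal Python sketch of the proposed check-in procedure, using hypothetical registration data, might look as follows; a real implementation would obviously run against the official registration rolls.

# Minimal sketch of the proposed ballot check-in logic described above.
# The data structures and example records are hypothetical.
from dataclasses import dataclass

@dataclass
class Registration:
    name: str
    pin: str                 # self-declared 4-digit voter PIN
    security_answer: str     # answer recorded on the registration form
    has_voted: bool = False

rolls = {"jane doe": Registration("jane doe", "4821", "maple street")}

def issue_ballot(name, pin, shown_valid_id=False, security_answer=None):
    # Return True if a ballot should be handed over, False otherwise.
    reg = rolls.get(name.lower())
    if reg is None or reg.has_voted:
        return False
    # Primary check: name + PIN match.
    ok = (pin == reg.pin)
    # Fallbacks: valid state/federal ID, or the registration security question.
    if not ok:
        ok = shown_valid_id or (security_answer is not None
                                and security_answer.lower() == reg.security_answer)
    if ok:
        reg.has_voted = True  # cross the name off the list
    return ok

print(issue_ballot("Jane Doe", "4821"))                      # True: PIN matches
print(issue_ballot("Jane Doe", "0000", shown_valid_id=True)) # False: already voted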
Another criticism in both the 2010 and 2012 U.S. elections was the reduction of time available for early voting, which some believe was an indirect attack on minority voting capacity. The chief purpose of early voting is to accommodate those who would be unable to vote, or would face great stress voting, on Election Day. Those criticizing more restrictive early voting must tread carefully, because keeping polling areas open on additional days obviously requires spending money, money that both state and local governments have in shorter supply due to the slow recovery from the Great Recession.
Also, most local election officials, not surprisingly, see changes in the voting system in terms of cost. Most early voting increases long-term workload, yet states tend not to hire anywhere near the number of temporary workers needed to compensate for this increased workload. Therefore, most election officials have to work longer, typically at greater inconvenience, for the same general salary. Also, the time between elections does not allow for the creation of a “familiarity groove” of sorts. Because elections occur only once every two years, most early voting coordination feels like it is happening for the first time, especially if the temporary help is new; thus there is little significant consistency, and it is this consistency that would reduce workload both literally and psychologically.
The best strategy for early voting seems to be no-excuse absentee ballots, which could be picked up at federal buildings, police departments or post offices for no charge. After voting, these ballots can be mailed to the appropriate election office. However, there would have to be a minimum return date printed on the ballot, consistent among all states, to allow effective early-vote processing. For example, a domestic no-excuse absentee ballot might have to be postmarked no later than five days prior to the election date. Some would argue that online balloting can take the place of mailed ballots, but either security or cost concerns for online balloting will almost always be significant enough to limit its role in major elections. Also, despite what some appear to believe, wide swaths of the country do not have routine access to the Internet, which could create problems for an unsupervised and unorganized voting period and disenfranchise some voters.
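A tiny Python sketch of that postmark rule, using the five-day window suggested above as the assumed cutoff:

# Sketch of the proposed uniform postmark rule: a domestic no-excuse absentee
# ballot counts only if postmarked at least five days before Election Day.
# The five-day window is the figure proposed above, not an existing statute.
from datetime import date, timedelta

POSTMARK_WINDOW = timedelta(days=5)

def postmark_acceptable(postmark: date, election_day: date) -> bool:
    return postmark <= election_day - POSTMARK_WINDOW

election = date(2012, 11, 6)
print(postmark_acceptable(date(2012, 11, 1), election))  # True: exactly 5 days prior
print(postmark_acceptable(date(2012, 11, 3), election))  # False: postmarked too late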
One point of note is that while there are questions about the utility of early voting in increasing turnout,1-3 expansion of Election Day registration (EDR) may not be appropriate. There is evidence to suggest that EDR does increase voter turnout, but from a logical perspective it offers the highest probability of generating voter fraud.4,5 Also, EDR expansion will further increase the workload of election officials and increase wait times for all voters. The purpose of EDR is strange because it validates and encourages laziness. Registering to vote is not difficult, even despite the efforts of some to make it so, and people typically have sufficient time (months to years); thus failure to be registered by election season is purely the fault of the unregistered individual.
Finally, the Federal government may have to expand its role when states and districts fail at their administrative responsibilities. The Federal government should create a minimum threshold for discrimination complaints against a particular district/state and, if that threshold is met, strip that district/state of its autonomy to conduct elections. Some may argue that this is an unjustified reach of government power against states’ rights, but those making such an argument would be incorrect: the ‘encroached upon’ areas have only themselves to blame because they did not fulfill the responsibilities associated with their afforded power. Also, the Elections Clause of the Constitution affords the Federal government the power to supersede state power when it comes to election regulations.
A voter PIN should eliminate almost all voter fraud while being convenient enough to keep the actual vote-casting process fair. A stronger “refereeing” role needs to be taken by the Federal government to ensure that states behave appropriately and fairly when conducting elections. Finally, appropriate access to early balloting through readily available absentee ballots should ensure that everyone who wants to vote has the opportunity to do so. Overall, creating and maintaining an effective, fair and honest voting system is rather simple if one does not attempt to derail it through the addition of unnecessary complexities.
Citations –
1. Burden, B, et al. “The effects and costs of early voting, election day registration, and same day registration in the 2008 elections.” Pew Charitable Trusts. Dec. 21, 2009.
2. Fitzgerald, M. “Greater Convenience but not Greater Turnout: The Impact of Alternative Voting Methods on Electoral Participation in the United States.” American Politics Research 2005. 33:842-67.
3. Gronke, P, Galanes-Rosenbaum, E, and Miller, P. “Early Voting and Turnout.” PS: Political Science and Politics. 2007. 40:639-45.
4. EAC. “The Impact of the National Voter Registration Act of 1993 on the Administration of Elections for Federal Office 2007-2008: A Report to the 111th Congress.” U.S. Election Assistance Commission. June 30, 2009. http://www.eac.gov.
5. Leighley, J and Nagler, J. “Electoral Laws and Turnout: 1972-2008.” Paper presented at the Fourth Annual Conference on Empirical Legal Studies, University of Southern California. 2009.