Wednesday, August 17, 2016

Does the Future of Polling Require a Trip to the Past?


One of the hotter “nerd” topics in politics of late is the significant inaccuracy demonstrated by public polls from numerous credible polling agencies over the last few years. These failures range from missed calls in a number of U.S. presidential primaries and Senate elections to parliamentary elections in Europe and the British vote to exit the EU, notwithstanding inaccurate polling results in other countries as well. While laypeople may not be overly concerned about these inaccuracies, those in the business, as well as a number of political scientists, are concerned, for they view polls as an important tool for understanding how people see the state of their country and how their values can influence its path. So what are the major problems creating this inaccuracy and what can be done to address them?

One fortunate thing about this problem in modern polling is that the authorities on the matter not only know there is a problem, but also seem to have a general idea of its causes. For example, two of the biggest trends creating difficulties for accurate polling are: 1) the increased use of cell phones and the resulting decrease in the use of landlines, which makes it more difficult and expensive to reach people; 2) people are less inclined to actually answer surveys even when they can be reached. These two causes are rather interesting and almost ironic in a sense.

The expansion of technology was thought to make polling more convenient and cheaper, yet the opposite seems to have occurred. The transition from landlines to cell phones has made polling more difficult in multiple respects. First, the general mobility of cell phones creates a problem in that the area code assigned to a phone may not match where the owner now lives. Obviously, asking someone who lives in Maryland about a state Senate election in Washington just because their phone carries a 206 area code will not produce an accurate or meaningful result.

Second, increased cell phone use has significantly increased the costs associated with drawing a sample through the common random-digit means. While dual sampling frames have addressed the problem of finding cell phone users, Federal law reduces general polling efficiency. In the past, automatic dialers were utilized to speed through numbers that were disconnected or unanswered, only passing the call to a live interviewer when a person picked up.

However, the FCC has ruled that the 1991 Telephone Consumer Protection Act prohibits calling cell phones through automatic dialers. With call ratios commonly exceeding 10 times the desired end result (i.e. for a survey response of 1,000 people at least 10,000 numbers are commonly dialed), having these calls made by live interviewers significantly increases costs relative to auto-dialers. Furthermore, all survey participants must be compensated for the call resources consumed (commonly cell phone minutes); in a landline-dominant world any required compensation was much cheaper than in a cell-phone-dominant one.

Making matters worse, the transition to “cell phone only” individuals has followed the typical rapid adoption path of proven technology: in the U.S. the National Health Interview Survey identified only 6% of the public as using only cell phones (no landline) in 2004, a figure that rose to 48.3% by 2015, with an additional 18% almost never using their landline. So in a sense almost two-thirds (66.3%) of the U.S. population was more than likely not reachable via landline in 2015.1

Obviously, even if a pollster is able to reach an individual, that is only step one in the process, for that respondent must be willing to answer the questions asked. Unfortunately for pollsters, general response rates have collapsed in a continuous trend from about 90% in 1930 to 36% in 1997 to 9% in 2012.2,3 Not surprisingly, there is concern that this lack of success produces an environment where those who do respond are not an accurate representation of the population the poll is meant to describe. While some studies have demonstrated that fancy statistical footwork (so to speak) has so far been able to neutralize these possible holes, most believe it is only a matter of time before these problems can no longer be marginalized.3

This dramatic reduction is somewhat ironic, especially in an Internet era; while a number of people are more than content to spill their guts on various social media sites about the intricate details of their lives, down to mundane things like pictures of the lunch they are about to eat, they are less willing to participate in public polling. Some theorize that Americans as a whole are too busy to answer polling questions, but this explanation does little more than paint most of those Americans as shallow, for it would be easy for most of them to make time if they so desired.

Another theory is that the digital age has made actual social interaction more awkward (less comfortable); people are comfortable posting various types of information on social networks because the interaction is indirect, with a time gap, and typically with somewhat known individuals, online “friends”, whereas polls are direct, real-time interactions with a stranger. This theory holds much more water than the “not enough time” theory, but it is also more problematic because it demands a significant personality shift away from how society seems to be trending.

For example, cell phones offer a more effective means of call screening, and a number of individuals are unwilling to answer calls from unknown numbers unless one is expected (like the results from a job interview). This behavior may also explain why older individuals, those born before the digital age, are much more likely to answer pollsters' questions; they live outside this digital bubble and have not had their personalities influenced by it.

A third theory is that people before the digital age were more likely to respond to pollsters because of the psychological belief that answering those questions granted validity and even importance to their opinions due to the nature of the medium, especially relative to those who were not polled. Now, in a digital age where anyone can have a Facebook page or a blog to broadcast their opinions to the world, polling carries less psychological value as a medium for self-expression. Add to this the fact that the information-saturated environment of the Internet has muddied the waters, so to speak, regarding what information is important and what is meaningless. Overall it could be effectively argued that most people no longer get an ego boost from participating in polls, so little to no value is assigned to that participation; combined with greater social awkwardness about participating, this further drives down participation probabilities.

What can be done about these issues? The most obvious suggestion is that just as polling moved from face-to-face interviews to the telephone thanks to the advancement of technology, polling must once again evolve from telephones to online. While the most obvious suggestion, such a strategy has numerous problems. The first and most pressing concern is that Internet polls on meaningful political issues run by reputable companies have response rates similar to telephone polls. However, the bias associated with respondents switches from older individuals to younger individuals, for a vast majority of Internet use is performed by the young. Also, drawing a statistically random sample through the Internet seems incredibly difficult in general, and without a random sample, bias is almost guaranteed.

Polling can be conducted on either a probability or a non-probability basis. Probability polling involves creating a sample frame: a randomized selection from a population via a certain type of procedure with a specific method of contact and medium for the questions (data collection method). At times this is easy, like using an employee roster at company A to ask about working conditions; other times it is difficult, especially on larger state/national questions, because the sample population is larger and more disorganized, creating logistical and financial problems in devising an appropriate sample frame.

Non-probability samples for polling are drawn simply from whatever suitable collection of respondents is available, largely via a convenience sample (i.e. those who can most easily be recruited to complete the survey). Internet polling is largely non-probability based. This structure has problems because respondents self-select, making it more difficult to statistically project the opinions of those polled onto the general population within the typical margin of error. There are also problems in comparing the survey population to any target population, creating unknown bias. The inherent age and ethnicity bias of online polling also persists. Some services attempt to overcome bias via weighting, pop-up recruitment and statistical modeling.

Weighting is commonly used when a sample contains a portion of a particular demographic that is too small to represent the total target population (i.e. for a national poll only 17% of the respondents are women). With the national population of women hovering around 51%, the preferences of the women in the sample would be “weighted” three times as much. Obviously the most immediate concern with this method is that with a smaller number of respondents the weighting system can “conclude” that more extreme/uncommon views are more widely held if such views happen to be present in the survey. Weighting can also lead to herding and other possible statistical manipulation, especially when compared against other similar polls. Overall, one of the biggest problems with weighting is that it is rarely reported directly to the public in the polls presented by media outlets.
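To make the mechanics concrete, here is a minimal sketch of how demographic weighting (post-stratification) works, using the 17%/51% example above. This is an illustration, not any particular firm's method; the record fields and numbers are invented for the example.

    from typing import Dict, List

    # Shares observed in the sample vs. known population shares (e.g. census).
    sample_share = {"F": 0.17, "M": 0.83}
    population_share = {"F": 0.51, "M": 0.49}

    def weight(person: Dict) -> float:
        # A woman's response counts 0.51/0.17 = 3x; a man's counts ~0.59x.
        g = person["gender"]
        return population_share[g] / sample_share[g]

    def weighted_support(people: List[Dict]) -> float:
        # Weighted share of respondents supporting candidate A.
        total = sum(weight(p) for p in people)
        support = sum(weight(p) for p in people if p["supports_a"])
        return support / total

    respondents = [{"gender": "F", "supports_a": True},
                   {"gender": "M", "supports_a": False},
                   {"gender": "M", "supports_a": True}]
    print(round(weighted_support(respondents), 2))

With only a handful of women in the sample, a few unusual opinions among them get multiplied roughly threefold, which is exactly the risk described above.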

Pop-up recruitment attempts to create a more demographically appropriate sample by having advertisements for a particular poll appear across a variety of websites, where some of those websites are primarily visited by young black men, others by middle-aged white women, others by gay Hispanic men, etc., hoping to pull in enough diversity to represent all parties. These pop-ups also attempt to reduce “busy work” for participants (i.e. filling out personal information forms) by using proxy demographics based on browser visitation histories. While such a strategy is viable, its consistent, long-term accuracy is questionable. A meaningful problem is that the tools made to smooth out the accuracy of these methods do not appear universally applicable. Another problem is that only more politically engaged individuals bother to take note of pop-up recruitments, and they may have certain characteristics that skew accuracy.

Finally, some organizations like RealClearPolitics.com and FiveThirtyEight.com use poll averaging, including weighting by historical accuracy and by specific characteristics associated with certain demographics, to create election models and “more complete” polls. While some champion these methods as the future, there is the concern that if most polls become Internet based, then the feedstock for these aggregates will carry the same general flaws, and the aggregate polls will inherit them, resulting in no meaningful improvement in value or accuracy.

It is interesting to note that the age bias associated with Internet polling is naturally self-correcting. Similar to how telephone bias towards wealthier households existed in the 1940s and 50s and then self-corrected as telephones became more widespread, Internet polling will also self-correct, though in a somewhat more grisly fashion. The problem in Internet polling is not a lack of availability, but a lack of usage. As older individuals who have little interest in using the Internet die off and are replaced in their age bracket by individuals who became familiar with the Internet in their late 20s, age bias should significantly decrease. However, it is unlikely that polling can wait the two-plus decades for this “natural” self-correction, and even then there is no guarantee that the inherent issues with Internet polling will be solved.

While producing an accurate and meaningful sample is becoming more difficult and expensive, it certainly is not impossible, and various polls have sufficient size and representation. So what could lead to inaccuracies in these polls outside of sampling issues?

The two most common problems in polling accuracy are the inability to predict how a voter will change his/her mind before actually voting and inaccurate conclusions regarding who will actually vote. Not surprisingly, the former is less the fault of the polling organization than the latter. While they can certainly attempt it, it really is not the responsibility of a polling organization to accurately forecast the probability that a voter who reports a desire to vote for candidate A will change that desire and vote for candidate B two weeks later. However, polling organizations can do a better job of determining the likelihood of a particular individual voting and weighting that probability into their conclusions.

For example, this “probability of voting” factor is another significant problem with Internet polling: while 95% of all 18-29 year-olds use the Internet, they made up only 13% of the total 2014 electorate. Conversely, while only 60% of those 65 and older use the Internet, and a significant percentage of those use it only for email, individuals 65 and older made up 28% of the 2014 electorate.2,4 Therefore, Internet polls completely miss a portion of the electorate and heavily overvalue the opinions of another portion. That is not the only problem; a Pew study suggested that non-probability surveys, i.e. Internet surveys, struggle to represent certain demographics: estimates for Hispanic and Black adults carry an average estimated bias of 15.1% and 11.3%, respectively.2

It is important to note that voters reporting a higher probability of voting than they actualize is nothing new. Over the years it has been common for 25% to 40% of those who say they will vote to end up failing to do so.2 To combat this behavior, polling organizations attempt to predict voting probability through the creation of a “likely voter” scale.

One method polling organizations utilize to estimate the likelihood of voting is to review turnout levels in previous elections while applying appropriate adjustments for voter interest due to the type of candidates, the prominent issues, the competitiveness of the races, ease of voting and the level of voter mobilization in the polling area.2 These estimates produce a range for voting probability, a floor and a ceiling, which is used to create a cutoff region.

A pool of possible voters to compare against this voting range is created based on answers to a separate set of questions. For example, a recent Pew analysis utilized the following questions to determine voting probability:2

- How much thought have you given to the coming November election? Quite a lot, some, only a little, none
- Have you ever voted in your precinct or election district? Yes, no
- Would you say you follow what’s going on in government and public affairs most of the time, some of the time, only now and then, hardly at all?
- How often would you say you vote? Always, nearly always, part of the time, seldom
- How likely are you to vote in the general election this November? Definitely will vote, probably will vote, probably will not vote, definitely will not vote
- In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote? Yes, voted; no
- Please rate your chance of voting in November on a scale of 10 to 1. 0-8, 9, 10

From these questions, statistical models are created that assign a probability of voting to each participant based on their answers and the weighting of each question. Sometimes these models are also reused in other concurrent elections or even future elections, but when this occurs one must be careful to ensure the underlying assumptions remain appropriate for accuracy. This modeling method is viewed as more accurate because it incorporates all of the questions instead of focusing on one or two, like the last one (“Please rate your chance of voting in November on a scale of 10 to 1”). This method also still allows respondents who answer low on one particular question, such as not having voted in the last election, to be counted as possible voters.
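As an illustration of the general shape of such a scale (not Pew's actual model; every point value and the cutoff below are invented for the example), a simple additive version might look like this:

    # Map each answer to points; all values here are illustrative assumptions.
    ANSWER_POINTS = {
        "thought_given":    {"quite a lot": 2, "some": 1, "only a little": 0, "none": 0},
        "voted_precinct":   {"yes": 1, "no": 0},
        "follows_politics": {"most of the time": 2, "some of the time": 1,
                             "only now and then": 0, "hardly at all": 0},
        "vote_frequency":   {"always": 2, "nearly always": 1,
                             "part of the time": 0, "seldom": 0},
        "will_vote":        {"definitely": 2, "probably": 1,
                             "probably not": 0, "definitely not": 0},
        "voted_2012":       {"yes": 1, "no": 0},
        "self_rating":      {"10": 2, "9": 1, "0-8": 0},
    }

    def likely_voter_score(answers: dict) -> int:
        # Sum points across all seven questions rather than leaning on any one.
        return sum(ANSWER_POINTS[q][a] for q, a in answers.items())

    respondent = {"thought_given": "some", "voted_precinct": "yes",
                  "follows_politics": "most of the time",
                  "vote_frequency": "nearly always", "will_vote": "probably",
                  "voted_2012": "no", "self_rating": "9"}

    score = likely_voter_score(respondent)   # 0-12 scale in this sketch
    print(score, score >= 7)                 # hypothetical cutoff of 7+

Note that this respondent did not vote in 2012 yet still clears the hypothetical cutoff, illustrating how a multi-question model keeps such people in the likely voter pool; a real model would derive both the question weights and the cutoff from the estimated turnout range.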

While asking these types of questions is appropriate, polling organizations may hurt themselves because, while there is no single silver-bullet question to determine whether or not a person votes, different organizations use different questions to produce their probability results. This lack of standardization can create inefficiencies; it seems to make more sense for all organizations to use the same questions to determine voting probability, which would better identify which questions are good predictors.

While past voting history is not the only meaningful factor, it has been demonstrated to be a rather effective means of predicting future turnout.2 However, there is a concern that poll participants may misremember their voting history, especially because voting takes place so rarely and is a rather unmemorable event for most. Therefore, pollsters also attempt to measure voting probability by including voter history from voter registration files, but this method is somewhat inconsistent between polling organizations. The reason for this inconsistency is that most surveys still rely on random phone dialing or Internet recruitment, and it is difficult to acquire the names and addresses needed to tie the poll roster back to the voter file, due to increased workload or lack of willingness on the part of respondents.

Another way that voter registration files could be useful is in eliminating some of the randomness when using the phone to produce a poll roster. For example, matching telephone numbers to a voter file can produce information that narrows the number of calls needed to fill a poll roster for a certain demographic. Some organizations have claimed to reduce the number of calls required to fill poll rosters by up to 70% using this type of method.5 Such a method is also thought to reduce problems associated with sampling error.
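A minimal sketch of that kind of pre-screening, assuming a hypothetical CSV voter file keyed by phone number (real voter files differ by state and rarely match this cleanly):

    import csv

    def load_voter_phones(path: str) -> dict:
        # Map phone number -> registration record (vote history, etc.).
        with open(path, newline="") as f:
            return {row["phone"]: row for row in csv.DictReader(f)}

    def screen_call_list(dialed_numbers, voter_phones, keep):
        # Keep only numbers matching a registered voter who fits the target,
        # so live interviewers dial far fewer dead-end numbers.
        return [n for n in dialed_numbers
                if n in voter_phones and keep(voter_phones[n])]

    voters = load_voter_phones("voter_file.csv")   # hypothetical file
    call_list = screen_call_list(
        ["2065550101", "2065550102"],              # stand-in dialing list
        voters,
        keep=lambda rec: rec.get("voted_2014") == "Y")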

Interestingly enough, the general response of the polling community to the issues of inaccuracy, smaller samples and increased costs has been to depend more on technology, data mining and statistical analysis, which have only demonstrated the ability to “hold off” worse results and do not appear to offer any direct means of improving the situation.

However, one wonders why polling organizations do not simply return to their roots, in a sense. Instead of resorting to more technology and more statistics, why not simply “go out among the people”? What would be the drawbacks of the larger organizations producing branch offices of sorts, setting up polling stations in high-traffic areas to directly engage individuals instead of calling at awkward times or hoping to pull proper samples from various politically motivated Internet users while the rest ignore those pop-ups advertising a poll?

To facilitate better interaction with possible poll responders, rather than an individual standing around with a survey and a clipboard, which puts a number of people immediately on guard and leads some to purposely alter their paths to avoid the clipboard individual, the polling agents should set up a table clearly labeling their intent. Also, to compensate individuals for their time, the polling agents should offer small items in exchange for answered questions: Frisbees, lighters, little Nerf footballs, etc. It would surprise a number of individuals how many people walking down the street on other business would be willing to spend 5-10 minutes answering questions for a free little Nerf football. It would be easy to set up such an environment rather seamlessly at a farmer’s market or in a shopping mall.

The results could then be reported to a main “data center” for the polling organization and pooled into a single poll on a national issue. Such a method should more than likely reduce overall costs while producing more accurate information. Of course this is only one possible means of addressing the problem without hoping that technology can “magically” fix it.

In the end, the “crisis” in polling might simply be an internal one of little relevance. For example, is polling even important anymore with regards to elections? Suppose candidate A has ideas A, B and C and opposes ideas D, E and F. If polling demonstrates that candidate A’s constituency values ideas A, C and F, doesn’t candidate A look bad changing his position on idea F from con to pro based on that data? The change would be based on public opinion, not an actual change in the facts surrounding idea F. Typically, governance by political polling leads to poor governance.

Another important question is why it is important that the public have polling information available. Are polls only useful as a measuring stick for the level of value the rest of society places on a particular issue or the popularity of a particular candidate? If so, what is the value of John Q. Public having this information? Certainly a given person will not change their value system just because a public poll seems to produce a differing opinion.

The reality of the situation is that, for the most part, the polling information available to candidates for a particular office is more accurate and advanced than the information given to the public. Also, only those who work for a particular issue or candidate seem to have enough motivation to be influenced by a poll result to work harder for their cause. Overall, is media-reported polling just another something for the media to talk about, a time filler? Maybe the real issue with public polling is not how its accuracy can be improved or maintained, but what role it really serves in society. Perhaps changing the nature of polling back from an indirect activity on a computer screen or telephone to a direct face-to-face exchange between people can help answer that more important question.


--

Citations –

1. Blumberg, S. and Luke, J. “Wireless substitution: early release of estimates from the National Health Interview Survey, July–December 2015.” National Health Interview Survey. May 2016.

2. Keeter, S., Igielnik, R. and Weisel, R. “Can likely voter models be improved?” Pew Research Center. January 2016.

3. DeSilver, Drew and Keeter, Scott. “The challenges of polling when fewer people are available to be polled.” Pew Research Center. July 21, 2015. http://www.pewresearch.org/fact-tank/2015/07/21/the-challenges-of-polling-when-fewer-people-are-available-to-be-polled/

4. File, T. “Who Votes? Congressional Elections and the American Electorate: 1978–2014.” US Census Bureau. July 2015.

5. Graff, Garrett. “The polls are all wrong. A startup called civis is our best hope to fix them.” Wired. June 6, 2016. http://www.wired.com/2016/06/civis-election-polling-clinton-sanders-trump/

Wednesday, July 13, 2016

Forming the Battle Plan for Addressing Teaching Reform in the 21st Century

The notion of education reform is certainly not a new concept, but it seems to accomplish less and less meaningful and appropriate change as the years advance. One of the major reasons various reform movements appear to produce little success is too much focus on specific “pet” methods without critically analyzing their applicability in large-scale environments. Instead of focusing on how to better fire teachers, lauding some trendy, non-scalable niche example as the solution, and looking to divert money to charter schools that perform no better than, and sometimes worse than, their public school competition, reformists should systematically look at the system, identify the flaws and then act to remove those flaws with scale-appropriate solutions. So what are the important elements of advancing education that reformers tend to get wrong?

An important element that must be addressed in education is facilitating student motivation, tied to career prospects, at an early age to ensure appropriate enthusiasm. Unfortunately, not all students appreciate and understand the underlying benefits of education, the acquisition of information in general, and thus they can reject its importance. If a student does not possess the drive to learn through some form of motivation, then any teacher, regardless of overall quality, will struggle to transmit knowledge to that individual. Most reformists incorrectly believe that it is the sole responsibility of the teacher to nurture and cultivate any motivational potential in a student. The idea that it is the responsibility of teachers to motivate their students is ridiculous due solely, but not limited, to the vast diversity in the psychological make-up of their students. Asking teachers to juggle numerous different strategies to ensure student motivation is asking for something completely unreasonable and untenable.

Most of the time, motivation for learning comes from engaged and caring parents, for it is standard psychology that most children want to receive praise from their parents by acting in a manner that will be received positively. Even for those who do not fit this profile, an educationally engaged parent can use his/her position to command the child to “care” somewhat about education via either carrot or stick motivators. If the parent is not engaged in the value of education, the student needs to find motivation elsewhere, either through competition with other students or through their own desires, and not expect such a void to be filled by the teacher. Can a teacher fill it? Yes, but it should not be expected. Overall, though, none of these motivating factors are relevant if they are not directed towards a meaningful conclusion.

Therefore, the entire process of education must be more cooperative, both from the home environment and the school environment, in identifying the passions and interests of students and applying those interests to the education process, largely by demonstrating how even so-called “mundane” topics like math and the various sciences tie into those passions. With this methodology, education becomes an amplifying positive force for a particular passion rather than a negative, detracting and distracting force. In addition, not only will this process provide internal motivational fuel for the student (i.e. “I want to be an astronaut”), but it will also provide a road map of sorts to achieving that passion, for in the past there have been plenty of educationally motivated students who fell short because they were ignorant of the prerequisites and other requirements demanded by their passion.

Achieving this methodology will highlight the importance of guidance counselors, which has waned in modern times. Early in a student’s academic career (1st/2nd grade), guidance counselors should be the principal actors in identifying the student’s passions and deducing the best career path for exercising them. Every two years there should be a “check-in” period to reassess passions and interests and formulate a new path if needed. This method allows guidance counselors to actually perform their assigned role and no longer burdens teachers with a task outside of their intended role, motivating the student. Teachers can instead focus on providing an optimized educational environment in which to instruct students, an actually appropriate expectation, rather than playing cheerleader to the individual tastes of their students.

Proper management of student expectations is also important for increasing the effectiveness of education. Course syllabi must be presented early (day 1 or 2) and be transparent about how grades will be produced, what class behavior is expected, what students are expected to learn, the schedule of events and special projects, etc. Setting expectations regarding instruction is also essential, for despite what some critics would like the public to believe, education cannot be exciting and entertaining all the time, or even most of the time. Certainly quality teachers can add dynamic elements to lectures to produce a more “inspirational” product, but no one can make teaching something like a literature review for a research paper, to ensure proper background and sourcing, fun. Such a task is one of drudgery that demonstrates the importance of gumption and focus in the educational process.

Tied to the above point, another important element is to psychologically prepare students to embrace the discomfort of learning. Some argue that learning is not fun and that education needs to reflect that, but it can be countered that such an environment has already been attained for a number of students; this is a major problem, for if students regard learning and education as painful and frustrating, then they will be less interested in engaging in the process and will look for shortcuts (i.e. cheating) just as readily as if they thought learning should always be fun and exciting.

Instead, one must frame the discomfort of learning in the context that it is frustrating when one does not know something one wants to know, but that proper instruction and hard/smart work make that frustration ephemeral. Basically, learning is only “not fun” when no progress is being made. If progress is made (i.e. some knowledge is acquired piece by piece), then learning produces a noticeable sense of accomplishment, and the pain/frustration is limited and short-term. Therefore, one of the chief strategies in the educational process is to focus on why someone is not making progress and rectify it. This is not to say that education and learning are always effortless, but there is always a purpose to the effort.

One of the more hotly debated elements of education is the structure of how information is transmitted from the teacher to the students. Many modern “educational reformists” lament and criticize the continued dominance of traditional education, in which a teacher lectures students on a given topic. These individuals frequently cite the advantages of engaging in teamwork-based activities and focusing on the Socratic Method (SM) of teacher-student engagement in lieu of basic lecturing.

The most significant advantage of the SM is that the interaction between the teacher and the individual through direct question-and-answer sessions increases the probability of understanding due to active rather than passive learning. During “traditional” lectures, students must rely on self-motivation to ensure dynamic learning rather than hoping for learning through osmosis (in a sense). The SM takes some of the motivation burden off of the student through the direct discussion of the topic with the teacher.

Unfortunately, most “educational reformists” lack classroom experience and seemingly fail to realize that most public schools have large class sizes (25+ students, usually more) that make the administration of the SM rather difficult without a scattershot strategy (randomly engaging certain individuals, not everyone). A meaningful concern with the SM in large groups is that direct one-on-one engagement can cause other students to lapse in their attention, limiting the effectiveness of the current learning experience. One thing that lectures are not given credit for is that they provide a meaningful focal point for all students, which direct one-on-one discussion can lack. Also, too much interaction can lead to time crunches when it comes to covering all of the requisite information.

This misinterpretation of the “universal applicability” of the SM in public institutions largely exists because “reformists” focus on the practices of schools with small overall enrollment and class sizes, typically heavily privately funded charter schools, as the basis for determining “what works in the classroom” and what should be applied in public education. This mindset does nothing but make real and appropriate reform more difficult. Overall, as noted above, the appropriate way to instruct in the modern “educational environment” appears to be a combination of the SM and lecture: periodically and consistently engaging random students in brief 1-2 question sessions that capture the individual’s attention, but do not expend enough time to significantly threaten the loss of attention from the rest of the class.

The matter of teamwork is a little more interesting because the advantages of teaching to teams are significant. For example, working in a team can provide a less stressful environment for certain individuals, eliminating the detriments of working alone that could negatively impact the educational process. It can aid interpersonal relationship development by giving individuals experience working through problems with others in low-stress/low-stakes environments. It also provides growth and intellectual development by exposing individuals to additional and different viewpoints and interpretations of the lessons from other team members, which may help augment understanding of the information.

However, there are some disadvantages to working in a team. The most pressing issue, which most either do not want to talk about or are not aware of, is that most of the above advantages are born from motivated students who want to learn and want to actively interact with their fellow classmates. Without this motivation, weaker and/or less enthusiastic students can hide behind stronger students, letting those individuals do the work for the team while not focusing on learning the material themselves. This strategy of “let the smarter kids who care about their grades do the work because they don’t want to fail” has always been a problem in teamwork-related elements of primary and secondary education, especially for large, long-duration projects.

This behavior is manageable in the scope of small assignments, for while homework and in-class work could be performed in groups, quizzes and tests would still be individualized, forcing students to limit the practice of the strategy because a vast majority of the grade is still based on their own accumulation and practice of course knowledge. However, for large projects this behavior can be significantly detrimental to the team as well as to individuals, because it is difficult for the teacher to dissect how important each student’s contribution was to the success or failure of the project.

One means of addressing this problem has been to have students evaluate the performance of their teammates at the conclusion of any big project, but such a method always draws concerns of bias between teammates. An alternative option for big projects may be weekly evaluations of performance on a 1-10 scale over 3-4 different categories, with space to explain why each numeric score was given. The teacher can keep these evaluations and use them as a metric for how the dynamic of the team may have changed, and as a more accurate assessment of how the students felt the workload was divided, instead of relying on a single evaluation at the end of the project, when emotions and tensions can influence the product and spotty memory can interfere with accuracy.
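As a rough illustration of how a teacher might aggregate those weekly evaluations (the category names, the 1-10 scale handling and the flagging rule here are all invented assumptions):

    from collections import defaultdict
    from statistics import mean

    CATEGORIES = ["effort", "quality", "cooperation", "communication"]

    # evaluations[student][week] = list of teammates' averaged 1-10 ratings
    evaluations = defaultdict(dict)

    def record(student: str, week: int, scores: dict) -> None:
        # Store one teammate's weekly ratings, averaged across categories.
        evaluations[student].setdefault(week, []).append(
            mean(scores[c] for c in CATEGORIES))

    def weekly_averages(student: str) -> list:
        # One averaged rating per week, in week order.
        return [mean(v) for _, v in sorted(evaluations[student].items())]

    def flag_decline(student: str, drop: float = 2.0) -> bool:
        # Flag a sharp week-to-week drop, signaling a change in team
        # dynamics worth the teacher's attention.
        avgs = weekly_averages(student)
        return any(a - b >= drop for a, b in zip(avgs, avgs[1:]))

    record("Jamie", 1, {"effort": 8, "quality": 7, "cooperation": 9, "communication": 8})
    record("Jamie", 2, {"effort": 4, "quality": 5, "cooperation": 5, "communication": 4})
    print(flag_decline("Jamie"))   # True: average fell from 8.0 to 4.5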

Another concern with teaching to teams is that individuals with weaker voices or lower confidence can have their opinions overshadowed by stronger-voiced individuals, which can further reduce their already wavering confidence. Handling this problem can be tricky because dominating personalities are not necessarily malicious, and teachers cannot proctor each group to ensure all opinions are being heard and fairly evaluated. There are two direct ways of lessening problems stemming from this type of personality clash. First, the teacher can periodically poll the group when asking for an answer, inquiring how each student views the problem. Fortunately, such a strategy does not appear too time consuming, because once per class should be enough for shyer students to have their voices heard. Second, allow the students to form their own teams.

This issue of team formation creates a third, smaller problem. Clearly, allowing students to form their own groups can eliminate a large amount of potential interpersonal conflict within the team; however, allowing students to associate only with what is already familiar mitigates many of the advantages born from teams: the ability to work with the unfamiliar and understand different types of thought. Overall, a middle solution appears most appropriate; before selecting the teams, the teacher asks each student to indicate on a piece of paper the 3 classmates he/she would not like to be teamed with, and then seeks to accommodate as many of these wishes as possible. This strategy limits interpersonal conflict within a team by separating individuals who might have outside conflicts, while retaining enough differentiation to ensure value from working in the team. Note that it is not the responsibility of the teacher to resolve these conflicts, thus they are best avoided in the classroom.
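One way to sketch that middle solution, assuming a simple greedy assignment pass (the team size, names and shuffle are illustrative; a real assignment might rebalance afterwards to even out team sizes):

    import random

    def form_teams(students, avoid, team_size=4):
        # avoid: dict mapping each student to the set of classmates they
        # asked not to be teamed with; checked symmetrically below.
        random.shuffle(students)
        teams = []
        for s in students:
            # Place the student in the first team with room and no conflicts.
            for team in teams:
                if len(team) < team_size and not any(
                        t in avoid.get(s, set()) or s in avoid.get(t, set())
                        for t in team):
                    team.append(s)
                    break
            else:
                teams.append([s])  # start a new team if none fits
        return teams

    roster = ["Ava", "Ben", "Cruz", "Dina", "Eli", "Fay", "Gus", "Hana"]
    wishes = {"Ava": {"Ben"}, "Cruz": {"Dina", "Fay"}}
    print(form_teams(roster, wishes))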

Overall, with regards to teaching to teams: when possible, teams should be used during basic instruction, including lectures with a level of interactivity, but tests should be individually based to ensure a strong motivating “carrot” for individual learning. Team interactivity and creation should follow the above suggestions to maximize learning potential and effectiveness.

Another element widely touted as the “wave of the future” in education is not only in-class teamwork, but also large team projects where the team engages in a multi-week, even multi-month, task. Clearly the motivation behind this idea is that learning by doing is one of the best ways to acquire knowledge, especially for practicing critical thinking and creativity; in addition, such projects can provide a venue to evaluate the depth of that acquired knowledge by applying theoretical concepts in empirical practice.

Unfortunately, while the sentiment is understandable, a number of supporters of this methodology fail to acknowledge that such projects are very time consuming and expensive from the school’s perspective, thus such an instructional strategy is an almost guaranteed non-starter for most inner-city and rural schools. Initial project design is also important to ensure students stay on task and have organized benchmarks to document progress. This makes the introduction of such a program difficult as well, because testing the theory requires putting it into practice, which takes time and resources, and redundant projects may not be valuable depending on the subject matter.

Proponents will respond that such projects have succeeded before, citing various group projects involving building robots, devising responses to various natural disasters or culturing different types of cells to determine how they interact with various types of bacteria. While there are certainly a number of success stories regarding this method, the failures are less known because they are not made public, so it is difficult to deduce the effectiveness of such programs. Overall, it is reasonable for a high school to explore a single elective class that focuses on the completion of a large-scale project and to introduce smaller two- to three-week projects in some other classes, but any expectation that such a methodology will become the norm is foolhardy until the public school system is funded at a much higher level than at present.

The structure of grading is also an interesting issue with regards to the future of education. One of the more prominent discussions over the years has been the amount of homework that should be assigned to students. Before discussing the amount of homework, it is important to establish its purpose. For the course of this discussion, the role of homework will be defined as: a tool that gives a student a genuine means of increasing the probability of understanding particular concepts in a low-stress environment, versus proctored on-site examinations. For homework to be relevant, it must also be designed in a way that maximizes its practicality and usefulness. Rarely will reality simply hand a person a single equation or thought process that solves the problem. For example, a common math problem may read “21 divided by 4 = ???”; this is clearly not how problems are encountered in reality, with 90%+ of the work already done. Instead such a problem should be presented to the students as:

John and Suzie want to bake some apple pies for their school’s bake sale. John has collected 10 apples from the trees around his house and Suzie has collected 11 apples from the trees around her house. If it takes 4 apples to bake 1 pie how many pies can John and Suzie bake and how many apples will they have left over after all the baking is done?

From this structure, which is much more akin to reality, the student should construct the equation 21 divided by 4 = ???. So step 1, with regards to the homework aspect of knowledge evaluation, is to make sure homework problems properly represent real-life experience.
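For completeness, the worked arithmetic behind that problem: 10 + 11 = 21 apples in total; 21 = (4 × 5) + 1, so John and Suzie can bake 5 pies with 1 apple left over.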

Step 2 is to ask how homework should play into the evaluation process. One could question the fairness of homework being a significant portion, or even any portion, of the grade if its central role is that of a low-stress practice tool for understanding general overarching concepts. What if a student does not need to do the homework to understand the material because the lecture period is enough to achieve understanding? Should that student be, in essence, forced to do the homework when he/she could use that time for other activities, whether family-oriented or pleasure-based? For example, some students may not have a sufficient amount of time to do homework rendered unnecessary by already-achieved understanding because of an imperfect family life, where they have to take care of younger siblings, work nights to earn extra money to help support the family, etc.

One argument for a high evaluation metric for homework is that it provides another avenue for students who struggle with communicating acquired knowledge in a testing environment. It is hard to dispute that a test in a classroom environment inherently provides more pressure than homework assignments done in an environment of the student’s choosing. Some students do not have the ability to effectively manage this increased pressure, so their ability to demonstrate their knowledge suffers accordingly. The principal purpose of the grade for a course is to conveniently measure how well a student acquired knowledge in that course, not how well a student can manage a high-pressure situation. Therefore, a high evaluation metric allows the grades of a student who “does not test well” to more accurately reflect the knowledge acquired within the course.

Opponents could argue back that while addressing students who “do not test well” is a positive element of a high evaluation metric, it is more probable that highly evaluated homework conceals poor performance. Students can use homework to bolster overall grades that are marred by poor examination results; poor results due not to mishandled stress, but simply to lack of knowledge. Thus, this evaluation structure misrepresents a student’s knowledge of a particular topic, portraying that student as more competent than they actually are, a disservice to colleges, future employers and the students themselves. However, this analysis only seems valid if the assigned homework is of substandard quality and/or design. If the homework is properly designed to reflect the concepts of the class, then using homework grades as a countermeasure to examination grades is reasonable.

It must be remembered that the bounds of time do not only impact students. Teachers, especially those with more dynamic topics like history, find themselves having to impart more and more information over the same fixed time period. Unfortunately, the total amount of information that needs to be discussed limits the available instruction time for each specific topic. Without the ability to rigorously cover a particular topic to the point where students have been exposed to it enough to reasonably understand it, the probability that students understand the topic decreases. Homework substitutes for this lack of class time to increase learning and retention probabilities. This supplementary aspect of homework undercuts those who argue for little or no homework.

It can be argued that there is a typical perceived-knowledge vs. actual-knowledge gap for most students. There are a number of instances in school, and in life in general, where an individual may think he/she has sufficient knowledge of a given subject, but when actually tested on that topic quickly realizes that he/she does not have as much knowledge as previously thought. Homework provides a means to address this perception/reality gap before it is exposed on a test to the greater academic detriment of the student. Overall, is there a strategy that can provide a motivational aspect for doing homework while not burdening those who do not need to take advantage of its practice characteristics? The strategy below seems to be one way to address this issue.

• Homework is given out on a weekly basis. Every Monday an assignment covering all of the material scheduled for discussion in class that week is given out; the assignment is expected to be turned in at the beginning of class the next Monday (for example, homework assigned on Oct. 13 would be turned in on Oct. 20 at the beginning of class); answers for the previous week’s homework would then be posted or handed out at the end of class that Monday.

• Homework will count for 0% of the grade. The reason is that homework, as previously discussed, is designed to give the student multiple opportunities to practice learning the given material. Taking a grade from material that is supposed to be practice is not very fair. Therefore, because homework does not count for any percentage of the grade, students do not have to do it or turn it in if they do not want to.

• Grades will be determined by 4 tests: 3 section tests, each worth 25% of the grade, and 1 cumulative final worth 25% of the grade. As a partial motivator to do homework, students may retake one of the section tests if they turned in at least 75% of the assigned homework within the corresponding section and demonstrated a legitimate effort to learn from the homework. (A small worked example of this grading scheme follows below.)
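To make the arithmetic of this scheme concrete, here is a minimal sketch under the rules just listed (the data shapes, the example scores and the choice to let a retake simply replace the original score are illustrative assumptions):

    def final_grade(section_scores, final_score, homework_turned_in,
                    retake=None):
        # section_scores: [s1, s2, s3] on a 0-100 scale.
        # homework_turned_in: fraction of assignments submitted per section.
        # retake: optional (section_index, new_score); allowed only if at
        # least 75% of that section's homework was turned in.
        scores = list(section_scores)
        if retake is not None:
            i, new_score = retake
            if homework_turned_in[i] >= 0.75:
                scores[i] = new_score
        # Three section tests at 25% each plus a 25% cumulative final.
        return 0.25 * sum(scores) + 0.25 * final_score

    # Retake of section 1 allowed (80% of its homework was turned in):
    print(final_grade([70, 85, 90], 80, [0.8, 0.5, 1.0], retake=(0, 88)))
    # -> 0.25*(88 + 85 + 90) + 0.25*80 = 85.75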

Overall, while the above suggestion is merely that, a suggestion, the discussion has focused on two important principal issues in the homework debate. First is the tension between motivating homework and maintaining the practice characteristic of homework designed to enhance learning. Second is the opportunity cost of doing homework versus undertaking other activities. The chief element of this second issue boils down to the immediacy of the opportunity cost. The time crunch created by homework, which is frequently associated with increased stress, typically develops through two mechanisms. First, most students, especially as they advance in grade, have to deal with multiple subjects demanding multiple solution methodologies. Second, homework frequently functions on daily turnover. While the individual assignments may not count for much, having to sacrifice enough of them to more important tasks (like the job that helps your family) can add up quickly, damaging the overall grade under a high evaluation metric (commonly suggested for motivational purposes).

Unfortunately, there does not appear to be a single magic bullet for both issues, but expanding the homework turnover scope could certainly help. As suggested above, assigning homework at one particular time to cover the entire week gives students more flexibility in addressing it. If their time is demanded by a particular activity on a given night, time can be budgeted later in the week to complete homework that would otherwise have been missed. Another potential advantage of assigning homework in greater-than-daily quantities is that it may be easier for students to make connections between building-block concepts when doing “three days’ worth” of homework in one sitting instead of doing the work over a three-day period with multiple interruptions. Such a system could also encourage more ambitious students to read ahead in an attempt to do the homework before the class lesson addresses the material.

One question that comes to mind for such a system is how it changes the grading burden on teachers. Under a more expanded turnover system with a firm homework hand-in date, teachers may have more homework to grade at once, but by providing a universal answer key after the hand-in, the teacher gains more flexibility in the time allotted to grade the homework and return it to the student. This increased time flexibility is important because grading homework is one of the most daunting and potentially frustrating tasks for a teacher, one that is commonly overlooked by most education reformers when considering teacher workload. Teachers also have lives outside of the educational environment, just like students, and may want to devote certain periods of that time to other tasks.

Another useful change to improve the educational experience would be more cooperation among teachers within a given field of instruction. For example, synchronizing the free/prep period for all teachers of the same general subject matter, i.e. all English teachers, would provide opportunities for them to converse regarding the instruction of certain subjects within the field. In fact, it would be appropriate for teachers to hold a weekly meeting during one of these prep periods to maximize problem solving and instructional capacity.

Obviously, one of the most critical elements of improving the educational system is to create an environment where the profession of teaching is respected once again. One aspect of this change would require teachers having more power in the classroom to control improper behavior. One means of accomplishing this is to allow teachers to negatively influence a student’s grade when that student is a disruptive influence on the learning environment. A good pilot program would give the teacher the authority to deduct up to a maximum of 10% from an individual’s grade for misbehavior at certain predetermined intervals.

Some might immediately object to such a system with the argument that behavior should have nothing to do with determining the class grade, because the grade should be exclusively contingent on demonstrations of acquired knowledge through prescribed evaluation metrics like homework, quizzes and tests. While on its face this objection may seem appropriate and fair, the problem is that it views the behavior in a vacuum. Basically, it rests on the premise that negative behavior only produces a detriment to the misbehaving individual, and that if the individual can perform at a certain level on the evaluation metrics without showing respect or paying attention in class, then there should be no punishment. However, such logic is clearly incorrect, because in the classroom environment a vast majority of negative behavior is detrimental to the overall environment, disrupting the ability of all parties to learn the information. The behavior commonly harms multiple parties even when it is undesired or unwarranted by those parties.

For those who attempt to retain the purist assumption from above despite this reality, it is important to acknowledge that tolerance for such negative behavior is typically not extended in the professional workplace; if one of the chief elements of education is to prepare an individual for a career on some level, then such behavior should not be allowed in the classroom without consequence either. For example, if an individual performs his/her job well but fosters such a negative environment that it hurts the performance of others to the point where the company as a whole suffers, that individual will typically either be told to change the behavior or be fired. Legal barriers prevent students from “being fired” from the classroom or from the education system in general, thus the best secondary option is to affect grades.

Another possible argument against this strategy is that the individuals with the highest probabilities of misbehavior are those who care the least about grades and school in general, so how would this punishment system act as a meaningful deterrent? Well, if the suggestions above relating to linking various aspects of education to the successful advancement of one’s passions are implemented, then a vast majority of individuals should care about their grades to the point where behavior can be reasonably managed through such a punishment. Even for those who do not accept the link between their passions and education, producing no consequence for disruptive behavior is irrational. For example, it is widely acknowledged that various people will exceed legal speed limits over the course of their driving careers; with this reality in mind, should there be no punishment for violating these laws? Certainly not, for it makes no sense to eliminate a valid and appropriate punishment for the violation of a valid social norm or law. Understand that grade reduction would only be one tool in the toolbox for teachers to address bad behavior.

Another important issue in the improvement of education in modern society is managing the integration of technology into the classroom environment. This point is certainly not unique; however, most individuals who sing the praises of technology as a “revolutionary” force in education are not teachers. Instead they are business people, entrepreneurs, educational commentators, etc., who only see the positive elements of technology in education, frequently commenting with annoyance that technology is not more widespread.

Interestingly enough, if these commentators had teaching experience they would quickly realize that technology has already penetrated almost all classrooms in the form of smartphones. Unfortunately, this presence is not positive but a net negative, producing significant distractions and emboldening those who wish to cheat on quizzes and tests. It is true that technology can provide a significant boon to education, but it can also be a significant detriment, and it is important that all parties acknowledge this reality. So what can be done to neutralize the detrimental aspects that technology can bring to education?

The main aspect of this issue is how to manage technological distractions. The best solution is to structure instruction so that there is no legitimate need to utilize the technology, and then ban its use for the duration of class time. Now, it stands to reason that technophiles would cry foul at this type of strategy, once again citing the importance of technology in the classroom, especially in sparking student interest given how thoroughly technology is incorporated into student life outside the classroom. This objection highlights a problem in the arguments presented by those who support technology in the classroom: the general drive to force technology into all aspects of the classroom. The simple fact is that most classroom activities do not benefit from the incorporation of individualized technological activity. Yes, teachers can typically instruct more effectively using programs like PowerPoint versus transparent slides or a chalkboard, but students do not significantly benefit from following along with the lecture on their smartphones or laptops.

In essence there needs to be a dividing point between when students can use technology and when they cannot, and the “cannot” would occur during the lecture portion of the class. Clearly there are very small and specific exceptions to this principle; for example when lecturing about computer programming it would make sense for students, if applicable, to be at computers applying the elements of the lecture to increase familiarity with the operation of the concepts. However, despite the erroneous beliefs of technophiles, most topics do not lend themselves to this type of interaction, thus the utilization of technology by students during the lecture will result in a reduced probability of comprehension, not an increased one.

What would possible penalties be for student-driven technological distractions? This question leads to two schools of thought relative to the expectation of respect for the instructor. Clearly one can argue that a student who does not pay attention in class, after accounting for outside psychological factors, is not showing proper respect to the instructor. However, if this lack of attention does not create a distracting environment for others (for example the student is doodling in a notebook, but not making enough noise to draw attention to the fact), should such behavior matter?

The answer boils down to two issues: what is the obligation of the student to demonstrate respect for the teacher, and what is the obligation of the teacher to ensure the student pays attention to the instructed material? The simplest philosophy is that the student is chastised for the lack of attention and told to correct the behavior, and the lecture does not continue until the student complies. The general goal of this practice is to reestablish the authority of the teacher in the classroom setting and ensure the student receives some benefit from the lecture.

A more interesting strategy is that if the student is not actively disrupting class and the behavior is on a limited scale (only one or two individuals in a class of thirty are not paying attention), then the teacher should not care about the behavior, leaving the student to understand the instructed material on his/her own. If the individual cannot understand the material then he/she should score poorly on the evaluation metric(s) that cover the particular material, which would be the fault of the student. Again it is not part of the teacher’s job to ensure that all students pay attention. If the individual can understand the material without the assistance of the lecture, why should the student be forced to pay attention to the lecture instead of engaging in a non-distracting alternative activity?

A more interesting question is what the teacher does when a number of individuals demonstrate a lack of attention, which could be viewed as a lack of respect for the authority of the teacher. As noted above the teacher has two options: 1) stop lecturing until the class ceases its lack of attention; 2) continue to lecture, placing the individuals who are not paying attention at a possible disadvantage for later evaluation metrics. A traditional and even modern viewpoint of teaching would instantly dismiss the latter option and criticize the teacher for not being able to keep the attention of the students. Of course almost all who hold this opinion have never taught a day in their life in an educational environment, thus the significance of their opinion is heavily marginalized. The problem with the first option is that the lack of attention from a student is rarely acute and typically habitual, thus correcting the behavior is more difficult than simply telling the student to pay attention. This reality is what makes the second option interesting when combined with the career affinity option discussed earlier.

One could argue that most habitual and “disrespectful” lack-of-attention behavior can be addressed by applying the above strategy of tying the passions of individuals to the subject matter taught in various classes. Thus once again, after accounting for outside factors, the chief motivation behind a student not paying attention in class would be the internal perception of redundant knowledge; basically the student already believes that he/she has a grasp of the knowledge presented in the lecture and elects to do something else.

This perception is not a significant problem because either the student is correct and should be allowed to spend classroom time doing something else while not distracting others, which only arrogant teachers would find fault with (“all students should pay attention to me”, etc.), or the student is incorrect and the perception and resultant behavior will be corrected after a poor performance on the next evaluation metric.

The above discussion demonstrates that the important concern is not an individual distracting him/herself, but an individual distracting others. It is at this point that individually utilized technology becomes the problem. All rational people will agree that there is a significant difference in noise generation between an individual doodling in a notebook or working on math homework for next period versus an individual incessantly tapping on keys/screen or periodically laughing in response to a piece of video. Basically the utilization of technology as the element of distraction dramatically increases the probability that the distraction reaches others who do not want to be distracted from the content of the lecture. Therefore, individual technology must be appropriately managed through penalties similar to those discussed above for behavior infractions.

Overall the administration of technology in the classroom is the prerogative of the teacher despite complaints from non-teachers. A problem technophiles have with this strategy is the incorrect belief that only technology can make a modern lecture innovative, dynamic and impactful. A quality teacher can give these characteristics to a lecture with just a piece of chalk and a chalkboard, and if these non-teacher commentators had any real experience in education they would have a better understanding of this reality.

One of the improvements that must be made to develop better teachers is changing the means by which training experience is acquired. Overall there is too much single-experience watching/observing versus actual multi-experience hands-on training. For example a number of training programs involve a prospective teacher sitting in and observing the behavior, style and actions of a veteran teacher. However, rarely do these prospective teachers teach the class while receiving feedback from the veteran teacher; they do little prep work/grading/discussion and do not interact with other veteran teachers either.

Instead of this old method, new prospective teachers during their “observation” period should act as teaching assistants, doing a significant amount of the grading and preparation work for the veteran instructor and teaching for a set period of time (maybe once per week). Then the prospective teacher should move to another teacher in the same subject to experience a potentially different viewpoint on how to manage a class and/or teach the subject matter. Of course the logistics associated with such a new design would require work.

Another important change to positively advance teaching is to hold charter schools to actual academic standards or disallow public funding. Some love to make the utopian argument that money does not really matter with regards to improving public education, but such arguments are incorrect and self-serving. It makes no sense that charter schools can receive public funds, but have no accountability to those who provide those funds. Therefore, charter schools must either be removed from public funding or be held accountable to the same standards as public schools.

Similarly, the return of respect to the teaching profession can never be achieved as long as organizations like Teach for America are allowed to undermine the profession by introducing unprepared individuals into it. Teach for America and similar organizations produce negative propaganda regarding teaching under the motto “it’s so easy anyone can do it”, but refuse to accept responsibility for the reality that over half of their “qualified” candidates exit the profession after only two years.

Similar to the general propaganda spread by Teach for America and other similar organizations, one must abandon the idea that teaching is an occupation undeserving of respect due to its perceived hours of operation. A common refrain in public discourse is that teaching is not difficult because “teachers get the summers off”. What these false criticisms fail to acknowledge is total hours worked versus days worked. Good teachers who care about ensuring a proper learning environment work more hours than average over the course of the week and also work over the summer. Overall quality teachers, those whom the public claims to want in schools, do not fit this “not real work” profile and are negatively impacted by its continued propagation.

It is appropriate to briefly touch on a couple of indirect methods that could improve the educational experience. First, it makes sense to follow scientific research regarding the way lighting and room color influence performance and behavior. For example it has been reported that “warm” yellowish-white light supports a more relaxing environment that promotes play and probably material engagement, standard school lighting (neutral white) supports quiet contemplative activities like reading, and “cool” bluish-white light supports performance during intellectually intensive events like tests.1 Thus equipping classrooms with LED lights that can be switched between these lighting tiers should provide useful advantages to both teachers and students.
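As a rough sketch of how such a tiered lighting scheme could be driven in software, consider the following Python fragment; the mapping of tiers to correlated color temperatures is an assumption for illustration (roughly 3000 K warm, 4000 K neutral, 6500 K cool), not a specification from the cited study:

# Hypothetical classroom lighting controller (all values illustrative).
LIGHTING_TIERS = {
    "warm":    3000,  # yellowish white: play, material engagement
    "neutral": 4000,  # standard school lighting: reading, quiet work
    "cool":    6500,  # bluish white: tests, intensive intellectual work
}

ACTIVITY_TO_TIER = {
    "group_play": "warm",
    "reading":    "neutral",
    "test":       "cool",
}

def set_classroom_lighting(activity):
    """Return the tier and color temperature (in kelvin) for an activity."""
    tier = ACTIVITY_TO_TIER.get(activity, "neutral")  # default to neutral light
    return tier, LIGHTING_TIERS[tier]

print(set_classroom_lighting("test"))  # ('cool', 6500)

With tunable LED fixtures, a teacher could toggle between presets like these as the class transitions between activities.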

Second, there is sufficient evidence to suggest that early start times in high schools and some middle schools (7:30 and earlier) have a negative educational influence on students.2-4 While this issue has received attention in the past and still receives some attention here and there, unfortunately it is not as cut-and-dried as simply starting school 30 minutes later, for there are significant logistical hurdles to the successful administration of a “later school day” policy.

One of the major problems is how to manage bus transit, for a single fleet of buses tends to service one school district or region. Tiered start times for different schools (high school, middle school/junior high, elementary school) are typically necessary for transit efficiency, allowing this single fleet to manage all schools. Change the start time for high school and the efficiency of bus service collapses unless start times for middle schools and elementary schools are also changed.
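To see why a single fleet depends on tiered bell times, consider a minimal feasibility check in Python (all times and durations invented for illustration): a bus can chain runs for multiple tiers only if the gap between consecutive bell times exceeds the duration of a run.

# Can one bus serve one school per tier? (Hypothetical times and durations.)
def can_chain(bell_times_min, run_minutes):
    """bell_times_min: sorted start times in minutes after midnight, one per tier."""
    return all(later - earlier >= run_minutes
               for earlier, later in zip(bell_times_min, bell_times_min[1:]))

tiered  = [7*60 + 30, 8*60 + 15, 9*60]           # 7:30 HS, 8:15 MS, 9:00 ES
shifted = sorted([8*60 + 30, 8*60 + 15, 9*60])   # HS moved to 8:30, others unchanged

print(can_chain(tiered, 45))   # True: one bus covers all three tiers
print(can_chain(shifted, 45))  # False: gaps too small, so more buses are needed

Pushing the high school start later without moving the other tiers collapses the gaps, and "more buses" means more money the district does not have.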

However, changing start times for these schools is not beneficial to younger students because they already start later than 8:00 am, and starting even later may even be detrimental because of the much later release times (4:00 pm or later). Not surprisingly the solution of “get more buses” is a non-starter, for most school districts are already rather cash-strapped due to tax funding dependencies and charter schools taking money from that pie as well. This transit problem and the resultant potential detriment for younger students is exactly what Montgomery County in Maryland experienced when it changed school hours in 2015.

Another meaningful logistical hurdle involves the administration of after-school extracurricular activities and how they could disrupt home life due to students arriving home at 5:30 or 6:00 pm, especially during the late fall and winter months when daylight becomes limited. There may also be increased costs for heating and cooling, especially cooling for districts in high-temperature regions, for starting later in the day means hotter average school-hour temperatures. This issue is tough because the costs could be prohibitive for some districts and meaningless for others. Of course one significant problem is that studies involving the incorporation of later school hours only seem to focus on health and/or possible changes in academic achievement and do not address obstacles to applying later school hours, which is rather ridiculous.

In the end one of the most pressing problems in education is the misrepresentation of the overall goal of education. Some reformers seem to think that the most important role for education is to foster a level of knowledge that allows an individual to gain employment in some particular field. While such a role is important, it is not so important that it should displace other important elements of education like:

1) Produce citizens that can make rational decisions, which will allow them to make positive contributions to society.
2) Produce citizens that can effectively form solutions to both qualitative and quantitative problems.
3) Produce citizens that can use both spoken and written word to effectively communicate their ideas and feelings to other individuals as well as understand and analyze the validity of the ideas and feelings of others.
4) Produce citizens that do not tolerate individuals who attempt to manipulate or deceive society for their own ends, nor those who practice and/or preach ignorance or idiocy for the sole purpose of satisfying their own personal beliefs and ends.

Overall, blind devotion to test scores and technology will not help achieve these goals, and without the ability to produce these types of individuals society becomes vulnerable to manipulators and opportunists who would produce net harms. It is the responsibility of education to produce a society that is not only productive, but also able to protect itself from these unscrupulous individuals; thus it is the responsibility of society to ensure an educational environment that accomplishes these goals. Current reformers are not offering solutions that will produce such accomplishment, thus something must change.

Citations –

1. Suk, H, and Choi, K. “Dynamic lighting system for the learning environment: performance of elementary students.” Optics Express. 2016. 24(10):A907-A916.

2. Eaton, D, et al. “Prevalence of insufficient, borderline, and optimal hours of sleep among high school students – United States, 2007.” Journal of Adolescent Health. 2010. 46(4):399-401.

3. Wahlstrom, K, et al. “Examining the impact of later high school start times on the health and academic performance of high school students: a multi-site study.” 2014.

4. Au, R, et al. “School start times for adolescents.” Pediatrics. 2014. 134(3):642-649.

Wednesday, June 8, 2016

Food Labels – Do They Properly Inform Consumers?


The unsurprising and non-controversial role of food labels is to present the ingredients and elements of a food product, both independently and in the context of dietary guidelines, to ensure consumers are informed about what they are purchasing and how “healthy” the product is. In the United States the Nutrition Labeling and Education Act (NLEA) of 1990 required the inclusion of nutrition information on packaged foods, with a few exceptions, and set the standard for how the information should be presented. This legislation was important because before the NLEA nutritional information was only required when producers wanted to make claims about specific nutritional benefits derived from consuming their product. Furthermore, the lack of standardization in the presentation of nutritional information made it difficult to contrast and compare even when the information was available. Thus the famous standardized side panel conveying nutritional information for food products was born.

Interestingly enough, very little had changed in the presentation and content of this standard U.S. labeling since the NLEA until now. Recently the Food and Drug Administration (FDA) released information regarding how this food label will change by 2018; this news was met with cheers from some health circles and jeers from others. Overall the changes are rather uneventful: an increased font size for calorie count, elimination of calories from fat, more nuanced language for per-serving and per-package identifiers including more empirically based serving sizes, gram amounts in addition to percentages for vitamins (this change seems rather meaningless), Vitamin D and potassium switching required status with Vitamins A and C, and more clarification regarding the % Daily Value footnote. The one supposed “big ticket” change is that labels are now required to break down the sugar content between natural sugars and added sugars.

The level of usefulness for the consumer in the divergence between natural and added sugars is questionable because without specifically breaking down the sugar content into its molecular constituents (glucose, fructose, maltose, etc.) the total sugar amount is still the only real meaningful piece of information. Knowing how much sugar was added versus how much sugar naturally occurs in the product is rather irrelevant to how the body will process it. Is someone really going to buy product A over product B because it has 28 grams of natural sugar versus 14 grams of natural sugar and 14 grams of added sugar? For some the answer will be yes, although most will not have a good reason why (the only real valid reason would be the contention that added sugars have a higher probability of being simple sugars and negative for health), but for most the answer will be no.

Some individuals could counter-argue that various groups like the AHA, AAP, WHO and Institute of Medicine have recommended decreasing intake of added sugars with general estimates that only about 10 percent of total daily calories should come from added sugar. While all of this is true, the problem is that differentiation between added sugar and natural sugar is more propagandized than meaningful. Again without differentiating between the specific molecules that make up the sugar content, total sugar is the only metric regarding sugar that actually matters. The propaganda stems from individuals promoting the reduced consumption of processed foods, which are more likely to have added sugars. Certainly it is true that added sugars do nothing positive to the nutritional content of the food, but again total sugar is what matters without specific differentiation.

The “debate” about added vs. natural sugar aside, in reality this new label certainly falls short of Michelle Obama’s endorsement “you will no longer need a microscope, a calculator, or a degree in nutrition to figure out whether the food you’re buying is actually good for our kids…” It is difficult to support the accuracy of that statement based on the changes; a sentiment shared by many other individuals who think the food labeling requirements should have been much more substantial.

For example applying the labeling itself is only part of the battle, for the information on the label is only meaningful when consumers read and understand it. There is some question as to whether the label needs to change, for studies have demonstrated that while a vast majority of individuals in the U.S. read food labels, this information does little to influence their food choices.1-3 However, in the EU, which has a more intensive and thorough labeling system, food labels do influence consumer choice.4

While it is difficult to directly determine whether the critical factor in this difference in behavior is born from the food labeling methodologies rather than cultural differences between the U.S. and the EU, it is also difficult to dismiss the differing labeling strategies as an influencing factor. The key difference between the two systems is that the U.S. system places a greater emphasis on the consumer to understand the nutritional context of product A and how it may differ from the nutritional context of product B, versus the more categorized labeling system of the EU in general.

For example in the UK labeling follows four core principles enumerated by the UK Food Standards Agency (FSA) for front of the package:5,6

1) Separate information must be provided on fat, saturated fat, sugars and salt; (this is also a guideline in the U.S. via Facts Up Front, but it is a voluntary program)

2) A red, amber or green color coding, similar to traffic lights, must be utilized to indicate whether the levels of those elements outlined in the first principle are high, medium or low respectively per 100 g (or ml for liquids) of product content;

3) Color metrics are established by nutritional criteria set forth by the FSA;

4) Provide portion ratios relative to the elements outlined in the first principle for color coding;

When this proposal was first made in 2006 there was significant resistance to the traffic light identification system, as numerous food manufacturers and producers questioned its use and instead favored more of a U.S.-style system using percentages of daily recommended values.5 Furthermore, food manufacturers also disagreed with the use of 100 g/ml as a standard, invoking the argument that consumers think in portions of the consumed product and would find it difficult to deduce a gram weight-based portion size. This complaint produced an alternative system using a needlessly large number of portion sizes across various food products, creating a much more difficult comparison environment for consumers; ironically, an overly complicated system was exactly the rationale food companies had used to argue against the 100 g/ml standard.

Regardless of the bumpy road to establishing a universal labeling system and the lack of ideal standardization in the UK (note the conflict between points 2 and 4 above), numerous studies have demonstrated that front-of-package simple-signal (like color coding) labeling of the “healthiness” of a food product does influence consumer choice, both by increasing the probability that the customer purchases healthier products and by increasing the probability that food producers create healthier products;7-10 traffic light systems have also proven superior to other systems like single compounded numbers or guiding-star-type systems.8,11 Therefore, clearly creating regulations regarding the nutritional or “health value” of a food for the front of the package is a meaningful step toward increasing the probability of an informed consumer.

Part of the battle for the front of the package (FOP) is not just to produce a standardized system to convey the health of the product, but to ensure genuine portrayal of the product itself. For example advertising for some food products tends to mislead consumers into thinking the product contains a larger quantity of a component than it does in reality. Such is common with fruit juice products, where pictures of fruit draw attention away from top ingredients like water and high fructose corn syrup. One element that helps support such trickery is that ingredients are only listed in order of percent amount, but the percentages themselves are not given. Actually requiring the percentages may help limit the impact of this type of advertising.

Ensuring proper labeling design is important because studies have demonstrated that simple, transparent and clear labeling engages subconscious emotional elements in the brain, including the amygdala.12-14 Therefore, the FDA may need to properly regulate front-of-the-box labeling because the side panel may be at a psychological disadvantage against the “health proclamations” that commonly adorn the front of the box in stylized and eye-catching presentations.

In the past some parties have acknowledged the importance of the front of the box and lamented the U.S. government’s ceding of its power to corporations. These parties have proposed taking back the front of the box in such a way as to “inform” consumers of whether or not their choice is a healthy one. For example one proposal is that the upper right portion of the box should contain the three most prevalent ingredients in the product, the calorie count and the number of total ingredients beyond those first three, in bold and clear font.15

The proponents of such a system believe that it will produce a means for consumers to identify healthy food versus unhealthy food that is so fast and easy it is impossible to ignore. However, the problem with such a system is that it can be easily manipulated: producers can fine-tune their products so that three seemingly healthy ingredients are the three most plentiful ingredients by an incredibly small margin over the “unhealthy” ingredients.

Also, calorie counts in such a system would have to identify serving size to be placed in proper context, and even then such counts may complicate the issue. Note that the above proposition suggests posting the calories per serving on the front of the package. However, if all similar products do not have standardized serving amounts (all listing 100 grams, for example, vs. ½ cup or 8 to a box) then listing the calories is not a simple strategy for optimizing food choices on the basis of health. Varying serving amounts force the consumer to undertake some general arithmetic. For example suppose Cereal A has 120 calories per ½ cup (10 servings) and Cereal B has 160 calories per ¾ cup (7 servings); front-of-the-box labeling would imply that Cereal A has fewer calories, but that is not the case in either equivalent serving calories or total calories for the entire box. Therefore, front-of-the-package labeling must be less simplistic unless a standardized serving metric is established. Facts Up Front suffers from this serving difference problem, limiting its effectiveness.
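A quick Python calculation with the hypothetical cereal numbers above shows why the front-of-package figure misleads:

# Hypothetical cereals from the example above.
cereals = {
    "Cereal A": {"cal_per_serving": 120, "serving_cups": 0.5,  "servings": 10},
    "Cereal B": {"cal_per_serving": 160, "serving_cups": 0.75, "servings": 7},
}

for name, c in cereals.items():
    per_cup = c["cal_per_serving"] / c["serving_cups"]  # equivalent-volume basis
    per_box = c["cal_per_serving"] * c["servings"]      # whole-package basis
    print(f"{name}: {per_cup:.0f} cal/cup, {per_box} cal/box")

# Cereal A: 240 cal/cup, 1200 cal/box
# Cereal B: 213 cal/cup, 1120 cal/box

The label’s “120 vs. 160” comparison implies Cereal A is the lighter choice, yet per cup and per box Cereal A has more calories; a standardized serving metric would remove the need for this arithmetic entirely.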

The possible problems with the above option notwithstanding, it is clear that the EU, including the U.K., has a better labeling system than the U.S. with regards to helping consumers acquire and understand nutritional information. So why did so little change in the “updated” U.S. labeling system? Most would argue, probably correctly, that lobbying by food companies prevents the FDA from going further thanks to interference from Congress. If the FDA had the “freedom” to make any changes, what changes should it make?

Obviously it is important for there to be some form of comparison information that goes beyond Facts Up Front. A traffic light system certainly holds promise due to its successful application in the UK. However, it is understandable that food companies would balk at such a condition, especially those with significant “red” light products. The proper response to such complaints is two-fold: first, who cares if the food companies have complaints against the proper utilitarian construct of ensuring transparent information. Second, one could lessen the impact of the traffic light system by framing a primarily “red” food not as something that should never be consumed, otherwise no one would ever eat something like a piece of cheesecake, but as a food that should be consumed rarely in the context of good health. Thus, the green, yellow and red lights simply transition into anytime, once-a-day and rare food choices.
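A minimal Python sketch of such a classifier appears below; the per-100 g cutoffs shown approximate published UK front-of-package guidance but should be treated as illustrative placeholders rather than regulatory values:

# Approximate/illustrative per-100 g cutoffs in grams: (low, high).
THRESHOLDS_PER_100G = {
    "fat":           (3.0, 17.5),
    "saturated_fat": (1.5, 5.0),
    "sugars":        (5.0, 22.5),
    "salt":          (0.3, 1.5),
}

def traffic_light(nutrient, grams_per_100g):
    """Classify an amount as a green/amber/red (anytime/once-a-day/rare) light."""
    low, high = THRESHOLDS_PER_100G[nutrient]
    if grams_per_100g <= low:
        return "green"   # "anytime" food
    if grams_per_100g <= high:
        return "amber"   # "once-a-day" food
    return "red"         # "rare" food

print(traffic_light("sugars", 28.0))  # red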

Another interesting idea would be to establish a standardized declaration system for the front of the package involving commonly referenced health terms measured against an empirically derived metric. Basically it is commonplace for food producers to put labeling on the front of a package that states: “high in fiber”, “low sodium”, “x number of essential vitamins and minerals”, etc. This newly proposed system would eliminate the ability of food producers to make such claims freely and instead replace it with a five or six bullet point checklist in the upper right corner of the package confirming a given “positive health feature”. A check would be earned by meeting a standard floor or ceiling for the given attribute per 100 g of product, where the FDA would establish the standard. For example the “high fiber” box would be checked if a food contained at least 3 grams of fiber per 100 g of product, and not checked otherwise. Five possibilities for such a checklist are shown below, followed by a sketch of the check logic.

1) High Fiber;
2) Low Sodium;
3) Whole Grain;
4) Low Sugar;
5) Low Saturated Fat
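A minimal Python sketch of the check logic follows, assuming invented floors and ceilings per 100 g (only the 3 g fiber floor comes from the example above; the FDA would set the real standards):

# Floors/ceilings per 100 g of product; values other than the 3 g fiber floor
# are invented placeholders standing in for FDA-established standards.
CRITERIA = {
    "High Fiber":        lambda n: n.get("fiber_g", 0)         >= 3.0,
    "Low Sodium":        lambda n: n.get("sodium_mg", 0)       <= 120,
    "Whole Grain":       lambda n: n.get("whole_grain_pct", 0) >= 50,
    "Low Sugar":         lambda n: n.get("sugar_g", 0)         <= 5.0,
    "Low Saturated Fat": lambda n: n.get("sat_fat_g", 0)       <= 1.5,
}

def checklist(nutrients_per_100g):
    """Return the checkbox state for each front-of-package claim."""
    return {claim: test(nutrients_per_100g) for claim, test in CRITERIA.items()}

print(checklist({"fiber_g": 3.4, "sodium_mg": 90, "sugar_g": 11.0, "sat_fat_g": 0.8}))
# {'High Fiber': True, 'Low Sodium': True, 'Whole Grain': False,
#  'Low Sugar': False, 'Low Saturated Fat': True}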

In the end both of these strategies, the traffic lights and the checkbox, should significantly increase the probability that consumers are informed about the general nutritional value of their food product choices without an unreasonably long analysis period. Overall there is no good reason that the FDA and its surrogates should not establish and enforce such a labeling system.



Citations –

1. Cha, E, et al. “Health literacy, self-efficacy, food label use, and diet in young adults.” Am. J. Health. Behav. 2014. 38(3):331-339.

2. Campos, S, Doxey, J, and Hammond, D. “Nutrition labels on pre-packaged foods: a systematic review.” Public Health Nutr. 2011. 14(8). 1496-1506.

3. Huang, T, et al. “Reading nutrition labels and fat consumption in adolescents.” J. Adolesc. Health. 2004. 35(5):399-401.

4. Storcksdieck, G, and Wills, J. “Nutrition labeling to prevent obesity: reviewing the evidence from Europe.” Curr Obes. Rep. 2012. 1(3):134-140.

5. Lobstein, T, and Davies, S. “Defining and labelling ‘healthy’ and ‘unhealthy’ food.” Public Health Nutrition. 12(3):331-340.

6. Food Standards Agency. Board Agrees Principles for Front of Pack Labelling. 2006. Food Standards Agency.

7. Lobstein, T, Landon, J, and Lincoln, P. “Misconceptions and misinformation: the problems with guideline daily amounts (GDAs). A review of GDAs and their use for signaling nutritional information on food and drink labels.” National Heart Forum. 2007.

8. Temple, N, and Fraser, J. “Food labels: a critical assessment.” Nutrition. 2014. 30:257-260.

9. Hersey, J, et al. “Effects of front-of-package and shelf nutrition labeling systems on consumers.” Nutr. Rev. 2013. 71:1-14.

10. Hawley, K, et al. “The science on front-of-package food labels.” Public Health Nutr. 2013. 16:430-439.

11. Sutherland, L, Kaley, L, and Fischer, L. “Guiding Stars: the effect of a nutrition navigation program on consumer purchases at the supermarket.” Am. J. Clin. Nutr. 2010. 91:1090S-1094S.

12. Grabenhorst, F, et al. “Food labels promote healthy choices by a decision bias in the amygdala.” NeuroImage. 2013. 74:152-163.

13. Pessoa, L, and Adolphs, R. “Emotion processing and the amygdala: from a ‘low road’ to ‘many roads’ of evaluating biological significance.” Nat. Rev. Neurosci. 2010. 11:773-783.

14. Seymour, B, and Dolan, R. “Emotion, decision-making, and the amygdala.” Neuron. 2008. 58:662-671.

15. Kessler, D. “Toward more comprehensive food labeling.” N. Engl. J. Med. 2014. 371(3):193-195.

Tuesday, May 24, 2016

Addressing the HDL Problem in High Cholesterol Treatment


Cardiovascular disease is still the biggest cause of death in the developed world, including the United States. One of the critical elements that influences this rate of death is the disruption of cholesterol homeostasis, especially in the context of increasing the risk of arteriosclerosis.1,2 One of the current principal medical therapies for managing high cholesterol is the administration of statins. However, while statins have demonstrated a relatively strong safety profile with minimal side effects, there are individuals who are unresponsive to treatment or may prefer a different option. Cholesterol concentrations are chiefly influenced by both high-density lipoprotein (HDL) and low-density lipoprotein (LDL) concentrations. Statins address the LDL side of the equation through their inhibition of HMG-CoA reductase; it makes sense that the next step in producing another effective form of cholesterol treatment is to focus on HDL.

HDL is one of five major lipoprotein groups that are responsible for transporting lipids like cholesterol, phospholipids and triglycerides. Both apolipoproteins apoA-I and apoA-II are required for normal HDL biosynthesis, with apoA-I making up 70%.3 In contrast to LDL, HDL is responsible for moving lipids from cells, including within artery wall atheroma, to other organs for excretion or catabolism, most notably the liver.4 Both HDL and LDL concentrations are indirectly measured through the concentrations of HDL-C and LDL-C due to the difficulties and costs associated with direct measurement. Since the 1970s HDL has been acknowledged as having an inverse relationship with risk for cardiovascular disease (CVD).5 This HDL-CVD relationship has also been conserved across different racial and ethnic populations.6 The seminal Framingham study also identified high LDL-C and low HDL-C levels as a strong predictor of CVD risk.7 Finally it has also been noted that close to 30% of lipids are transported by HDL in healthy individuals.8

The general belief is that HDL is able to lower the risk of cardiovascular disease through the inhibition and even reversal of atherogenesis by initiating the process of reverse cholesterol transport (RCT).9-11 RCT is the common term for the removal of cholesterol from peripheral cells and its transport to the liver. While RCT involves multiple steps, the major ones are: the transfer of cholesterol from peripheral cells to HDL by the ATP-binding cassette transporter (ABCA1) through apoA-I and phospholipid interaction; the conversion of cholesterol to cholesteryl esters by lecithin-cholesterol acyltransferase (LCAT); and the removal of these esters through either the direct removal pathway, scavenger receptor class BI (SR-BI), or the indirect removal pathway, cholesteryl ester transfer protein (CETP).4,10,11

CETP interaction exchanges the triglycerides of VLDLs for the cholesteryl esters of HDL. The VLDL is thereby converted to LDL, which later enters the LDL receptor pathway, while the transferred triglycerides are degraded due to their instability in HDL, leaving a smaller HDL lipoprotein that can begin to absorb new cholesterol molecules.12

The strategy of manipulating HDL concentrations or interactions to produce better health outcomes is certainly not unique and has not gone unnoticed by the pharmaceutical community. For example one of the initially more promising therapeutic treatments for high cholesterol was increasing expression of endogenous apoA-I due to its role in HDL synthesis. To this end some research has focused on using PPAR-gamma agonists to increase APOA1 gene transcription and eventually apoA-I concentration.13,14 ApoA-II has also received some attention because it appears required for normal HDL biosynthesis and metabolism. Increasing either apoA-I or apoA-II concentrations produces an increase in HDL-C levels and, supposedly, HDL levels.

In contrast to increasing apoA synthesis rates, there is already an effective means for increasing HDL-C levels via the supposed reduction of apoA catabolism through increasing nicotinic acid (niacin) concentrations.15 Niacin has demonstrated the ability to reduce HDL apoA-I uptake in hepatocytes in vitro.16 Whether this influence occurs via interaction with an HDL receptor or with G protein-coupled receptors (most notably GPR109A) is unclear,16,17 but what is known is that niacin reduces apoA catabolism and increases HDL-C concentration.15

In addition to research on increasing HDL synthesis, other research has focused on reducing the degradation/loss of HDL by influencing the esterification and de-esterification HDL pathways. As mentioned above HDL-C is esterified to HDL-CE by LCAT. Low concentrations of LCAT in both humans and mice produce significant drops in HDL-C concentration and rapid catabolism of apoA-I and apoA-II, whereas high concentrations of LCAT result in significantly increased HDL-C concentrations.18,19 These results are more than likely due to feedback systems: increased LCAT activity via higher LCAT concentrations increases conversion of HDL-C to HDL-CE, thus increasing the demand for HDL-C and its reactants (HDL and apoA-I/apoA-II).

Of the two major end points for HDL-CE, labeling studies suggest that a majority of HDL-CE is transported to the liver via CETP exchange instead of through direct liver uptake via SR-BI.20 Therefore, CETP inhibitors, like JTT-705 and torcetrapib, are also viewed as an effective means of increasing HDL-C (and by association HDL) concentrations.21-23 Interestingly enough there also appears to be a negative influence on LDL-C concentrations.4,21 However, despite this increase in HDL-C concentration from CETP inhibition, there is a question of whether this pathway actually reduces CVD. For example large genetic and observational studies have contrasting results,24 leaning towards increased CETP concentrations increasing CVD probability, yet inhibition of CETP does not seem to reduce CVD beyond standard rates (the reduction seen from not having elevated concentrations); this behavior may occur because CETP interacts negatively with RCT.20

Overall, despite the notion that higher native HDL levels (and higher HDL-C levels) are associated with lower rates of CVD and that all of the above methods have some ability to increase HDL-C concentration, pharmaceutically derived increases of HDL-C levels, be it from direct HDL-C increases, niacin, or CETP inhibition, do not instill the same CVD health benefits as native levels.25,26 Isolated genetic variants also appear to have little to no effect; for example a loss-of-function variant in LIPG raises HDL-C but does not change CVD probability.26,27 So what could be the reason behind this inability of HDL-C concentration alone to decrease CVD probability?

One important element in the HDL pathway that has only been alluded to so far with regards to pharmaceutical intervention is the expression of the direct removal pathway through SR-BI. Various studies have identified that overexpression of SR-BI reduces HDL-C concentration and under-expression of SR-BI increases HDL-C concentration.28-31 Neither of these results should be surprising as SR-BI is an end-point pathway for eliminating HDL-C and/or HDL-CE, converting it back to HDL. However, the interesting aspect of this change in SR-BI expression is that increased SR-BI expression reduces the rate of arteriosclerosis and decreased SR-BI expression increases it.26 So how could SR-BI have this effect?

SR-BI, which is encoded by the gene SCARB1, was identified as the primary liver-related HDL receptor decades ago.32 The principal role of SR-BI is to selectively take up HDL-CE into hepatocytes and steroidogenic cells as well as, to a lesser extent, HDL-C.4,32 Most importantly, the interaction between SR-BI and HDL-C(E) results in the internalization of the whole HDL particle, the removal of its cholesterol, and the return of the non-cholesterol-carrying HDL to the bloodstream.33

This absorption of HDL-C(E) and associated return of HDL could explain the reduced rate of arteriosclerosis relative to CETP interaction because, among other things, the SR-BI and HDL relationship triggers macrophage-derived RCT.34,35 Basically SR-BI returns to the bloodstream HDL, not HDL-C(E), which is ready to absorb more cholesterol; this readiness somehow signals the associated macrophages to induce greater rates of RCT. CETP, by contrast, does not reduce the cholesterol load of HDL as much due to its reliance on other limiting factors, leaving the resulting HDL less capable of increasing RCT rates due to reduced cholesterol absorption capacity.

With this information about the functionality of SR-BI, a theory can be posited regarding why increasing HDL-C does not result in improved health outcomes. It makes sense to consider the idea that SR-BI is a form of limiting factor in the capacity of HDL to reduce the risk of CVD. Because CETP appears to manage a majority of HDL-C(E) reduction, it stands to reason that SR-BI expression is not significantly tied to HDL, HDL-C or HDL-CE concentrations. Therefore, when HDL or its cholesterol variants increase in concentration there is no corresponding increase in SR-BI. One possible explanation for this outcome is that a certain minimum concentration of cholesterol is required to circulate in the blood, which is managed by negative feedback that maintains SR-BI expression between a certain floor and ceiling.

So why is SR-BI more important overall than CETP if CETP manages a majority of HDL reduction/elimination? Perhaps CETP has a limit to what type of HDL it can manage. If HDL gets too “big” via its total level of cholesterol absorption, the only means to remove that cholesterol could come from the direct pathway, i.e. SR-BI. However, if HDL concentrations outpace SR-BI expression by a significantly higher than normal level then it stands to reason that significant amounts of HDL will become too big for CETP to manage. Eventually these HDL molecules can break down (i.e. explode in a sense) while still circulating in the bloodstream, releasing all of the previously absorbed cholesterol and transformed cholesterol esters. If this happens the cholesterol is not properly managed and can result in an increased rate of arteriosclerosis and associated CVD despite the higher HDL concentrations.
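As a purely illustrative toy model of this limiting-factor idea (all rates and units invented, fitted to nothing), suppose SR-BI clearance saturates while CETP transfer is hard-capped; any HDL-C inflow beyond what the two pathways can clear circulates unmanaged:

# Toy model: saturable SR-BI uptake plus capped CETP transfer (invented numbers).
def unmanaged_cholesterol(hdl_c_inflow, vmax=10.0, km=5.0, cetp_capacity=6.0):
    """Arbitrary-unit inflow of HDL-C that neither pathway clears."""
    sr_bi_clearance = vmax * hdl_c_inflow / (km + hdl_c_inflow)  # saturates at vmax
    cetp_clearance = min(hdl_c_inflow, cetp_capacity)            # hard cap
    return max(0.0, hdl_c_inflow - sr_bi_clearance - cetp_clearance)

for inflow in (5, 10, 20, 40):
    print(inflow, round(unmanaged_cholesterol(inflow), 1))
# 5 -> 0.0, 10 -> 0.0, 20 -> 6.0, 40 -> 25.1

In this caricature, raising HDL-C (the inflow) without raising SR-BI expression (vmax) leaves an ever larger unmanaged surplus, consistent with the proposal that HDL-raising drugs fail without a corresponding rise in SR-BI.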

In the end, while statins have generally been impressive at controlling high cholesterol and its associated detrimental health effects, it is always wise to have alternative strategies. The most sought-after alternative to statins is a pharmaceutical agent that increases HDL(-C) concentrations due to their positive relationship with quality health outcomes in cholesterol-related events. However, numerous studies have produced disappointing results for agents that increase HDL levels with regards to cholesterol-related health outcomes, including the potential that negative events become more probable. So what can be done about this issue?

Clearly, if the above proposed theory regarding SR-BI as a limiting factor in the effectiveness of HDL is accurate, then if one wants to raise HDL pharmaceutically to produce some form of health benefit, one must also increase SR-BI expression to properly manage the increased HDL and associated cholesterol concentrations. This process on its face should not be difficult, as there are already existing pharmaceutical agents as well as natural agents that appear to increase SR-BI expression, but proper study will be demanded to identify their viability and safety over the long term.


Citations –


1. Koyama, T, et al. “Genetic variants of SLC17A1 are associated with cholesterol homeostasis and hyperhomocysteinaemia in Japanese men.” Nature: Scientific Reports. 2015. 5:15888-15899.

2. Arsenault, B, Boekholdt, S, and Kastelein, J. “Lipid parameters for measuring risk of cardiovascular disease.” Nat. Rev. Cardiol. 2011. 8:197-206.

3. Lewis, G, and Rader, D. “New insights into the regulation of HDL metabolism and reverse cholesterol transport.” Circ. Res. 2005. 96:1221-1232.

4. Rader, D. “Molecular regulation of HDL metabolism and function: implications for novel therapies.” The Journal of Clinical Investigation. 2006. 116(12):3090-3100.

5. Miller, G, and Miller, N. “Plasma high-density-lipoprotein concentration and development of ischaemic heart disease.” Lancet. 1975. 1:16-19.

6. Goff, D, Jr, et al. “2013 ACC/AHA guideline on the assessment of cardiovascular risk: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines.” Circulation. 2014. 129(2):S49-S73.

7. Kannel, W. “Lipids, diabetes, and coronary heart disease: insights from the Framingham Study.” Am. Heart J. 1985. 110:1100-1107.

8. Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III). “Executive summary of the third report of the National Cholesterol Education Program (NCEP).” JAMA. 2001. 285:2486-2497.

9. Ross, R, and Glomset, J. “Atherosclerosis and the arterial smooth muscle cell: proliferation of smooth muscle is a key event in the genesis of the lesions of atherosclerosis.” Science. 1973. 180:1332-1339.

10. Barter, P, et al. “Anti-inflammatory properties of HDL.” Circ. Res. 2004. 95:764-772.

11. Mineo, C, et al. “Endothelial and anti-thrombotic actions of HDL.” Circ. Res. 2006. 98:1352-1364.

12. Agellon, L, et al. “Reduced high density lipoprotein cholesterol in human cholesteryl ester transfer protein transgenic mice.” J. Biol. Chem. 1991. 266:10796-10801.

13. Tangirala, R, et al. “Regression of atherosclerosis induced by liver-directed gene transfer of apolipoprotein A-I in mice.” Circulation. 1999. 100:1816-1822.

14. Mooradian, A, Haas, M, and Wong, N. “Transcriptional control of apolipoprotein A-I gene expression in diabetes.” Diabetes. 2004. 53:513-520.

15. Carlson, L. “Nicotinic acid: the broad-spectrum lipid drug. A 50th anniversary review.” J. Intern. Med. 2005. 258:94-114.

16. Meyers, C, Kamanna, V, and Kashyap, M. “Niacin therapy in atherosclerosis.” Curr. Opin. Lipidol. 2004. 15:659-665.

17. Tunaru, S, et al. “PUMA-G and HM74 are receptors for nicotinic acid and mediate its anti-lipolytic effect.” Nat. Med. 2003. 9:352-355.

18. Kuivenhoven, J, et al. “The molecular pathology of lecithin:cholesterol acyltransferase (LCAT) deficiency syndromes.” J. Lipid Res. 1997. 38:191-205.

19. Ng, D. “Insight into the role of LCAT from mouse models.” Rev. Endocr. Metab. Disord. 2004. 5:311-318.

20. Schwartz, C, VandenBroek, J, and Cooper, P. “Lipoprotein cholesteryl ester production, transfer, and output in vivo in humans.” J. Lipid Res. 2004. 45:1594-1607.

21. De Grooth, G, et al. “A review of CETP and its relation to atherosclerosis.” J. Lipid Res. 2004. 45:1967-1974.

22. Kuivenhoven, J, et al. “Effectiveness of inhibition of cholesteryl ester transfer protein by JTT-705 in combination with pravastatin in type II dyslipidemia.” Am. J. Cardiol. 2005. 95:1085-1088.

23. Clark, R, et al. “Raising high-density lipoprotein in humans through inhibition of cholesteryl ester transfer protein: an initial multidose study of torcetrapib.” Arterioscler. Thromb. Vasc. Biol. 2004. 24:490-497.

24. Boekholdt, S, et al. “Plasma levels of cholesteryl ester transfer protein and the risk of future coronary artery disease in apparently healthy men and women: the prospective EPIC (European Prospective Investigation into Cancer and Nutrition)-Norfolk population study.” Circulation. 2004. 110:1418-1423.

25. Rader, D, and Tall, A. “The not-so-simple HDL story: is it time to revise the HDL cholesterol hypothesis?” Nature Medicine. 2012. 18(9):1344-1346.

26. Zanoni, P, et al. “Rare variant in scavenger receptor BI raises HDL cholesterol and increases risk of coronary heart disease.” Science. 2016. 351(6278):1166-1171.

27. Haase, C, et al. “LCAT, HDL cholesterol and ischemic cardiovascular disease: a Mendelian randomization study of HDL cholesterol in 54,500 individuals.” The Journal of Clinical Endocrinology & Metabolism. 2011. 97(2):E248-E256.

28. Wang, N, et al. “Liver-specific overexpression of scavenger receptor BI decreases levels of very low density lipoprotein ApoB, low density lipoprotein ApoB, and high density lipoprotein in transgenic mice.” Journal of Biological Chemistry. 1998. 273(49):32920-32926.

29. Ueda, Y, et al. “Lower plasma levels and accelerated clearance of high density lipoprotein (HDL) and non-HDL cholesterol in scavenger receptor class B type I transgenic mice.” Journal of Biological Chemistry. 1999. 274(11):7165-7171.

30. Varban, M, et al. “Targeted mutation reveals a central role for SR-BI in hepatic selective uptake of high density lipoprotein cholesterol.” PNAS. 1998. 95(8):4619-4624.

31. Brundert, M, et al. “Scavenger receptor class B type I mediates the selective uptake of high-density lipoprotein-associated cholesteryl ester by the liver in mice.” Arteriosclerosis, Thrombosis, and Vascular Biology. 2005. 25:143-148.

32. Acton, S, et al. “Identification of scavenger receptor SR-BI as a high density lipoprotein receptor.” Science. 1996. 271(5248):518-520.

33. Silver, D, et al. “High density lipoprotein (HDL) particle uptake mediated by scavenger receptor class B type 1 results in selective sorting of HDL cholesterol from protein and polarized cholesterol secretion.” J. Biol. Chem. 2001. 276:25287-25293.

34. Zhang, Y, et al. “Hepatic expression of scavenger receptor class B type I (SR-BI) is a positive regulator of macrophage reverse cholesterol transport in vivo.” J. Clin. Invest. 2005. 115:2870-2874.

35. Rothblat, G, et al. “Cell cholesterol efflux: integration of old and new observations provides new insights.” J. Lipid Res. 1999. 40:781-796.