ramen and pickles

science, technology, and medicine served up with some tasty noodles

Monthly Archives: January 2012

Phys Ed: Exercise Hormone Keeps Us Healthy – NYTimes.com

“A newly discovered hormone produced in response to exercise may be turning people’s white fat brown, a groundbreaking new study suggests, and in the process lessening their susceptibility to obesity, diabetes and other health problems. The study, published on Wednesday in Nature and led by researchers at the Dana-Farber Cancer Institute and Harvard Medical School, provides remarkable new insights into how exercise affects the body at a cellular level.”

Cell Stem Cell – Rejuvenation of Regeneration in the Aging Central Nervous System

Ruckh et al. in Cell Stem Cell show remyelination in mice through stem cells. This is basically curing multiple sclerosis in mice, and it suggests therapies for treating age-related problems in the brain and nervous system.


The decline effect and the scientific method : The New Yorker

An excellent discussion of meta-analysis, publication bias, and how statistics can be viewed as a kind of ontology (in the original sense of the term: the study of existence, reality, and truth).


Extra points for including a description of a funnel plot, and quotes from John Ioannidis.
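For the curious, the core computation behind such a meta-analysis is inverse-variance pooling: each study's effect estimate is weighted by the inverse of its squared standard error, so precise studies count for more, and a funnel plot simply graphs each study's effect against its precision. A minimal sketch, with entirely made-up study numbers:

```python
# Minimal sketch of fixed-effect (inverse-variance) meta-analysis.
# The "studies" below are invented for illustration only.
def pooled_effect(effects, std_errors):
    """Weight each study by 1/SE^2 and pool into one estimate."""
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    pooled_se = (1.0 / total) ** 0.5
    return mean, pooled_se

# Hypothetical studies: (effect size, standard error).
# A funnel plot would scatter these effects against their precision.
effects = [0.40, 0.10, 0.35, 0.05, 0.20]
ses     = [0.20, 0.05, 0.15, 0.04, 0.10]
mean, se = pooled_effect(effects, ses)
print(f"pooled effect {mean:.3f} +/- {1.96 * se:.3f}")
```

Note how the two small, noisy studies with big effects barely move the pooled estimate; publication bias enters when those are the only studies that get published.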

First images of newly discovered primate | Fauna & Flora International


New large animal species are still being discovered!

Learn to program: Make a free weekly coding lesson your New Year’s resolution. – Slate Magazine



“If you’re looking for a New Year’s resolution, let me suggest an idea that you might not have considered: You should learn computer programming. Specifically, you should sign up for Code Year, a new project that aims to teach neophytes the basics of programming over the course of 2012. Code Year was put together by Codecademy,* a startup that designs clever, interactive online tutorials. Codecademy’s founders, Zach Sims and Ryan Bubinski, argue that everyone should know how to program – that learning to code is becoming as important as knowing how to read and write. I concur. So if you don’t know how to program, why not get started this week? Come on, it’ll be fun!

Code Year’s minimum commitment is one new lesson every week. The company says that it will take a person of average technical skill about five hours to complete a lesson, so you’re looking at about an hour of training every weekday. That’s not so bad, considering that the lessons are free, and the reward could be huge: If you’re looking to make yourself more employable (or more immune from getting sacked), if you’d like to become more creative at work and in the rest of your life, and if you can’t resist a good intellectual challenge, there are few endeavors that will pay off as handsomely as learning to code.”

Jobs Kill, BIG Time

There are lots of caveats here, including considerable distrust of how these jobs were ranked and rated on the various characteristics, but it is pretty interesting to think about which characteristics of a job contribute to its safety or its risk of causing death.


Quoting Robin Hanson:

I’ve saved the most interesting result in Ken Lee’s thesis till today. The subject is how death rates vary with jobs. The big result: death rates depend on job details more than on race, gender, marriage status, rural vs. urban, education, and income combined! Now for the details.

The US Department of Labor has described each of 807 occupations with over 200 detailed features on how jobs are done, skills required, etc. Lee looked at seven domains of such features, each containing 16 to 57 features, and for each domain Lee did a factor analysis of those features to find the top 2-4 factors. This gave Lee a total of 22 domain factors. Lee also found four overall factors to describe his total set of 225 job and 9 demographic features. (These four factors explain 32%, 15%, 7%, and 4% of total variance.)
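For intuition about what "finding the top factors" means: full factor analysis also models per-feature noise, but the simplest related computation is extracting the dominant component of a correlation matrix, e.g. by power iteration. A toy sketch, with an invented 3-feature correlation matrix standing in for Lee's real job-feature data:

```python
# Toy sketch: power iteration to find the dominant factor of a small
# correlation matrix. (Proper factor analysis adds per-feature noise
# terms; this is only the simplest related decomposition.)
def power_iteration(mat, iters=200):
    n = len(mat)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient = eigenvalue = variance captured by this factor
    eigval = sum(v[i] * sum(mat[i][j] * v[j] for j in range(n))
                 for i in range(n))
    return eigval, v

# Made-up correlations among three hypothetical job features
corr = [[1.0, 0.8, 0.6],
        [0.8, 1.0, 0.7],
        [0.6, 0.7, 1.0]]
eigval, factor = power_iteration(corr)
print(f"top factor explains {eigval / len(corr):.0%} of total variance")
```

Repeating this on the residual correlations yields the second, third, and fourth factors, which is how "explains 32%, 15%, 7%, and 4% of total variance" statements like Lee's arise.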

Lee then tried to use these 26 job factors, along with his other standard predictors (age, race, gender, married, rural, education, income) to predict deaths in the 302,890 people for whom he had job data. Lee found that his standard predictors didn’t change much, and found these job factor risk ratios (Table 34, column 2):


Ten of the 26 estimates are 5% significant, and five are 1% significant; this isn’t random noise (*** p<0.01, ** p<0.05, * p<0.1). Each factor is scaled to range in value from 0 to 1 across the 806 occupations; its risk ratio is an estimated ratio of death rates when that factor has its max value of one, relative to death rates when that factor has its min value of zero. And these are huge risk ratios!

If you take all of Lee’s standard non-age predictors (race, gender, married, rural, education, income), and multiply together their risk ratios, you’ll find that a poor badly-schooled unmarried urban black male dies 17.7 times as often as a rich well-educated married rural Asian woman (of the same age), with a lifespan roughly thirty years shorter on average. (A risk ratio of 1.57 costs roughly five years of life.)

Yet big as this effect is, the top five job factor risk ratios give a total ratio of 19.7, bigger than all the other non-age effects put together! And the top ten job factor ratios give a total risk ratio of over 100! (All twenty-six factors together give a total risk ratio of 563.) Jobs are clearly a huge and neglected influence on who lives and who dies.
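One way to read Hanson's arithmetic: if a risk ratio of 1.57 costs about five years of life, and multiplying risk ratios adds years lost, then years lost scale with the logarithm of the risk ratio. A minimal sketch under that assumption (the scaling rule is my reading of his parenthetical, not something he states explicitly):

```python
import math

# Assumed rule of thumb from the quoted text: a risk ratio of 1.57
# costs roughly five years of life, and multiplying ratios adds years,
# so years lost are proportional to log(risk ratio).
def years_lost(risk_ratio):
    return 5.0 * math.log(risk_ratio) / math.log(1.57)

# The combined non-age demographic ratio of 17.7 then costs roughly
# thirty years, matching the quote:
print(round(years_lost(17.7), 1))

# The top five job factors (total ratio 19.7) cost slightly more than
# all those demographics combined:
print(round(years_lost(19.7), 1))
```

Under this reading, the total job-factor ratio of 563 would correspond to several decades of life, which is why the quote calls these "huge risk ratios."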

If you cared about preventing death, rather than just signaling your concern, these results suggest you stop wasting your efforts on tiny effects like medical insurance, auto accidents, crime, recreational drugs, radiation, or food safety, and focus on: jobs. Yes a lot of job-death variation must come from different types of people doing different types of jobs, but a great deal of this variation is also likely causal – some jobs kill folks much more than others.

At the very least we should try to tell people about the huge life and death consequences of their job choices. Then workers could demand higher wages for more deadly jobs, which should induce employers to seek ways to substitute less deadly for more deadly jobs. Alas I suspect most folks will just shrug their shoulders – these sorts of effects seem too abstract to elicit much concern. If you look at a person doing a job they don’t look like they are dying. Not like if snakes were killing people on planes…

FYI, here are some sample jobs rated high and low on the four overall job factors (from Table 49):


An illustrated guide to the Bruins’ $156,679 post-Stanley Cup win bar tab


Lots of good data to analyze here, but I was struck particularly by the infographic:

Nice because it includes a “Melchizedek” bottle.


How Antidepressant Clinical Trial Failures Relate to the Electron Charge

As mentioned in a previous blog post (http://scienceninja.posterous.com/the-epidemic-of-mental-illness-the-illusions), careful meta-analyses of many clinical trials of commonly prescribed antidepressants suggest that they don’t do any better than placebos, and newer trials seem to show a trend in which any beneficial effect of antidepressants over placebos is disappearing.
This controversy in psychiatry has prompted a response in the prestigious, high-impact “American Journal of Psychiatry.” What is their explanation for the failure to show efficacy over placebo in the new trials? The patients in the new trials are not really depressed; they are frauds looking for cash payouts for participating in clinical trials. They give a couple of anecdotal examples.


In essence, what he is saying is: these drugs really do work; we just can’t demonstrate it in clinical trials because patients lie. Studies that give the result we want (drugs work good) are good, well-designed trials. Studies that show no effect, or the placebo doing better, are plagued with lying patients who obscure the true efficacy, and should be ignored. Bad trial participants are the problem. Trust us. These drugs really do work. We’ve shown these drugs work in clinical trials. The good ones. Where they show that they work.
Perhaps a better example of what might be happening can be seen in physics, where we can make much more careful and exacting measurements with lower variability (than, for example, self-reported mood). Robert Millikan developed a technique for measuring the electron charge by watching falling oil drops in an electric field. Richard Feynman explains Millikan’s results, and those of the experimenters who followed him, in his essay “Cargo Cult Science” (http://www.lhup.edu/~DSIMANEK/cargocul.htm):
We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher. Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of–this history–because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong–and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.
It is a very common phenomenon in scientific endeavors that an initially interesting finding or association disappears under repeated analysis. Investigators are biased toward reproducing the initial results, but as they investigate further, they converge semi-asymptotically on the true value.
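Feynman's story is easy to simulate. In the toy model below, every number is a stand-in (not the real electron charge history), and the bias is simplified: each new "measurement" is unbiased, but any result that lands far from the previously published value gets re-examined and nudged halfway back toward it (Feynman describes the real bias as asymmetric, applied mainly to numbers above Millikan's). Even this crude version reproduces the slow creep toward the true value:

```python
import random

# Toy simulation of the Millikan effect: unbiased measurements,
# biased reporting. All constants are illustrative stand-ins.
random.seed(0)

TRUE_VALUE = 1.602   # stand-in "true" value
reported = 1.591     # deliberately low first result, a la Millikan
history = [reported]

for _ in range(30):
    measurement = random.gauss(TRUE_VALUE, 0.004)  # honest experiment
    if abs(measurement - reported) > 0.003:
        # "something must be wrong" – look for reasons to discount the
        # discrepancy, here modeled as shrinking halfway toward the
        # previously accepted number
        measurement = reported + 0.5 * (measurement - reported)
    reported = measurement
    history.append(reported)

# The published values creep toward TRUE_VALUE over many iterations
# instead of jumping there in one honest step.
print(history[0], history[-1])
```

The same dynamic plausibly applies to drug trials: early positive results anchor what later investigators expect, and discrepant findings get extra scrutiny until the literature slowly settles toward the true (here, near-null) effect.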

What is Science?


Steven Pinker has written a book on what promises to be an interesting topic, dear to my heart: why the overall level of violence in the world is decreasing, and what might be done to accelerate its further decline.

However, I couldn’t get past the preface without running into this sentence:

“My approach is scientific in the broad sense of seeking explanations for why things happen.”

When did this constitute science, even in a broad sense of the term? Now, there is not a whole lot to be gained from arguing semantics, and he is free to define and use a word however he wants. But to me this implies from the outset that he is looking to make statements that weave past associated events into a just-so story of how they caused one another. This is what Nassim Taleb calls the narrative fallacy (http://en.wikipedia.org/wiki/Narrative_fallacy), an easy trap to fall into and one we should all be vigilant in avoiding if we want to think clearly; and here is Pinker, in my mind, implicitly embracing it from the outset.