“Medical School Sucks”

September 14, 2010 1 comment

Wellcome Images

While I was in medical school, near the end of second year, I was burnt out. We were all preparing for USMLE Step 1 and final exams for Pathology, Pharmacology, etc. at the same time. I had the distinct feeling that if the school had for some reason added two more courses to our workload, I would barely have noticed: I was already fully overwhelmed, stressed, and worn out. Having more work to do might have lowered my test scores, but it couldn’t have felt any worse. I was discussing this with an attending physician with whom I had a friendly relationship, and he said: “Look, medical school sucks. Second year really sucks. Third year sucks in a different way. Fourth year is fantastic, and that’s good, because the year after that, internship, sucks more than anything that came before.” I didn’t find this discouraging, but oddly reassuring. Not only did I know that I wasn’t alone, but I knew that I was right on schedule, and that things were tough because they had to be, not because I was wildly off track or in over my head.

The current issue of JAMA is devoted to medical education, with a few articles about the mental health and attitudes of medical students. In the past, medical students were popular research subjects, and all manner of experimental procedures and treatments were tried on them. There’s no good history of this research (hello, Lawrence K. Altman, maybe this is your next book?), and it’s now fairly rare, as IRBs are very protective of students as a “disempowered” population. But medical students continue to fascinate doctors, and a fair amount of research has been done into their mental states.

The results are fairly consistent, and not very surprising to anyone who has been, or has known, a medical student or resident. Medical students are more prone to depression, anxiety, and suicidality, and more likely to die by suicide, than others of the same age. The study in JAMA notes that depressed medical students are more likely than their non-depressed classmates to believe that seeking treatment would be stigmatizing or would damage their academic reputation. This most likely reflects a combination of two facts: depressed people have a more negative outlook than the nondepressed, but perhaps also a more realistic one. One reassuring finding in the depression study was that while the depressed students reported being less professional and making more errors than their peers, there was no evidence that they actually did so.

Mostly, these findings demonstrate that medical students, and therefore doctors (even your doctor), are human beings. Residents who felt that self-sacrifice was no excuse for accepting inappropriate gifts were nonetheless more likely to say they would accept gifts from drug companies after they were asked to reflect on the sacrifices they had made for their jobs. Another article discusses pervasive “presenteeism” among residents, coming to work despite illness, which is likely part of that perceived hardship.

There’s no doubt that medical trainees face difficult situations, extreme stresses, and significant barriers to mental health treatment. There seems to be a persistent interest among medical educators in addressing these issues, and as the system becomes more humanized (continuing adjustment of work-hours rules, etc.), they may actually make some progress. There continues to be a perception among medical students and residents, however, that they should work as many hours as possible, consistently do excellent work, never complain (except to each other), and never get ill. And the inability to be superhuman, or the desire not to attempt it, can lead to guilt, depression, and resentment. This is a complex problem, and with the social and economic status of doctors remaining in flux, it does not appear it will become simpler any time soon.

Intracranial Ventriloquism

September 7, 2010 2 comments

Futuristic-sounding story out of the University of Utah today: electrodes implanted inside the skull allow patients to speak without making a sound. While this is certainly a cool new trick, the investigator was quick to add: “It doesn’t mean the problem is completely solved and we can all go home. It means it works, and we now need to refine it so that people with locked-in syndrome could really communicate.”

University of Utah Department of Neurosurgery

When he says “it works,” what he means is that they get output that can be interpreted as individual words at a rate significantly better than chance. What they’ve done is implant small grids of electrodes over two areas of cortex and ask the subjects to think of individual words (the grids in the picture at left are artist’s renderings, since the actual grids are too small to see, and they are shown next to “standard”-size EEG leads). The researchers then try to characterize the electrical output to see if they can detect any patterns. Next they ask the subject to think of individual words (from a pre-determined list of 10 words) at random, to see if they can recognize the electrical pattern and guess the right word. By honing the technique, they got up to about 50% accuracy, which isn’t bad when chance with a 10-word list would be 10%. You can’t really have a conversation at this level, and it doesn’t approach the near-100% accuracy you get by having someone blink when you read the correct word or letter (think of The Diving Bell and the Butterfly), but as a first attempt it’s pretty cool.
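The decoding step itself is, at heart, pattern matching. Here is a minimal sketch of that idea in Python, with made-up numbers and a simple nearest-template classifier; it is not the Utah group's actual algorithm, just an illustration of "learn an average electrode pattern per word, then guess whichever pattern a new trial most resembles."

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_words, n_electrodes = 10, 16          # 10-word vocabulary, toy electrode grid

# Pretend each word evokes a characteristic pattern across the grid.
word_patterns = rng.normal(size=(n_words, n_electrodes))

def record_trial(word):
    """Simulate one trial: the word's pattern plus measurement noise."""
    return word_patterns[word] + rng.normal(scale=1.5, size=n_electrodes)

# "Training": average several trials per word into a template.
templates = np.array([np.mean([record_trial(w) for _ in range(20)], axis=0)
                      for w in range(n_words)])

def decode(trial):
    """Guess the word whose template is closest to the observed pattern."""
    return int(np.argmin(np.linalg.norm(templates - trial, axis=1)))

hits = [decode(record_trial(w)) == w for w in range(n_words) for _ in range(50)]
print(f"decoding accuracy: {np.mean(hits):.0%} (chance with 10 words is 10%)")
```

Real ECoG decoding involves much messier signal processing (filtering, spectral features, careful cross-validation), but classify-by-nearest-pattern is the basic shape of the problem.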

One confusing thing, however: in the press release they describe the “surprising” result that Wernicke’s area was less active than facial motor cortex during speech production, and that Wernicke’s area was most active when the subjects were thanked by the researchers. I don’t know why this is surprising, since Wernicke’s area is mostly involved with language recognition, interpretation, and meaning. Anyone who’s seen a patient with a stroke in that area develop a Wernicke’s aphasia is well aware that such patients have no problem speaking. They just have trouble making sense. Putting a grid over facial motor cortex is clever, though, as this area is likely active when movements are contemplated as well as when they are executed, and distinct words would be expected to produce distinct patterns of activation (since by definition a different series of muscles would be activated).

So we’re a long, long way away from anybody talking without moving a muscle, but in order to get there, we have to start somewhere. And apparently that somewhere is in Utah.


Magic Mushrooms for Anxiety

September 7, 2010 3 comments

By Vaxzine via Flickr

In a study published online yesterday in Archives of General Psychiatry, investigators at UCLA reported the results of a pilot study using psilocybin (the active ingredient in “magic mushrooms”) to treat anxiety in cancer patients. This was a small study, with 12 patients who acted as their own controls (each had two sessions, one using the active drug and the other using a niacin placebo). These patients were fairly ill with advanced cancer; in fact, only two of them were still alive at the time of publication. Aside from the legion anecdotal reports that using ’shrooms makes people feel better, which have been accumulating since before Woodstock, we know that psilocybin and its active metabolite, psilocin, are potent serotonin agonists, as are LSD and presumably other psychedelics. Not that mushrooms and Paxil are strictly comparable, but we also know that generally upregulating the serotonin system is good for mood and anxiety, so there is a decent pharmacological rationale to think that this treatment might work.

It seems that the purpose of the study was twofold: first, to serve as a proof of concept and a model for further studies with psychedelic drugs, and second, to actually test the efficacy of psilocybin against anxiety. The study does seem to have demonstrated that these drugs can be given in a safe manner and be fairly benign: no adverse reactions (or bad trips) were reported. The authors describe using a fairly low dose of psilocybin compared to the studies done with this drug 40 years ago. It’s not clear to me how this dose compares with the recreational doses used by your average illicit consumer, but it was apparently enough to produce some typical psychedelic effects.

I’m not familiar with the “5-Dimension Altered States of Consciousness profile,” but it’s apparently a method to assess quality and intensity of experiences, and a good way to scientifically rate and characterize mushroom trips (and presumably other drug experiences as well). The patients described their psilocybin experiences as generally positive, with higher scores in “Positive Derealization” and  “Positive Depersonalization,” compared to the low scores in “Anxious Derealization” and “Fear of Loss of Thought Control,” to name a few of the subscales. It’s not clear whether the “Manialike Experience” item would be considered pleasant or unpleasant, but this one scored high as well.

If there are any criticisms of the study, they would be the small size and the lack of a true placebo (I think the strongest niacin flush in the world would be easily distinguishable from a mushroom trip, and the patients in the study in fact said so). Given that the patients were their own controls in a cross-over design, the long-term results of the drug use aren’t interpretable against placebo anyway, and that wasn’t really the aim of the investigators. So is psilocybin effective for anxiety in cancer patients? Probably. Should we go ahead and recommend it? Probably not.

For the patient experienced with psychedelic use and with access to his or her own supply, I think it would be fair to suggest a trial. But if this study concerned a “regular” prescription drug, we would still be quite a ways off from FDA approval and evidence-based recommendation. Given the complicated legal and social history of psilocybin,  we are still incredibly far from mainstream acceptance of this drug as a treatment, even in critically ill patients.

By Thom via Picasa

But the big news lately is that after a 40-year hiatus, relegated to touring with the Grateful Dead and Phish and making annual appearances at Burning Man, psychedelic drugs are officially re-entering the world of science. As the baby boomers gain control of research budgets, and the generations of academics who feared illicit drug use continue to retire, medicine is perhaps getting over its hang-ups about these drugs. In the brave new world of “medicinal” marijuana, it’s important to allow researchers to evaluate the potential of these drugs the same way we investigate the drugs produced by Big Pharma. Why should global corporations be the only ones allowed to tinker with your serotonin system?

Sharpen That Needle? No, I Don’t Think So

September 3, 2010 4 comments

Scott Beale/Laughing Squid

A study out in Archives of Neurology this week has generated a lot of interest. For example, the New York Times ran a front-page story with the subhead “100% Accuracy in Test for Alzheimer’s,” or something similar. The “100% Accurate” part is what caught my eye, and others’ as well. I doubt that a “10% Accurate” headline would have made the front page, or that the story would have been covered on the major networks. But is the headline true? My first thought was that no, of course not; otherwise the headline would have been “First-Ever 100%-Accurate Medical Test Developed.” 100% is a number we don’t see much in medicine, particularly in the ambiguous world of neurological diagnosis. A test like that would be an enormous deal.

Does the study live up to the hype? Sadly, no. And it certainly doesn’t justify an editorial entitled “Sharpen That Needle,” which appeared in the same issue, written by two doctors who should know better. There is a 100% result within the study, but it isn’t exactly the accuracy of the test. It’s the sensitivity. And not even the overall sensitivity, but the sensitivity among patients with mild cognitive impairment who went on to develop Alzheimer’s. In the next paragraph I’ll explain sensitivity, so those of you who don’t want or need a (very) basic statistics lesson can skip over it.

New York Times Front Page 8/10/10

The “sensitivity” of a test tells us how good it is at finding true cases of what it’s looking for. Of all the people who have a certain disease, how many will test positive on our proposed test? If 5% of people with the disease test negative (false negatives), then our test is 95% sensitive. The other basic quality of a test is “specificity,” which tells us how good the test is at clearing the people who don’t have the disease. Of all the disease-free people we test, how many come back negative (true negatives)? If 10% of healthy people nonetheless test positive, then we have a 90% specificity. You can’t judge a test based on just one of these numbers. As an extreme example, let’s say we check whether people are breathing to see whether they have Alzheimer’s. Since we only test living people, all of the people with Alzheimer’s will have a positive result: 100% sensitivity, hooray! But everyone without Alzheimer’s tests positive too: 0% specificity. Boo! And if only 3% of the people we test actually have Alzheimer’s, then only 3% of our positive results are real. Now the test proposed in the article did a lot better than that, but you get the idea.

If you guessed that the problem with the test is in the specificity, you’d be right. About 36% of the normals in the study also tested positive, so the specificity is down around 64%. This is a problem. It’s a problem because if you use this as a screening test in your office, more than a third of the patients who will never develop Alzheimer’s will nonetheless test positive and be told to prepare for it. Not good. The other problem is that the “accuracy” of the test is better described by the positive predictive value, which depends on the sensitivity and specificity but also on the prevalence of the disease. About 20% of people 75-85 have Alzheimer’s. If we use that as the prevalence, I get a 38% PPV for the test. (This is where my statistics starts to get hazy, so please send corrections if needed.) That means that if you test positive on this test, and you live past 75, there’s about a 40% chance you’ll get Alzheimer’s. There are just too many false positives for this test to be a good screen.
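For anyone who wants to check that arithmetic, here is the standard positive-predictive-value formula in Python, plugged in with the round numbers above as assumptions (roughly 100% sensitivity, 64% specificity, 20% prevalence), not the study's exact figures:

```python
def ppv(sensitivity, specificity, prevalence):
    """Probability of truly having the disease, given a positive test."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Round numbers from the discussion above; the study's exact figures may differ.
print(f"PPV = {ppv(1.00, 0.64, 0.20):.0%}")   # about 41%, the same ballpark as the ~40% above
```

The exact number shifts with the prevalence you assume, which is really the point: in a lower-prevalence screening population, even a highly sensitive test generates a large share of false alarms.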

andrewacomb via flickr

And the biggest problem with this test for most people won’t even be its accuracy. The big problem will be that it requires a spinal tap. Let me tell you firsthand that lots of people are deathly afraid of taps. They’re really not so bad, but if you think it’s hard to get people in for colonoscopy, forget about spinal taps.

Not to mention the biggest problem of all: if you do get Alzheimer’s there’s not a lot we can do for you anyway. Most doctors would or should question the ethics of testing for an untreatable illness. Now AD isn’t exactly untreatable, but there’s no evidence that the meager treatments we do have change the course of the illness in the long run, or that starting them earlier makes much difference.

If this did become a standard screening test, I’d certainly stand to benefit: as a neurologist I would start a dementia clinic and tap everybody and send lots of big bills to Medicare. And if I thought it would do my patients any good, I’d do just that. In fact, I’d team up with a GI doc and offer a combined colonoscopy/lumbar puncture. It’d be perfect: you’re already lying on your side, sedated, and you won’t feel or remember a thing. Turning 50? Come on in!

But until we have something to offer our Alzheimer’s patients, I’ll postpone sharpening that needle, thanks.
