Reasons to use email folders, even if you think you don’t want them

As someone who is naturally disorganized, using folders has greatly improved my ability to find important emails at work and home.

I’ve been using automatic rules to sort my work email into separate folders forever; I’m usually working on multiple projects, each with a different team that has its own team email alias. It’s easy to set up a rule sending all messages for the P1094 team to my P1094 folder, which then makes it easier to find that email from 2 months ago that has crucial info. I also have one folder where I manually move email exchanges with my supervisor and one where I move important administrative emails.

At home, I still use folders but more in the way I used to use folders in my file cabinet – as a way to store like information together. Instead of rules, I move emails into the following folders that I find really helpful:

Paid: every time I receive an e-bill, I mark it unread until I’ve paid it. As soon as it’s paid, I move it to my Paid folder. Same with autopay notices and invoices for items I’ve bought online. It’s especially helpful when I want to look up which company I bought something from – I can search on “socks” in that folder without getting hits on other irrelevant emails.

Donations: I make all my donations online. Every email acknowledgement gets moved to this folder. Then, when I’m doing taxes it’s really easy to just go through this year’s donation receipt emails. As an added step that I like to do to keep things uncluttered, after tax time I make a sub-folder labeled with the year and move all the ones I’m done with into it.

House: emails about house expenses go here. It’s an easy way to keep a record.

Local help: I’m on a local email list. Every once in a while someone recommends a contractor or plumber or electrician or the like. I may not need them now, but I might next year. I’ll never find them if they’re in my cluttered inbox.

I do have a couple of folders I use rules for – these are things that I may need or want to check on occasion, but don’t have to see when they come in. This includes a couple of newsletters and notifications of bounces or other problems from an email list I run.

Using folders has made a huge difference for a naturally disorganized person like me. I hope you find some of these suggestions helpful.


“Breakthrough” Covid-19 cases are completely expected

We’ve been reading a lot about “breakthrough infections” in the news, with many people feeling misled about vaccine efficacy or pointing to vaccinated people getting sick as proof that vaccination is useless. In fact, the vaccines are working as expected.

The first, most important, point is that no vaccine is 100% effective. The reason we don’t see mumps outbreaks for the most part is not because the MMR vaccine is perfect; it’s because when the vast majority of the population is vaccinated against mumps, those few who didn’t develop immunity from the vaccine are unlikely to be exposed to it – this is the idea behind the concept of “herd immunity”.

Another important point is that the clinical trial outcome measures for the Covid-19 vaccines were whether they prevented severe illness or death. They were not tested for whether they prevented any infection at all, especially not asymptomatic infection. The goal was to get vaccines that would keep people alive and out of the hospital.

Both of the above points mean that seeing some people get mild coronavirus infections is not a failure of the vaccines, and referring to these as “breakthrough infections” is a bit of a misnomer in that it makes them sound unexpected – which they aren’t. Someone who is vaccinated is much less likely to catch the coronavirus than someone who is unvaccinated and, more importantly, is far less likely to die or end up on a ventilator.

For a good article on understanding the current situation, read “How vaccinated people can make sense of the rise in breakthrough COVID-19 infections”, an interview with Sehyo Yune, MD, Assistant Vice Chancellor of COVID-19 Wellness at Northeastern.

2E Inauguration Poet

The U.S. Youth Poet Laureate, Amanda Gorman, who did such a beautiful job writing and performing her poem at President Biden’s Inauguration, is special for reasons beyond the obvious – she’s 2E* (twice-exceptional). As discussed in this article, Amanda Gorman has speech and auditory processing issues:

Can you be a poet if you have speech and auditory processing issues? The nation’s first Youth Poet Laureate, Amanda Gorman, is living proof that you can: Gorman has always had a love for words. However, the path to becoming a poet wasn’t easy… Gorman was diagnosed with an auditory processing disorder in kindergarten. She also has speech articulation issues that make it difficult for her to pronounce certain words and sounds.

For me, listening to this poem again with this in mind adds a new dimension of appreciation. I look forward to seeing how her poetry grows over the next decades.

* 2E/twice-exceptional – gifted children who have some form of disability

On Being Saved by Love for Another

Today I was reminded that it’s time to reread one of my favorite books, Silas Marner, by George Eliot. This short beautiful novel is set in the English Midlands in the early 1800s and tells the story of a weaver whose life was spoiled by false accusations that turned him into an exiled miserly recluse. Losing his gold and gaining a golden toddler in the same night leads him to experience love for the first time since his exile and to become part of the community in his village. In addition, there is a parallel plot, intertwined with Marner’s, focused on the sons of the local squire.

Reading Silas Marner has always filled me with joy, even before motherhood added an extra dimension to my understanding of it, and I get more out of it with each re-read. What always strikes me is the transformative power of loving a young child. Marner loses the original object of his love – cold unfeeling gold coins – and gains the much deeper love for a living growing child who needs him and loves him in return.

For all who are dealing with loneliness and isolation during this pandemic, I recommend reading this book and experiencing its light and love.

Remembering a previous epidemic

Many of us have vivid memories of the emergence of a new, frightening, deadly disease in the 1980s and the toll it took before the virus, HIV, was identified and the first treatments were found. For many others, HIV/AIDS is just another chronic condition. The article Unsung Heroes: Gay Physicians’ Lived Journeys During the HIV/AIDS Pandemic provides a view of that time through the eyes of gay Canadian physicians:

Unfortunately, over time, memories of what it was like to meet head-on a grim, contagious, disfiguring, lethal, and sexually transmitted threat like HIV/AIDS have begun to fade. It was a “time when medicine was all but powerless” (Bayer & Oppenheimer, 2000, p. 3) and when “people with HIV [/AIDS] were fired from their jobs, kicked out of their apartments, denied health care and abandoned by their families” (AIDS Legal Council of Chicago, 2013, p. 4).

I work in HIV/AIDS clinical trials and remember when the AIDS Clinical Trials Group (ACTG) was first being designed and its grant submission written. Many of my colleagues are too young to remember the early days of the epidemic; some were born after treatments had been found and they think of HIV/AIDS as a chronic condition rather than the mysterious deadly disease that had people terrified.

The world has changed a lot and the pandemics have many differences but also similarities. For those of us who remember how long it took even to figure out what the pathogen was, let alone develop treatments that weren’t just palliative care, it’s truly amazing how quickly SARS-CoV-2 was identified and sequenced and how rapidly vaccine development started. Two world-changing pandemics – very different from one another but each changing those who live through them.

Reading COVID-19 news: What are preprints and why should you care?

As the COVID-19 pandemic continues, every few days we’re seeing reports on alarming or exciting new developments – new tests, new treatments, scary new mutations. How is a reader to figure out what is real and what should be treated with skepticism? One thing to check is whether the story is based on one of the many preprints being posted each day versus research that’s been through peer-review. But what is a preprint and how can you tell that one is the source?

What’s a Preprint?

In the usual course of doing medical research, studies and experiments are submitted to professional journals, where they undergo peer review – a process in which (ideally unbiased) researchers with expertise in that area review the paper thoroughly, point out flaws that need to be addressed, and recommend that the journal’s editors accept the paper, reject it, or ask the authors to revise it in response to the reviewers’ comments and resubmit the edited version. While there are problems with this process and flawed papers can end up in even the top journals, it provides a level of quality control. It also takes a long time – many months, or even years.

Over the past few decades, fields such as physics and math have created what are called preprint servers — online repositories where researchers can post draft papers, many of which are later submitted to journals. Anyone can read them and, if they wish, post comments critiquing them. The positive is that research can be shared quickly. Unfortunately, this means misinformation can also be shared quickly.

The Good

Until recently, this was not common in biomedical research. However, in the midst of a pandemic, speed becomes crucial. In the past few months, publishing preprints of COVID-19 research has become commonplace as a way for researchers to share information with each other. While the preprints haven’t undergone peer-review, readers can post comments pointing out errors or gaps, then have discussions with each other and the researchers. In addition, there are active research communities on Twitter such as #epitwitter where new papers are dissected in detail.

The Bad

The main two preprint repositories in the health sciences are bioRxiv and medRxiv, which are now linking jointly to COVID-19 research. Each has a notice on its home page stating:

A reminder: these are preliminary reports that have not been peer-reviewed. They should not be regarded as conclusive, guide clinical practice/health-related behavior, or be reported in news media as established information.

Unfortunately, those notices are about as effective as speed limit signs. Journalists are under pressure to get news out quickly, especially exciting news, and the flashier the research finding, the more clicks and shares articles about it will get.

What’s a Reader to Do?

When you see the latest news headlines about a new amazing cure or a wonderful vaccine or how a mutant coronavirus strain is spreading:

  • Always be skeptical of dramatic results. The more exciting or frightening the news, the more carefully you should check the article.
  • Check where the study is published. If the source is bioRxiv or medRxiv, remember that anyone can post a paper there and no one has reviewed the paper or checked the results.
  • What is the researcher’s area of expertise? We’re seeing a lot of non-infectious-disease epidemiologists modeling projections of things like the spread of COVID-19, how hospital capacity is likely to hold up, and how well flattening the curve can work – without understanding what factors are important to include in the models and how they interact. Anyone with a strong math background can make a model, but it takes education in the epidemiology of infectious disease to make a good model.
  • Has the journalist interviewed experts in this particular area who have had time to thoroughly examine the paper? This can mean experts on coronaviruses, infectious disease epidemiology, or other specialized areas.
  • If the research is a clinical trial, is it well designed? Optimally, the reporter will have an expert in clinical trials review it, but if they don’t, some things to look for are:
    • was there a comparison treatment (if not, you can’t tell whether the patients would have improved anyway)
    • were participants randomly assigned to study arms (otherwise all the sicker patients, or those with a certain risk factor or in a certain age group, can end up on one arm, making it impossible to tell whether treatment differences are real or due to these imbalances)
    • were there enough participants to be able to draw conclusions*
    • was any difference found clinically meaningful
    • remember that not finding a “statistically significant” difference doesn’t mean there’s definitely no difference, just that we can’t tell yet (absence of evidence doesn’t equal evidence of absence)
    • did one treatment cause more harm (for example, heart attacks, infections, or liver damage)

In addition to the above, see if the story lasts. Are there follow-up articles confirming or refuting the news? Is it a flash in the pan that disappears after a couple of days? Does it subsequently show up in a peer-reviewed journal (many of which are speeding the review process for COVID-19 papers)?

Above all, remember that science is a process of trying to increase and correct our knowledge. We should expect that some of what we heard at the start of the pandemic turned out to be wrong, and some of what we think we know today will be corrected or refined in the future.

* While “large enough” varies by type of study and the size of the effect found, in general you’d like to see at least 40 or more participants in a preliminary study and several hundred in a Phase III clinical trial. If the sample size is small enough that changing the results for 2 participants has a major effect on the findings, it’s way too small.

What to expect as the COVID-19 pandemic progresses?

The paper “Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period” by Stephen M. Kissler, Christine Tedijanto, Edward Goldstein, Yonatan H. Grad, and Marc Lipsitch of the Departments of Epidemiology and of Immunology & Infectious Diseases at the Harvard T.H. Chan School of Public Health came out this week and provides a projection of what we should expect over the next few years. It’s an important article, but very technical, so what follows is my attempt to translate its summary into something easier for non-experts to read.

The abstract for this article basically says that the authors made a model based on estimates from what is known about the coronaviruses that cause colds – how seasonal they are, how much immunity they provide and how long it lasts, etc. The authors project there will be recurrent winter outbreaks of COVID-19 after the end of this first, most severe pandemic wave.

Until we have effective interventions (treatments, vaccines), the key measure of the success of physical distancing is whether hospital critical care capacities are overwhelmed. To avoid this, we may need prolonged or intermittent physical distancing through 2022. We also need to expand critical care capacity and find effective treatments in order to improve the success of distancing and hasten the (safer) acquisition of herd immunity. We urgently need longitudinal serological (antibody) studies to learn how many people develop immunity and how long it lasts. Even if it looks like the pandemic has been eliminated, we need to keep up COVID-19 surveillance since another wave could occur as late as 2024.

Please let me know in the comments if anything is unclear or an inaccurate summary.

When a medical test saying “yes” means “maybe” and “no” means “probably not”

There’s a lot of talk about various tests for Covid-19, and it can get confusing. Discussions of whether tests are useful throw around terms like sensitivity and specificity that often don’t mean what people think they do. In this post, I will start by defining some important terms and then explain how using them helps us understand the effectiveness of various tests and testing strategies. I will also explain why the prevalence of Covid-19 in the population being tested is such an important factor.

When we test someone for Covid-19, the result can either be positive (test says person has Covid-19) or negative (test says person doesn’t have Covid-19). “Positive” and “negative” just describe the test results, not whether they are correct. If the test results correctly say someone has Covid-19, it’s a true positive. If it correctly says the person doesn’t have Covid-19, it’s a true negative.

Please note that while I use Covid-19 testing as the example here, the principles described apply to tests of any type for any disease. All medical tests have sensitivity and specificity, and the number of false vs. true positives and negatives for any test varies in the same way as described below.

Unfortunately, tests can get wrong results for a variety of reasons. For example, when someone who has Covid-19 is tested using a swab, the swab may not hit a spot in the nose or throat that has virus on it, so the result says they don’t have Covid-19. On the other hand, a swab from someone who doesn’t have Covid-19 may get a result that says they do.

It’s important to distinguish between these two types of errors:

  • A false positive test says a person has Covid-19 when they really don’t
  • A false negative test says a person doesn’t have Covid-19 when they really do

This table shows the possible combinations:

                    Has Covid-19        Doesn’t have Covid-19
  Test positive     True positive       False positive
  Test negative     False negative      True negative

For a test to be useful, it needs to have most results be either true negatives or true positives. Whether that happens depends on three things:

  • Sensitivity is the proportion of people with Covid-19 who the test correctly identifies as having it.
    Sensitivity = the probability of getting a positive test if someone has Covid-19
  • Specificity is the proportion of people without Covid-19 who the test correctly identifies as not having Covid-19
    Specificity = the probability of getting a negative test if someone doesn’t have Covid-19
  • Prevalence is the proportion of the population that has Covid-19

Here’s a non-medical example of why prevalence is so important in determining how many errors we’ll end up with. Imagine a baseball umpire who calls 95% of balls (bad pitches) correctly; his sensitivity is 95%. He also calls 95% of good pitches correctly but miscalls 5% of them as balls (his specificity is 95%). If an amazing pitcher throws 1 ball and 99 good pitches (the prevalence of bad pitches is 1%), this ump is likely to call about 6 balls overall (1 true ball, 5 miscalled good pitches). Of these, 5 will be wrong calls. So, about 83% of the pitches he calls balls will actually be good pitches.

Now let’s take a case where the umpire is calling a lousy pitcher who throws half his pitches as balls (the prevalence of bad pitches is 50%). In a 100-pitch game he’ll correctly call about 47 or 48 of the 50 balls and incorrectly call 2 or 3 good pitches as balls. In this game, about 95% of his ball calls will be correct.
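The umpire’s numbers can be checked with a short calculation. Here’s a minimal Python sketch (the function name is my own) that works out what fraction of ball calls are correct at a given prevalence of bad pitches – the same quantity statisticians call positive predictive value:

```python
def fraction_of_ball_calls_correct(sensitivity, specificity, prevalence):
    """Of all pitches the umpire calls 'ball', what fraction really were balls?

    This is the positive predictive value: true positives divided by
    all positives (true + false).
    """
    true_balls_called = sensitivity * prevalence               # real balls he catches
    false_balls_called = (1 - specificity) * (1 - prevalence)  # good pitches miscalled
    return true_balls_called / (true_balls_called + false_balls_called)

# Amazing pitcher: only 1% of pitches are balls
print(fraction_of_ball_calls_correct(0.95, 0.95, 0.01))  # ~0.16, so ~84% of his ball calls are wrong

# Lousy pitcher: 50% of pitches are balls
print(fraction_of_ball_calls_correct(0.95, 0.95, 0.50))  # 0.95, so 95% of his ball calls are right
```

With exact fractions the first case comes out to about 84% wrong calls; the 83% in the text comes from rounding to whole pitches (5 wrong out of 6 calls).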

Most of the time that people are tested for a disease, it’s because a medical practitioner suspects they may have it. Suppose the test has 95% sensitivity and specificity, half the people tested have the disease, and we test 200 people. We’ll expect about 95 of the 100 people with the disease and 5 of the 100 people without it to test positive, so only 5 of the 100 positive tests (5%) will be false positives. In the picture below, the pink diamonds represent people who have Covid-19 but test negative (false negatives), while the dark circles represent people who don’t have Covid-19 but test positive (false positives). The picture below that shows all the positive tests – you can see that there are 95 true positives and 5 false positives.

[Figure: 200 people tested at 50% prevalence, with false negatives and false positives marked]

[Figure: the 100 positive tests – 95 true positives and 5 false positives]

However, if we decide to test a random sample of people in the population for the disease and only 10% of the population has it (prevalence=10%), we’re going to end up with a lot of false positives. When we test 200 people and only 20 of them have the disease, we’ll end up detecting 19 out of 20 (95%) of them, but we’ll also incorrectly diagnose 9 of the 180 (5%) people without the disease. That means that 9 out of 28 diagnoses (about one third) will be wrong.

[Figure: 200 people tested at 10% prevalence, with false negatives and false positives marked]

[Figure: the 28 positive tests – 19 true positives and 9 false positives]

This is the kind of situation epidemiologist Zachary Binney discusses in his Twitter thread on the new COVID-19 antibody test from Cellex. The test has sensitivity of 93.8% and specificity of 95.6%. If we use it to test a bunch of people chosen randomly and only 5% have had COVID-19 and developed antibodies, a positive test will only be right about half the time. If 30% were infected, a positive test will be right about 90% of the time.
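As a rough check on Binney’s figures, here is the same positive-predictive-value arithmetic applied to the Cellex test’s published sensitivity and specificity (a short Python sketch; the variable names are mine):

```python
sensitivity = 0.938   # Cellex antibody test: chance of a positive result if infected
specificity = 0.956   # chance of a negative result if never infected

for prevalence in (0.05, 0.30):
    true_pos = sensitivity * prevalence             # infected people correctly flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # uninfected people wrongly flagged
    ppv = true_pos / (true_pos + false_pos)         # chance a positive result is right
    print(f"prevalence {prevalence:.0%}: a positive test is correct {ppv:.0%} of the time")
```

At 5% prevalence this gives roughly 53% (“right about half the time”), and at 30% prevalence roughly 90%, matching the figures in the thread.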

As Binney discusses, if we’re trying to find out what proportion of the population has been infected, epidemiologists have methodology for correcting for this problem, so the test will be useful for getting that information. Also, if we’re testing a group who are highly likely to have caught COVID-19, such as health care providers, then we’ll likely be correct more of the time, since this will be like the umpire calling balls on the lousy pitcher. And if we have a second test that works a little differently (so both tests don’t tend to be wrong on the same people), we can screen people with the first test and then give them the second test to confirm that the first one was right.

I hope this has been helpful for better understanding some of the issues in doing widespread testing for COVID-19. Don’t be discouraged if you find you need to read this a couple of times before fully grasping how it works – that happens to most of us learning these concepts for the first time. Please let me know if you have any questions or catch any errors.

Huge thanks to Katherine Boothby for the wonderful graphics and very helpful editorial suggestions.

COVID-19 and the spread of unvetted medical “news”

UPDATE: Annals of Internal Medicine just published a discussion of the flaws of the hydroxychloroquine (HCQ) preliminary research and subsequent consequences at
A Rush to Judgment? Rapid Reporting and Dissemination of Results and Its Consequences Regarding the Use of Hydroxychloroquine for COVID-19

The COVID-19 crisis has led to the growth of “preprint servers” – repositories where biomedical researchers can share draft unreviewed papers (preprints) to speed the interchange of ideas. This is a major change in how medical research is usually shared.

Usually, manuscripts are submitted to journals, which send them to other experts in the field for review. The reviewers can recommend accepting the paper as is, accepting it with minor revisions, “revise and resubmit” (giving the authors recommendations for re-analyzing or re-thinking parts of the paper, after which they can resubmit and hope it is accepted), or rejecting it. Once a paper has been accepted, the results are not to be shared elsewhere until it has been published in the journal.

This process doesn’t guarantee that all published papers will be error-free, even in the most prestigious journals, but it does provide some level of quality control. It’s also a lengthy process. In the current situation, where speed is of the essence, this process has been upended. Scientists are sharing data at an unprecedented rate, hoping to speed the development of new treatments and tests, as well as decisions about what steps to take. Preprint servers have enabled this level of information sharing and collaboration, and the scientists reading papers off the servers know that these are not finished products and may contain serious mistakes.

Unfortunately, scientists aren’t the only ones reading preprints. While all scientific papers should be read critically, extra skepticism needs to be used when reading preprints. This is a serious problem right now, when journalists and the public are grasping for any encouraging news. Any news article you read that is based on preprints needs to be considered unreliable until followed by confirmation by other labs, preferably from other institutions.

The segment “Science Communications In the Time of Coronavirus” from WNYC, with Ivan Oransky, professor of medical journalism at NYU and co-founder of Retraction Watch, provides a good overview of these issues.

Science is a team sport, redux

Work being done on COVID-19 is showing how much science is really a group effort. The best scientists are quick to credit their teams and other colleagues, acknowledging their important contributions. For example, Prof. Florian Krammer bracketed his Twitter thread explaining the paper about the antibody test his lab developed by giving credit to collaborators at other labs, and “the student who took the lead on this, Fatima Amanat as well as my whole group of dedicated students, postdocs, techs and assistant professors who dropped all their beloved influenza work to help out with creating tools to fight SARS-CoV-2.”

During this pandemic, it’s been exciting to see the explosion in the sharing of data and results. While caution is needed when viewing work that has been quickly posted on preprint servers without being peer-reviewed (especially when hyped by the press), this sharing has enabled medical researchers to progress astonishingly quickly in areas ranging from genetic sequencing to testing to vaccine research. It takes a world of scientists to face down a pandemic.