Microscopy is life

Microscopy is one of the most important techniques in all of science. It has countless uses that continue to bring us closer to cures for diseases as well as teaching us more about the world around us. But despite the incredible advances made by modern microscopy, nothing comes close to what was perhaps the most important discovery of all – indisputable proof of cells. It’s something we take for granted these days, but without microscopy our knowledge of human health would never have moved on from the Middle Ages. The discovery of cells catapulted us into a new era of science when we learned that they make up literally every living thing. This is the story of that journey.


Plant cells seen through a light microscope
“Cells.” BBC bitesize revision. bbc.co.uk/education/guidesz9hyvcw/revision/2. n.p. Web. 25 September 2016

So we’re going to start right at the beginning, in the middle of the 17th century. At this time microscopy had been around for over half a century, since its invention in 1590, but scientists regarded it as a largely worthless pursuit. Understandably, perhaps: early microscopes couldn’t magnify all that much, and the very idea of a micro world existing all around us would have seemed laughable fantasy at the time.

Robert Hooke (1635 – 1703)

That is, with the exception of Robert Hooke, the brilliant English polymath who didn’t consider microscopy to be worthless at all. He liked it very much, so much so that he had a custom microscope (rather expensively) built to pursue his interest, and with it he set out to look at a great many things.

So many things, in fact, that he published a book of his findings, Micrographia, in 1665. This book turned out to be a pretty big deal: not only was it the first book ever to publish drawings of things such as insects and plants seen through a microscope, it was also the first major publication of the newly formed Royal Society. For those who don’t know how big a deal this is, the Royal Society is extremely influential in the scientific world today. It commands great respect within the scientific community and funds over £42 million of scientific research. The crazy thing is, it may owe a large amount of its success to Micrographia, which, to many people’s surprise, became an instant hit and the world’s first scientific best-seller, subsequently inspiring wide public interest in microscopy. Without the start given to it by Micrographia, who knows if the society would even still exist.

The interesting thing is that, despite the wealth of detailed drawings and descriptions contained within the book, it was one particular image that ended up becoming Hooke’s legacy. In the picture you can see on the right, taken directly from Micrographia, he described a magnified cutting of dead cork. He said of the individual compartments that they reminded him of monks’ cells in a monastery, and for this reason he named them ‘cells’. He didn’t realise that he was looking at the cell walls of plant cells but, because he was the first person to describe them, we’ve used the word cell ever since.

The rest of Hooke’s legacy

With public interest in microscopy riding so high, many scientists were keen to leap on the microscopy bandwagon but, strangely, one of these scientists was not Robert Hooke.

Robert Hooke was nothing short of a genius and an industrious polymath who put microscopy on the back-burner and went on to contribute to countless other areas of science. To name a few: he postulated theories of gravity that preceded Isaac Newton’s; he stated that fossils were the remains of living things and suggested that they be studied to learn more about Earth’s history; he used telescopes to study craters on the Moon and the rings of Saturn; he came up with theories of memory capacity, memory retrieval and forgetting; not to mention that he was an accomplished architect who acted as surveyor of the City of London and helped rebuild it after the Great Fire of 1666, going so far as to co-design several buildings that still exist today.

Remarkably, the scientist best credited with pushing microscopy forward was actually a Dutch cloth merchant called Antonie van Leeuwenhoek.

Antonie van Leeuwenhoek (1632 – 1723)

A cloth merchant is a strange start for a scientist but van Leeuwenhoek had already been using magnifying glasses to examine his threads and cloth for a while. The thing that made van Leeuwenhoek truly unique though was his ability to create high quality lenses. With public opinion so strongly in favour of microscopy, he set out to bend his considerable talent towards it.

Armed with his lenses, van Leeuwenhoek actually managed to create a microscope that didn’t see an equal for over one hundred years. The downside was that his microscope was, admittedly, a very strange design.

The image on the right shows what it looked like and how you should hold it.

The lens is the tiny hole you can see in the middle of the large plate. The sample holder is the long screw that ends in a fine point just above the tiny hole, on which he would place the sample. The two screws allowed him to adjust the position of the sample slightly.

The issue, as you can see from the lower image on the right, is that the microscope had to be held as close to the eye as possible. This was because his small lenses needed the sample to be very close for them to be in focus. His strongest lenses, for example, needed the sample to be about one millimetre away…

He may have looked silly doing it but the things he discovered were anything but. He became the first person to see red blood cells in 1674 and then human sperm cells in 1677. He followed that up with countless observations of microorganisms found in pond water. With these observations he became the first person to describe bacteria. Of course, he didn’t know what bacteria were but based on what they looked like and the way they moved van Leeuwenhoek recognised that they were living things. For this reason he called them ‘animalcules’ which means something like ‘small animal’.

van Leeuwenhoek’s animalcules

Now, what we have to remember about this time is that no one knew a single thing about cells. Although van Leeuwenhoek had seen red blood cells and bacteria, no one knew what they were or what they did. It wasn’t known that red blood cells carry oxygen around the body (in fact, no one was even aware of the existence of oxygen) and, as far as disease was concerned, the miasma theory was still the prevailing idea.

The miasma theory is really interesting, actually. I go into a bit more detail about it in my post about the Black Death but, essentially, it was believed that disease came from ‘bad air’. Bad areas of air were thought to contain poisonous materials that caused disease, and they could be located by their terrible smell. It wasn’t believed that people (or animals) could pass disease to each other; rather, people in the same area got infected because they were in the same area of bad air. This theory is really well exemplified by the disease malaria, which is spread by mosquitoes that typically live around smelly swampland. The very name malaria, from the medieval Italian mala aria, is a literal description of the idea – bad air.

For this reason, instead of demonising bacteria as the disease-causing agents that some of them are, van Leeuwenhoek, a devout Calvinist, wrote about how great God was to provide the Earth with so many examples of life both great and small.

van Leeuwenhoek the businessman

Getting back to the microscopes: it was well known that van Leeuwenhoek’s microscope design was strange, so strange, in fact, that no other scientist of the time used it. But that’s just one reason. The other, more important, reason was that his microscopes were simply the best in the world and not one other scientist was able to reconstruct them. They had the highest magnification and the best resolution, and no other microscope came close. They were so good that we weren’t able to recreate a van Leeuwenhoek microscope until the 1950s…

This meant that van Leeuwenhoek had a virtual monopoly on all microscopic research. No other scientist could compete. Ever the businessman, van Leeuwenhoek was completely aware of this which is why he never revealed the microscopes he used for his research. He worked alone and when visited by notable figures and other scientists he only showed them lesser lenses so he would always stay at the top of his field.

(Incidentally, if you think that his behaviour was selfish because sharing this information could have advanced science faster… Don’t go into research.)

Robert Hooke himself even lamented that the whole field of microscopy was resting on just one man’s shoulders.

van Leeuwenhoek the scientist

Despite this practice, van Leeuwenhoek contributed massively to the fields of microscopy and microbiology. Although he never trained as a scientist, he was a highly skilled amateur whose research was of a very high quality. His talent for making lenses was certainly not wasted: he made over 500 lenses and over 200 microscopes, each one individually hand-made to provide a different level of magnification. By the end of his life he’d sent over 560 letters to the Royal Society and other institutions detailing his discoveries. For this reason he’s considered to be the father of microbiology and the world’s first microbiologist.

Unable to compete, other scientists focused instead on creating new microscopes that would provide better magnification and resolution than van Leeuwenhoek’s designs. They succeeded, eventually. But it was close to one hundred years before these new microscopes were able even to match van Leeuwenhoek’s.

Improvements in microscopy

And so, in 1832, over one hundred years after van Leeuwenhoek’s death, microscopy yet again came into the spotlight. This time our unlikely hero was an Englishman, Joseph Jackson Lister, who owned a wine business but was deeply interested in natural history.

He wanted to use a microscope to investigate plants and animals but noticed that the microscopes of the time had pretty poor resolution which made it impossible to see the very small things he wanted to see. So he sought to change that.

The fine details of what he did are complex and a bit out of the scope of this blog but, basically, the issue with microscope lenses at the time was that they did not bring every colour of the spectrum to a focus at the same point. The unfocused colours caused distortions and coloured halos on the image. What Lister did was design a new type of lens, an achromatic lens, that brought all the colours of light to the same focus, which drastically improved the resolution. In short, he perfected the optical microscope. What’s more, he had a wine business to run, so he did all of this in his spare time.
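
For the curious, the standard textbook condition for an achromatic doublet (the modern statement of the idea, not Lister’s own notation) is that the optical powers φ and Abbe numbers V (a measure of how strongly each glass spreads the colours apart) of the two elements satisfy:

\[
\frac{\phi_1}{V_1} + \frac{\phi_2}{V_2} = 0, \qquad \phi_{\text{total}} = \phi_1 + \phi_2
\]

In other words, a converging element of low-dispersion crown glass is paired with a weaker diverging element of high-dispersion flint glass, so that their colour errors cancel while the pair as a whole still focuses the light.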

Unfortunately, that’s all we have to say about Joseph Jackson Lister. Although I’d like to take a quick aside to say that if his name sounds familiar it’s because his son, Joseph Lister, was the famous surgeon, pioneer of antiseptic surgery and widely regarded as the father of modern surgery.

But for our story, Joseph Jackson Lister merely provided the microscope, we’re interested in a couple of men who used it: Matthias Schleiden and Theodor Schwann.

Matthias Schleiden (1804 – 1881)

Schleiden was yet another interesting case. A German man who originally trained and practiced as a lawyer but abandoned law to pursue his hobby – botany. Thus, he studied botany at university and then moved on to teach it as a professor of botany at the University of Jena.

Schleiden disagreed with the botanists of the time, who, in his opinion, were devoted only to naming and describing plants. What Schleiden wanted to do was to really study them, and he wanted to study them microscopically.

It was through this study, using Joseph Jackson Lister’s microscope design, that, just like Hooke, he realised that plants were all made up of recognisable blocks, or cells. With this discovery he wrote his famous article, ‘Contributions to Phytogenesis‘, in 1838, in which he proposed that cells were the most basic unit of life in plants and that plant growth was caused by the production of new cells.

What he was telling us was that, in plants, cells are the building blocks of life and the reason for growth: plants are made of cells, and they grow when new cells are made.

He was so close to cell theory; the problem was that he drew these conclusions only about plants. But this all changed after one fateful night of dining with his colleague, Theodor Schwann.

Theodor Schwann (1810 – 1882)

Schwann worked at the same university as Schleiden but, by contrast, studied animals, and over dinner they began talking about their work. Schwann immediately noticed the similarities between Schleiden’s description of plant cells and things he’d seen in animals, and the pair wasted no time connecting the two phenomena.

What happened next was the publication of perhaps one of the most important books in the history of biology; Schwann and Schleiden’s Microscopical Researches into the Accordance in the Structure and Growth of Animals and Plants (1847).

In this book they concluded that the cell is the basic unit of all life and set out the two basic tenets of cell theory:

1. All living organisms are composed of one or more cells
2. The cell is the basic unit of structure and organization in organisms

This was a huge deal. I can’t even begin to explain what this meant for our understanding of life. I’d go so far as to say that everything we know about life today came from this one idea.

Reconciling science with religion

If there was one issue with Schwann and Schleiden’s book, it was that they missed the last crucial component of cell theory: the question of how new cells formed was still very much contested.

The leading thought at the time was spontaneous generation. This theory, heavily influenced by religion, held that life could spontaneously arise from nothing, or at the very least from some kind of inanimate matter. This seemed to explain how maggots formed on dead flesh or how fleas formed from dust.

In what many now believe to be a concession to religion, Schwann and Schleiden wrote that new cells formed from inanimate matter within existing cells. This allowed them to maintain that cells were the building blocks of life while still attributing the formation of new cells to God.

Many doubt that Schwann believed this, but Schleiden was known to favour the spontaneous generation theory, and including it in the book helped the work gain favour with the public, which contributed to its great success.

The last tenet

It wasn’t until 1855, when Rudolf Virchow, persuaded by the work of Robert Remak, published his article Omnis cellula e cellula (All cells [come] from cells), that we ended up with our final tenet of cell theory:

3. Cells arise from pre-existing cells.

This stated that the only source for a living cell was another living cell and thus completed what we now consider to be cell theory.

Since then, new tenets have been added but these three tenets have been unchanged as the basis for cell theory for more than 150 years.

So the next time someone tells you that microscopy is boring, tell them that without it we would know nothing about life on this planet.

How We Changed Medicine Forever – Vaccines

Vaccines are the literal embodiment of the maxim ‘prevention is better than cure’. When it comes to healthcare, it’s far better to stop something happening in the first place than it is to repair the damage afterwards. Thankfully, that’s what a vaccine does: it grants you (effective) immunity to a disease. This is fortunate because many of the diseases we can vaccinate against leave pretty significant damage behind.

 


 

How vaccines work

A vaccine relies on the incredible ability of your immune system to remember a disease and mount a quicker, more effective response the second time it meets it. This is what I mean by effective immunity: even though you received the vaccine, the disease-causing bacteria or virus may still get into your body, but because your immune system is prepared for it, the response is so fast and effective that you don’t even notice.

As an aside, your immune system’s memory can be a bit of a double-edged sword because it’s also the cause of allergic reactions. An allergic reaction happens when your immune system decides that something quite harmless is actually a real threat. One of the most common allergic reactions is hay fever, where the immune system mistakenly believes that pollen is dangerous and mounts an immune response to it. The faulty immune response tries to force the pollen back out of the body and keep it out, which, as any sufferer knows, involves runny eyes and nose, sneezing and the rest of it – your body is simply trying to minimise the amount of pollen that can get in. It sucks but, thanks to your immune system’s memory, it will happen every time you encounter pollen for the rest of your life.

You should be thankful, though, if hay fever is the worst of it. Some more serious allergic reactions, such as those to penicillin or insect stings, can cause anaphylactic shock, where the immune response may include swelling of the throat, which prevents breathing, or widening of blood vessels, which causes a potentially fatal drop in blood pressure. The first time you suffer an allergic reaction it will probably be reasonably mild and not life-threatening, but treat this as your warning from the universe because, thanks to your immune system’s memory, the next time you have that allergic reaction it will be faster, stronger and potentially fatal.

 

The first ‘vaccine’

So how did we figure out vaccination? Well, unlike antibiotics, it was actually quite a long time ago.

Arguably the first vaccination was used in China in the 15th century against one of the most deadly diseases of the time: smallpox. It was already clear that someone who caught smallpox and didn’t die never caught it again, which suggested that recovery from the disease granted immunity to it. That knowledge was of little use if you wanted to prevent the disease in the first place, but what if the disease could be weakened first?

It was discovered that taking the scabs from someone with smallpox, leaving them out for some time to dry and then snorting them through the nose would produce a smallpox infection notably less severe than the natural infection but granting full immunity afterwards. Absolutely disgusting, but better than smallpox.

This form of vaccination was called variolation (variola being the Latin name for smallpox) and, despite causing a few deaths itself, it was still safer than catching smallpox naturally. Variolation finally made its way to Britain in 1721 and became the de facto method of smallpox prevention in Europe.

 

The first real vaccine

Continuing with smallpox, as it was still very much one of the most prevalent and deadly diseases of the time, in the late 18th Century a scientist called Edward Jenner noticed that people who became infected with cowpox seemed immune to smallpox.

The story goes that he noticed that milkmaids always had beautifully smooth skin as a result of never contracting smallpox (which would leave pock marks or scars on the skin of survivors). This, he attributed to them working with cows who were known to suffer from the far less deadly cowpox. Cowpox could be passed to humans, seemingly granting an immunity to smallpox. The beautiful milkmaid part of the story is debated nowadays but it is a rather more romantic way of describing how Jenner noticed the protective capability of cowpox.

This led him to perform one of the most ethically questionable experiments (by today’s standards) to prove his theory. He took pus from the sore of a milkmaid suffering from cowpox and inoculated an 8-year-old boy, the son of his gardener, by applying the pus directly into small cuts made in both arms. After about a month the boy was infected with smallpox using the variolation method and miraculously developed no symptoms of the disease. Several months later the boy was infected again by variolation and again developed no symptoms, leading Jenner to conclude that overcoming cowpox infection granted immunity to smallpox infection.

Despite initial caution from the medical establishment, Jenner’s arm-to-arm smallpox vaccine was eventually accepted, and in 1840 variolation was halted in favour of vaccination. News of the vaccine soon spread around Europe and eventually to the rest of the world.

Following a global effort led by the World Health Organisation (WHO) using a smallpox vaccine that didn’t require cowpox pus, smallpox has been officially eradicated since 1979. The disease exists now only in secure laboratories for research purposes.

 

Types of vaccine

We’ve come a long way since rubbing pus into wounds as a form of vaccination, and we now have many methods we can use. But by far the most common are inactivated vaccines and live attenuated vaccines.

 

An inactivated vaccine is pretty self-explanatory: the disease-causing bacteria or virus is killed or made inactive by heat, chemicals or radiation. The inactivated bacteria or virus is then given to the patient, usually in the form of an injection, and the body’s immune system treats it as the real thing. This causes the immune system to develop a memory of the disease so it can fight it effectively next time.

One issue with this kind of vaccine is that the bacteria or virus is dead, so it can’t replicate in the body. That sounds like a good thing, but hear me out: without replication, the immune system can ‘forget’ the disease, so these vaccines are usually followed up with booster shots. This keeps the bacteria or virus in the body long enough for the immune system to properly generate memory.

Vaccines that use this method include those against influenza (the flu), cholera, bubonic plague, polio, hepatitis A and rabies.

 

A live attenuated vaccine is slightly more complicated. The disease-causing bacteria or virus is made less virulent, in most cases effectively harmless, but it is still live. That means it can still infect you and replicate inside you, but its ability to cause disease is removed. This gives it a distinct advantage over an inactivated vaccine: because it continues to replicate inside the body (albeit at a much slower rate than normal), boosters are not needed as often. No one likes getting an injection, after all, and this also saves a lot of time for healthcare professionals.

However, with some live attenuated vaccines there is a tiny, tiny chance that the disease-causing bacteria or virus will mutate and restore its natural virulence. We’re talking a million-to-one chance, of course, but that really sucks for that one person. Furthermore, live attenuated vaccines can’t be used by people who are immunocompromised, such as those with HIV, because their body has no defence even against this weakened bacteria or virus.

Unfortunately, there are some bacteria and viruses that can’t be vaccinated against using an inactivated vaccine and need a live attenuated one instead. Healthcare professionals take this into account when developing vaccines, and certain safeguards and ethical considerations need to be met before a vaccine will even hit the shelves. Is it worth immunising a million people against a disease if the vaccine itself will infect one of them? Keep in mind that even then the disease is still unlikely to be fatal. And we must remember that once the entire planet has the vaccine and the disease is thereby eradicated, then, as with smallpox, we won’t need the vaccine any more. In the case of smallpox, for example, the vaccine was fatal for 1-2 people per million vaccinated. We have to consider whether that is worth it.

Examples of diseases with live attenuated vaccines are tuberculosis (the BCG vaccine), measles, mumps and rubella (the MMR vaccine), influenza, chickenpox, polio and typhoid.

 

Case study: Polio

You may have noticed that polio has both an inactivated vaccine and a live attenuated vaccine. The reason for this is super interesting and highlights one of the things that healthcare professionals need to consider when preparing and prescribing a vaccine.


Polio is primarily a disease of the intestine and is passed on through the fecal-oral route. That means the only way you can catch it is by eating or drinking something contaminated with the feces of an infected person. It’s therefore pretty hard to catch, but if you do catch it there’s absolutely no cure: you can only wait for your body to fight it off, and any damage it does in the meantime is permanent.

The good news is that 72% of people who catch it will show no symptoms, so effectively you might as well not have it. 24% of people will have minor symptoms like a sore throat, nausea and flu-like illness, and 1-5% will have major symptoms similar to those of meningitis. But about 0.1-1% of people, roughly 1 in 1000 children or 1 in 75 adults, will develop permanent paralysis. That’s not confined to limb paralysis; in some cases it paralyses the muscles used for breathing, which is fatal without assistance.
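
To get a feel for what those percentages mean, here is a rough back-of-the-envelope sketch applying them to a hypothetical 100,000 infections. The rates are the approximate figures quoted above (midpoints taken where a range is given), so the numbers are only illustrative.

```python
# Rough illustration of the polio outcome figures quoted above,
# applied to a hypothetical group of 100,000 infections.
infections = 100_000

outcome_rates = {
    "no symptoms":              0.72,   # ~72%
    "minor, flu-like symptoms": 0.24,   # ~24%
    "meningitis-like symptoms": 0.03,   # ~1-5%, midpoint-ish
    "permanent paralysis":      0.005,  # ~0.1-1%, midpoint-ish
}

for outcome, rate in outcome_rates.items():
    print(f"{outcome:<26} ~{int(infections * rate):,} people")
```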

It’s horrible, but that doesn’t mean it’s not interesting. You see, although the virus starts in the intestine, it can move into the bloodstream and from there into the central nervous system. This is what happens in those who develop paralysis. And this is a significant problem for doctors because, although I’ve been describing immunity as a binary thing – you either have it or you don’t – it’s actually a bit more complex than that…

 

The polio vaccine

The immune system protects you on two fronts here: mucosal immunity in the gut, where polio starts, and humoral (antibody) immunity, which blocks the polio virus from entering the blood and reaching the central nervous system. What’s interesting is that the inactivated polio vaccine, which is injected into the muscle of your arm, gives you immunity to the virus travelling in your blood – effectively making you immune to paralysis should you get infected by polio. But it does little to stop polio from infecting your intestine in the first place. The live attenuated (oral) polio vaccine, however, does get into your intestine, replicates there and travels in your blood just like the real virus, so you get immunity to both the initial infection and paralysis.

So maybe you’re thinking: why do we even have the inactivated vaccine if the live attenuated vaccine is better? Well, the live attenuated polio vaccine, like many other live attenuated vaccines, has a small chance of mutating and becoming fully virulent again. Because of this, roughly one person per million vaccinated will be paralysed by the vaccination itself. This is where healthcare professionals need to think carefully about who to vaccinate, and it all comes down to statistics.

We need to ask ourselves whether one paralysis per million is a big number or a small number given the amount of polio in that country. In countries like Pakistan or Afghanistan, where polio is a legitimate concern, it’s better to risk one paralysis per million to stop the spread of the infection at its source. But in a country like the US, where the chance of even catching polio is less than one in a million, there’s no sense in using a vaccine that will paralyse more people than would have been infected in the first place.
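
As a toy sketch of that trade-off: compare the expected paralysis cases caused by the live vaccine itself with those caused by the disease in an unvaccinated population. The 1-per-million vaccine figure is the one quoted above; the infection risks and the 1-in-200 paralysis-per-infection rate are illustrative assumptions, not real surveillance data.

```python
# Toy risk comparison: live (oral) polio vaccine vs the disease itself.
VACCINE_PARALYSIS_RISK = 1 / 1_000_000   # figure quoted in the text
PARALYSIS_PER_INFECTION = 1 / 200        # assumed share of infections that paralyse

def expected_paralyses(population, annual_infection_risk):
    """Expected paralysis cases in one year: (vaccinating everyone, vaccinating no one)."""
    from_vaccine = population * VACCINE_PARALYSIS_RISK
    from_disease = population * annual_infection_risk * PARALYSIS_PER_INFECTION
    return round(from_vaccine, 2), round(from_disease, 2)

# Where polio still circulates (assumed 1-in-1,000 yearly infection risk),
# the disease is by far the bigger danger:
print(expected_paralyses(1_000_000, 1 / 1_000))        # (1.0, 5.0)
# Where polio is essentially absent (assumed 1-in-10,000,000 risk),
# the vaccine itself becomes the larger risk:
print(expected_paralyses(1_000_000, 1 / 10_000_000))   # (1.0, 0.0)
```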

The situation was different 20 years ago: almost every country used the live attenuated vaccine because catching polio was a legitimate concern everywhere. The success of vaccination means that more and more countries now use only the inactivated vaccine and eventually, as with smallpox, no vaccine will be necessary at all.

 

Case study: Influenza

You may have also noticed that influenza has an inactivated vaccine and a live attenuated vaccine. We’ll come back to that in a minute but firstly we need to talk about the influenza vaccine because it’s also pretty interesting stuff.

The thing about influenza is that it’s not just one virus. There are lots of slightly different influenza viruses that all cause the same disease but all look different to your immune system. To give you an idea of how many there are, think about the last big flu scare: the H1N1 swine flu of 2009. That sequence of letters and numbers tells us about the type of flu we’re dealing with, where the H and the N stand for two proteins called hemagglutinin and neuraminidase, respectively. There are 18 different H proteins and 11 different N proteins, giving us the possibility of any flu from H1N1 through to H18N11 and every combination in between. There are also different strains within each of these subtypes. For example, the 2009 H1N1 swine flu was a different strain from the previously identified H1N1 and has since replaced it as the dominant strain. And that’s only type A influenza; there are also types B and C, which include their own further strains.
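
Just to put a number on the variety implied by those figures (subtypes only, before counting the strains within each one):

```python
# Counting the influenza A subtypes implied by the H/N figures above:
# 18 hemagglutinin (H) variants x 11 neuraminidase (N) variants.
H_VARIANTS = 18
N_VARIANTS = 11

subtypes = [f"H{h}N{n}" for h in range(1, H_VARIANTS + 1)
                        for n in range(1, N_VARIANTS + 1)]
print(len(subtypes))        # 198 possible subtypes, from H1N1 to H18N11
print(subtypes[:3], "...")  # ['H1N1', 'H1N2', 'H1N3'] ...
```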

Incidentally, this is the same reason we can’t cure the common cold: there are so many different cold-causing viruses, all constantly changing and evolving, that we simply can’t vaccinate against them all. So how do we make a flu vaccine?

 

The influenza vaccine

As you might already know, the flu vaccine changes every year and it’s recommended that you get a new flu shot every year to protect you from whatever the dominant flu type is. The interesting thing is that the vaccine protects you against the flu virus type that research indicates will be the most common in the upcoming season.

The truth is that it’s impossible to know for sure which flu types will predominate in the coming year. Hell, the dominant type can even change within the same season. To further complicate matters, scientists need to say which flu they believe will predominate many months before the flu season even starts in order to prepare and produce the required amount of vaccine for everyone to get vaccinated on time.

That doesn’t mean that getting a flu vaccine is ever a bad choice, particularly for those groups at high risk – the very young, the frail and the elderly. We’ve been doing this a long time now and we are very good at predicting which flu type will predominate. This isn’t guesswork: data are published weekly on which flu types are circulating each season, with further data published after the season on the overall effectiveness of the vaccine and suggested improvements.

In addition, the flu vaccine is designed to protect against three or four different flu types to ensure that we cover as many bases as possible, and even if the prediction isn’t exactly right the vaccine can still provide partial protection against a related type. Depending on the predominant flu type, the vaccine is sometimes inactivated and sometimes live attenuated. Often both are offered, with no real disadvantage to either one, and the more we have the more people we can vaccinate.

Sometimes, as in the case of the 2009 swine flu, the predominant flu type is not covered by the existing vaccine at all and therefore poses a significant threat to those high-risk groups. This is when a new vaccine needs to be prepared immediately. That takes time, and time is all the virus needs to become an epidemic, which is always a possibility. The 1918 H1N1 Spanish flu killed more people than the number of soldiers who died in the First World War (1914-18), claiming some 3-5% of the world’s population. We like to think that we are better prepared 100 years on, but if our vaccine doesn’t work that year we are in no better position than those in 1918.

We tend to trivialise the flu as ‘just a bad cold’, but the flu is a legitimately dangerous disease that can be fatal even in the young and strong, and we should not underestimate it.

 

Disease eradication through vaccination

As of now, smallpox is the only human disease to have been eradicated entirely thanks to vaccination, but many diseases are well on the way.

Polio is almost completely eradicated, with only 56 cases so far in 2015, and it’s already considered eliminated in the Americas and Europe.

Measles, mumps and rubella (German measles) are also almost completely eliminated in the Americas and Europe, with just a handful of cases each in 2015. However, these diseases have actually been rising again over the last couple of years, and that is all thanks to the belief that the MMR vaccine causes autism…

 

Does the MMR vaccine cause autism?

No. It doesn’t.

Andrew Wakefield, the scientist who wrote the original paper claiming a link between the MMR vaccine and autism, has been shown to have falsified his data. As a result the paper was retracted from the journal and he was stripped of his medical licence. No follow-up study has ever been able to reproduce his results, and the most recent study, of around 95,000 children, showed no evidence of a link between vaccines and autism, even among children with autistic siblings.

 

How we Changed Medicine Forever – Antibiotics

Antibiotics are so inherent to our idea of what healthcare is that it’s quite likely you already know what they do and you could also probably even name a few of them. What’s crazy about this is that there are people alive today who still remember what life was like before antibiotics. 

 


 

The fact is that antibiotics are less than 100 years old. Penicillin, the first true antibiotic, was not successfully used for treatment until 1942.

 

So what did people do before 1942?

Well, perhaps as expected, people died from bacterial infections, to such an extent that the leading causes of death before the invention of antibiotics were pneumonia, tuberculosis and enteritis – all bacterial infections. The fourth leading cause of death was heart disease, which has since replaced bacterial infections as the leading cause of death today. In fact, the likelihood of dying from a bacterial infection today is extremely low, with the chance of dying of pneumonia roughly the same as that of dying in a car accident (0.15%). Nowadays, death by bacterial infection is only really a significant risk for people who are immunocompromised, such as those with HIV.

Before the invention of antibiotics, though, treatment was largely based on old medicinal folklore. I went into a bit more detail about old medical beliefs in my blog post about the Black Death but, essentially, doctors would treat the symptoms and hope that the body sorted itself out. They would attempt to lower the temperature of those with fever, burst swellings, cover sores and use a variety of herbal preparations believed to have healing effects.

This could be somewhat effective for external infections such as wounds on the skin, and if you had an infected wound on one of your extremities you could always just have it amputated. Of course, amputation leaves you with a considerably larger wound, so you’d better hope that one doesn’t get infected straight afterwards. Internal infections such as pneumonia were a different beast altogether: how do you treat what you can’t see? You just had to hope the body could deal with it. And before antibiotics, around 40% of those bodies could not.

 

How we discovered antibiotics

The path to the discovery of antibiotics started in the 1880s in Germany thanks to a man known as Paul Ehrlich. In his early career Ehrlich was crazy about dyeing or ‘staining’ cells. This is simply a technique where certain colourful chemicals can be used to stain a cell a certain colour. These chemicals are called stains and specific stains only stain specific cells so if you have a mixture of cells you can put a stain in there and it will only stain the specific cell type you are interested in. Useful.

This doesn’t sound at the moment too relevant to antibiotics but in 1882 Ehrlich made one of his greatest achievements, he was able to improve upon a stain that identified bacteria. Specifically, tuberculosis bacteria (Mycobacterium tuberculosis).

We need to pause here for a second, because the ability to stain bacteria was a discovery whose importance cannot be overstated. As scientists today we can fall into the trap of trivialising staining because we’ve had the technology for so long, but the ability to selectively pick out the bacteria directly responsible for one of the most deadly diseases of the time was an unimaginably big deal. Remember that it was only in the 1860s that Louis Pasteur finally proved that micro-organisms such as bacteria were what caused disease. For someone to then come along in the 1880s, just 20 years later, and show that we could not only blame bacteria for disease but stain them and tell them apart from all other cells… it’s incredible.

But the really great thing about this discovery is the natural progression to the next idea – if we can specifically stain disease causing bacteria cells, why can’t we specifically target them with something that will kill them? This ‘magic bullet’ was exactly what Ehrlich worked on next.

 

The first antibiotic

After screening hundreds of stains against various organisms Ehrlich, in 1907, finally discovered a medicinally useful drug, the synthetic antibacterial salvarsan (now called arsphenamine). The drug was a very effective treatment for syphilis which was much needed at the time as syphilis was practically an epidemic.

Despite its effectiveness, salvarsan wasn’t a particularly pleasant drug for the patient. It was based on arsenic for a start, which is incredibly toxic, and it was notoriously difficult to prepare; indeed, if incorrectly prepared, the patient would likely die of arsenic poisoning. Even when successfully prepared, the drug had to be injected into both buttocks ‘deeply, slowly and gently’, which caused the patient immense pain that lasted as long as 6 days. Once in the body, the drug went on to cause nasty side effects including liver damage, which in some people progressed to loss of life or limb.

This left patients with a choice between an agonising treatment that might itself result in death, and continuing to live with syphilis, which would eventually lead to death, but potentially a long time in the future. It’s perhaps unsurprising, then, that many considered the disease preferable to the cure.

 

The explosion of antibiotic research

The issues with salvarsan turned people off antibiotic research, with many scientists preferring to investigate vaccines and other immunological techniques. That changed in 1932, when Bayer’s labs in Germany, continuing Ehrlich’s work on antibacterial stains, discovered the new antibiotic Prontosil. Prontosil was vastly different from salvarsan: it wasn’t arsenic-based, for a start, and it was effective against a broad range of bacterial infections rather than just a single disease like syphilis. This was much closer to Ehrlich’s ‘magic bullet’ and, following its clinical success, it was widely acclaimed by the medical community.

The problem with Prontosil though was that it was quite difficult to manufacture, making it problematic to mass produce. Thankfully, in 1936 a team of scientists at the Pasteur Institute in Paris managed to isolate the chemical in Prontosil that was responsible for its effect. This chemical was sulfanilamide and was both cheaper and easier to mass produce. This was the very first sulfa drug.

This name may not mean much to you, but trust me: the first ever sulfa drug was a legendary discovery that practically catapulted us into the antibiotic age. It opened the floodgates as labs all over the world started discovering, improving and manufacturing their own sulfa drugs. It has been suggested that as many as 5,400(!) different sulfa drugs had been created by 1945. These drugs were subsequently indispensable during World War II and saved the lives of tens of thousands of soldiers.

Even today we still use sulfa drugs for things like acne and urinary tract infections and, with the rise in antibiotic resistance that we see today, sulfa drugs are attracting renewed interest.

Despite this, the biggest antibiotic discovery was still to come…

 

Penicillin

The story of penicillin is a fantastic one because it was discovered completely by accident.

The scientist Alexander Fleming, working out of a lab in St. Mary’s Hospital in London, was known at the time as a great researcher who had already made large contributions to the medical field and was very interested in bacteria and antibiotics.

He was also known as a rather untidy man, and his lab was no exception. In September 1928 he returned from vacation to find that one of the Petri dishes he’d carelessly left open on his lab bench was covered in mould, thanks in part to his also having left the window open. Before throwing it away, though, he noticed something very odd about this particular dish. In it he’d been working on a culture of Staphylococcus (a bacterium best known for causing skin and wound infections), and surrounding the mould there was no longer any evidence of the Staphylococcus bacteria. Fleming would have seen something like the image on the right, where the Staphylococcus bacteria are growing along the zig-zag line and the white mould produces a halo around itself where the bacteria have been killed.

Fleming correctly theorised that something in the mould had antibiotic properties and started growing more of it. He then extracted the ‘mould juice’ and tried it on many other types of bacteria, which again demonstrated its antibiotic effect. Importantly, this time he showed that the mould worked against multiple types of disease-causing bacteria.

Thrilled with his discovery, he had the mould analysed and found that it belonged to the Penicillium genus. He therefore gave the substance its famous name: penicillin.

Unfortunately, whatever it was in the mould that had the antibiotic effect proved extremely difficult to extract and stabilise, and despite Fleming’s best efforts he was never able to do it himself. It wasn’t until 1942, 14 years after its discovery, that a team of scientists at the University of Oxford refined it and it was mass-produced, just in time for it to be used in World War II.

 

The penicillin legacy

Now, if we’re talking about ‘magic bullets’, penicillin most certainly was one. The first penicillin was highly effective against many common bacteria, had very low toxicity in people and produced no significant side effects. The only issue they had at the beginning was mass-producing it.

You see, it was very difficult to keep up with the sudden and dramatic demand for this wonder drug. One issue was that it was brand new, so techniques for quickly mass-producing it hadn’t been developed yet; the other was that it didn’t last long in the human body once administered. It lasted only some 3-4 hours before being excreted, which meant that multiple doses were needed to keep it in the body. Indeed, this was such an issue that the first patient treated with penicillin later died because the supply ran out before he was fully cured. So how did they remedy this? Well, doctors would collect their patients’ urine in order to recover the penicillin and reuse it. Thankfully, in 1945 a method for the mass production of penicillin was devised, which saved these doctors from literally taking the piss.

But it wasn’t just penicillin itself that was the big deal; the broad range of drugs that came from it was just as important. The original penicillin, which we now call penicillin G or benzylpenicillin, couldn’t cure everything, but it was suggested that the core of the penicillin molecule could be altered to produce more varied antibiotics. And they were right: in 1960 we had methicillin, in 1961 ampicillin, in 1970 flucloxacillin and in 1972 amoxicillin, to name a few. These are drugs many of you may know because you’ve probably been treated with them.

However, penicillin-based drugs are gradually falling out of favour. The last new penicillin-based drug (excluding codrugs) was temocillin, created in 1984. The key reason for this decline is antibiotic resistance.

 

Antibiotic resistance

Antibiotic resistance is a global problem and one that may be a defining issue of our generation.

You’ve probably heard of MRSA, which stands for methicillin-resistant Staphylococcus aureus. Staphylococcus aureus, or staph, is a common bacterium that causes many skin and respiratory infections and is usually treated with penicillin-based drugs. MRSA, though, cannot be treated with any penicillin-based drug. The name is admittedly confusing because it suggests that only methicillin has no effect, but when MRSA was discovered in 1960 methicillin was the antibiotic of choice against staph, so when resistance was discovered it was called methicillin-resistant. Incidentally, methicillin has long since been replaced and you’ll never be prescribed it these days.

Thankfully, following the success of penicillin, a lot of research was directed towards discovering more antibiotics to fill the gap. So although penicillins made up the majority of antibiotics from the 1940s, you could say the penicillin age ended around 1978, because since then we’ve mostly produced new antibiotics completely unrelated to penicillin. Not that we should stop using penicillin-based drugs, of course; if it ain’t broke, don’t fix it. But when the day comes that bacteria gain resistance to our other penicillins, it’s nice to know that we have backup antibiotics, and the less we use them now, the less chance there is of bacteria developing resistance to the new drugs.

 

How antibiotic resistance develops

Antibiotic resistance develops in bacteria through simple evolution, the same way we evolved from our ape-like ancestors or gained the ability to digest milk as adults – survival of the fittest.

A group of bacteria, like a group of humans, is not made up of genetically identical individuals. In the same way that some people are genetically more resistant to the common cold than others, bacteria differ in their resistance to antibiotics. In one group there might be a whole range, from completely susceptible to completely resistant. Those are the extremes, of course; most bacteria will be around the susceptible end, with an odd few that are resistant, and even among the resistant some will be highly resistant and some only slightly.

So what happens when this group of bacteria is infecting your throat, for example? Well, we blast them to hell with antibiotics. We blast them with so much antibiotic that all but the very resistant are completely destroyed, leaving the body’s natural immune system to mop up the stragglers. That’s how it should work, and how it does work in the overwhelming majority of cases.

But. Let’s say your course of antibiotics is for 10 days and you start feeling better after 7 days. Which you should: the majority of the bacteria are probably dead at this point. But think about the ones that are left – they will be the most resistant. So let’s say you stop taking your antibiotic because you feel better. Great, except now all you have left in your body are highly resistant bacteria and there’s no more antibiotic keeping them in check. So they multiply.

And this is where bacteria really shine because, unlike the millions of years it took for apes to become modern humans, bacteria multiply in minutes. E. coli, for example, can divide every 20 minutes. That means the number of E. coli in your body can double every 20 minutes, so if you started with 2 you would have over half a million within 6 hours. And because they all came from those 2 cells with very high antibiotic resistance, you now have half a million bacteria with very high antibiotic resistance. So you start feeling sick again and you go back to your doctor, who has no choice but to prescribe a different, stronger antibiotic.
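
Here is the doubling arithmetic from that paragraph written out, in case you want to check it yourself:

```python
# Start with 2 resistant E. coli cells that divide every 20 minutes
# and let them grow unchecked for 6 hours.
start_cells = 2
doubling_time_min = 20
elapsed_min = 6 * 60

doublings = elapsed_min // doubling_time_min      # 18 doublings in 6 hours
final_cells = start_cells * 2 ** doublings
print(f"{final_cells:,} cells")                   # 524,288 - the 'half a million' above
```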

That is why, ladies and gentlemen, you need to finish your course of antibiotics. Because if you manage to pass those very resistant bacteria on to someone else, and they pass them on, and they pass them on… eventually the resistant bacteria will be the only ones left and we’ll have another MRSA.

 

The end of the antibiotic ‘golden age’?

This paints a rather grim picture of the future, but this chain of events is an extremely rare occurrence, and if any resistant bacteria do spread they are usually eliminated by the person’s immune system before they can multiply, or the bacteria multiply and kill the person (along with themselves). The dying person will probably be in hospital by that stage which, by the way, is why hospitals are currently the most likely place to catch MRSA.

This is why the golden rule for doctors prescribing antibiotics is to prescribe only when necessary. The fewer antibiotics we use, the less chance there is of bacteria developing resistance. But you can do your part too by simply finishing your course of antibiotics.

But honestly, the biggest threat to our continued use of antibiotics is the livestock industry. Some 80% of all antibiotics used in the US are given to livestock to prevent disease in animals that later become our food. It has recently been reported that some strains of E. coli and K. pneumoniae, the latter being a cause of pneumonia, have become resistant to our last-resort antibiotic. The reason is that, for some reason, this last-resort antibiotic is also being used in livestock farming, giving bacteria all the time they need to develop resistance and become a problem for us. If you do happen to catch an infection from one of these resistant strains, there is nothing modern medicine can do for you. If these strains spread worldwide, we will return to pre-antibiotic pneumonia fatality rates of 30-40%. If anything serious is to be done about the overuse of antibiotics, it should start with the livestock industry.

 

I didn’t mean for this article to get so bleak towards the end because we mustn’t forget that we have scientists constantly researching new antibiotics and other therapies that can go beyond what even antibiotics can do. This article is already too long to comment on them now but an article about our development of new technologies to deal with disease in the future will be forthcoming.

So for now, enjoy your life! The quality of healthcare available to you because of antibiotics is greater than at any other time in history, and some very clever people are working hard to make sure that this will continue.

 

TL;DR The first antibiotic was bad, the second was an improvement then penicillin came along and blew everything else out of the water. Now we need to develop new antibiotics and other technology because bacteria are becoming resistant to the ones we have.

 

 

The Black Death and social upheaval

The Black Death (1346-53) was one of the worst pandemics in human history. It resulted in the deaths of an estimated 75-200 million people across Europe, Asia and North Africa, wiping out 30-60% of Europe’s entire population. To put that in perspective, the world population at the time was only around 450 million. Epidemics and pandemics are not uncommon throughout history, but what’s significant about the Black Death is that its consequences are still felt today.


 

The cause of the Black Death

Rats actually get a bad reputation for spreading the plague that caused the Black Death; the real culprit is the flea.

Plague is caused by a bacterium called Yersinia pestis, which gets into fleas when they feed on the blood of an animal already suffering from plague. When the flea unknowingly ingests the plague bacteria, they cause a blockage in the ‘throat’ of the flea which prevents it from feeding. In a rather disgusting biological mechanism, the hungry flea desperately tries to feed on an animal, which dislodges the blockage of bacteria, and the flea literally pukes the bacteria back into the animal it’s feeding on. Delightful.

Some rats, however, are what’s called a reservoir for plague bacteria. This means they can be infected by plague bacteria but suffer no consequences. Unfortunately for us this means that the flea simply has to feed on an infected rat and it will transfer the plague bacteria to the next thing it feeds on. In the case of the Black Death, it is believed that the next thing the flea fed on was another type of rat that had no resistance to the plague. Fleas, you see, are normally specific to one type of animal and rat fleas will tend to only feed on rats. The problem occurred when the newly infected rats started dying of plague because suddenly there were a lot of fleas around with nothing to eat. When there were no rats left, humans started looking pretty appetising…

 

Human infection

Once an infected flea bites someone, the plague bacteria travel to the lymph nodes which, in humans, are in the groin, armpits and neck. The bacteria cause the lymph nodes to swell, and the swellings then turn black due to an accumulation of dead blood and pus; without treatment they start travelling all over the body. If the swellings are left alone the victim will die from the build-up of dead blood, but if the swellings are popped the victim may die from toxic shock. Add to this a fever and vomiting blood, and it’s no surprise that as many as 80% of all victims died within 4 days of showing symptoms. Nowadays, with antibiotics, the likelihood of death is only around 15%.

So how did the Black Death actually end? Well, one important reason is that plague doesn’t spread very easily from human to human. Unless, of course, you go around poking the infected – the blood contained in the swellings was full of bacteria. Luckily, the plague bacteria don’t exist in large amounts elsewhere in human blood, so human fleas rarely consumed enough bacteria to pass it on to someone else. That meant that once enough infected rats and humans had died, the rat fleas quickly died out too due to starvation. It also helped that by that time people had learned to stay away from those already infected.

The population of reservoir rats didn’t change, though, which meant that plague was still around, and further outbreaks did happen throughout the following centuries. No later outbreak in Europe, however, matched the devastation of the Black Death.

 

Arrival and spread

So where did it come from? Well, plague is actually quite common among many rodents in Asia and there is strong evidence to suggest that there had been plague outbreaks in China occurring around 15 years before it arrived in Europe. Then, thanks to the burgeoning trade routes of the silk road, it was easy for rats harbouring plague bacteria to stow away on transport vessels of Mongol traders.

A more macabre story of the arrival of plague in Europe, however, comes from the Mongol siege of the trading city of Caffa in the Crimea in 1347. The siege had already been quite drawn-out and the Mongol army was suffering from the plague. So, in one of the first recorded examples of biological warfare, the Mongols catapulted infected corpses over the walls of Caffa. Traders fled the city in fear of the plague, unknowingly carrying it with them to the ports of Sicily in the south of Italy. From there it spread north through Europe, by trade and also by people fleeing the infected regions and carrying it with them. The rest, as they say, is history.

Importantly, however, not everyone was slow to realise how fast the plague was spreading, which brings us to the first of the consequences of the Black Death that we still feel today. The port of Dubrovnik in Croatia made ships and people wait 40 days before entering, to be sure that they wouldn’t bring plague into the city. They called this period quaranta giorni, which simply means ‘forty days’ in the Venetian dialect of Italian. This phrase gave us the English word quarantine.

 

Medicine in times of plague

It would be an understatement to say that doctors at the time were not prepared for the plague.

You see, medicine in the Middle Ages was strongly influenced by alchemy, a belief system that combined religion and spirituality with magic and mythology. You might be familiar with alchemists as those people who tried to turn lead into gold, and you’d be right. Alchemists believed that everything had a perfect form, and for metals that perfect form was gold, the most perfect state any metal could reach. Perfection didn’t just apply to metals, though; it applied to all things, even people. Alchemists believed that if a person became pure enough through spiritual means they could achieve longevity, immortality and then redemption.

This translated into medicine because herbs or minerals would be blended to change them into a better form, one that could fight disease. It was all about harnessing the hidden powers of ordinary objects. It may sound silly now, but sometimes they hit upon something worthwhile, which gave the theory some credibility. St. John’s wort, for example, is a herb that has been used for centuries for its anti-inflammatory properties. Alchemists would have made a preparation of St. John’s wort and said they had unlocked its hidden power, but we now know the effect is due to hyperforin, a potent anti-inflammatory compound found in large amounts in the plant.

So how effective was alchemy against the plague? Simply, it wasn’t. One suggested treatment was to swallow gold as they believed the perfection of gold would counteract the corruption of plague. It didn’t work. Another remedy they used was distilled spirit which just means strong alcohol. That didn’t work either. Although, if I knew I was going to die in a matter of days then I can’t imagine I would turn down a little distilled spirit. Consequently, the demand for distilled spirit rose considerably after the plague.

Many of the doctors of the time realised that their remedies were doing nothing and, rather than contract the plague themselves, promptly fled – they may not have known about modern medicine but they were not stupid.

 

The plague doctor

The lack of real doctors gave rise to a group of mostly volunteers who produced one of the most iconic images of the time, the plague doctor.
The plague doctor (concept art by zyanthia)

The job of the plague doctor was to verify whether people had been infected, and to record deaths in the public record. They also attempted to treat people, typically with alchemical preparations and bloodletting. They had power and respect beyond many other public servants and were paid extremely well.

The costume of the plague doctor really highlights the medical beliefs of the time. Keep in mind that in the middle ages nobody knew how diseases arose and spread. They didn’t know about bacteria and germs, or the importance of cleanliness and hygiene. The dominant explanation was the miasma theory, whereby disease came from ‘bad air’. In this theory, a bad patch of air contained poisonous material that caused disease and could be located by its terrible smell. It was believed that people (or animals) couldn’t pass disease to each other; rather, people got infected together because they were standing in the same patch of bad air. The theory seemed to fit a disease like malaria, which is spread by mosquitoes that typically live around smelly swampland – the very name malaria comes from the medieval Italian mala aria, meaning bad air.

For this reason, the iconic beak-like mask of the plague doctor would be filled with nice-smelling herbs and spices to overpower the ‘bad air’. The long coat and hat were worn to further prevent the body from coming into contact with it. And although this outfit did protect the wearer from the real danger, flea bites, it is very likely that plague doctors carried fleas on their clothing from infected areas to uninfected ones, unknowingly spreading the disease even more…

So how does this affect us today? Well, once it became clear that the medicine of the time was completely ineffective against the plague, people began to think that maybe they were wrong about the causes of disease and how to treat it. This started a period in which doctors looked more closely at the human body in sickness and in health to study the process of disease. Surgery became an important part of medicine as more doctors stopped thinking of treatment as something spiritual or magical and more as something that needed direct intervention. The truth is that we may have the Black Death to thank for the advances in medicine that led to our current understanding of disease prevention and cure.

 

Social change

You might expect medicine to undergo changes following such a pandemic but the loss of 30-60% of the population of a continent also effected serious social change.

 

The end of feudalism

Following the end of the Black Death there were simply not many people left. England, for example, lost as much as 70% of its population, leaving it with just 2 million people by 1400. Between 1350 and 1500, more than 1,300 villages were abandoned entirely.

At this time Europe was deep into feudalism, which, broadly speaking, meant that all the land was owned by lords who had peasants work it for them. There was almost no social mobility: you were born a lord or a peasant and you accepted your role in life. After the Black Death, though, there was the same amount of land but far fewer people to work it. What this meant for the peasants was that the lords needed them.

Now think about this from the peasant’s perspective. Any peasant who survived the Black Death in England had watched many of their friends and family die. They now believe they are special, saved by God for a greater purpose. At the same time they have the freedom to move anywhere they want in the country, because lords everywhere need peasants to work the land. And because peasants are so scarce, the lords have no choice but to entice them with higher wages and better working conditions. Some can even buy their own land.

The price of land fell dramatically because there was now far more of it than there were hands to grow crops and raise livestock on it. This led to a new class of peasants, the yeomen: farmers who owned the land they worked, often up to 100 acres. The immediate effect was a weakening of the lords’ position, as some minor lords were almost indistinguishable from wealthy yeomen.

Of course, all this had to end eventually. The government of England passed laws to restrict peasant earnings to their pre-Black Death levels and to force peasants to stay on the land they had served before. Furthermore, they had to work church lands for free. Admittedly, both peasants and lords broke these laws, but contempt for the government over the laws and new taxes ultimately resulted in the Peasants’ Revolt of 1381.

And this wasn’t just in England; similar laws and exploitation of the peasants fuelled revolts all over Europe. The Jacquerie was a peasant revolt in northern France in 1358, and the revolt of the Ciompi occurred in Florence in 1378. Feudalism never recovered.

 

Improving efficiency

With fewer farmers around, maintaining previous levels of crop production was impossible. Grain farming in particular was incredibly hard work, so it is no surprise that people turned to other methods of farming. One of the more popular changes was a rise in animal husbandry: the rearing of animals on grassland, often by a single shepherd. The selective breeding of animals since then for more meat, more milk and more eggs has led to the farm animals we have today.

It wasn’t just peasant jobs that were made more efficient. Previously, a scribe had to painstakingly copy manuscripts and books by hand for wider circulation. Experiments with improving this work after the Black Death eventually led to the development of Gutenberg’s printing press. Gutenberg finished his first press in Germany in 1450, just under a century after the Black Death.

 

Opportunities for women

It was the job of the parish priest to give the final sacraments to those dying of plague. The problem with this was that many priests then caught the plague which resulted in a shortage of priests. Fortunately, the churches often had many laywomen who voluntarily gave their time to the church. It was these women who took over many of the roles that the priests had fulfilled.

Inheritance law also changed in the wake of the huge death toll. Previously, property and land were inherited by the eldest son, but after the Black Death they could be inherited by sons and daughters equally.

 

English language

In 1066 England was conquered by William the Conqueror, who came from Normandy in France. Naturally, he didn’t speak Old English like the natives; he spoke Anglo-Norman. From then on Anglo-Norman was used as the spoken language of the courts, where it became known as Law French. Incidentally, court documents were written in Latin.

After the Black Death the number of Law French speakers (mostly educated clerks) was drastically reduced, which caused problems for the courts. Furthermore, the peasants and yeomen were gaining power and more laws were being passed that affected them. The problem was that peasants didn’t speak Law French – in fact many lords didn’t either – so they complained that they couldn’t understand what was being said for them or against them in court. This led to the Pleading in English Act 1362, which stated that all pleas in court must be made in the common tongue of the land – English. The act practically designated English as the official language of England, so the Black Death played a direct part in the popularisation of the English language.

 

Conclusion

As a final word, I don’t want to give the impression that the Black Death was the cause of all this change – it was a catalyst. All the things I’ve talked about here – medical changes, peasant revolts, the rise of the yeomen and the growth of the English language – had been developing throughout the century. What the Black Death did was pull the trigger on social change that had been loaded in the chamber for a long time.

And let us not forget that the people living through the Black Death experienced the closest thing to a true apocalypse that any of us will ever know. Let us hope we never have to witness anything like it again.

Milk and human evolution

Milk and milk products are enjoyed by over 6 billion of the roughly 7 billion people on the planet. In the developed world, the primary reason people don’t consume milk products is that they have a reduced tolerance, or an outright intolerance, to milk. In other words, the main reason milk products aren’t enjoyed by everyone is not that people choose to avoid them but that they physically can’t have them. Which is crazy, especially when you consider that even someone with an intolerance must have tried a milk product at some point to find out…

Milk spill

The global pattern of intolerance to milk is quite interesting in itself. In Europe and America there is a historical reliance on milk products as a main food source. We know that the cow was domesticated in North Africa and the Middle East roughly 10,000 years ago, and herders then spread the domestic cow through Europe, India and eventually the Americas. This animal was so important as a food source for early populations in these regions that today only ~5% of people of European descent are intolerant to milk. In East Asia, where the cow was not used by early populations, up to 90% of people are intolerant to milk. The simple truth behind these statistics is that if you were an early European with an intolerance to milk, you couldn’t eat one of the main foods available. And if you can’t eat, well…

It is also for this reason that a common observation among East Asian people is that Europeans smell of milk.

 

Milk intolerance

The question that immediately comes to mind, then, is how can so many people be intolerant to milk when newborn babies drink nothing but milk? As far as digestion goes, human milk and cow milk are handled by the body in much the same way. Indeed, intolerance to milk in babies is extremely rare: babies from all cultures have no problem digesting human or animal milk, and signs of intolerance only start to reveal themselves at around 6 years old. The question is, why?

The answer lies in a gene called LCT, which encodes the very useful enzyme lactase. Lactase is what you need to thank for your ability to digest milk. You see, the main sugar in milk is lactose. Lactose is a fairly big molecule, though, and needs to be broken down into things the body can use – specifically the smaller sugars glucose and galactose – and lactase is what makes this possible. Lactase is produced in the small intestine, and when milk products pass through, it breaks the lactose down into those smaller sugars, which are easily absorbed by the body.

However, lactase is pretty useless to mammals after they finish breastfeeding, because normally they would never be expected to drink milk again. For this reason, lactase production slows down once we no longer need the breast: a regulatory system in the genome switches off expression of the LCT gene, and when LCT isn’t expressed, the body can’t make more lactase. This is advantageous for many mammals because the body then has more energy to spend on more useful genes – producing slightly more adrenaline to aid hunting, for example. Humans, however, decided that we would rather keep drinking milk.

So now the question becomes, how did we change this genetically encoded switch to allow us to keep drinking milk?

 

Milk and human evolution

The great thing about genetics, and what makes evolution possible, is that no one is exactly the same in any given population. Things like hair colour and eye colour are nice and obvious, but it’s the underlying things, those unnoticed biological processes, that can be so much more interesting.

In this case we need to cast our minds back to a population of cattle herders more than 10,000 years ago. Thanks to genetics most of those people will stop producing lactase at around the same age – they are the average cattle herder. A few, though, will stop producing lactase early, maybe around 2 years old. But one or two of them won’t stop producing lactase until they are much older, let’s say 16 years old. This is plenty old enough to have kids of their own. When they do have kids genetics helps us out again as their kids are quite likely to inherit the ability to keep producing lactase until they are 16 years old. Suddenly we have a family that can drink milk into their teens.

So the majority of the group are surviving by raising animals for meat or eating plants and berries, but one family is eating all of that plus a secondary food source – milk. Milk is even better than meat and berries: it’s renewable and it happily follows you around.

So what happens when disaster strikes? A particularly bad winter makes foraging for plants and berries difficult and the animals can’t survive the cold with no food in their bellies. Our milk drinking family though can give their cow the plants and berries they find and in turn they can drink its milk. 80% of the group might die from starvation but our milk drinking family survives.

The advantage of being the ones to survive means that you are also the ones who are going to have kids. That means more kids will be born with the ability to drink milk. And the longer you live, the more kids you can have. Genetics might do you a favour again and the newest kid can drink milk until he’s 20 years old. A 20 year old milk drinker might have 3 kids or more. Now who will be more successful? The family who can drink milk until they are 16 or the family who can drink milk until they’re 20? What about until they’re 30? 50? Keep that up long enough and suddenly it’s 2015 and around 95% of people of European descent can drink milk well into old age.

This is what is called natural selection, or more specifically evolution through natural selection. The environment (nature) greatly favoured those who could drink milk, so they were positively selected for: they were able to live long enough to breed and pass on their genes. This is what people mean when they say ‘survival of the fittest’, although biologists don’t like that phrase much because it’s not very descriptive. Just because a family can drink milk doesn’t make them more ‘fit’; it makes them better adapted to that particular environment. People from East Asia are just as fit as people from Europe – Europeans simply happened to evolve to drink milk.
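To get a feel for how quickly that compounding works, here is a minimal sketch in Python. The numbers are purely illustrative assumptions – a trait starting in 0.1% of herders and a 5% survival-and-reproduction advantage per generation, in a deliberately simple model – not real measurements:

```python
# A minimal sketch (illustrative numbers only): how a small survival advantage
# can push a lactase-persistence trait from rare to near-universal.
# Assumptions: a simple haploid model, the trait starts in 0.1% of the herders,
# and carriers leave 5% more surviving offspring per generation.

def next_frequency(p, s):
    """One generation of selection: carriers contribute (1 + s) times as many
    offspring to the next generation as non-carriers."""
    return p * (1 + s) / (p * (1 + s) + (1 - p))

p = 0.001        # starting frequency of the milk-drinking trait
s = 0.05         # assumed per-generation advantage
generations = 0

while p < 0.95:
    p = next_frequency(p, s)
    generations += 1

print(f"~{generations} generations (~{generations * 25} years at 25 years/generation)")
# With these made-up numbers: roughly 200 generations, i.e. on the order of 5,000 years.
```

With those assumed numbers the trait goes from rare to near-universal in roughly 200 generations – on the order of 5,000 years – which is at least in the right ballpark for how quickly milk drinking actually spread through European populations.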

milk evolution

Of course, now that we don’t just rely on cattle and milk to survive there is no selective pressure for our bodies to keep producing lactase, so don’t expect that ~5% intolerant population to change much. In fact, it’s likely to get higher again, as people with an intolerance to milk are now just as likely to breed as anyone else. Such is evolution.

So if you thought the best use for evolution would be human flight I’m sorry to disappoint you… but at least you can console yourself with a nice, cold glass of milk!

 

Preparation of milk

Drinking milk straight from the cow was, back then, mostly fine. Problems started occurring once populations began spreading out more. You see, milk happens to be an excellent breeding ground for bacteria and other microbes, and the longer the milk sits around, the more bacteria can breed in it. People who lived far from the cow needed milk to be transported to them, which gave the bacteria plenty of time to multiply… Because of this, the chance of picking up an infection from drinking milk was around 25%.
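To see why transport time mattered so much, here’s a back-of-the-envelope sketch in Python. The starting contamination and the 30-minute doubling time are illustrative assumptions (real growth rates depend heavily on temperature and on which microbes are present), but they show how fast things get out of hand in warm, untreated milk:

```python
# Back-of-the-envelope sketch: why transport time mattered before pasteurization.
# Assumed (illustrative) numbers: fresh milk picks up 100 bacteria per ml and the
# bacteria double every 30 minutes in warm milk. Real growth rates vary widely
# with temperature and with which microbes are present.

initial_per_ml = 100         # assumed starting contamination (bacteria per ml)
doubling_time_h = 0.5        # assumed doubling time in hours

for hours in (1, 4, 12, 24):
    doublings = hours / doubling_time_h
    count = initial_per_ml * 2 ** doublings
    print(f"after {hours:>2} h: ~{count:,.0f} bacteria per ml")

# after  1 h: ~400
# after  4 h: ~25,600
# after 12 h: ~1,677,721,600
# after 24 h: ~28,147,497,671,065,600 (in reality growth plateaus as nutrients
#                                      run out, but the milk is long past safe)
```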

Fortunately for us, in the 19th century Louis Pasteur discovered that heating beer and wine just enough to kill most of the bacteria in them prevented them from going bad. This procedure was so successful that it was named after him, giving us what we now know as pasteurization. Pasteurization was quickly applied to milk too, so for the first time milk could be stored and transported safely. Since its invention, the chance of getting an infection from drinking milk has fallen to around 1%.

 

Pasteurization

There are a couple of different ways of pasteurizing milk:

High-temperature, short-time (HTST) pasteurization is the standard method, whereby milk is forced through pipes heated to 72 °C for 15 seconds. It then goes straight into the bottle or carton and is labelled ‘pasteurized’ – easy as that.

Ultra-heat treatment (UHT) heats the milk to almost double the temperature of standard pasteurization, 140 °C, but for just 4 seconds. This milk isn’t exactly ‘pasteurized’; it’s closer to sterilized. The difference is that pasteurization kills almost all of the harmful bacteria but isn’t hot enough to destroy absolutely everything – a small number of bacteria survive, along with many nutrients, and the milk is perfectly safe to drink. Sterilization kills everything in the milk, so it doesn’t have quite the same nutritional properties, but the advantage is that you can store it for ages. The shelf life of pasteurized milk is around 3 weeks in the fridge because the remaining bacteria will eventually multiply in it; UHT milk, however, will stay fresh for 8-9 months.
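For quick reference, here’s the comparison from the two paragraphs above gathered in one place as a small Python snippet (the figures are the ones quoted in this post; exact temperatures, hold times and shelf lives vary between producers and regulations):

```python
# Quick side-by-side of the two methods described above. The figures are the
# ones quoted in this post; exact temperatures, hold times and shelf lives vary
# between producers and regulations.

pasteurization_methods = {
    "HTST (high-temperature, short-time)": {
        "temperature": "72 °C",
        "hold time": "15 seconds",
        "result": "pasteurized - nearly all harmful bacteria killed",
        "shelf life": "about 3 weeks, refrigerated",
    },
    "UHT (ultra-heat treatment)": {
        "temperature": "140 °C",
        "hold time": "4 seconds",
        "result": "effectively sterilized - everything killed",
        "shelf life": "about 8-9 months",
    },
}

for method, details in pasteurization_methods.items():
    print(method)
    for field, value in details.items():
        print(f"  {field}: {value}")
```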

 

Raw milk

Despite our advances in making milk safe to drink, there is a growing community of people who now prefer to drink raw milk. That is, unpasteurized, untreated milk straight from the cow. They claim that pasteurization removes many beneficial enzymes and nutrients from the milk, as well as weakening the flavour. Flavour is, of course, entirely subjective but it is true that pasteurization removes enzymes and nutrients.

The enzymes it removes, though, don’t affect our ability to digest milk at all, so it’s highly unlikely we’re missing out on anything important there. The nutrients destroyed by pasteurization, such as vitamin C and some B vitamins, are also not ones we need to get from milk, as our diet is already naturally rich in them. However, there are testimonies and anecdotes online from people who claim that raw milk has cured everything from acne to arthritis and irritable bowel syndrome.

In the end it comes down to personal preference, and I’m in no position to say whether raw milk improves a medical condition or not. It doesn’t change the fact, however, that raw milk can harbour bacteria such as E. coli, Campylobacter, Salmonella and Listeria. Since 2006 there have been more than seven disease outbreaks in Pennsylvania alone directly caused by Campylobacter and Salmonella bacteria in raw milk. The potential health risks are considered so great that raw milk is actually illegal to sell in over 20 states in the US. But that doesn’t stop determined raw milk drinkers, who can literally time-share a cow, receiving a ‘gift’ of raw milk in return for paying for its food and barn space.

Of course, I believe people should be able to make their own decisions about what they eat and drink. But we mustn’t forget that children can’t make their own decisions, and when a parent forces their own dietary choices onto their children, especially when something in that diet is considered dangerous, it might not turn out so well. Indeed, the Centers for Disease Control and Prevention (CDC) in the US found that of the 104 disease outbreaks associated with raw milk between 1998 and 2011, 82% involved at least one person younger than 20 years old. It’s something to keep in mind.