How to feed the world and poison people: the Fritz Haber story

Fritz Haber (1868 – 1934) is without a doubt one of the most interesting and controversial scientists to have ever lived. What made him unique is that his work represented the two extremes of scientific innovation – the ability to save lives and the ability to take lives. On one hand he revolutionised agriculture, allowing us to produce enough food to feed the growing global population, but during the First World War he turned his attention to chemical warfare, developing poison gas for use against the Allies. The story of his life is perhaps one of the greatest arguments for the necessity of ethics in science.

“A scientist belongs to his country in times of war and to all mankind in times of peace.” – Fritz Haber

 

World food crisis

At the end of the 19th century Germany had a problem. It was running out of food. This actually wasn’t just a problem for Germany; the whole world was running out of food. In fact, many people were worried that at a global population of 1.5 billion there simply wasn’t enough food to support further growth. This became the defining question for the scientists of the day: how could they feed the growing population?

Identifying the root cause wasn’t the problem – they already knew what was missing. Put simply, they needed more nitrogen.

Nitrogen is essential for plants to grow. I actually talked about the importance of nitrogen for plant growth in an earlier blog post. But to summarise it, without nitrogen there can be no life – seeds need nitrogen to grow, they need it to make the cells that will become the plant.

Now, there are a few places where you can get nitrogen and we used to get almost all of it from, well, poop. In particular, the droppings of seabirds and bats, otherwise known as guano. Guano was absolutely vital for 19th century agriculture. It was so important that the ‘Guano Era’ between 1845 and 1866 made Peru, thanks to its vast reserves, a very prosperous country. Spain even went to war with Peru for control of its guano-rich islands. It was basically the oil of its day. The problem was that there just wasn’t enough guano to support the level of agriculture needed to feed everyone.

 

Bread from the air

There was one nitrogen reserve though that hadn’t been tapped. Nitrogen, as you may know, makes up about 78% of the air we breathe. The trouble is, plants don’t take their nitrogen from the air, they take it from the soil.

Nitrogen in the air isn’t accessible to plants because it doesn’t float around in its simple form. It floats around fiercely bonded to another atom of nitrogen via one of the strongest bonds in chemistry – a triple covalent bond. It doesn’t matter if you don’t know what that is; the important thing to know is that this type of bond requires a large amount of energy to break apart in order to access the usable nitrogen atoms. For a plant it simply isn’t worth the effort when there is far more readily accessible nitrogen in the soil.

This isn’t necessarily an obstacle for humans though, and Fritz Haber, working at the time out of the University of Karlsruhe, Germany, started to think about how we could access this nitrogen. How could we break this triple bond?

In 1909 Haber came up with a solution. He took nitrogen (from the air) and hydrogen and placed them in a tank under extremely high pressure and temperature, in the presence of an iron catalyst. Under such extreme conditions there was enough energy to break the nitrogen triple covalent bond. This allowed hydrogen to elbow its way in and react with the nitrogen, forming new nitrogen-hydrogen bonds and producing the chemical we know as ammonia.
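In chemical shorthand, the overall reaction is simply nitrogen plus hydrogen giving ammonia:

$$\mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3}$$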

Ammonia, unlike nitrogen in the air, is very accessible to plants and was therefore an excellent fertiliser. Haber had achieved the impossible, he had harnessed nitrogen from the air in what was arguably one of the most significant technological innovations in human history. He had produced ‘bread from the air’.

Today, over 170 million tons of ammonia are produced every year using what we now call the Haber process. It allowed the global population, stuck at around 1.5 billion at the end of the 19th century, to grow to around 7 billion today, with 10 billion expected by 2050. It’s estimated that about half of the nitrogen in our bodies was made available to us directly by the Haber process.

It could be argued, then, that Fritz Haber enabled the existence of more life than any other person in history. Fittingly, he was awarded the 1918 Nobel Prize in Chemistry. The thing is, by then he was already considered by many to be a war criminal.

 

Guns from the air

Haber’s work on ammonia earned him the directorship of a new institute, the Kaiser Wilhelm Institute for Physical Chemistry and Electrochemistry. This didn’t just come with new work commitments, it also came with an entirely new social circle. He was now regularly speaking with cabinet ministers and even the emperor himself. This would be distinction enough for anyone but it was especially so for Haber who was a true patriot. He was a man who sincerely loved his country.

Because of this, when the First World War broke out in 1914 he willingly volunteered for service. He quickly proved his worth by using the Haber process and ammonia to produce the nitric acid needed for explosives. Previously, Germany would have relied on saltpetre imported from Chile, but the Allies had almost full control of Chilean saltpetre as most of it belonged to British industries. Haber had done it again and produced ‘guns from the air’, which allowed Germany to keep fighting on a level footing despite the apparent disadvantage.

Despite this success, Germany was suffering defeats on the front lines, which caused Haber to think of a new strategy. Drawing on previous experiments using chlorine gas as a weapon, Haber suggested driving the Allied soldiers out of the trenches with gas. Most German army commanders were opposed to the idea and wouldn’t let Haber use it, calling it ‘unchivalrous’ and ‘repulsive’ to poison men as one would rats. Not to mention, the Hague conventions of 1899 and 1907 prohibited the use of poison or poisoned weapons.

However, eventually Haber found a man willing to use the gas – Duke Albrecht of Württemberg, who was trying to take the city of Ypres, Belgium.

 

The Second Battle of Ypres, 1915

Haber arrived in Ypres in February 1915 and had his unit start setting up steel cylinders containing chlorine gas.

Haber’s unit was particularly interesting because rather than soldiers, he recruited physicists, chemists and other scientists who had to be rigorously trained in how to handle the poison gas. Haber’s ‘gas pioneers’ were all intelligent men whom Haber managed to convince that gas was the only option left available. His famous defence was that ‘it was a way of saving countless lives, if it meant that the war could be brought to an end sooner’.

Incidentally, one of these men was Hans Geiger, inventor of the Geiger counter, and three others (James Franck, Gustav Hertz and Otto Hahn) went on to win Nobel Prizes in their own fields after the war.

In total, the unit set up 5,700 cylinders of chlorine gas containing over 150 tons of chlorine. Haber’s plan was to release the gas into the wind, which would carry it over to the Allied trenches. They waited weeks for the right weather conditions but finally, on April 22nd, the wind was just right…

Some described it as a cloud but others just saw a low yellow wall inching closer as the gas crept along the battlefield. It was said that as it glided slowly through no man’s land, leaves shrivelled, the grass turned the colour of metal and birds fell from the air. It took mere minutes to reach the trenches.

Soldiers gagged, choked and convulsed as it hit, causing over 5,000 casualties and 1,000 deaths. It was the first successful deployment of a weapon of mass destruction.

Two days later, under more favourable conditions, another gas attack was attempted, this time resulting in 10,000 casualties and over 4,000 deaths. The New York Times reported on 26th April 1915 that ‘[The Germans] made no prisoners. Whenever they saw a soldier whom the fumes had not quite killed they snatched away his rifle … and advised him to lie down to die better.’

Death by chlorine gas was not a pleasant way to go. When breathed in, the chlorine would react with water in the lungs to form hydrochloric acid, which destroyed the lung tissue. Survivors described it as ‘drowning on dry land’.

 

The aftermath

In total, the Allies suffered some 70,000 casualties at Ypres and the Germans about half as many. Having seen the success of the gas attack, Haber only wished the Germans had used it sooner.

The Kaiser, having heard of Haber’s success, was delighted with the effect of the first gas attack and Haber was promoted to Captain – a rare rank for a scientist. A party was thrown shortly thereafter to celebrate Haber’s new status.

However, shortly after Haber’s return to Berlin, on May 1st, 1915, his wife, Clara Immerwahr, took his service pistol and shot herself in the chest. The body was found by their 13-year-old son.

Although she left no suicide note, it was generally believed that she killed herself in protest at her husband’s role in using poison gas. She had previously condemned his work publicly as a ‘perversion of the ideals of science’ and ‘a sign of barbarity, corrupting the very discipline which ought to bring new insights into life’.

The following morning Haber left behind his dead wife and grieving son and travelled to the Eastern Front, where he was due to initiate another gas attack on the Russians.

 

The legacy of chemical warfare

The use of chlorine gas by the Germans opened the floodgates to gas usage by both sides in the First World War. The British first used chlorine gas in September 1915, followed by the first use of a new gas, phosgene, later in the year. The first use of mustard gas came in 1917 and, like phosgene, it was actively used by both sides. Poison gas caused an estimated 1.3 million casualties in the First World War, of whom roughly 90,000 died.

The thing was, by the end of the First World War both sides had invented gas masks and other methods to counteract a poison gas attack, rendering it largely ineffective. In the years after the war public opinion turned heavily against poison gas weapons and the Geneva Protocol, signed in 1925, prohibited the use of all poison gas or bacteriological methods of warfare.

In the Second World War, neither Germany nor the Allies used war gases on the battlefields of Europe.

 

How Fritz Haber should be remembered

The question remains, then: how should Fritz Haber be remembered? Is he the man who saved the world from starvation and enabled billions of lives through fertiliser? Or is he a war criminal who deployed the first weapon of mass destruction and opened the world up to chemical weapons?

Like almost every question of morality there is no black and white. He fed billions of people so he can’t be totally bad but a good man wouldn’t push poisonous gas into the lungs of other human beings. But does the good outweigh the bad? The world would certainly be a worse place without him. He killed thousands but he still saved billions. Can you call someone a good man based on mathematics?

There is no doubt that Haber was a highly intelligent man capable of brilliance. But to be remembered as a brilliant man you need to know how to use that brilliance and for that you need a conscience. You need to be able to look beyond yourself, to see how your actions affect others. You could say that he was blinded by patriotism or that he believed that he was acting for the greater good but he still chose to kill people to meet these ends. He showed no respect for the lives he was taking.

But, then, who are we to draw the moral line between good and evil? A lot of scientists, a lot of great scientists, have done things that some may call evil. During the Second World War many American physicists worked on the Manhattan Project, which led to the development of the atomic bombs dropped on Hiroshima and Nagasaki that killed well over a hundred thousand Japanese civilians. Are these scientists evil for not respecting the lives they took? Then again, dropping the atomic bombs ended the war, potentially saving more lives than it destroyed. But isn’t this exactly the justification Haber used to defend his poison gas?

I think that what is more important than labeling scientists as good or evil is using their example to show people how destructive science can be when performed without ethics. Science is not inherently good or evil but it is a tool that can be potentially used for both.

When society remembers a person it sends an important message about the morality of that society and the morals it expects from its people. This is why we should be careful about remembering Haber as good or evil, because to do so would cause us to forget the other side. Society should always be aware of the potential destructive power of science when wielded without ethics, lest it happen again.

 

Gold from the sea

For Haber, the rest of his life was mired in tragedy and disappointment.

He felt humiliated by Germany’s loss in the war and especially by the huge reparations Germany had to pay. This humiliation made him feel personally responsible for the reparations and so he sought a way to pay them himself. His idea was to extract gold from seawater. Although it sounds ridiculous, seawater does indeed contain small quantities of gold. The problem was that he severely overestimated how much gold it actually holds. He spent five futile years on the project before concluding that it just wasn’t economically viable.

Then, in 1933, Hitler took control of Germany and one of his first actions was to expel all Jews from the civil service – this included scientists working in Haber’s institute. Haber was, in fact, Jewish by birth, though he had converted to Christianity back in 1893. As a decorated war veteran he was initially exempt, but the exemption did not extend to many of his coworkers and friends. Unable to save their jobs, he resigned in protest. Despite his conversion he was still seen by the Nazis as ‘Haber the Jew’ and so, fearing for his own safety, he fled to England. The former patriot was said to have felt that he had lost his homeland.

Haber died of heart failure while travelling through Switzerland in 1934, reportedly expressing remorse on his deathbed for having waged war with poison gas.

 

Haber’s tragic irony

One final tragedy befell Haber following his death.

It turned out that his institute had developed a cyanide-based gas for use as an industrial pesticide. They called this gas Zyklon. A later formulation of this gas, known as Zyklon B, was used by the Nazis in the gas chambers that killed millions of Jews. Those killed included the children and grandchildren of Haber’s sisters, as well as many of the friends who had stayed behind.

He would never know.

 

Ask not what plants can do for you…

Plants have a bit of a bad reputation as being boring which I think mostly stems from the fact that they don’t ever appear to do anything. They’re just stuck in one place which makes them much less interesting to interact with than say, a kitten, for example. Despite this, we owe our entire existence to them. Plant life was here long before us and will most likely still be here long after.


 

As you probably already know we’re completely dependent on plants for our own survival – they produce the oxygen that we need to breathe to begin with. But, more than that, we could never have existed without them. I’ve already covered this in my post about the fern that caused an ice age, essentially how one plant prepared our atmosphere for human life. And, of course, we wouldn’t be able to survive without food crops either. Plants are, after all, at the bottom of pretty much every food chain.

Being at the bottom of the food chain presents a problem for plants though: how do you secure your own food when you can’t move?

 

Photosynthesis

Photosynthesis is one of those processes that gets absolutely drilled into you at school so I won’t spend too long discussing it.

All we need to know is that plants take in carbon dioxide and water and, using the sun’s energy, produce oxygen and glucose – glucose being a sugar which the plant uses for energy.
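The whole process can be summed up in a single overall equation:

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\ \text{light}\ } \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$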

The important thing I want to focus on about photosynthesis though, is that it uses the sun’s energy. Of course plants use the sun’s energy you might say, it’s what plants do, we all know it. And we do, but this fact is something we absolutely take for granted. You may not realise it but photosynthesis is the only significant method of storing the sun’s energy and making it accessible to other life forms that we have on Earth.

 

Plants as energy storage

It’s difficult to explain just how important this is for life on Earth, because without it there would be no life on Earth. Not the kind we know anyway. The thing is, energy has to come from somewhere and virtually all of our energy comes from the sun. You may have heard the phrase ‘energy can neither be created nor destroyed’ before. It’s the first law of thermodynamics and it’s not a law that gets broken.

The second part of this statement goes on to say that energy can only be transformed from one form to another. What that means is that energy is in a constant state of being moved around but can never be depleted. In the case of photosynthesis, energy from the sun is used by the plant to create glucose. The sun’s energy, then, hasn’t been ‘used up’, it’s just been transformed into glucose which the plant uses as energy. The plant then uses glucose to accomplish its own goals like growth, reproduction and respiration. But the important thing to remember is that the sun’s energy doesn’t leave the plant, it just gets repurposed. After being repurposed the energy might leave the plant of course, as a seed for example, but that’s not relevant for us here.

So what happens when an animal eats that plant? Well, when the plant is digested it releases the energy stored inside it. This means the animal can now use this energy for its own purposes. Considering that this energy started off in the sun, we can now say that the sun’s energy is inside the animal. In this regard, the plant was simply storing the sun’s energy. Likewise, if something eats that animal, the energy is transferred again. This is how energy flows through food chains.

To put this another way, imagine that animal was a cow and you’ve just been handed a delicious steak. That steak is essentially the sun’s energy presented to you in a form that you can use.

 

Plants as power storage

So what happens to all those plants that don’t get eaten?

Well, eventually they die and decompose. But given long enough, those plants and the energy they contain become trapped deep underground. In the case of aquatic plants and algae, they settle to the bottom of the sea and over time mix with mud and get buried under other layers of sediment. Heat and pressure at such depths cause the dead plants to chemically change into a liquid. We call that liquid oil.

Land based plants on the other hand undergo chemical transformation underground at high heat and pressure to form coal and natural gas.

So dead plants (and animals) are our fossil fuels. When we burn coal in power plants or oil in engines we are actually unleashing the stored energy from the sun collected by plants millions of years ago. Burning fossil fuels is basically how we access the energy from the sun for our own power needs. I go into a bit more detail on this in my post on what fire is.

What this means is that plants are directly responsible for the way we convert the sun’s energy into life and power on this planet.

 

How plants grow

Let’s go back now to how an immobile life form is able to get the things it needs to survive. We’ve already covered how plants get their energy through photosynthesis, but how they convert that light into energy is also pretty interesting.

 

Chloroplasts

To do this, plant cells contain structures called chloroplasts. Under a microscope they look like little green beans sitting inside the square plant cells. Chloroplasts are essentially little cells within plant cells that contain all the basic photosynthesis machinery, everything necessary to make photosynthesis happen. Not every plant cell contains chloroplasts but it’s quite easy to locate those that do because chloroplasts are very green. They’re green because they contain a pigment called chlorophyll, which is the part of the chloroplast that actively absorbs energy from light. It absorbs energy mostly from the blue part of the spectrum and a bit less from the red part of the spectrum. It doesn’t absorb green very well at all, so green gets reflected, giving chloroplasts their characteristic green colour.

Chloroplasts are so green in fact and so numerous in plant cells that you can tell where they are because they turn the plant itself green. This is the reason that plants are mostly green and you can see for yourself how dependent plants are on photosynthesis if you just think about how green the average plant is. It’s literally the reason we gave them the name chloroplast – it comes from the Greek words chloros (green) and plastes (the one who forms).

Where the chloroplast came from is an equally interesting story because, despite how reliant the plant is on chloroplasts, plant cells cannot make new ones from scratch. This goes back to what I said about chloroplasts being like little cells within cells. This is actually a very accurate description because it’s believed that over a billion years ago the ancestors of chloroplasts were completely independent bacterial cells – photosynthetic bacteria known as cyanobacteria. We believe that at some point an early cell engulfed one of these bacteria with every intention of ‘eating’ it. For whatever reason the bacterium wasn’t digested and it just went on living within its host. The resulting relationship ended up so beneficial to both cells that they basically never went back. We call this kind of relationship a symbiotic relationship (endosymbiosis, to be precise), whereby two completely different organisms live together for mutual benefit.

However, the result of having one cell living inside another is that they both retain their own DNA. What this means is that the chloroplast DNA is completely separate from the plant cell DNA. Now, DNA is kind of like the blueprint that tells the cell everything it needs to produce in order to survive and replicate but in this case the plant cell is missing the DNA of the chloroplast. So the chloroplast, the essential component of plant cell life, can’t actually be created by the plant cell itself.

Thankfully, the chloroplast is equally as invested in the relationship and makes sure to replicate itself ready for when the plant cell needs it.

Keeping in mind everything we’ve already discussed about the role of plants as a way of transforming the sun’s energy into a usable resource, it’s crazy to think that a biological accident that happened over a billion years ago is responsible for almost all life as we know it.

 

Nutrients

As impressive as the ability to turn light into energy is, plants can’t live on light alone. They are also very dependent on certain nutrients that mostly come from the ground and are taken up into the plant, along with water, through the roots. The roots, of course, are one of the rare parts of the plant that are not green. This is because they contain no chloroplasts which would, of course, be completely wasted underground where there is no light.

Broadly speaking, the plant needs three main nutrients from the ground: nitrogen (N), phosphorus (P) and potassium (K). For any gardeners out there, you should recognise these as the main components of artificial fertilisers. These fertilisers will often be called NPK fertilisers for this reason, and they differ based on the ratio of N, P and K in them.

Nitrogen is considered the most important nutrient because it’s used to make proteins. Just like in humans, proteins are used to build everything that makes up the body. Everything from the smallest enzyme to the largest organ needs proteins to be built, which means they need nitrogen. You can easily tell a plant that’s nitrogen deficient because it won’t be growing very well. The leaves will also be a pale green or yellow colour, because without nitrogen the plant can’t make chlorophyll either (chlorophyll itself contains nitrogen).

Phosphorus is used by the plant to create its genetic material as well as being used by the chloroplasts to convert sunlight into energy. Plants deficient in phosphorus will not grow as well, produce small, acidic fruits and typically develop purple leaves. The purple leaves actually come from the build up of sugars which can’t be used by the plant without phosphorus. Excess sugars cause the build up of a chemical called anthocyanin, a purple pigment which, in high quantities, overpowers the green pigment in chloroplasts.

Potassium has a large range of roles within the plant. It’s involved in photosynthesis, protein production, transport of water and nutrients through the plant and is particularly important in starch production. Starch is a storage carbohydrate, made of long chains of glucose, that isn’t meant to be used for energy immediately but rather kept as long-term energy storage. Potassium is therefore very important for plants to survive through the winter, when the lack of sunlight reduces the amount of energy they can produce.

 

 

The nitrogen conundrum

 

One interesting thing about how plants get their nitrogen is how backwards it seems. The air we breathe is some 78% nitrogen but plants get their nitrogen from the ground?

The reason for this is how available that nitrogen is. Nitrogen in the air exists as an incredibly stable molecule of two nitrogen atoms triple bonded to each other. Plants can’t use nitrogen in this form, so in order for the plant to access that nitrogen it would first need to break that triple bond. And let me tell you, triple bonds are not easy to break.
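To put rough numbers on it (values vary a little between sources), the nitrogen–nitrogen triple bond takes several times more energy to break than an ordinary single bond:

$$\mathrm{N{\equiv}N} \approx 945~\mathrm{kJ/mol} \qquad \text{vs.} \qquad \mathrm{N{-}N} \approx 160~\mathrm{kJ/mol}$$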

For that reason, plants take their nitrogen from the ground where it tends to be in less strongly bound forms. There is the added benefit of soil-living bacteria, which form a symbiotic relationship with plants, fixing nitrogen into usable forms near the plant roots in exchange for carbohydrates.

 

As an interesting comparison, it’s like asking why we, as humans, don’t get our oxygen from water. After all, water covers 71% of the Earth’s surface and water is 89% oxygen by mass. The answer is the same: the oxygen isn’t in a form we can use. Fish manage by extracting the oxygen gas dissolved in water, but we evolved to take our oxygen from the air just like plants evolved to take their nitrogen from the ground.

 

While we’re on the subject of evolution, I hope that what you take away from this is that the way we evolved was directly related to how plants evolved. If plants hadn’t made the sun’s energy available to us who knows if we would even be a species. So if you happen to have houseplants or a garden, maybe don’t forget to water them now and then. After all, they’re responsible for your existence, the least you could do is return the favour.

 

How to turn lead into gold

How to turn lead into gold was the legendary question that plagued the alchemists of old. Their answer to this question was the belief in the existence of the fabled philosopher’s stone, a substance with the extraordinary ability to turn any common metal into gold (as well as offering eternal life, because why not). Sadly, they never succeeded in their goal… but we have, and we didn’t even need the stone.


 

Alchemy

Alchemists were like the chemists, physicists and physicians of their day – their day being a really long time, incidentally, stretching from possibly as early as 3500 BC right up until the 19th century. But the most famous period of alchemy, where we in the west get most of our ideas of what alchemists did, was probably from the Renaissance around the 14th century through to the eventual collapse of alchemy in the 18th century.

Their central beliefs combined religion and spirituality with magic and mythology to form an early ‘science’ that used laboratory techniques and experimental methods to create new chemicals, compounds and medicines as well as seeking spiritual enlightenment.

Although many people dismissed them as charlatans practising pseudoscience, you can’t deny that they were really ambitious in their goals. These goals came from all fields of science and included, but were not limited to, transmuting common metals into gold, the creation of the elixir of immortality, the creation of a panacea able to cure all disease, the creation of a universal solvent (a liquid that anything can dissolve in, even gold) and achieving human spiritual perfection. Quite the shopping list. But so as not to be too overwhelming for aspiring alchemists, the discovery of the philosopher’s stone was thought to be the key to all of these things.

Above all though, they really had a special love of gold and its perceived perfection. As I wrote in my post on the black death, during the time of the plague alchemists even suggested swallowing gold as a cure. The idea being that the perfection of gold would counteract the corruption of the plague. It didn’t work sadly and they didn’t have their panacea yet so they couldn’t offer that either. Although, one of their ideas for their universal solvent would be to dissolve gold in it for its great medicinal properties, a possible panacea. Sadly they didn’t have that either. The point is though, they really liked gold.

 

Lead into gold

Now, with their love of gold, one of the central goals of alchemy was to get more of it. To that end they worked on transmuting common metals into gold, lead being the main metal they were interested in. Why lead you may ask? Well, you’ll have to try and stay with me for this explanation.

The Islamic philosopher Jabir ibn Hayyan was the man who really started this belief, way back in the 8th century. You see, he thought that every element was made up of four qualities: hotness, coldness, dryness and moistness. He went on to say that two of these qualities were external, which is how they appear to us, but that the other two were internal and invisible to us.

Gold, he said, is externally hot and moist whereas lead is externally cold and dry. That meant that lead was internally hot and moist like gold. Therefore, if we could rearrange those qualities we could turn lead into gold. Of course, this reaction would require a catalyst and this catalyst was the philosopher’s stone. Take a minute for that to sink in.

The idea of transmuting lead into gold persisted for over a millennium, until alchemy finally lost respect in favour of real chemistry in the 18th century. To add insult to injury, the new chemists were so intent on divorcing chemistry from alchemy that they wrote extensively against it, until alchemy came to be remembered as little more than the futile attempt to transmute lead into gold.

 

The legacy of alchemy

The separation of alchemy from chemistry is really what gave rise to the modern belief that alchemists were all pseudoscientists. It reduced them to scientific frauds brought together in the ridiculous belief that they could magically change common metals into gold and the misguided quest for the mythical philosopher’s stone.

The fact is that alchemists, despite their many incorrect beliefs, were scientists and helped develop many techniques, medicines and pigments that we’ve either improved upon or still use outright today. Many of them were very good experimentalists with skills that wouldn’t be out of place in a laboratory today.

Indeed, one of the founders of modern chemistry and pioneers of the scientific method, Robert Boyle, was an alchemist, as was Isaac Newton who revolutionised physics with his ideas of gravitational force.

It’s easy to criticise the ideas of the past with our modern knowledge but we must never forget that something had to come first to get us to where we are now. So it was then and so it will be the case for scientists in the future who will no doubt criticise our own outdated techniques.

This is especially true when it turns out that they may not have been completely wrong after all…

 

Lead into gold – a modern approach

Where the alchemists went wrong is that they didn’t know that lead and gold are two completely different elements, not compounds that can be chemically converted into one another. Thankfully, nowadays we have the periodic table, which tells us exactly that.


Now, you don’t need to understand the periodic table in any real detail, and the only elements we care about here are gold (Au) and lead (Pb). But it’s nice to see the whole picture to appreciate that they sit close together in the table, which tells us that they’re chemically quite similar. They are, after all, both metals.

We do need to understand a small amount of chemistry to continue, so bear with me here. Aside from the name, the only thing you need to know about each element is its atomic number: gold is number 79 and lead is number 82. That number is the number of protons in one atom of the element. The number of protons an element has gives it all of its properties. If you’re holding a block of lead and a block of gold, the only thing that makes them different, that gives them all the differences you can see and feel, is that lead has 82 protons and gold has 79. Pretty crazy.

One atom of gold will only ever have 79 protons, never more, never less. You can’t have gold with 20 protons any more than you can have gold with 100 protons. If someone gives you an unknown element and tells you it has 79 protons, it’s gold. If you’re wearing a gold ring or necklace now, the gold in it has 79 protons, just like every atom of gold in the entire world. The same is true of lead: one atom of lead always has 82 protons.

Good job, the chemistry lesson is over.

 

You can see then that lead has three more protons than gold. The question you might ask yourself then is whether removing protons from lead would give you gold. If you could take lead and selectively remove three protons from it, would you have gold? After all, I just told you that any element with 79 protons must be gold. Well, I have good news, that’s exactly what would happen. If you selectively removed three protons from lead you would indeed get gold.
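As a pure proton-counting schematic (ignoring the neutrons that any real collision would also knock out), the idea looks like this:

$$\mathrm{Pb}\ (82~\text{protons}) \;-\; 3p \;\longrightarrow\; \mathrm{Au}\ (79~\text{protons})$$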

Even better, we can do it. The kicker is it’s really damn hard, not to mention expensive. Oh, and you’ll need a particle accelerator.

 

Smashing lead into gold

A particle accelerator is a pretty cool piece of technology. You might already know that the largest particle accelerator we have, the Large Hadron Collider (LHC), sits at CERN (Conseil Européen pour la Recherche Nucléaire) near Geneva on the Franco-Swiss border. Despite the uproar when it was first switched on in 2008, it did not create a black hole and destroy the world.

Now, there are a whole host of things we can do with a particle accelerator but to keep it simple and relevant I’ll just tell you how we can use one to remove protons from lead.

The LHC is essentially a long circular tunnel surrounded by extremely powerful electromagnets. Into this tunnel we can introduce a stream of protons, which are then accelerated around and around the ring by the electromagnets. Eventually the protons reach almost the speed of light, and with that much energy they can smash into other atoms and knock protons off of them. This is our relevant point.

So in our example, the stream of protons is like the proverbial bull in the china shop full of lead atoms, smashing them indiscriminately. Unfortunately, we can’t exactly smash off the three protons we need to get gold, our bull is not particularly accurate, but if we smash enough lead the law of averages suggests that we’re bound to get some lead that only had three protons smashed off. And as we already said, lead that had 82 protons and lost three protons will have become gold which has 79 protons. If you could pull this off you’d be the envy of over a thousand years of alchemists.

 

If we can do it, why don’t we?

As of the time of writing we haven’t produced gold from lead in this way, although it is definitely possible. We did, however, produce gold from bismuth back in the 1980s. Bismuth is the element next to lead and has 83 protons, just one more than lead. It therefore needed to lose four protons to become gold.

The problem was that, although we produced gold, it was in such small quantities that it could barely be detected. Particle accelerators, by definition, work with small quantities of things. Not just small, but minute quantities of things. After all, we use them to investigate things the size of atoms.

This is only one hurdle of course; producing small quantities of things wouldn’t be such a big deal on its own. Unfortunately, it’s also prohibitively expensive.

When the bismuth experiment was done in the 1980s, it cost approximately $5000 per hour to use the particle accelerator and they used it for about a day to get the results they did. Scaling up their results to produce an ounce (~28 g) of gold would have cost in the region of $1 quadrillion.

The price of an ounce of gold at the time was $560.
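Just to put those numbers in perspective, here is the back-of-envelope arithmetic spelled out in a few lines of Python. It only uses the rough figures quoted above, so treat the output as illustrative rather than the experiment’s actual data:

```python
# Back-of-envelope arithmetic using only the rough figures quoted above.
beam_cost_per_hour = 5_000      # dollars per hour of accelerator time
run_hours = 24                  # "about a day" of beam time
cost_per_ounce_gold = 1e15      # ~$1 quadrillion to scale up to one ounce
ounce_in_grams = 28             # ~28 g per ounce
gold_price_per_ounce = 560      # market price at the time, in dollars

run_cost = beam_cost_per_hour * run_hours                      # ~$120,000
implied_yield_g = ounce_in_grams * run_cost / cost_per_ounce_gold

print(f"Cost of the day-long run: ${run_cost:,}")
print(f"Implied gold produced: {implied_yield_g * 1e9:.1f} nanograms")
print(f"Market value of that gold: ${gold_price_per_ounce * implied_yield_g / ounce_in_grams:.8f}")
```

In other words, a day of beam time bought a few nanograms of gold worth a tiny fraction of a cent.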

The end of helium balloons?

You probably know helium, it’s that gas that gives you a squeaky voice and makes party balloons float. Aside from that we don’t really think about helium which is a shame because it’s actually a super interesting element. Unfortunately, one thing you probably don’t know about helium is that it’s non-renewable. Just like coal, oil and natural gas there’s only a certain amount of it on Earth and we’re just beginning to realise that we’re kind of wasting it.


 

Why does helium float?

Before we get into that, let’s talk about what makes helium a pretty interesting element. I mean, the first and most obvious thing is that it’s lighter than air, which is why helium balloons float. This is because, without going too much into the science of air, a litre of normal air weighs about 1.25 grams whereas a litre of pure helium weighs about 0.18 grams. Lighter things float above heavier things, so as long as the helium plus the balloon weighs less than the air it displaces, it’ll float. It’s the same reason that a balloon filled with air would float to the top of a swimming pool – air is lighter than water.

Following on from this idea, an interesting question I often hear about helium is whether you could ship a package filled with helium to make it weigh less and therefore cost less to ship. We can work this out pretty easily using only the two numbers I just gave you. A litre of air (1.25 g) is about 1 g heavier than a litre of helium (0.18 g). This means a 1 L package filled with helium plus an item weighing about 1 g would weigh the same as that 1 L package filled with air.
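Here’s that same calculation written out, with a helper you could reuse for bigger packages (the densities are approximate, at roughly room temperature and sea-level pressure):

```python
# Approximate gas densities at ~20 °C and 1 atm, as quoted above.
AIR_DENSITY_G_PER_L = 1.25
HELIUM_DENSITY_G_PER_L = 0.18

def weight_saved_grams(volume_litres: float) -> float:
    """Weight saved by filling a package with helium instead of air."""
    return (AIR_DENSITY_G_PER_L - HELIUM_DENSITY_G_PER_L) * volume_litres

print(f"1 L package:  {weight_saved_grams(1):.2f} g saved")
print(f"10 L package: {weight_saved_grams(10):.1f} g saved")
```

So even a 10-litre box only ‘hides’ about 10 grams of weight.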

So you could realistically cheat the system by sending something that weighs about 1 g, like a pen for example, across the country in a 1 L package and the pen would have been sent effectively for free (you still paid to ship the 1 L box). The issue with this is that you need to also buy the helium as well as balloons or bubble wrap to trap the helium in the package. Let’s also not forget that we don’t really need a 1 L package to hold a pen and most postage is based on size as well as weight. Overall, I’m sorry to say, it would probably be more expensive to ship something with helium than just using air…

 

Why does helium make your voice squeaky?

This effect actually has nothing to do with your vocal cords or your biology at all. Again, without going too much into the physics of it, sound moves faster or slower depending on what it’s travelling through, and because helium is much lighter than air, sound travels faster through it. That faster speed of sound raises the resonant frequencies of your vocal tract, amplifying the higher frequencies in your voice and making you sound like a chipmunk.

So the next question you might ask is whether there’s a heavy gas you can breathe to make your voice seem deeper. Well, you’re in luck! You can breathe something like xenon or sulphur hexafluoride, through which sound travels more slowly, amplifying the lower resonant frequencies and turning you into Barry White.
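To get a feel for the size of the effect, here’s a rough calculation using ballpark speeds of sound (the exact values depend on temperature and how pure the gas is):

```python
# Ballpark speeds of sound at room temperature, in m/s (illustrative values).
SPEED_OF_SOUND = {
    "air": 343,
    "helium": 970,
    "sulphur hexafluoride": 135,
}

def resonance_shift(gas: str) -> float:
    """Rough factor by which vocal tract resonances shift relative to air."""
    return SPEED_OF_SOUND[gas] / SPEED_OF_SOUND["air"]

for gas in ("helium", "sulphur hexafluoride"):
    print(f"{gas}: resonances shift by a factor of ~{resonance_shift(gas):.1f}")
```

So helium pushes the resonances up by nearly a factor of three, while sulphur hexafluoride drags them down to less than half.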

Keep in mind though that breathing other gases isn’t all that healthy and that buzz you feel after a suck of helium is your brain complaining of a lack of oxygen. And please don’t suck helium straight from the pressurised canister because that’s a sure fire way to blow your lungs out. Otherwise, go nuts.

 

Aside from having fun at parties, helium actually has a lot of important industrial and medical uses which take advantage of its lesser-known properties.

 

Helium is inert

The first one of these, and one you might be familiar with, is in airships and by airships I mean things like blimps, zeppelins and dirigibles. The logic is pretty simple, if a small balloon will float then a large enough balloon with enough buoyancy will be able to lift people as well.

Incidentally, there is one element lighter than helium and that is hydrogen, which used to be the gas of choice for airships until the infamous Hindenburg disaster. You see, although hydrogen is lighter and therefore gives better lift, it’s really flammable, and even if every safety precaution is taken it only needs a spark…

So these days airships are almost exclusively filled with helium which has the great advantage of being inert. That means it won’t react with anything and most importantly won’t spontaneously combust.

 

Liquid helium is really cold

Helium is an excellent coolant because it becomes a liquid at -269 °C. That is just 4 degrees warmer than absolute zero (-273 °C), the coldest temperature possible, and this has found excellent usage in MRI scanners. You might already know that MRI scanners use magnets, superconducting magnets to be specific, to create a powerful magnetic field that allows us to see inside the human body. The problem is that the magnets lose their superconductivity unless kept within a few degrees of absolute zero. By immersing these magnets in liquid helium they can be kept at this temperature indefinitely.

The interesting thing is that helium is practically our only answer to this cooling issue. For many cases we would use liquid nitrogen as a coolant which becomes a liquid at -196 °C but it has the unfortunate property of then changing to a solid at -210 °C. A solid is pretty useless for immersing things in.

 

Helium is non-narcotic and non-toxic

Divers have to be very careful about the type of air they bring with them in their scuba tanks. 100% oxygen is out of the question because pure oxygen becomes toxic under pressure, so they typically use a combination of oxygen and nitrogen just like the air we breathe on the surface: 21% oxygen, 78% nitrogen, 1% other gases.

The issue is that as we dive deeper the pressure causes oxygen to become toxic even at these ratios. Deep sea divers have a further thing to worry about, as at greater depths nitrogen starts to have a narcotic effect. We therefore need to reduce the amount of oxygen and nitrogen even more, and for this we need a replacement gas that is inert, non-toxic and non-narcotic even at high pressure. Thankfully, helium is the answer. With the right combination of oxygen, nitrogen and helium we’ve been able to make dives of over 300 m without a vehicle.

As a side effect, when deep sea divers speak under the effects of this gas mixture their voices take on that squeaky quality we’re already familiar with.

 

Helium is released by radioactive elements

I already dedicated an article to the science of carbon dating and how we can use radioactive elements to calculate the age of dinosaur bones and such things, so I won’t go into too much detail. All we need to know for now is that many radioactive elements produce helium when they decay – the alpha particles they emit are simply helium nuclei.

If something is capable of containing helium we can then calculate the amount of helium present in it. Using our knowledge of how long it takes a radioactive element to produce helium we can effectively judge how old it is based on how much helium has been produced.

This point moves us nicely onto exactly why helium is non-renewable…

 

Helium is non-renewable

As I just mentioned, helium is created when radioactive elements decay. The problem is that that is the only way we can get helium. We can’t make helium artificially and any helium we do use escapes immediately into the atmosphere where it cannot be reclaimed or recycled. Despite being the second most abundant element in the universe it’s extremely rare on Earth.

You might think, well if it comes from radioactive elements then it’s not like we can’t produce more. And you would be right, to an extent. If we take the decay of uranium for example, the two types of uranium that produce helium are uranium-235 and uranium-238 which have half lives of 700 million years and 4.5 billion years respectively. So you could get more helium if you’re prepared to wait a little bit.
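To see what ‘a little bit’ of waiting actually means, here’s the standard half-life arithmetic:

```python
# Fraction of a radioactive isotope left after a given time,
# using the half-life relation N/N0 = 0.5 ** (t / half_life).

def fraction_remaining(elapsed_years: float, half_life_years: float) -> float:
    return 0.5 ** (elapsed_years / half_life_years)

AGE_OF_EARTH_YEARS = 4.5e9

for name, half_life in [("uranium-238", 4.5e9), ("uranium-235", 7.0e8)]:
    left = fraction_remaining(AGE_OF_EARTH_YEARS, half_life)
    print(f"{name}: {left:.1%} still left after the entire age of the Earth")
```

Even over the whole history of the planet, half of the uranium-238 hasn’t decayed yet, which tells you how slowly the helium trickles out.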

What this all means is that all the helium we have took the entire history of the Earth, 4.5 billion years, to create and once we use it all, it’s gone.

 

How much we have left

The largest supply of helium we have is the US National Helium Reserve, which has space for one billion cubic metres of it – about half the world’s supply. The problem is that in 1996 the US government decided that maintaining the reserve was no longer a priority, as the reserve was already heavily in debt. So they made plans to sell it off. The trouble was that they chose a terrible way to do it.

Rather than selling the helium at market rates, they wanted the same amount of helium to be sold off each year regardless of global demand. They couldn’t sell it fast enough any other way because the goal was set to deplete the National Reserve by the end of 2015. The result was that the market became flooded with cheap helium.

This had a number of negative effects. Firstly consumption went way up as people found new and interesting ways to use this gas that’s lighter than air. This made us all believe that helium isn’t a precious resource. I mean, we use it for party balloons, is there another precious resource that people think of as a toy? Experts have suggested that to make party balloons representative of the actual cost of helium they should cost about $100 each. Secondly, the price of helium was so cheap and we had so much of it that no company saw any need, or profit, in extracting more of it. The result of which is that when the National Reserve is empty there’s nothing to replace it.

Thankfully the lifetime of the National Helium Reserve was extended in 2013, when it was agreed that helium sales should continue until 2021 and at market rates. It’s now expected that the National Reserve will be empty by 2020, and by then we really will need a new plan. At current rates of consumption it’s estimated that we will deplete the world’s supply within 100 years.

 

The future of helium production

Just because the US is giving up on helium it doesn’t mean the rest of the world has. It’s estimated that by 2020 we will start getting the majority of our helium from other countries instead. There are already 7 other extraction plants active around the world and countries like Algeria and Qatar are stepping up production to rival current US rates.

China is even getting in on the game by considering plans to mine the moon for helium – or, more precisely, for the isotope helium-3.

 

But regardless of our future production of helium, we will most likely never see helium prices like those of the past 20 years ever again. This is going to have massive detrimental effects on many scientific and technological fields, as well as putting extra financial pressure on our hospitals. There really is no alternative to helium cooling when temperatures close to absolute zero are required.

The impact you’ll likely see is that helium balloons are going to go from everyday item to luxury item, similar in status to real ivory piano keys. There may even come a time in the future when we look back on our frivolous use of helium with embarrassment and contempt. Who knows, one day pretentious film students will be analysing Disney Pixar’s Up as an example of mankind’s hubris. Or maybe not.

 

Milk and human evolution

Milk and milk products are enjoyed by over 6 billion people on the planet (out of 7 billion). In the developed world the primary reason for people not consuming milk products is that they have a reduced tolerance or an intolerance to milk. That means the main reason milk products aren’t enjoyed by everyone is not that people choose not to, but that they physically can’t. Which is crazy, especially if we consider that even someone with an intolerance must have tried a milk product at some point to find out…


The global pattern of intolerance to milk is quite interesting in itself. In Europe and America there is a historical reliance on milk products as a main food source. We know that the cow was domesticated in North Africa and the Middle East around 10,000 years ago, and herders then spread the domestic cow through Europe, India and eventually the Americas. This animal was so important as a food source for early populations in these regions that today only ~5% of people of European descent are intolerant to milk. In East Asia, where the cow was not used by early populations, up to 90% are intolerant to milk. The simple truth of these statistics is that if you were an early European with an intolerance to milk you couldn’t eat. And if you can’t eat, well…

It is also for this reason that a common observation of East Asian people is that Europeans smell like milk.

 

Milk intolerance

The question that immediately comes to mind, then, is how can so many people be intolerant to milk when new babies drink nothing but milk? Human milk and cow milk both contain the same milk sugar and the body deals with them in the same way. Indeed, it so happens that intolerance to milk as a baby is extremely rare. Babies from all cultures have no problems digesting human or animal milk, and signs of intolerance only start to reveal themselves at around 6 years old. The question is, why?

The answer to this problem lies in a gene called LCT which produces the very useful enzyme lactase. Lactase is what you need to thank for your ability to digest milk. You see, the main sugar in milk is lactose. Lactose is a fairly big molecule though and needs to be broken down into smaller sugars (glucose and galactose) that the body can use – lactase is what makes this possible. Lactase is produced in the small intestine and when any milk products pass through it the lactase breaks down the lactose into those smaller, easily absorbed sugars.

However, lactase is pretty useless in mammals after they finish breast feeding because normally we wouldn’t be expected to drink milk ever again. For this reason, lactase production slows down once we no longer need the breast. A regulatory system in the genome prevents the LCT gene from expressing and when LCT doesn’t express, the body can’t make more lactase. This is advantageous for many mammals because then the body has more energy to express more useful genes – producing slightly more adrenaline to aid hunting for example. Humans, however, decided that we would rather keep drinking milk.

So now the question becomes, how did we change this genetically encoded switch to allow us to keep drinking milk?

 

Milk and human evolution

The great thing about genetics, and what makes evolution possible, is that no one is exactly the same in any given population. Things like hair colour and eye colour are nice and obvious but it’s the underlying things, those unnoticed biological processes that can be so much more interesting.

In this case we need to cast our minds back to a population of cattle herders more than 10,000 years ago. Thanks to genetics most of those people will stop producing lactase at around the same age – they are the average cattle herder. A few, though, will stop producing lactase early, maybe around 2 years old. But one or two of them won’t stop producing lactase until they are much older, let’s say 16 years old. This is plenty old enough to have kids of their own. When they do have kids genetics helps us out again as their kids are quite likely to inherit the ability to keep producing lactase until they are 16 years old. Suddenly we have a family that can drink milk into their teens.

So the majority of the group are surviving by raising animals for meat or eating plants and berries but one family is eating that plus they have a secondary food source – milk. Milk is even better than meat and berries, it’s renewable and happily follows you around.

So what happens when disaster strikes? A particularly bad winter makes foraging for plants and berries difficult and the animals can’t survive the cold with no food in their bellies. Our milk drinking family though can give their cow the plants and berries they find and in turn they can drink its milk. 80% of the group might die from starvation but our milk drinking family survives.

The advantage of being the ones to survive means that you are also the ones who are going to have kids. That means more kids will be born with the ability to drink milk. And the longer you live, the more kids you can have. Genetics might do you a favour again and the newest kid can drink milk until he’s 20 years old. A 20 year old milk drinker might have 3 kids or more. Now who will be more successful? The family who can drink milk until they are 16 or the family who can drink milk until they’re 20? What about until they’re 30? 50? Keep that up long enough and suddenly it’s 2015 and around 95% of people of European descent can drink milk well into old age.

This is what is called natural selection, or more specifically evolution through natural selection. The environment (nature) greatly favoured those who could drink milk, so they were positively selected for because they were able to live long enough to breed and pass on their genes. This is what people mean when they say ‘survival of the fittest’, although biologists don’t like this phrase too much because it’s not very descriptive. For example, just because a family can drink milk it doesn’t make them more fit, it makes them better adapted to live in that particular environment. People from East Asia are just as fit as people from Europe, but Europeans happened to evolve to drink milk.
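To make the idea concrete, here’s a toy model of that selection process in Python. The 0.1% starting frequency and the 5% breeding advantage are invented numbers, purely for illustration – real estimates of the selective advantage vary:

```python
# Toy model: frequency of lactase persistence under a constant breeding advantage.
# Starting frequency and advantage are made-up, illustrative numbers.

def frequency_after(generations: int, start_freq: float = 0.001, advantage: float = 0.05) -> float:
    freq = start_freq
    for _ in range(generations):
        # Milk drinkers leave slightly more surviving offspring each generation.
        persistent = freq * (1 + advantage)
        freq = persistent / (persistent + (1 - freq))
    return freq

# Roughly 10,000 years at ~25 years per generation is ~400 generations.
for gens in (0, 100, 200, 400):
    print(f"After {gens:3d} generations: {frequency_after(gens):.1%} can digest milk as adults")
```

Even a small, steady advantage compounds over a few hundred generations into near-universal milk drinking – which is more or less what we see in populations with a long history of dairying.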

Of course, now that we don’t just rely on cattle and milk to survive, there is no selective pressure for our bodies to keep producing lactase, so don’t expect that ~5% intolerant population to change much. In fact, it’s likely to get higher again as people with an intolerance to milk are just as likely to breed as anyone else now. Such is evolution.

So if you thought the best use for evolution would be human flight I’m sorry to disappoint you… but at least you can console yourself with a nice, cold glass of milk!

 

Preparation of milk

Drinking milk straight from the cow was then, for the most part, fine. Problems started occurring though once populations started spreading out more. You see, it just so happens that milk is an excellent breeding ground for bacteria and other microbes, and the longer the milk sits around, the more bacteria can breed in it. People who lived far from the cow needed milk to be transported to them, which meant that the bacteria had plenty of time to multiply… Because of this, the chance of getting an infection associated with drinking milk was around 25%.

Fortunately for us, in the 19th century Louis Pasteur discovered that heating beer and wine just enough to kill most of the bacteria prevented them from going bad. This procedure was so successful that it was named after him, giving us what we now know as pasteurization. Pasteurization was quickly applied to milk too, so for the first time milk could be stored and transported safely. Since the invention of pasteurization the chance of getting an infection associated with drinking milk has fallen to around 1%.

 

Pasteurization

There are a couple of different ways of pasteurizing milk:

High-temperature, short-time (HTST) pasteurization is the standard method, whereby milk is forced through pipes heated to 72 °C for 15 seconds. It then goes straight into the bottle or carton and is labelled ‘pasteurized’ – easy as that.

Ultra-heat treatment (UHT) heats the milk to almost double the temperature of standard pasteurization – 140 °C – but for just 4 seconds. This milk isn’t exactly ‘pasteurized’, it’s more like sterilized. The difference is that pasteurization kills almost all of the harmful bacteria but isn’t hot enough to destroy absolutely everything – it leaves behind a small amount of bacteria as well as many nutrients but is perfectly safe to drink. Sterilization kills everything in the milk so it doesn’t have exactly the same nutritional properties, but the advantage is that you can store it for ages. The shelf life of pasteurized milk is around 3 weeks in the fridge because the remaining bacteria will eventually multiply in it. UHT milk, however, will stay fresh for 8-9 months.

 

Raw milk

Despite our advances in making milk safe to drink, there is a growing community of people who now prefer to drink raw milk. That is, unpasteurized, untreated milk straight from the cow. They claim that pasteurization removes many beneficial enzymes and nutrients from the milk, as well as weakening the flavour. Flavour is, of course, entirely subjective but it is true that pasteurization removes enzymes and nutrients.

The enzymes it removes though don’t affect our ability to digest milk at all so it’s highly unlikely we’re missing out on something important. The nutrients such as vitamins B and C which are destroyed by pasteurization are also not important to get from milk as our diet is already naturally high in these vitamins. However, there are testimonies and anecdotes you can find online from people who claim that raw milk has cured everything from acne to arthritis and irritable bowel syndrome.

In the end it comes down to personal preference and I’m in no position to say whether raw milk improves a medical condition or not. However, it doesn’t change the fact that raw milk can harbour bacteria such as E. coli, Campylobacter, Salmonella, and Listeria. Since 2006 there have been over seven disease outbreaks in Pennsylvania alone directly caused by Campylobacter and Salmonella bacteria in raw milk. The potential health risks are considered so great by some that raw milk is actually illegal to sell in over 20 states in the US. But that doesn’t stop determined raw milk drinkers, who can literally time-share a cow to get a ‘gift’ of raw milk in return for paying for its food and barn space.

Of course, I believe people should be able to make their own decisions about what they eat and drink. But we mustn’t forget that children can’t make their own decisions, and when a parent forces their own dietary choices onto their children, especially when something in that diet is considered dangerous, it might not turn out so well. Indeed, the Centers for Disease Control and Prevention (CDC) in the US has shown that of the 104 outbreaks of disease associated with raw milk between 1998 and 2011, 82% involved at least one person younger than 20 years old. It’s something to keep in mind.

Good intentions, bad science I

Sometimes there comes along an invention or a discovery that, often unbeknownst to the inventor or discoverer, is destined to cause more harm than good. In this series I will talk about some scientists who sadly left rather unfortunate legacies, often leading to harm to themselves or others. My goal here is not to insult or ridicule; the scientists I will discuss mostly had good intentions that regrettably turned out poorly. Rather, we should use these examples to guard against scientific arrogance – often the trap of the proud scientist.

It seems fitting then that I start this series with a man once described by environmental historian J. R. McNeill as having had ‘more impact on the atmosphere than any other single organism in Earth’s history’ – unfortunately, none of it good.


This man, Thomas Midgley Jr. (1889-1944), worked as a chemical engineer at General Motors during the 1920s and it was during his employment there that he came up with his first major discovery – the addition of tetraethyllead (TEL) to gasoline. Put simply, he had invented leaded gasoline.

 

Leaded gasoline

So why did leaded gasoline seem necessary? Well, the problem we have with car engines is that they can ‘knock’. Without going too much into the specifics of internal combustion, knocking is caused by fuel igniting itself before it’s supposed to be ignited. You see, fuel gets compressed in the cylinder of the engine and the greater the compression the more powerful and efficient the engine is. But poor (and cheap) fuel components like hexane and heptane can’t deal with compression – they easily ignite themselves, which causes knocking and damage to the engine. Good (and expensive) fuel like octane, however, deals very well with compression; it doesn’t ignite itself and therefore causes no knocking.

Fuel is therefore given an octane rating based on how well it resists this self-ignition compared with a reference blend of iso-octane and heptane. These days the standard rating in the US is 87, which means the fuel resists knocking as well as a mixture of 87% iso-octane and 13% heptane. There are higher-octane fuels of course, and the higher the octane rating the more powerful and efficient the engine can be. The problem is (if you are a company like General Motors), people want more powerful and efficient cars but expensive fuel is expensive.

Thomas Midgley discovered that the addition of tetraethyllead (TEL) to weaker fuels allowed them to perform as well as high octane fuels without needing so much of that expensive octane. The discovery was a slam dunk – fuel could be made cheaper, engines could be made better and the icing on the cake for General Motors was that they owned the TEL patent. The problem was that lead is highly toxic and this was no secret.

 

Lead toxicity

It’s been known that lead is toxic since the 2nd century BC, when the Greek poet and physician Nicander described the colic and paralysis associated with lead poisoning. General Motors was clearly aware of this too as they chose to name the product Ethyl, distancing it from the lead it contained. Midgley, however, seemed unfazed by the known health risks even when, in 1922, while planning the introduction of leaded gasoline, he came down with lead poisoning himself and had to take an extended vacation in Miami, Florida to recover.

Regardless, in 1923 the manufacture of leaded gasoline went ahead. And the problems of lead poisoning started showing themselves almost immediately. At one manufacturing plant in Deepwater, New Jersey one worker died in 1923 followed by another three deaths in 1924 and four more in 1925. At another TEL chemical plant, newly built at the Bayway Refinery in New Jersey in 1924, five workers died and forty-four were hospitalised with lead poisoning within two months of opening. Following the public outcry after the Bayway disaster, Midgley participated in a press conference to demonstrate the safety of TEL. He did this by washing his hands in TEL and inhaling the fumes for one full minute. He said he was taking no risk doing so and neither would he be taking a risk if he did this every day. Unsurprisingly he ended up seeking treatment a few months later for lead poisoning.

It would be tempting to say that Midgley was acting in corporate or selfish interests to continue promoting the safety of TEL in the face of all this evidence. However, you could also say that the way he risked his own health to deal with the negative publicity appeared to be more like the desperate action of an arrogant scientist unable to admit fault in his own invention. Or maybe it was a bit of both.

 

The end of leaded gasoline

It wasn’t until 1975 that leaded gasoline was phased out and even then not because of its toxicity. Around this time there was high governmental pressure for car manufacturers to reduce harmful fuel emissions like carbon monoxide. They did so with the invention of the catalytic converter, an incredible invention placed in the exhaust system that converted harmful emissions into harmless carbon dioxide, nitrogen and water. However, TEL clogged these converters rendering them inoperable. To continue to comply with governmental restrictions and to finally deal with the mounting public pressure against TEL, leaded gasoline production was terminated and by 1986 it was practically non-existent.

 

Modern discoveries of lead toxicity

Thanks to the prevalence of leaded fuel throughout the 20th century we are now slowly understanding more about the horrific effects of the release of so much lead into the air. Following the phasing out of leaded fuel (between 1978 and 1991), average levels of lead in the blood decreased 78% and the percentage of children with a ‘high’ blood-lead level decreased from 88% to 9%. There have been statistical links that suggest a causal relationship between atmospheric lead concentration and a child’s intelligence, neural development, aggressiveness and criminality as they age – none of it favourable. Indeed, the rapid reduction in criminality in the biggest cities of the US since the early 1990s has been directly attributed to the removal of lead from fuel1. But that’s not the end of Midgley’s contribution to the planet. His next great invention was even better…

 

Refrigeration 

In the late 1800s and up to the early 1930s refrigerators were kept cold using the refrigerant gases ammonia, methyl chloride and sulphur dioxide. The problem with these gases is that they’re all either toxic, flammable or explosive, and accidental leakage in the home could be fatal. Regular stories of deaths and injuries relating to refrigerants got so bad that people started keeping their refrigerators in their gardens.

Enter Thomas Midgley Jr. who, in 1928, while employed by Frigidaire (a division of General Motors), invented Freon – a colourless, odourless, non-flammable, non-corrosive compound that made for an excellent refrigerant. Always one for the dramatic demonstration, in 1930 Midgley showed a gathering of the American Chemical Society the non-toxic and non-flammable nature of Freon by breathing in a lungful of it and blowing out a candle. He suffered no ill effects and the candle was extinguished. It seemed too good to be true…

And it was. Freon, or to give it its chemical name, dichlorodifluoromethane, was the first synthesised chlorofluorocarbon (CFC).

 

CFCs

CFCs are now well-known for causing the depletion of the ozone layer. They do this by introducing highly reactive chlorine atoms into the atmosphere which react with ozone (O3) turning it into oxygen (O2). The problem with this is that ozone absorbs UV-B radiation from the sun so the less ozone we have the more UV-B radiation makes it to the planet surface. For humans this means an increased risk of developing skin cancer and for many species of plants, including important food crops, this means reduced growth.
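In simplified form (the real stratospheric chemistry involves a whole family of reactions), the chlorine acts as a catalyst – it destroys ozone and then comes out the other side ready to destroy more:

Cl + O3 → ClO + O2
ClO + O → Cl + O2

The lone oxygen atoms in the second step are already floating around up there, made when sunlight splits O2 and O3. Because the chlorine atom is regenerated at the end, a single atom can go on to destroy many thousands of ozone molecules.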

However, in 1930 the existence of the ozone layer had only just been described by the mathematician and geophysicist Sydney Chapman. It wasn’t until 1973 that scientist and inventor James Lovelock detected CFCs in the atmosphere using an electron capture detector, a device of his own invention. Then, in 1984, scientists at the British Antarctic Survey discovered the Antarctic ozone hole. The conclusions were obvious and the Montreal Protocol was signed in 1987 which restricted the production of ozone depleting chemicals. By 2005 CFCs were completely phased out but the Antarctic ozone hole isn’t expected to close until 2050.

 

The legacy of CFCs

The large-scale production of Freon began in 1930 and by 1935 over 8 million Freon refrigerators had been sold. By the 1950s it is estimated that over 90% of urban homes had a Freon refrigerator. Furthermore, Freon allowed the invention of the world’s first home air conditioning unit in 1932. Air conditioning allowed the southern and southwestern states of the US to develop as the previously prohibitive summer temperatures were no longer a problem.

The huge popularity of Freon-cooled devices meant that once the ozone problem had been detected, a large amount of research was dedicated to finding a suitable replacement. Hydrofluorocarbons (HFCs) are the result of this research: they have the same refrigerant properties as CFCs but contain no ozone-destroying chlorine and have much shorter atmospheric lifetimes, taking ozone damage down to negligible levels.

Of course, Midgley had no idea that his invention of Freon would have such damaging effects on the planet. By the time CFCs were detected in the atmosphere he had already been dead for almost 30 years. He died believing that he had had a positive influence on humanity. But then, who’s to say he didn’t? Regardless of the effects on the environment, he did genuinely improve the safety of refrigeration by removing the toxic chemicals that used to be used. Refrigeration and air conditioning were vital for the development of the human race throughout the 20th century, and because they were so vital the industry was forced to produce a safer alternative once the damaging effects of CFCs were discovered. So the question becomes: was the damage to the environment worth it for the advancement we gained as a species? After all, the ozone hole should be closed by 2050… It’s something to think about.

 

Midgley’s final invention

In 1940 Midgley contracted polio and was left severely disabled. To help others lift him from bed, he invented a complex system of ropes and pulleys. This worked very well up until November 1944, when he became entangled in the ropes and died of strangulation. Of the three inventions discussed here you might say that the one that did the least damage to humanity was the one that killed him…

 

1 Nevin, R. (2000) How Lead Exposure Relates to Temporal Changes in IQ, Violent Crime, and Unwed Pregnancy. Environmental Research. 83(1): 1-22

Dating dinosaurs

Age is one of those things frequently lied about in the dating world despite it being one of the more important things to know about your future potential partner. For an archaeologist or paleontologist in the dating world however, age is all they want to know. When dating dinosaurs an accurate age is the only thing worth knowing and thankfully we have many techniques to accurately measure it. 


 

The clever among you will have realised that we are not talking about the wine-and-dine type of dating, but instead the accurate measurement of the age of things like dinosaur skeletons or the first rocks formed on our planet. Just how is it possible to calculate the age of such things, so that we can confidently say when the time of the dinosaurs was or how old the planet is? To answer these questions, the type of dating we will be discussing is radiometric dating. It’s likely that you’ve already heard of the radiometric dating method called carbon dating, as it’s often in the news when an unusually old thing is discovered, so this will be the focus of this post.

 

The carbon cycle

Carbon is the key element in organic life. We, and most of the living things on our planet, are carbon-based life forms, which means that carbon is at the centre of everything that makes your body. Your very DNA is built from carbon. We get this carbon from the things we eat, in particular from plants – whether by eating plants directly, eating plant products, or eating animals that eat plants (or animals that eat animals that eat plants, and so on).

So where do plants get their carbon? Well, plants come up from the ground so it would make sense if the carbon came from the earth, but the answer is not so intuitive. All the carbon in plants actually comes from the air – from the carbon dioxide in the air, in fact. During photosynthesis plants take in carbon dioxide and produce oxygen, which is useful for us because we use this oxygen to breathe. The carbon is then used by the plant to build its stem, leaves, flowers and the rest of it.

So if you can see where this is going, all living things effectively get their carbon from the air. Plants get their carbon from the air and, because plants are always lowest on the food chain, animals get it from the plants. If I eat a delicious steak I am absorbing carbon that was once in the air, then in the grass and then in a cow – the carbon doesn’t change, just how it was used. This is important to remember for the next part…

 

Radioactive carbon

Bear with me for a small amount of chemistry here. In Earth’s upper atmosphere there are a great many gases but, most importantly for us, there is carbon and there is nitrogen. Normally carbon exists with an atomic weight of 12, which we write as 12C, and nitrogen is slightly heavier, existing with an atomic weight of 14, which we write as 14N. These normal forms of carbon and nitrogen account for about 99% of all carbon and nitrogen on the planet. However, cosmic rays hitting the upper atmosphere constantly generate neutrons, which have the interesting effect of smashing into nitrogen and transforming it into radioactive carbon. This is like normal carbon except it has an atomic weight of 14, so we write it 14C. This reaction is in equilibrium, which means there is, and always has been, a consistent amount of 14C in the atmosphere alongside the 12C.
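For the curious, that reaction can be written out like this – a neutron knocks a proton out of a nitrogen atom, turning it into radioactive carbon:

neutron + 14N → 14C + proton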

The fact that 14C is radioactive doesn’t affect it very much; all it means is that it is ‘unstable’ – it will gradually, over time, decay back into stable nitrogen (14N). Aside from that, 14C functions exactly the same as 12C. So our indifferent plants just take in both types of carbon from the air and animals eat the plants and take the radioactive carbon for themselves. Because plants have no preference for the type of carbon, the ratio of radioactive 14C to normal 12C in the plant and any animal that eats it is exactly the same as in the atmosphere. In fact, inside your body right now there is the tiniest amount of radioactive 14C. But don’t worry, it has no negative effect on us or the plants; for all intents and purposes it may as well be normal carbon.

 

Half-life

We are finally getting to how this can all be used for dating. As I said before, the only real difference between radioactive 14C and normal 12C is that 14C is unstable and will eventually transform back into stable nitrogen (14N). This is true of all radioactive elements – they all eventually transform into something stable. This process is called radioactive decay. Different radioactive elements decay at different rates, and we describe this rate with a number called the half-life.
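For 14C specifically, the decay in question is what’s known as beta decay: one of the neutrons turns into a proton, spitting out an electron (the ‘beta particle’) in the process and leaving stable nitrogen behind:

14C → 14N + electron (plus an antineutrino, for the purists)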

The half-life is so called because it is the length of time required for half of the radioactive element to decay. If an element has a half-life of 50 years, for example, you can expect there to be half as much of it 50 years from now. It’s as simple as that.

As an aside, the problems we have with the proper disposal of radioactive substances come down to this decay. For example, let’s say we have 100 g of a radioactive substance with a half-life of 50 years. After 50 years we will have 50 g of radioactive substance. But after 50 more years we will have 25 g, then 50 years later we will have 12.5 g. We halve the amount every 50 years, which is great for the first 50 years, but then we start getting diminishing returns. After 500 years though we will only have about 0.1 g of radioactive substance. That doesn’t sound so bad, but consider that the most widely used nuclear material, uranium-238, has a half-life of over 4 billion years and the next most widely used material, plutonium, has a half-life of around 24,000 years. By this scale 24,000 years seems almost reasonable, but remember that even after 24,000 years we still have half of the damn stuff left!
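If you like, here’s a tiny Python sketch (my own illustration, not from any nuclear handbook) of the halving arithmetic above – the amount left after some time is just the starting amount multiplied by (1/2)^(elapsed time ÷ half-life):

```python
# A tiny sketch of radioactive decay using the half-life rule described above.

def amount_remaining(starting_grams, half_life_years, elapsed_years):
    """How many grams of a radioactive substance remain after elapsed_years."""
    return starting_grams * 0.5 ** (elapsed_years / half_life_years)

# The example from the text: 100 g of a substance with a 50-year half-life.
for years in (50, 100, 150, 500):
    print(f"After {years} years: {amount_remaining(100, 50, years):.1f} g")
# After 50 years: 50.0 g ... after 500 years: about 0.1 g
```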

 

Carbon dating

At this point in the story we know that the ratio of radioactive 14C to normal 12C is consistent in the atmosphere as well as in plants and animals, we know that plants and animals are full of both normal and radioactive carbon, and we know the half-life of radioactive 14C. We just need a couple more important facts and we’ll have this wrapped up.

Firstly, once the plant or animal dies it can no longer take in any new carbon – that means no radioactive 14C or normal 12C. So from this point the radioactive 14C steadily decays away and nothing replaces it.

Secondly, if we find some animal remains (a bone for example) we can accurately measure the current amount and ratio of radioactive 14C and normal 12C.

Armed with this information, we know:

  • the ratio of radioactive 14C to normal 12C in the atmosphere at the time of death of the animal because the ratio is consistent in the atmosphere and always has been.
  • the ratio of radioactive 14C to normal 12C in the animal bone, where the radioactive 14C has had time to decay away.

Using these two ratios and our knowledge of the half-life of radioactive 14C (5730 years) we can accurately calculate when that bone stopped acquiring new radioactive 14C – and therefore when the animal died and how old the bone is.
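If you want to see the arithmetic, here’s a minimal Python sketch (my own illustration – real labs calibrate against samples of known age and measure far more carefully) that turns those two ratios into an age:

```python
import math

# Carbon dating in one function: how much has the 14C/12C ratio dropped,
# and how many half-lives does that drop correspond to?

HALF_LIFE_C14 = 5730  # years, as given in the text

def age_in_years(ratio_in_bone, ratio_in_atmosphere):
    """Years since death, from how much the 14C/12C ratio has dropped."""
    remaining_fraction = ratio_in_bone / ratio_in_atmosphere
    # N(t) = N0 * (1/2)**(t / half_life) rearranges to t = half_life * log2(N0 / N)
    return HALF_LIFE_C14 * math.log2(1 / remaining_fraction)

# A bone with only a quarter of the atmospheric 14C/12C ratio left has been
# dead for two half-lives: about 11,460 years.
print(round(age_in_years(0.25, 1.0)))  # 11460
```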

 

In real-life terms, you can think of it like this:

Imagine you’re at a party and your friend calls. He asks how long you’ve been there but you have no way to find out the time and therefore have no idea. Thankfully, you realised that everyone was drunk when you arrived and, for whatever reason, half of the people at this party sober up every hour. You know there were 100 drunk people at the beginning of the night and now there are only 12. A little math tells us that after 1 hour there must have been 50 drunk people left, after 2 hours there must have been 25 drunk people left and after 3 hours there must have been 12.5 drunk people left. Ignoring this half-person, we can safely estimate, based on the number of drunk people remaining, that we’ve been here for about 3 hours.

This isn’t a completely accurate analogy but I feel that it serves the purpose.

 

Uses of carbon dating

Carbon dating isn’t just used for living things; it can be used to date almost anything that has carbon in it, which is a whole lot of things. Paper, for example, has carbon in it so paintings, books and scrolls can be dated; textiles have carbon in them so clothes can be dated; pots, jewellery or items of decoration made of ivory can be dated because ivory has carbon in it. Sometimes you can get a bit creative and date the remains of food found in pots to get an estimate of the age of the pot, for example. The possibilities for carbon dating are huge.

Unfortunately, carbon dating does have a major drawback: you can only reliably date back around 50,000 years, as the half-life of radioactive 14C (5730 years) is too short to go much further. By 50,000 years practically all of the radioactive 14C has decayed away. Which leads me to an apology on behalf of the title… As dinosaurs lived around 65 million years ago they are far beyond the reach of carbon dating, which has been the focus of this post. But dating bones, pots and books didn’t quite have the same ring to it… However, dinosaurs are still dated using an extremely similar method to carbon dating known as potassium-argon dating. In this method the decay of radioactive 40K to stable 40Ar in the rocks encasing/surrounding the dinosaur bones is analysed. This method is capable of dating samples more than 100,000 years old!

Photography, chemistry and the fall of Kodak

In 1976 Kodak sold 85% of all cameras and 90% of all photographic film in the US; in the late 1990s they were posting revenues of $16 billion and profits of $2.5 billion per year; and in 2012 they filed for bankruptcy.

The story of the fall of Kodak is probably one of the most frequently taught examples of a company failing to move with the times and paying the ultimate price. But how did this happen? And what has it got to do with chemistry?

 

Let’s start by quickly explaining how a standard film camera, like anything produced by Kodak in the 20th century, works. Broadly speaking there are only three parts to a camera: the lens, the film and the camera body.

The lens is just a curved piece of glass or plastic that works in much the same way as the lens in your eye (see The eye of the beholder), which is to focus the light entering the lens onto a specific point behind it. In the eye it’s the retina and in a camera it’s the film.

The camera body exists primarily to keep the film in the dark to prevent light from affecting the film until the right moment. When you press the button a shutter between the lens and the film opens for a short time allowing light focused by the lens to hit the film. The longer the shutter stays open for, the longer the exposure.

The film is where the chemistry happens, light focused by the lens onto the film is chemically recorded when the shutter opens to take the picture. The chemical change induced on the film is called the latent image and this can then be developed into a photograph of the moment the latent image was made.

 

The following few sections will be reasonably detailed in the chemistry of film development but it’s not that important to follow everything written here (unless you have an interest of course!). My primary goal is to portray just how much chemistry is involved.

How is this chemical record made on the film?

Film is a roll of chemically coated plastic that’s incredibly thin – around 0.025 mm thick! This plastic, referred to as the base, used to be made of celluloid, which is a highly flammable and expensive product. Thankfully, in the early 20th century Kodak invented cellulose acetate film, or safety film, which is considerably less flammable and has since become the standard.

Now this is where the chemistry happens. On one side of the base there are over 20 chemical coatings, each of course thinner than the 0.025 mm overall thickness of the film, and they are all responsible for the final photograph. Not all of them form the image; the rest work to improve it by filtering the light or controlling the chemical reactions.

The most important coating for imaging is a layer of silver-halide crystals. When hit by light these crystals undergo a photochemical reaction which changes the crystal surface into metallic silver. These specks of metallic silver form the latent image which will be important later for development. More light coming from one area of the picture will produce more specks of metallic silver which is how they give the picture its density. However, there is a problem with silver-halide crystals – they are only sensitive to blue light which is quite useless for a colour photograph. This is where the other chemicals come in.
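As a rough sketch of that photochemistry (heavily simplified – real photographic emulsions use a mixture of silver halides plus plenty of additives), for a silver bromide crystal the light does something like this:

light + Br- → Br + a free electron
Ag+ + that electron → Ag (a speck of metallic silver)

Enough of those silver specks in one spot and you have a point of the latent image.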

A group of chemicals called spectral sensitisers are able to sensitise the silver-halide crystals to the full spectrum of red, green and blue light. In fact, the crystals are very versatile and there are a great many chemicals which can be used to affect the crystals to also produce differences in contrast, resolution and sensitivity.

These chemicals are all held together on the film with gelatin. That same stuff that makes jelly bouncy is also vital for photo development. Unbelievably, the gelatin also makes up the majority of the 0.025 mm thickness of the film.

 

How do we get the image off the film?

Plenty more chemistry is involved here to turn the latent images into negatives. I’m assuming we took a colour photograph so we’ll be producing colour negatives. The steps must be done in a dark room or light will ruin the film.

Firstly the film is placed in a developing solution which develops the previously activated silver-halide crystals to pure silver. This process also releases chemicals called oxidised developers which react with chemicals in the colour coatings of the film called couplers. The couplers produce colour depending on the spectral sensitisers used.

Development is stopped in a ‘stop’ bath which neutralises the development chemicals. The silver-halide crystals which were not activated to form the latent image are washed away in a fixing solution which also prevents the film from being activated by any more light. The developed silver is then removed by bleaching chemicals. Finally, the film is washed to remove any remaining chemicals and dried.

You now have colour negatives!

 

But that’s still not a photograph…

To convert a negative into a photograph is almost the same procedure as making the negatives.

Firstly, you need an enlarger which is simply a projector with a lens. The negative is placed into the enlarger then projected onto a sheet of photographic paper and the distance adjusted so the image covers the entire sheet. The photographic paper contains silver-halide crystals sensitised with red, blue and green spectral sensitisers similar to the film.

The lights are turned off and the photographic paper is exposed to the image of the negative projected by the enlarger. Yet again we form a latent image but this time on the photographic paper.

The latent image is developed as before with oxidised developers and couplers but this time the end result is a silver image and a dyed image. After the ‘stop’ bath, the silver image and all silver-halide crystals that were not activated in the dyed image are removed in a bleach-fix solution. The final image is washed, dried and finally ready!

 

How is this relevant to the fall of Kodak?

So as I said before, you don’t need to understand everything you just read to realise that there is a huge amount of chemistry involved in photo development. By seeing the amount of chemistry involved we can also see that calling Kodak a photography company isn’t entirely accurate. They were, in fact, a chemical manufacturer that specialised in photographic development.

Sure, a certain amount of the business was involved in optics, print processes and camera manufacture, but the vast majority of the business was turning celluloid, silver and gelatin into memories on paper. Consider the amount of research and development that went into improving and perfecting the chemical processes at every step in the development of the photograph for a company that started in 1888. The number of patents produced by the chemists at Kodak in that time was staggering, to the point where the sale of 1,100 patents in 2012 brought $525 million to the struggling company. It was estimated that, had the company not been in so much trouble, these patents could have been worth as much as $2.6 billion.

 

The digital revolution

Sadly, a company deeply invested in the production and sale of film and film cameras did not fare well in the digital age. In the 2000s we saw the explosion of digital cameras, which were quickly overtaken by camera phones (2003) and then by smartphones (the iPhone, 2007) at an unprecedented pace.

Kodak had actually previously invented the digital camera in 1975 but forwent the opportunity to pursue it because film photography was so much more lucrative. The money, you see, was in the film not the cameras. By not advancing the digital camera they secured the future of their own product. The problem is, when another company takes the first step it is very difficult to catch up. When it became apparent that digital cameras were going to win it was already too late. Kodak’s business was too much in chemistry and not enough in digital photography.

It’s easy to point the finger at the mistakes made by Kodak, but was it really just selfish business practice that killed them? The fact is that the first digital cameras were terrible – not just in image quality, but they also had very little storage and required a computer to be of any real use. What businessman in the 1980s could foresee a world where everyone has a home computer? Or hell, the internet and the subsequent explosion of social media and photo sharing? We’re talking about technology that only started becoming popular in 2000, when in 1999 Kodak had posted profits of $2.5 billion. It’s easy to say that they should have seen it coming, but they stuck to what they knew and what was profitable.

They gambled and they lost.

 

 

The chemistry of baking

Rain droplets hit the window in a slow, rhythmic beat as slivers of sunlight break through the clouds to coat the surfaces of the Sunday morning kitchen in a domestic glow. Coils of cinnamon and twists of toffee drift through the air like kite tails and the smell is that of home, of knowing that you could be anywhere else in the world but you chose to be here.


 

The images and smells created by baking are nothing less than art but baking, my friends, is chemistry. And like all deliberate chemical reactions any deviation from the ingredient ratios in a bake may be the difference between a moist, airy muffin and a small brick.

To bake you only really need 6 basic ingredients: flour, eggs, baking powder/soda, butter/oil, water/milk and sugar. But each one has an important role in the chemical reaction that results in a delicious bread or cake.

 

The building blocks – flour

Flour is what’s called a strengthener, as its main job is to provide structure to the bake. Flour is simply ground-up cereal grains, with wheat flour being the most popular choice. Like all cereals it contains lots of starch and gluten proteins which, when dry, don’t move, react or do much of anything. But once you add water those gluten proteins suddenly start interacting with each other, linking into longer or shorter chains. This allows you to stretch the mixture out, ball it up or make any kind of shape you like with it, and it will keep that shape. Depending on the amount of water added, this mixture can become a dough or a batter.

A dough is the drier mixture, where the water will never be more than 1/3 of the weight of the flour. Dry doughs (water is ~1/8 of the flour weight) are used to make things like pasta and pie crusts, whereas wet doughs (water is ~1/4 of the flour weight) are used to make breads and rolls.

A batter is a wet, almost liquid mixture where the water can be as much as equal to the weight of the flour. Sticky batters (water is ~1/2 of the flour weight) are used to make muffins and cookies, whereas pourable batters (water is equal to the flour weight) are used to make pancakes and waffles.

Milk can of course be used instead of water in the same ratios and you do this to add flavour from the milk to the bake.
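To make those ratios concrete, here’s a toy Python sketch (my own illustration, not a real recipe calculator) that classifies a flour-and-water mixture using the rough hydration bands above:

```python
# Classify a flour/water mixture by how much water it has relative to the flour.

def classify_mixture(flour_grams, water_grams):
    """Roughly classify a flour/water mixture using the ratios from the text."""
    hydration = water_grams / flour_grams
    if hydration <= 1/8:
        return "dry dough (pasta, pie crusts)"
    elif hydration <= 1/3:
        return "wet dough (breads, rolls)"
    elif hydration <= 1/2:
        return "sticky batter (muffins, cookies)"
    else:
        return "pourable batter (pancakes, waffles)"

print(classify_mixture(500, 125))  # water is 1/4 of the flour -> wet dough
print(classify_mixture(500, 500))  # equal weights -> pourable batter
```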


 

Adding some texture – butter and oil

Butter and oil are collectively called the fats and are referred to as tenderizers or shortening. They contribute a lot of flavour but they also like to interfere with the gluten structure of the bake. They do this by sticking to the gluten protein chains preventing gluten from sticking to itself. A large amount of fats in the bake results in a lump of dough that isn’t very cohesive but when cooked develops into a beautiful flaky pastry. The flakiness has been created by the fatty gluten being unable to hold the bake together.

To create something like a sandwich bread, which is typically a fluffy bread, a smaller amount of fats are used to produce another effect. Heated fats release water as steam, but once the water has left the bread there is nothing to fill the space it left behind. This leaves the bread with a lot of tiny air pockets throughout which we taste as ‘fluffiness’.


 

Making sure it doesn’t fall apart – eggs

The use of eggs in baking is honestly a story in itself that I could dedicate an entire post to, but we’ll keep it simple for clarity.

Eggs and, in particular, egg whites are the glue that keeps the bake together, so they are also strengtheners. When heated, the proteins in the egg get broken down as their molecular bonds break. However, as the heat increases further these broken-down proteins start cross-linking with each other. You can think of it like this: the egg white starts as a nonreactive liquid evenly distributed throughout a mixture, but the introduction of heat breaks down this nonreactive liquid into smaller pieces. Like a clingy ex after a breakup, these small pieces suddenly become very reactive and want to get back together. As the egg white was evenly distributed throughout the mixture, it rebinds to itself, creating a mesh that permeates the entire bake and holds it nice and firm.

An overuse of eggs (such as in many reduced-fat recipes) will make the bake very dry due to the greater extent of cross-linking but you should always try to use at least one for the binding effect.


 

Getting a rise – baking powder

Baking powder is simply a mixture of bicarbonate of soda and a weak acid, which in many cases is cream of tartar due to its neutral flavour. Bicarbonate of soda is alkaline, and you may remember from high school that acids and alkalis are at opposite ends of the pH scale. When the amount of acid and alkali is balanced (such as in our baking powder) you get a mixture that is neutral. The interesting thing is that when the acid and the bicarbonate react together they harmlessly produce water and carbon dioxide. During a bake the bicarbonate of soda reacts with the weak acid and the resulting carbon dioxide is released into the bake in the form of bubbles, which expand, causing the bake to expand with them. This not only causes the bake to rise but gives it that characteristic airy texture. For this reason it is referred to as a leavening agent – the term for an ingredient that releases gas to make a bake rise. Adding too much baking powder, however, is trouble, as too many bubbles rising to the surface will inevitably pop and sink the bake in the middle.
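Written as a simplified reaction – I’m using a generic acid here, since the exact acid varies between baking powders – it looks like this:

NaHCO3 (bicarbonate of soda) + H+ (from the acid) → Na+ (which ends up in a harmless salt) + H2O (water) + CO2 (carbon dioxide)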

Many recipes will call for just bicarbonate of soda instead of baking powder, and this is because another acidic ingredient is already in the bake, such as lemon, chocolate or honey.

Alternatively you can use yeast which makes carbon dioxide from sugar in the process of fermentation. But for many bakes you don’t necessarily want the extra flavours that get produced from fermentation so baking powder is more common.

 

Putting the icing on the cake – sugar

The most obvious use of sugar is for sweetening but it also acts like a fat to tenderize the bake. It does this by competing with the flour for water – a battle that sugar always wins. By restricting the amount of water that can combine with the flour it limits the amount of gluten protein strands that can interact with each other. Just like fats, this adds an airiness and moistness to the bake. Of course this also means that too much sugar will destabilise the bake.

The dry ingredients must be added in the right order when mixing because sugar will surely steal all the water away from the flour if given the chance. This means your carefully weighed ingredients will suddenly become a dry clump and it’s back to the drawing board.

Sugar is also used when yeast is the leavening agent because yeast needs the energy from sugar to produce carbon dioxide.


Getting the balance

Following the guidelines in a recipe is absolutely vital when baking due to the ratios of the ingredients having such a large effect on the bake – remember, this is a science! And just like chemistry, once you get some experience with the ingredients and you know how they act you gain a much better understanding of the reaction. So get baking! When successful, unlike chemistry, you get a delicious reward and there really is nothing like the smell of a house following a bake…

What is fire?

Picture the scene: you’re sitting on the cold, late-night earth by the campfire, a couple of friends and a couple of beers for company. The night, like the fire, is dying down and conversation takes a backseat to the rhythmic pop and crackle of the vanishing flames. A friend sits up with an inquiring look and starts slowly, pensively. ‘Dude,’ he says, taking a long drag of his cigarette, ‘what is fire anyway?’ And thus begins the debate…


 

Let’s start with what fire is not. It’s not a solid or liquid obviously but it’s also not a gas. One property of solids, liquids and gases is that they are able to exist in this state indefinitely and, as we know, fire burns out. It’s also not an element as it’s not on the periodic table. Fire is, in fact, not even matter – you could not, even with Godly powers, stick atoms together to produce fire. Water – yes, air – yes, that one magical disappearing sock that always gets eaten by the dryer – sure! But it is impossible to ‘build’ fire.

 

So what is it!?

Fire is, simply put, a chemical reaction that you can watch, a transient byproduct of something else happening. Fire is the process of carbon (in wood for example) binding to oxygen to create carbon dioxide. It just happens that this chemical reaction releases a lot of heat and light so you can see it.
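Written out as a (heavily simplified) reaction – real wood also contains hydrogen and a host of other compounds that burn alongside the carbon – it’s just:

C + O2 → CO2 + heat and light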

 

How does oxygen in the air not cause a fire every time it interacts with carbon?

In their native states carbon (in the wood of a tree for example) and oxygen in the air exist in harmony with each other, not reacting. We know this is true because trees are not spontaneously combusting all the time. However, once you introduce heat to the equation things start getting interesting…

Imagine you want to start a small wood fire. Once you’ve set up the firewood you need to get the fire going, which is usually achieved by lighting smaller, drier sticks or newspaper on fire under the firewood (although the old cartoon method of rubbing two sticks together would also suffice). As long as heat is provided, the activation energy of the firewood will eventually be reached. The activation energy is simply the amount of energy required for a reaction to start – in this case, when there is enough heat that carbon in the firewood starts reacting with oxygen in the air. A bigger or thicker log has a higher activation energy than a small twig, for example, which is why it’s comparatively easier to light a small twig on fire than a big log.

But here is what makes fire so chemically interesting (and super dangerous): once the activation energy has been reached and the reaction begins, the fire produced gives out more heat than the amount of heat required to reach its activation energy. This means that the reaction between carbon and oxygen produces more energy than you put in. The result is a chain reaction – one piece of firewood will produce enough heat to light a piece of adjacent firewood on fire. Thus, fire is self-perpetuating until it runs out of flammable objects, as captured by the fire tetrahedron.

 

How can firewood put out more energy than you put in?

This interesting property of fire has certainly not gone unnoticed. Fire is directly used in power stations to produce as much as 80% of the world’s energy by burning coal, oil and natural gas. These fossil fuels are so good at producing energy because they contain an abundance of carbon (coal is 80-85% carbon). But it’s how the carbon got there in the first place that gives it so much energy.

Think of a growing tree: to keep growing it needs a constant supply of carbon to create new wood and thus increase its size (wood is about 50% carbon by weight). But where does the tree get this carbon from? Well, trees come up from the ground so it would make sense if the carbon came from the earth, but the answer is not so intuitive. All the carbon in wood actually comes from the air – from the carbon dioxide in the air, in fact. And if you take a minute to think back to your high school biology classes this answer is not so surprising, as the key is in photosynthesis.

Photosynthesis is the process that plants use to convert carbon dioxide (CO2) to oxygen (O2) using energy from the sun and is the reason we have oxygen to breathe. Carbon dioxide is 2 parts oxygen to 1 part carbon so after photosynthesis breaks the oxygen away we are left with carbon. Did you ever wonder what happens to that leftover carbon? It combines with hydrogen from water and the result is wood. Trees are literally made from air.
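The overall balance of photosynthesis (the textbook summary – the real biochemistry happens over many steps) looks like this:

6 CO2 + 6 H2O + energy from sunlight → C6H12O6 (a sugar the tree then builds into cellulose, i.e. wood) + 6 O2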

But what of the energy? Well, there was only one source of energy involved and that was the energy from the sun. And indeed it is this energy that stays locked up in the carbon. When you throw that wood on your campfire heat is introduced, the activation energy of the wood is reached, the carbon reacts with oxygen in the air to once again form carbon dioxide and all of that stored energy from the sun is released. That, my friends, is fire. When you stare at a wood fire you are watching the release of energy from the sun originally used by the tree in photosynthesis. Likewise, fossil fuels are just dead organic matter that contains stored energy from photosynthesis just waiting to be released.

 

Extra Science!

 

We’ve covered the heat, but why does fire also produce colour and light?

Anything with a temperature above absolute zero gives off some amount of thermal radiation; however, the human eye is capable of seeing only what is called ‘visible radiation’ – the rainbow of colours you should be familiar with. We can’t see other forms of radiation like infrared, for example, because it is emitted at a lower frequency than the eye can detect (ultraviolet sits at the other extreme, at too high a frequency for us to see). If you’re a fan of police or spy dramas then you’ll know that humans are warm enough to give off infrared radiation, which is why thermal-imaging goggles can pick a person out of complete darkness.

However, you can increase the frequency of the thermal radiation something gives off by heating it, and this little phenomenon is called incandescence. Starting at about 525 °C, things begin to glow red/orange, like when a blacksmith pulls a glowing sword from the forge. As the temperature increases the colour changes towards white and then blue. Due to the energy released when carbon reacts with oxygen, the gases around the reaction become extremely hot – the fire we see is largely the incandescence of those hot gases and the tiny soot particles swept up in them.