Wednesday, October 22, 2014

Why is Ebola so scary?


Unless you've been living under a reasonably sizable rock for the last few months, it can't have escaped your attention that the world has yet another terror to throw on the mountain of things we should be scared of: Ebola. The ongoing situation in Africa is the largest Ebola outbreak in history and has seen the disease spread beyond Africa for the first time. At the time of writing this, nearly 10,000 people have become infected, almost half of whom have died. This number is growing...rapidly.
Ebola cases and deaths in the 2014 outbreak.
In this post, I will describe what Ebola is, why it is so scary, and what chances we have of defeating it.

What is Ebola?

'Ebola' as a biological term actually refers to a group of five viruses within the family Filoviridae, four of which can cause the disease generally called Ebola, but more specifically known as Ebola virus disease. The recent outbreak has been caused by just one of these viruses, which used to be known as Zaire ebolavirus but is now simply 'Ebola virus', given that it is the one most commonly found in humans, and Zaire no longer exists! It doesn't look a whole lot like most viruses, it has to be said - long, tubular filaments waving around rather than the tight, spherical particles we're used to seeing for 'flu, HIV, and most others.

The Ebola virus.

Friday, September 19, 2014

Comparing Planck's noise and dust to BICEP2

In case anyone reading this doesn't recall, back in March an experiment known as BICEP2 made a detection of something known as B-mode polarisation in the cosmic microwave background (CMB). This was big news, mostly because this B-mode polarisation would be a characteristic signature of primordial gravitational waves. The detection of the effects of primordial gravitational waves would itself be a wonderful discovery, but this potential discovery went even further in the wonderfulness, because the likely origin of primordial gravitational waves would be a process known as inflation, which is postulated to have occurred in the very, very early universe.

The B-mode polarisation in the CMB as seen by BICEP2. Seen here for the first time in blog format without the arrows. Is it dust, or is it ripples in space-time? Don't let Occam's razor decide!

I said at the time, and would stand by this now, that if BICEP2 has detected the effects of primordial gravitational waves, then this would be the greatest discovery of the 21st century.

However, about a month after BICEP2's big announcement, a large crack developed in the hope that they had detected the effects of primordial gravitational waves and obtained strong evidence for inflation. The problem is that polarised emission from dust in the Milky Way can also produce this B-mode polarisation signal. Of course BICEP2 knew this, and had estimated the amplitude of such a signal and found it to be much too small to explain what they saw. The crack was that it seemed they had potentially under-estimated this dust signal. Or, more precisely, it was unclear how big the dust signal actually is. It might be as big as the BICEP2 signal, or it might be smaller.

Either way, the situation a few months ago was that the argument BICEP2 made for why this dust signal should be small was no longer convincing and more evidence was needed to determine whether the signal was due to dust, or primordial stuff.

Tuesday, August 26, 2014

The Cold Spot is not particularly cold

(and it probably isn't explained by a supervoid; although it is still anomalous)

In the cosmic microwave background (CMB) there is a thing that cosmologists call "The Cold Spot". However, I'm going to try to argue that its name is perhaps a little, well, wrong. This is because it isn't actually very cold. Although, it is definitely notably spotty.

That's the cold spot. It even has its own Wikipedia page (which really does need updating).

Why care about a cold spot?

This spot has become a thing to cosmologists because it appears to be somewhat anomalous. What this means is that a spot just like this has a very low probability of occurring in a universe where the standard cosmological model is correct. Just how anomalous it is and how interesting we should find it is a subject for debate and not something I'll go into much today. There are a number of anomalies in the CMB, but there is also a lot of statistical information in the CMB, so freak events are expected to occur if you look at the data in enough different ways. This means that the anomalies could be honest-to-God signs of wonderful new physical effects, or they could just be statistical flukes. Determining which is true is very difficult, because it is so hard to quantify the number of different ways in which the entire cosmology community has examined the data.
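To get a feel for why "enough different ways" matters, here's a minimal back-of-the-envelope sketch (my own illustration, with a made-up 1-in-100 threshold, not anything from an actual CMB analysis) of how quickly apparent flukes pile up as you examine more independent statistics:

```python
# A minimal look-elsewhere illustration (hypothetical numbers, not a real CMB analysis).
# Suppose each statistic we compute has a 1% chance of looking "anomalous" by pure luck
# in a universe where the standard cosmological model is correct.

p_fluke = 0.01  # probability that any one individual test gives a false alarm

for n_tests in [1, 10, 50, 100, 500]:
    # Probability that at least one of n independent tests shows a fluke
    p_at_least_one = 1 - (1 - p_fluke) ** n_tests
    print(f"{n_tests:4d} independent tests -> "
          f"{p_at_least_one:.0%} chance of at least one '1-in-100' anomaly")
```

With a few hundred independent ways of slicing the data, finding something that looks like a 1-in-100 event is not a surprise at all.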

However, if the anomalies are signs of new physics, then we should expect two things to happen. Firstly, some candidate for the new physics should come up which can create the observed effect while also reproducing the much greater number of other measurements that fit the standard cosmological model well. If this happens, then we would look for additional ways in which the universe described by this new model differs from the standard one, and look for those effects. Secondly, as we take more data, we would expect the unlikeliness of the anomaly to increase; that is, it should become more and more anomalous.

In this entry, I'm not going to be making any judgement on whether the cold spot is a statistical fluke or evidence of new physics. What I want to do is explain why, although it still is anomalous, and is definitely a spot, the cold spot isn't very cold. Then, briefly, I'll explain why, if it is evidence of new physics, that new physics isn't a supervoid.

So, what is the cold spot, and why is it anomalous?

Friday, June 27, 2014

The human machine: obsolete components



The previous post in this series can be found here.

In my last post in this series I described some of the ways in which gene therapy is beginning to help in the treatment of genetic disorders. A caveat of this (which was discussed further in the comments section of that post) is that currently available gene therapies do not remove the genetic disorder from the germline cells (i.e. sperm or eggs) of the patient, and so do not protect that person's children against inheriting the disease. This could be a problem in the long run as it may allow genetic disorders to become more common within the population. The reason for this is that natural selection would normally remove these faulty genes from the gene pool, as their carriers would be less likely to survive and reproduce. If we remove this selection pressure by treating carriers so that they no longer die young, then the faulty gene can spread more widely through the population. If something then happened to disrupt the supply of gene therapeutics - conflict, disaster, etc. - then a larger number of people would be adversely affected and could even die.
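To put some rough numbers behind that reasoning, the textbook population-genetics result for a harmful recessive gene kept in check by selection is the mutation-selection balance (the symbols below are generic placeholders, not estimates for any real disease):

```latex
% Textbook mutation-selection balance for a harmful recessive allele:
%   \mu = rate at which new copies of the faulty gene arise by mutation
%   s   = strength of selection against people carrying two faulty copies
% The equilibrium frequency of the faulty allele is approximately
\hat{q} \approx \sqrt{\frac{\mu}{s}}
% If treatment makes carriers just as likely to survive and reproduce,
% s falls towards zero, the equilibrium disappears, and recurrent mutation
% can push the allele to ever higher frequencies.
```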

Although this is a significant problem to be considered, it is one that could be avoided fairly simply by screening or treating the germline cells of people undergoing gene therapy in order to remove the faulty genes from the gene pool. This is currently beyond our resources on a large scale, but will almost certainly become standard practice in the future.

All of this got me thinking: are there any other genes that might be becoming more or less prevalent in the population as a result of medical science and/or civilisation in general? If so, can we prevent/encourage/direct this process and at what point do we draw the line between this and full-blown genetic engineering of human populations? This is the subject of this post, but before we get into this, I want to first give a little extra detail about how evolution works on a genetic scale.

Imperfect copies

Evolution by natural selection, as I'm sure you're aware, is simply the selection of traits within organisms based on the way in which those traits affect that organism's fitness. An organism with an advantageous trait is more likely to survive and reproduce, and so that trait becomes more and more common within the population. Conversely, traits that disadvantage the organism are quickly lost through negative selection, as the organism is less likely to reproduce. The strength of selection in each case is linked to how strongly positive or negative that trait is - e.g. a mutation that reduces an animal's strength by 5% might be lost only slowly from a population, whereas one that reduces it by 90% will probably not make it past one generation. In turn, the strength of that trait is determined by the precise genetic change that has occurred to generate it.
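As a rough illustration of how the strength of selection sets the speed at which a harmful version of a gene is lost, here's a minimal toy model (my own sketch, with made-up fitness costs and starting frequency, not anything from real data):

```python
# Toy single-locus selection model (haploid, deterministic) - illustrative only.
# A harmful allele has relative fitness (1 - s); the normal allele has fitness 1.

def next_generation(p, s):
    """Frequency of the harmful allele after one generation of selection."""
    return p * (1 - s) / (1 - p * s)

def generations_until_rare(s, p0=0.10, threshold=0.01, max_gen=10_000):
    """How many generations until the allele drops from p0 to below threshold."""
    p, gen = p0, 0
    while p > threshold and gen < max_gen:
        p = next_generation(p, s)
        gen += 1
    return gen

for s in [0.05, 0.50, 0.90]:  # a 5%, 50% and 90% fitness cost
    print(f"s = {s:.2f}: ~{generations_until_rare(s)} generations to fall from 10% to 1%")
```

With these made-up numbers, the 5% cost takes tens of generations to push the allele down, while the 90% cost does it in a generation or two - which is exactly the point the paragraph above is making.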

Monday, May 5, 2014

The human machine: replacing damaged components


The previous post in this series can be found here.


The major theme of my 'human machine' series of posts has been that we are, as the name suggests, machines; explicable in basic mechanical terms. Sure, we are incredibly sophisticated biological machines, but machines nonetheless. So, like any machine, there is theoretically nothing stopping us from being able to play about with our fundamental components to suit our own ends. This is the oft-feared spectre of 'genetic modification' that has been trotted out in countless works of science fiction, inextricably linked to concepts of eugenics and Frankenstein-style abominations. Clearly genetic modification of both humans and other organisms is closely tied to issues of ethics and biosafety, and must obviously continue to be thoroughly debated and assessed at all stages, but in principle there is no mechanistic difference between human-driven genetic modification and the mutations that arise spontaneously in nature. The benefit of human-driven modification, however, is that it has foresight and purpose, unlike the randomness of nature. As long as that purpose is for a common good and is morally defensible, then in my eyes such intervention is a good thing.

One fairly obvious beneficial outcome of genetic modification is in the curing of various genetic disorders. Many human diseases are the result of defective genes that can manifest symptoms at varying times of life. Some genetic disorders are the result of mutations that cause a defect in a product protein, others result from the complete loss of a gene, and some are caused by abnormal levels of gene activity - either too much or too little. A potential means to cure such disorders is to correct the problematic gene within all of the affected tissue. The most efficient way to do that would be to correct it very early in development, since if you corrected it in the initial embryo then the fix would be retained in all of the cells that subsequently develop from that embryo. This is currently well beyond our technical capabilities for several reasons. Firstly, we don't routinely screen embryos for genetic abnormalities, and so don't know which ones might need treatment. Secondly, the margin for error in this kind of gene therapy is incredibly narrow, as you have to ensure that every single cell that the person has for the rest of their life will not be adversely affected by what you do to the embryonic cells at this early stage - we're not there yet. Thirdly, our genetic technology is not yet sophisticated enough to allow us to remove a damaged gene and replace it with a healthy one in an already growing embryo - the best we can do is stick the healthy gene in alongside the defective one and hope it does the job. There is certainly no fundamental reason why our technology could not one day reach the stage where this kind of procedure is feasible, but we are a long way off yet.

So, for the time being, what can we do? Well, instead of treating the body at the embryonic stage, the next best approach is to treat the affected cells specifically later on in life. This involves identifying the problematic gene and then using a delivery method to insert the correct gene into whatever tissues manifest the disease, preferably permanently. This is broadly known as gene therapy, and is one of the most promising current fields of 'personalised' medicine.

Thursday, March 27, 2014

A new cosmological coincidence problem?

One of the consequences of the BICEP2 data from last week, should it hold up to scrutiny and be seen by other experiments (I hope it holds up to scrutiny and is seen by other experiments), is that there is a significant lack of "power" in the temperature anisotropies on large angular scales.

What that sentence means is that when you look at the CMB in very large patches on the sky (about the size of the moon and bigger) its temperature fluctuates from patch to patch less than we would expect.

This was already somewhat the case before the BICEP2 discovery, but BICEP2 made it much more significant. The reason for this will hopefully turn into a post of its own one day, but, essentially, the primordial gravitational waves that BICEP2 may have discovered would themselves have seeded temperature anisotropies on these large angular scales. Previously, we could just assume that the primordial gravitational waves had a really small amplitude and thus didn't affect the temperature much at all. Now, however, it seems like they might be quite large, and therefore this apparent lack of power becomes much more pertinent.
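To get a feel for how the tensor contribution changes the picture at large angular scales, here's a rough sketch of the sort of comparison involved, using the publicly available CAMB Python package (the cosmological parameter values and r = 0.2 are illustrative choices on my part, roughly the amplitude BICEP2 quoted, not a fit to any data):

```python
import camb

def low_ell_TT(r):
    """Total TT power at low multipoles for tensor-to-scalar ratio r."""
    pars = camb.CAMBparams()
    pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)  # rough Planck-like values
    pars.InitPower.set_params(As=2.1e-9, ns=0.96, r=r)
    pars.WantTensors = (r > 0)
    pars.set_for_lmax(200)
    results = camb.get_results(pars)
    cls = results.get_cmb_power_spectra(pars, CMB_unit='muK')
    return cls['total'][:, 0]  # TT column, D_ell in muK^2

no_tensors = low_ell_TT(0.0)
with_tensors = low_ell_TT(0.2)  # roughly the value BICEP2 reported

for ell in [2, 10, 20, 40]:
    extra = 100 * (with_tensors[ell] / no_tensors[ell] - 1)
    print(f"ell = {ell:3d}: tensors add ~{extra:.0f}% to the predicted TT power")
```

With inputs like these, the tensors add a noticeable amount of extra power at the lowest multipoles, exactly where the measured spectrum already looks low - which is why the lack of power becomes more pertinent.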

That's all fine, and is something that any model of inflation hoping to explain the origin of these gravitational waves will need to account for, despite what many cosmologists already writing papers on the arXiv seem to want to believe (links withheld). As a side, ever-so-slightly-frustrated note: the only papers I've seen that have actually analysed the data, rather than repeating old claims, have confirmed this problem, which was clear from, at the latest, the day after the announcement.

But why does it imply a "cosmological coincidence problem"? And why is it a new coincidence problem? What's the old one?

Monday, March 24, 2014

The human machine: finely-tuned sensors


The previous post in this series can be found here.

All good machines need sensors, and we are no different. Everyone is familiar with the five classic senses of sight, smell, touch, taste, and hearing, but we often forget just how amazingly finely tuned these senses are, and many people have little appreciation of just how complex the biology behind each sense is. In this week's post, I hope to give you an understanding of how one of our senses, smell, functions and how, in light of recent evidence, it is far more sensitive than we previously thought.

Microscopic sensors

The olfactory system is an extremely complex one, but it is built up from fairly simple base units. The sense of smell is of course located in the nose, but more specifically it is a patch of tissue approximately 3 square centimetres in size at the roof of the nasal cavity that is responsible for all of the olfactory ability in humans. This is known as the olfactory epithelium and contains a range of cell types, the most important of which is the olfactory receptor neuron. There are roughly 40 million of these cells packed into this tiny space and their job is to bind odorant molecules and trigger neuronal signals up to the brain to let it know which odorants they've detected. They achieve this using a subset of a huge family of receptors that I've written about before, the G protein-coupled receptors (GPCRs). These receptors are proteins that sit in the membranes of cells and recognise various ligands (i.e. molecules for which they have a specific affinity) and relay that information into the cell. There are over 800 GPCRs in the human genome and they participate in a broad range of processes, from neurotransmission to inflammation, but the king of the GPCRs has to be the olfactory family, which make up over 50% of all the GPCRs in our genome.