Monday, May 21, 2012

The ISW Mystery IV: Where does the evidence lead?

Where does the evidence lead? (Photograph: H Armstrong Roberts/Corbis)

In my last three major posts (I, II and III) I've been talking you through a mystery: the integrated Sachs-Wolfe mystery. This post can be read on its own, but you will appreciate it much more if you have also read them. In today's post I will be playing detective, examining the evidence, looking for leads and weighing up the various possible solutions to the mystery. Like any good mystery there are hints as to what the resolution might be, but, like any good mystery story, some of these hints might turn out to just be red herrings, so we need to be careful.

At the beginning of my most recent post in this series I warned you that it would be my most technical post to date, but encouraged you to stick with it. With this post, the situation is the opposite. That post contained the details of the actual measurement that was made, which is necessarily going to be somewhat dry and technical. This post, however, speculates about what might have caused the effect. As you'll soon see, solving the mystery potentially requires modifications to our understanding of fundamental physics or the initial conditions of the universe. All very exciting stuff, so congratulations for making it to this point.

An overview of the case:

Before embarking on the detective work, let me recap the first three posts in this series. In the first post, I introduced the integrated Sachs-Wolfe effect. It is the very subtle heating and cooling of light as it passes through over-dense and under-dense regions of the universe. In the second post I explained that this effect is so small that it will almost certainly never be observed directly. The only hope we have of observing it is to look for statistical correlations between the temperature of light on the sky and the density of the matter that the light travelled through to reach us. Only the cosmic microwave background (CMB) is uniform enough that such a statistical correlation could ever be observed. In the last instalment, I told you of a particular measurement that aimed to detect this ISW effect by looking at extreme over- and under-densities in the universe. The measurement appeared to be a success because it did measure a correlation. The only problem, and the source of the mystery, is that the size of the correlation is far too big to be from the ISW effect.

Something in those structures is heating/cooling the CMB, but what?

The leading suspect's alibi:

Throughout these posts I've called this mystery the “ISW mystery” but that is actually quite a poor name. The essence of why this is a mystery is that the measurement's results seem like they can't possibly be caused by the ISW effect. Nevertheless, at least superficially, the measurement results do look like an ISW-type effect, so it does deserve to start out as the leading suspect. Before looking for leads regarding what the real cause might be, it is instructive to go over how we know that this suspect didn't do it, at least not alone.

The expected average temperature shift of the CMB due to the ISW effect inside the most extreme regions in the Sloan Digital Sky Survey (arXiv: 1109.4126)

To recap: the measurement was made by stacking patches of the CMB that had travelled through 50 of the most over- and under-dense regions of the universe, as found by one particular algorithm. To predict the size of the measured signal we need to predict what the temperature shift should be in those patches. To do this, we need to know exactly what types of structures the algorithm will find. This is doable, but it would be messy. As I've mentioned before, the stuff we can see (galaxies, etc.) only follows the distribution of all matter on average.

What we can do instead is calculate the signal from the expected 50 most extreme regions in the volume of the survey considered. This provides a robust upper bound, a maximum possible signal that could have been observed from the ISW effect. If even this maximum possible signal is less than the observed signal then we know the observed signal cannot be just the ISW effect.

This calculation is exactly what my collaborators and I did last year (following on from the work of many others, of course). I will leave the technical details of that calculation to a future bonus post, but the results can be seen in the figure above. This figure shows the average temperature shift expected in a patch of the CMB when it travels through spherical under-densities that satisfy a minimum cut in how under-dense they are. The y-axis of each plot is the change in the temperature; the x-axes show what happens when we also set a minimum radius on the regions allowed. One of the plots shows the temperature as a function of this minimum radius and the other shows the temperature as a function of the number of regions in the survey volume expected to pass the under-density cut with this minimum radius. The blue curve is the theoretical expectation. The orange region in each plot is the lower limit allowed by the measurement (remember this was 11 micro-Kelvin, with a standard error of 3 micro-Kelvin).

It is clear that even the one or two most extreme spherical under-densities expected from the standard theories of structure formation will not create an ISW effect from standard gravity that is big enough to explain the measurement.

This rules out the most likely suspect, but the measurement still stands, so who committed the crime?

The leads and clues in the mystery:

A magnifying glass. The thing detectives use to examine evidence and look for clues. Also a sign that I lack imagination and really couldn't think of a more appropriate image to put here.

We need to study the evidence we've got and look for clues and possible leads from which to find more evidence. What are the particularly unusual, or suspicious, parts of this whole mystery?

The first clue is that this measurement focussed on extremes. If we can simply make bigger, more extreme under/over-densities more likely, we might be able to resolve this whole thing without changing the ISW effect itself at all. This focus on extremes also makes this measurement believable in a way that the faster-than-light neutrinos (for example) never were. Any change to our understanding of physics made in order to explain this measurement must not simultaneously change anything else that we have already measured. If there is some modification just to the extremes of the primordial distribution, it is believable that it won't have been seen until this measurement was made.

The next clue is that the ISW effect arises because of dark energy. We don't yet understand dark energy well at all. In the standard cosmological model it is assumed to possess a constant energy density (in fact it is assumed to be a fundamental constant of nature: the cosmological constant). Any deviation from a cosmological constant will also change the theoretical expectations for the ISW effect. If this is an ISW-type effect, it might be the first indication that dark energy is not constant. Unfortunately, the equation of state of dark energy (which parametrises how its energy density changes with time) has now been quite well constrained by other measurements. Any change to the equation of state of dark energy large enough to affect this measurement will almost certainly break these constraints and so is not a viable solution.

Most of the other measurements of the effects of dark energy rely on the effect dark energy has on the expansion of the universe. Supernovae are dimmer because they're further away. Standard rulers are slightly longer because the universe's expansion has stretched them slightly more. The ISW effect is different in that it measures the effect of dark energy on the growth of structure and, more specifically, on the rate of growth of structure. If the growth rate of structures on large length scales were different to what we currently expect then the ISW effect would also be different. While there are quite tight constraints on the primordial perturbations at these length scales, the constraints on how these perturbations grow at each point in time are not so strong. Tantalisingly, this ISW mystery is not the only measurement currently suggesting that large-scale structure formation might be throwing us a curve-ball (for example here and here).

All of those leads are highly suggestive of quite substantial changes to fundamental physics. That would be really exciting and is what draws me to this mystery, but we should stop and also ask whether there are any clues that point to a more mundane solution. After all, the cosmological constant does seem to fit almost everything else we've measured quite well.

I wrote earlier in this post that even the one or two most extreme spherical under-densities expected will not create an ISW effect big enough to explain the measurement. Surely the over/under-densities in the universe come in all sorts of shapes though, so why do we only calculate the signal for spheres? Well, the universe is statistically isotropic and homogeneous. This means it has no preferred direction and looks the same everywhere. The algorithms used to find these regions have no preferred direction either. Therefore, although any individual structure found by these algorithms will of course not be a sphere, the average shape of 50 structures should be. Unless, that is, we're missing something...

Finally, what about noise? What about measurement errors? I have stressed that, unless this whole measurement is just a statistical fluke, the noise from the primordial CMB cannot be causing this. 

Any noise that can explain this signal must be noise that is correlated with the existence of matter in the universe. Now, the CMB itself doesn't come to us in a neatly packaged form. The experimental teams that measure it need to extract it from a background radiation field that includes light emitted by galaxies, supernovae, dust, etc. This bit of the post is a little beyond my expertise, but it seems conceivable (although improbable) that when all of this foreground is extracted there is some leftover light that is similar enough to the CMB to be missed, and small enough that it doesn't mess with other CMB analyses. I don't know what this light source could be, but it is certainly not impossible that it is there.

The new suspects:

It's probably not the guy on the right.
So who are the new suspects? And if each suspect happened to be the true culprit, what would this mean for the rest of cosmology, physics and our place in the world? Keep in mind when reading that everything from now on is entirely speculation. I don't yet know the resolution to this mystery (or it wouldn't be a mystery).

Non-Gaussianity in the primordial perturbations

Candidate solution number one is that the primordial perturbations in the density of the universe are subtly different to what we currently expect. Currently, we expect them to almost perfectly follow a Gaussian distribution (the familiar bell-shaped curve). The measurements we've made of the primordial CMB show that they are very, very close to Gaussian. The variance of the perturbations has been very well measured at the scales relevant to this measurement and, for a Gaussian, this is enough to completely define the entire distribution.

One way to measure how “non-Gaussian” a distribution is is through its higher moments. The first two of these are called the skewness and the kurtosis. The skewness is exactly what it sounds like: it measures how asymmetric the distribution is. The kurtosis isn't so obvious, but it effectively measures how much heavier the tails of the distribution are compared to a Gaussian's. Tantalisingly, a very small skewness or kurtosis will have almost no effect on most of a distribution, but can have a disproportionately large effect on its extremes. So, a small amount of non-Gaussianity can make extreme regions much more likely. And, because non-Gaussianity disproportionately affects extremes, it is highly possible that the amount of skewness and/or kurtosis required to solve this mystery won't affect any other cosmological measurement that has already been made.
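To see how a tiny amount of non-Gaussianity can inflate the extremes, here is a toy Monte Carlo sketch. The heavy-tailed distribution used (a rescaled Student-t) is purely an illustrative stand-in with slightly positive kurtosis, not a physical model of the primordial perturbations:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000_000

# The Gaussian case: the standard assumption for the primordial perturbations.
gauss = rng.standard_normal(n)

# A mildly non-Gaussian case with zero mean and unit variance: a Student-t
# distribution with 10 degrees of freedom, rescaled to unit variance. Its
# excess kurtosis is 1, i.e. its tails are only slightly heavier than a
# Gaussian's. Illustrative toy numbers only.
df = 10
heavy = rng.standard_t(df, n) / np.sqrt(df / (df - 2))

for cut in (2.0, 3.0, 4.0):
    p_gauss = np.mean(gauss > cut)
    p_heavy = np.mean(heavy > cut)
    print(f"P(x > {cut}sigma): Gaussian {p_gauss:.2e}, "
          f"non-Gaussian {p_heavy:.2e}, ratio {p_heavy / p_gauss:.1f}")
```

The bulk of the two distributions is almost indistinguishable (the 2-sigma probabilities differ by only a few per cent), but out at 4 sigma the heavy-tailed distribution produces extreme regions more than an order of magnitude more often.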

This might sound like a very compelling, almost trivial, modification to the standard cosmological model, but it isn't quite that simple. We believe the primordial perturbations were Gaussian for a reason. The ground state of a system in quantum field theory (the vacuum) is Gaussian. This is well tested. In cosmological inflation, which is by far our best theory for the origin of the primordial perturbations in the universe, the perturbations begin as fluctuations in the vacuum of a quantum field. Therefore, the existence of measurable non-Gaussianity in the primordial perturbations would have enormous ramifications for this model. Either the theory of inflation is wrong, quantum field theory needs to be modified at small length scales, or there was more than one field around during inflation. Any of these possibilities would mean that the discovery of non-Gaussianity in the primordial density perturbations would be as big a discovery as the discovery of dark energy 14 years ago, if not bigger.

Modifying dark energy

Today, dark energy is a complete mystery. It really is one of the most baffling discoveries fundamental physics has made. The existence of some sort of cosmological constant, or vacuum energy, isn't itself mysterious, but the fact that its energy density is so ridiculously small compared to everything else we've encountered, yet not quite zero, is really weird. In cosmology it is easy to just put it into a model and measure its value, but for those trying to explain its origin, its magnitude is the biggest mystery in fundamental physics, if not in all human endeavour today.

If the energy density of dark energy isn't constant, that could easily resolve this mystery. This would also be a huge discovery. It would give us the tiniest piece of understanding of what this dark energy thing actually is. Unfortunately, as I mentioned earlier, this possibility is very heavily constrained. While it is a compelling thought, I can't see how any departure from constancy could be made large enough to explain this measurement without ruining others. Also, while it is easy for cosmologists to start speculating that dark energy isn't actually constant, coming up with a compelling model for what it is and why it isn't constant, without messing up many other things, is extremely difficult.

Modifying gravity

The ISW effect is a gravitational effect. It relies on the pushes and pulls of matter on light (the effect itself) as well as the pushes and pulls of matter on matter (the evolution of the over/under dense regions of the universe). It is possible that these pushes and pulls are subtly different to what we expect. In other words, gravity itself might be different over large distance scales compared to how it acts over small distance scales. There are many candidates for models of alternative gravity, but general relativity (the current theory of gravity) is very well tested so any modification will need to be very subtle indeed.

However, if, at precisely the length scales (or time scales) probed by this measurement, gravity changes in such a way that it pulls matter together more slowly, or in such a way that matter starts to affect light more weakly, then this could also resolve the mystery. Achieving this is both really easy and extremely difficult. It is easy because one can simply dictate exactly how gravity must act, at which times and on which scales, to explain away this signal. It is difficult because we also want to understand why gravity is the way it is. General relativity is so compelling because, although the mathematics behind it is difficult, the concepts underlying it aren't. It would be a tragic step backwards to throw GR away and replace it with a merely empirical set of rules for how gravity acts and when. Coming up with a more fundamental model that just happens to deviate in the way required to solve this mystery, without deviating anywhere else or having internal inconsistencies, is much more difficult.

Needless to say, however, if this particular measurement is one of the first indications that General Relativity needs to be modified on large distance scales, that would be pretty big news.

Redshift errors

The mysterious measurement I'm discussing used a particular set of galaxies to map out the density of space and then used a particular algorithm to find a set of over/under dense regions. This algorithm assumed it knew the locations of each galaxy perfectly. 

We measure how far away a galaxy is from us by measuring how fast it is moving away from us. On average, there is a simple relationship between these two quantities, dictated by the expansion of the universe: you tell me how fast something is moving away from us and I can tell you how far away it is. However, for the particular measurement we're considering, two things ruin this cosy picture. Firstly, we can't measure the recession velocity of these galaxies very precisely. Or at least we can, but it takes time; so, for most of the galaxies we've observed, we haven't. Secondly, the relationship is only true on average. Some galaxies will be receding from us slightly faster and some slightly slower when compared to the rest of the matter around them.
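The average relationship is just Hubble's law: distance = recession velocity / H0. A minimal numerical sketch (the value of H0 is an assumed, approximate one, chosen purely for illustration):

```python
# Hubble's law: on average, distance = recession velocity / H0.
H0 = 70.0  # Hubble constant, in km/s per megaparsec (approximate, assumed)

def average_distance_mpc(recession_velocity_km_s):
    """Distance (in Mpc) implied, on average, by a recession velocity."""
    return recession_velocity_km_s / H0

# A galaxy receding at 7000 km/s lies, on average, 100 Mpc away.
print(average_distance_mpc(7000.0))  # 100.0

# But a typical peculiar velocity of a few hundred km/s shifts the
# inferred line-of-sight position by several Mpc:
smear = average_distance_mpc(7300.0) - average_distance_mpc(7000.0)
print(round(smear, 1))  # 4.3
```

So even with a perfectly measured recession velocity, the random extra few hundred km/s each galaxy has introduces an error of a few Mpc in where we place it along our line of sight.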

Both of these effects combine to make our knowledge of where a galaxy is along our line of sight imperfect. By comparison, our knowledge of where a galaxy is on the sky is much better: we can directly see that it is along one line of sight and not another. This means that the process used to detect the over/under-dense regions is not actually isotropic. We are much less certain about positions along our line of sight than perpendicular to it. The effect of this uncertainty is to smear structures out along our line of sight. The net result is that an algorithm that assumes it has perfect knowledge of where each galaxy is, applied to the real situation where we don't, will preferentially find structures that are aligned along the line of sight. In other words, the algorithm doesn't find structures that are on average spheres, but structures that are on average cylinders.

Curiously, a cylinder with an equal volume and equal degree of over/under-density will also give a bigger ISW signal than a sphere, for a number of reasons. Firstly, a photon travelling along the cylinder's axis will remain inside it for longer, so the integrated Sachs-Wolfe effect will be bigger. Secondly, spherically shaped under-densities are less likely than cylindrical ones. Look at any picture of structure in the universe and you'll see many more large-scale filaments than large-scale balls.
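The first reason can be sketched with a crude path-length comparison. This is nothing like a full ISW calculation, and the sizes below are hypothetical, chosen only for illustration:

```python
# Toy comparison: the temperature shift a photon picks up grows, roughly,
# with the time it spends inside the decaying potential, i.e. with its
# path length through the structure. Hypothetical sizes for illustration.
radius = 60.0   # Mpc: radius of the sphere (and of the cylinder's cross-section)
length = 240.0  # Mpc: the cylinder's extent along the line of sight

sphere_path = 2.0 * radius   # a photon crossing the sphere through its centre
cylinder_path = length       # a photon travelling down the aligned cylinder

print(cylinder_path / sphere_path)  # 2.0: crudely, double the heating/cooling
```

A line-of-sight-aligned cylinder a few times longer than the sphere's diameter keeps the photon inside the structure that much longer, and the ISW shift scales accordingly (to this crude level of approximation).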

This possible resolution starts to appear extremely compelling and, for a while after we first spotted this lead, I was convinced that it definitely was the resolution. I am still particularly attracted to this solution but, as I showed right at the beginning of my last post, people have made maps of the total ISW signal expected over the entire sky. Looking at these maps and making by-eye estimates of the maximum possible signal, it isn't clear that even very long filaments could generate an effect large enough to completely resolve this mystery (though nor is it clear that they can't). Nevertheless, the full calculation remains to be done and I'm still on the fence as to what the result will be.

If this does turn out to be the true resolution of this mystery, it would be an amusing one. By neglecting the redshift errors in their measurements, the observers will have accidentally found a bigger signal than they would have found had they not. How very serendipitous.

One other remaining possibility that deserves a mention is that this is just a statistical fluke. There is a small probability (less than 1%) that a universe in the standard cosmological model could throw up primordial CMB fluctuations that mimic this signal. We've measured a lot of things in the sky and if we measure enough we do expect the occasional fluke. If this is true, then the universe has played an unfortunate trick on us. If so, the only way to resolve this is to make more measurements of more things and not see any more “flukes”. (Sesh, one of my collaborators in this calculation, points out in this comment that the probability of this being a fluke is actually less than 0.3% - i.e. 0.003)
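The conversion from a number of "sigmas" to a fluke probability is a standard Gaussian tail calculation, which can be sketched with Python's standard library alone:

```python
import math

def two_sided_gaussian_tail(n_sigma):
    """Probability that a Gaussian variable lands more than n_sigma
    standard deviations away from its mean, in either direction."""
    return math.erfc(n_sigma / math.sqrt(2.0))

# 3 sigma corresponds to a fluke probability of roughly 0.27%:
print(f"{two_sided_gaussian_tail(3.0):.4f}")  # 0.0027
```

An 11 micro-Kelvin signal with a 3 micro-Kelvin standard error sits more than 3 sigma from zero, which is where the "less than 0.3%" figure comes from.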

What now to solve the case?

How people (including myself and my collaborators) are trying to resolve this mystery must wait for future posts... (in the meantime you can vote on the answer in the poll on the side)

Now continued here

Twitter: @just_shaun

5 comments:

  1. Seeing as at least one other person has now voted in the poll... if you do have a guess, feel free to write here why you chose the option you chose.

    I chose "noisy foregrounds", partially because this would be the least interesting real solution and it's best not to get one's hopes up and partially because it is the solution that I understand the physics of the least, so I probably also just don't understand all the reasons why it won't work. Some people who do know more of that stuff than me are sure it can't be foregrounds (some aren't).

    Time will tell...

  2. I see that someone has voted for the answer to be "a statistical fluke". I'd just like to point out that there is a less than 0.3% chance of it being a statistical fluke (that's what it means to be more than 3 sigma away from the expectation). That's not to say it can't be a statistical fluke of course - and this is why 3 sigma evidence is not regarded as conclusive in fields where it is possible to have better standards, say in particle physics - but do you really think that the chance of any of the other explanations being right is less than 0.3%?

    I'd understand a position that said "yeah, looks odd on the face of it, but Shaun, you and your collaborators probably made a mistake in your calculations so it isn't a mystery after all" (I'd disagree, but I'd understand it). But that's not the same as saying it is a statistical fluke.

    I voted for "other", by the way, because this includes the possibility that we made a mistake or some over-simplifying assumption, the possibility that the actual observation had some (undetected) flaw in it, and every other possibility that we might not yet have thought of. On balance I thought that won.

    1. Both these points are really good points. I'm going to link to this comment where I mention the fluke option.

      And, for anyone reading, it's actually quite a bit less than 0.3% once you take into account how conservative we were in our calculation.

  3. Shaun, why do you think that the discovery of non-Gaussianity "would be as big a discovery as the discovery of dark energy 14 years ago, if not bigger"?

    Ok, so single-field inflation would be wrong. But there are many models of inflation with more than one field. And if we are going to have to postulate new scalar fields at higher energy scales, it is not clear to me why we should not expect more than one. On the other hand, having a tiny but non-zero cosmological constant did present a fine-tuning problem, which many people still take very seriously (as we both know!).

    1. Hah, as we both know indeed. Subir probably wouldn't approve of how lazily I am lumping the apparent accelerated expansion under the label dark energy. I don't really approve either, but it is easier, for now.

      I guess the significance of the discoveries depends on the perspective. I wasn't trying to make a strong claim about the comparison between the two discoveries. I was just trying to stress that n-G would be a big discovery.

      Non-Gaussianity would be telling us something non-trivial about the universe 14 billion years ago. Not necessarily something mysterious, I agree, but definitely something interesting. It would also be telling us, very indirectly (unfortunately), about how nature behaves at energies that aren't able to be probed by any other means.

      Dark energy is more mysterious because we think we better know what to expect from it. So, yes, in that sense it was a big discovery because it required something of a paradigm shift. And yes, because we don't have quite as clear expectations for inflation, n-G wouldn't require so much of a paradigm shift. But both discoveries would be revealing just as much new stuff about the universe.

      When I claimed that n-G might be a bigger discovery I was coming from the perspective that learning anything about nature at 10^(any number bigger than 3) GeV would be incredible.

