Perturbation theory: are we covering up new physics?

A timely award of the J. J. Sakurai Prize acknowledges how hard it can sometimes be to pin down what the Standard Model really thinks

Bang! A graphical representation of a proton-proton collision. Loosely speaking, the red, yellow and some blue bits are the skeleton, and the green stuff is squishy. Credit: Frank Krauss, Sherpa.

We're measuring all kinds of stuff at the Large Hadron Collider right now. The question we're addressing could be summed up as

Does the Standard Model of particle physics work at LHC energies or not?


If it works, there is a Higgs boson but not much else new. If it doesn't, there might not be a Higgs but there must be something weird and new going on. As I have said before, the energy range of the LHC is special.

This raises the question (for me at least):

How well do we really understand the predictions of the Standard Model at these energies?

This isn't an easy one. In general we can't solve the Standard Model exactly. We use approximations. Most of these rely on the fact that the "coupling", that is, the strength of the fundamental forces, is not very large.

The strength of a force can be expressed as a number. If it were 0.1, say, then the chance of two particles interacting would be proportional to 0.1 x 0.1 = 0.01. For three to interact it would be 0.1 x 0.1 x 0.1 = 0.001, for four it would be 0.0001, and so on. This means that when the coupling is small, you can ignore the contributions which involve more than, say, four particles - they are just a small perturbation on the main result, because they are multiplied by 0.1 x 0.1 x 0.1 x 0.1 x 0.1 = 0.00001. They don't change the result much. This is "perturbation theory". It is accurate if the coupling is small, that is, if the force is weak.
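
To make the arithmetic concrete, here is a toy sketch in Python. The coefficients multiplying each power of the coupling are invented purely for illustration (a real calculation would get them from Feynman diagrams), but the pattern is the point:

```python
# Toy perturbative series: the contribution at "order n" (roughly, n extra
# particles involved) scales like coupling**n. Coefficients are invented.

def partial_sums(coupling, coeffs):
    """Running total of the series after including each successive order."""
    total, sums = 0.0, []
    for n, c in enumerate(coeffs):
        total += c * coupling**n
        sums.append(total)
    return sums

coeffs = [1.0, 0.8, 1.2, 0.9, 1.1]  # invented O(1) coefficients

for order, s in enumerate(partial_sums(0.1, coeffs)):
    print(f"including order {order}: {s:.5f}")
# Each extra order shifts the answer by roughly ten times less than the
# last, so truncating after a few terms barely changes the result.
```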

This is mostly true at LHC energies, except for when it isn't.

The bits where it isn't mostly involve the strong nuclear force, Quantum Chromodynamics. That's why it's called the strong force. (We don't intentionally obfuscate; it's tough enough as it is.)

For example, aspects of how quarks and gluons are distributed inside the protons we collide can't be calculated from first principles. Neither can the way the quarks and gluons turn into new hadrons in the end. We have some constraints from our theory, we have basic stuff like the conservation of energy and momentum, and we have a lot of data from other places. But we can't use perturbation theory. The coupling gets close to one, and 1 x 1 x 1 x ... = 1. This means that no matter how many particles you include in your calculation, you don't converge on a solid answer. In the end we have to make educated guesses, or models. And these are always adjustable.
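
The same toy series from above shows what goes wrong. Run it with the coupling set to one (the coefficients are again invented; this is a sketch of the convergence problem, not of QCD):

```python
# Same toy series, but with the coupling pushed up to one, as happens for
# the strong force at low energies. Coefficients again invented.

coupling = 1.0
coeffs = [1.0, 0.8, 1.2, 0.9, 1.1, 1.0, 0.95]

total = 0.0
for n, c in enumerate(coeffs):
    term = c * coupling**n
    total += term
    print(f"order {n}: term = {term:.2f}, running total = {total:.2f}")
# Every extra order moves the answer by about as much as the last one did.
# There is no point at which you can stop and call the remainder a "small
# perturbation" - the truncated series never settles on an answer.
```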

A long time ago Lily wrote a piece in which she, and some commenters, worried that we might be adjusting these models in such a way that we actually covered up exciting new physics. This is a real worry. To avoid it, you need to have calculations of what you know, done with perturbation theory, linked up to models of what you don't know very well. I think of this rather gruesomely as a skeleton of hard predictions inside a squidgy body of best guesses. The body can change shape. You can push in its stomach quite painlessly, but you really know about it if you break a bone.

Anyway, marrying the squidgy models to the rigid perturbation theory is mostly done using Monte Carlo event generators. These not only encode much of what we know about what happens when particles collide, but they are also an invaluable tool in designing new experiments and working out how your existing experiment is responding to data. "Monte Carlo" is an allusion to the fact that they use a lot of random numbers, like roulette, or taxmen.
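
For a flavour of what "a lot of random numbers" means in practice, here is a minimal sketch - a toy, not how a real generator is written. It draws random scattering angles from a 1 + cos²θ distribution (the familiar angular shape for e+e- → mu+mu-, used here just as an example) via the accept-reject method, the same basic trick that real generators such as Sjostrand's PYTHIA apply at vastly greater sophistication:

```python
# Toy "event generator": draw random scattering angles from a probability
# distribution p(cos_theta) proportional to 1 + cos_theta**2, using the
# accept-reject method that Monte Carlo generators also rely on.

import random

def sample_cos_theta(rng):
    """Draw cos(theta) from p(c) ~ 1 + c**2 on [-1, 1] by accept-reject."""
    while True:
        c = rng.uniform(-1.0, 1.0)           # propose a trial value
        if rng.uniform(0.0, 2.0) < 1 + c*c:  # keep it with probability p(c)/max(p)
            return c

rng = random.Random(42)  # seeded so the "random" result is reproducible
events = [sample_cos_theta(rng) for _ in range(100_000)]

# "Measure" something on the simulated events, e.g. the fraction in the
# central region, and compare it with what the real data show.
central = sum(1 for c in events if abs(c) < 0.5) / len(events)
print(f"fraction of toy events with |cos theta| < 0.5: {central:.3f}")
```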

As a theorist you can sometimes lose out for being involved in one of these generators. You can have a paper with thousands of citations and people will say "it's only a Monte Carlo thing", whereas with a similar number of citations in string theory you might stride the world like a colossus, despite the fact that the generator will describe data while string theory isn't even wrong.

Monte Carlos aren't the only way, but in general they are part of an effort to understand the implications of the Standard Model and to try to get it to make as many precise predictions as possible. The American Physical Society's J. J. Sakurai Prize has just been awarded to three theorists, Bryan Webber, Guido Altarelli and Torbjorn Sjostrand, who work in this area:

For key ideas leading to the detailed confirmation of the Standard Model of particle physics, enabling high energy experiments to extract precise information about Quantum Chromodynamics, electroweak interactions and possible new physics.

This makes me very happy because, for one thing, calculations and code written by all three of them are essential to understanding pretty much everything we're doing at the LHC, including making sure we don't cover up any new physics by mistake. I'm proud to have worked closely with two of them.

Another interesting fact, while I'm on a roll, is that if there's no Higgs, the next best guess as to what nature is up to probably involves the weak force becoming strong*, and then we may really be perturbed.


* See here for what might be a good quote on that.



Comments

  • nocod

    12 October 2011 1:33AM

    Once you have metaphysics and quantum theory in the blood there is nothing to stop you trying ketamine, of the finest kind, and getting to actually see it.
    It's the key difference between mathematicians and physicists.

  • 31428571J

    12 October 2011 1:47AM

    O how I love 'perturbation theory'. (almost as much as I love 'set theory')

    ... This means when the coupling is small, you can ignore the contributions which involve more than say four particles - they are just a small perturbation on the main result ...

    Always thought though that it could do with a touch of the OCD's :-)

  • Gelion

    12 October 2011 8:01AM

    The Standard Model has manifest problems.

    Einstein's great blunder, his Cosmological Constant, is back in the form of Dark Energy which is still inflating the Universe - or not, depending on your point of view, though no one understands why or what it is.

    And then there is the Big Bang, pre-inflation, which many physicists now do not believe happened at all, it not being needed in some theories for the Universe to reach the first round of inflation.

    There are also issues about having multiverses and more than 4 dimensions. Einstein does not need them, but other theories do - there is a great argument over on New Scientist every time a piece of data comes up that might or might not add to the suggestion that these are needed.

    And then last month there was "proof" - though there are now a host of theories trying to explain it - of particles going faster than light. Which the media thought meant time travel was possible.

    So, 100 years after Einstein's relativity theory, which is the backbone of the Standard Model, there are still very serious questions about what is going on.

  • oharar

    12 October 2011 9:37AM

    So, 100 years after Einstein's relativity theory, which is the backbone of the Standard Model, there are still very serious questions about what is going on.


    I'm a statistician (I wrap my Monte Carlo simulations up in Markov chains) so I only read physics blogs, but isn't the standard model a product of quantum physics, not relativity?

  • Rhcbiz

    12 October 2011 10:03AM

    Given what you say, how can even remotely reasonable probability assessments be made of how potentially dangerous some LHC experiments might be?

  • MightyDrunken

    12 October 2011 10:16AM

    I'm not sure I see the problem here. Of course there is always the chance that the theory is incorrect, but isn't that what the LHC is there to test?
    If Professor Butterworth is saying that we may misunderstand the results of the LHC and miss some vital clues, then could he give an example of what might be missed, and how?

    oharar: the standard model is a synthesis of quantum mechanics and relativity, though it does not fully incorporate gravitation.

    Rhcbiz: we know the LHC is very likely to be safe, as more powerful particle interactions happen in the upper atmosphere due to ultra-high-energy cosmic rays. We are still here.

  • ChrisBenton

    12 October 2011 10:21AM

    Rhcbiz:

    Given what you say, how can even remotely reasonable probability assessments be made of how potentially dangerous some LHC experiments might be?

    The safety assessments aren't based on the fundamental physics. Instead, they're based on the fact that high energy particles from outer space are constantly bombarding our atmosphere. If the LHC posed the slightest risk, then the disaster would already have occurred naturally.

  • ChrisBenton

    12 October 2011 10:29AM

    oharar

    I'm a statistician (I wrap my Monte Carlo simulations up in Markov chains) so I only read physics blogs, but isn't the standard model a product of quantum physics, not relativity?

    It's a product of both. Quantum mechanics is compatible with special relativity, and the standard model is derived from this relativistic quantum mechanics.

    The problem between quantum mechanics and relativity emerges when you try to extend the model to encompass general relativity.

  • e5equalmt

    12 October 2011 4:26PM

    Quite frankly it all appears to be oxymoronic!

  • ifsowhyso

    12 October 2011 5:30PM

    Well exactly. I've been saying this for years.


    Me too, thisismycreed. ;-)

  • TigerRepellingRock

    12 October 2011 7:24PM

    Damn right! Good to see theorists who spend their careers being usefully wrong getting the recognition they deserve.

    Gelion:
    You seem to be talking about the standard model of cosmology, which is a whole other thing. The standard model of particle physics has generally had the problem that it doesn't have any problems.

  • djbroadhurst

    13 October 2011 8:46AM

    John wrote:

    >> The American Physical Society's J. J. Sakurai Prize has just been awarded to three theorists, Bryan Webber, Guido Altarelli and Torbjorn Sjostrand <<

    This is the 2012 Sakurai prize. John's choice of the past tense is in line with current CERN neutrino practice.

    David Broadhurst

  • campanel

    13 October 2011 4:26PM

    @MightyDrunken: I'm not sure I see the problem here. Of course there is always the chance that the theory is incorrect, but isn't that what the LHC is there to test?

    This is exactly the point. What is the theory? The predictions made are extremely complicated, like the distributions of energy and direction for the various particles produced in a collision.
    So how do you know if your data agree with theory? You produce simulated collisions, based on your favourite theory (be it the Standard Model or any extension of it), and compare them to data. The particles produced in the simulated collisions can even be sent through an *extremely* detailed model of the detector, the electronics and the digitisation chain, so that in the end they can be reconstructed by the same code that reconstructs the data.
    All that is done with Monte Carlo simulations (so called because they are based on tossing random numbers, like roulette in a casino), and the three theorists who got the prize (well, I did not know Altarelli worked on Monte Carlos; maybe I'm too young) were pioneers in taking this technique to very high levels of precision and sophistication.

  • pppumpkineater

    15 October 2011 1:04AM

    First, Dirac's theory of quantum mechanics is in correspondence with special relativity because both theories are invariant under transformations generated by the special linear group L4. While Monte Carlo techniques are adequate from the epistemological point of view, and one can use them to continue to acquire knowledge in conjunction with experiment, I suspect that the mathematical idealist will never be satisfied with anything less than an ultimate unique static ontology - a theory of everything, not contingent upon experiment. Thus the Standard Model appears to them to be an asymptotic hack entailing weak notions of universality, similar to that of statistical mechanics with its maximum entropy, phase transitions, random Markov fields etc. What the Standard Model has going for it is that at least it is honest about uncertainty - so it is akin to Bayesian learning.

    String theory, being spectacular in its claims that are out of reach of the present experimental art, as any good theory should be, must at least offer up some predictions - notwithstanding Deutsch's advocacy for explanations, which, if disciplined, is worthy. But the experimental art is being advanced by way of developments in the area of non-extensive and non-additive entropy. This may provide a way to augment QM with a non-linear statistical mechanics, based on, say, gamma function rather than Dirac function interpolations of discrete manifolds, in a way that is not ad hoc in its prior constraints. This may provide a super-statistical way to bring gravity into the quantum mechanical fold, as an empirical matter.

  • Gredel

    15 October 2011 1:51AM

    Quantum Physics, String Theory, Higgs boson...we are trekking a trail that is loosey goosey. As science goes so go we...

  • pppumpkineater

    15 October 2011 1:41PM

    @Gredel has a point, but reality is loosey goosey. We are entering the era of an empirically based science of the loosey goosey, as our cyber-physical information gathering and utilization infrastructure and appliances continue to rapidly evolve in the context of machine learning.

Comments on this page are now closed.
