Flip Tanedo | USLHC | USA


The Delirium over Beryllium

Thursday, August 25th, 2016

This post is cross-posted from ParticleBites.

Article: Particle Physics Models for the 17 MeV Anomaly in Beryllium Nuclear Decays
Authors: J.L. Feng, B. Fornal, I. Galon, S. Gardner, J. Smolinsky, T. M. P. Tait, F. Tanedo
Reference: arXiv:1608.03591 (Submitted to Phys. Rev. D)
Also featuring the results from:
— Gulyás et al., “A pair spectrometer for measuring multipolarities of energetic nuclear transitions” (description of the detector; 1504.00489; NIM)
— Krasznahorkay et al., “Observation of Anomalous Internal Pair Creation in 8Be: A Possible Indication of a Light, Neutral Boson” (experimental result; 1504.01527; PRL; note the PRL version differs from the arXiv version)
— Feng et al., “Protophobic Fifth-Force Interpretation of the Observed Anomaly in 8Be Nuclear Transitions” (phenomenology; 1604.07411; PRL)

Editor’s note: the author is a co-author of the paper being highlighted. 

Recently there’s been some press (see links below) regarding early hints of a new particle observed in a nuclear physics experiment. In this bite, we’ll summarize the result that has raised the eyebrows of some physicists, and the hackles of others.

A crash course on nuclear physics

Nuclei are bound states of protons and neutrons. They can have excited states analogous to the excited states of atoms, which are bound states of nuclei and electrons. The particular nucleus of interest is beryllium-8, which has four neutrons and four protons and which you may know from the triple-alpha process. There are three nuclear states to be aware of: the ground state, the 18.15 MeV excited state, and the 17.64 MeV excited state.

Beryllium-8 excited nuclear states. The 18.15 MeV state (red) exhibits an anomaly. Both the 18.15 MeV and 17.64 MeV states decay to the ground state through a magnetic, p-wave transition. Image adapted from Savage et al. (1987).

Most of the time the excited states fall apart into a lithium-7 nucleus and a proton. But sometimes, these excited states decay into the beryllium-8 ground state by emitting a photon (γ-ray). Even more rarely, these states can decay to the ground state by emitting an electron–positron pair from a virtual photon: this is called internal pair creation and it is these events that exhibit an anomaly.

The beryllium-8 anomaly

Physicists at the Atomki nuclear physics institute in Hungary were studying the nuclear decays of excited beryllium-8 nuclei. The team, led by Attila J. Krasznahorkay, produced beryllium excited states by bombarding lithium-7 nuclei with protons.


Beryllium-8 excited states are prepared by bombarding lithium-7 with protons.

The proton beam is tuned to very specific energies so that one can ‘tickle’ specific beryllium excited states. When the protons have around 1.03 MeV of kinetic energy, they excite lithium into the 18.15 MeV beryllium state. This has two important features:

  1. Picking the proton energy allows one to only produce a specific excited state so one doesn’t have to worry about contamination from decays of other excited states.
  2. Because the 18.15 MeV beryllium nucleus is produced at resonance, one has a very high yield of these excited states. This is very good when looking for very rare decay processes like internal pair creation.
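As a quick sanity check of this resonance condition, here is a back-of-envelope sketch in Python (my own, not from the papers; it uses the p + lithium-7 → beryllium-8 Q-value of about 17.25 MeV and approximates the lithium/(lithium + proton) mass ratio as 7/8):

# Excitation energy of the beryllium-8 state produced when a proton with lab
# kinetic energy E_p (MeV) strikes a lithium-7 nucleus at rest. Only a fraction
# m_Li/(m_Li + m_p) ~ 7/8 of E_p is available in the center of mass.
def excitation_energy(E_p, Q=17.25):  # Q-value of p + 7Li -> 8Be, in MeV
    return Q + (7.0 / 8.0) * E_p

print(excitation_energy(1.03))  # ~18.15 MeV: the state with the anomaly
print(excitation_energy(0.44))  # ~17.64 MeV: the lower excited state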

What one expects is that most of the electron–positron pairs have a small opening angle, with the number of pairs smoothly decreasing at larger opening angles.


Expected distribution of opening angles for ordinary internal pair creation events. Each line corresponds to a nuclear transition that is electric (E) or magnetic (M) with a given orbital quantum number, l. The beryllium transitions that we’re interested in are mostly M1. Adapted from Gulyás et al. (1504.00489).

Instead, the Atomki team found an excess of events with large electron–positron opening angle. In fact, even more intriguing: the excess occurs around a particular opening angle (140 degrees) and forms a bump.


Number of events (dN/dθ) at different electron–positron opening angles, plotted for different proton energies (Ep). For Ep = 1.10 MeV, there is a pronounced bump at 140 degrees which does not appear to be explainable by ordinary internal pair conversion. This may be suggestive of a new particle. Adapted from Krasznahorkay et al., PRL 116, 042501.

Here’s why a bump is particularly interesting:

  1. The distribution of ordinary internal pair creation events is smoothly decreasing and so this is very unlikely to produce a bump.
  2. Bumps can be signs of new particles: if there is a new, light particle that can facilitate the decay, one would expect a bump at an opening angle that depends on the new particle mass.

Schematically, the new particle interpretation looks like this:


Schematic of the Atomki experiment and the new particle (X) interpretation of the anomalous events. In summary: protons of a specific energy bombard stationary lithium-7 nuclei and excite them to the 18.15 MeV beryllium-8 state. These decay into the beryllium-8 ground state. Some of these decays are mediated by the new X particle, which then decays into electron–positron pairs at a certain opening angle that are detected in the Atomki pair spectrometer. Image from 1608.03591.

As an exercise for those with a background in special relativity, one can use the relation (pe+ + pe−)² = mX² to prove the result:

mX² ≈ 2 Ee+ Ee− (1 − cos θ)   (neglecting the electron mass)

This relates the mass of the proposed new particle, X, to the opening angle θ and the energies E of the electron and positron. The opening angle bump would then be interpreted as a new particle with a mass of roughly 17 MeV. To match the observed number of anomalous events, the rate at which the excited beryllium decays via the X boson must be 6×10⁻⁶ times the rate at which it decays into a γ-ray.
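Here is a minimal numerical sketch of that relation (my own Python snippet, not from the paper; it neglects the electron mass, which is a good approximation at these energies):

import math

def m_X(E_plus, E_minus, theta_deg):
    # Invariant mass (MeV) of an e+e- pair with energies E+ and E- (MeV) and
    # opening angle theta: mX^2 = (p(e+) + p(e-))^2 ~ 2 E+ E- (1 - cos theta).
    theta = math.radians(theta_deg)
    return math.sqrt(2.0 * E_plus * E_minus * (1.0 - math.cos(theta)))

# A symmetric pair sharing the ~18 MeV transition energy, opening at 140 degrees:
print(m_X(9.0, 9.0, 140.0))  # ~16.9 MeV, right in the anomaly's ballpark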

The anomaly has a significance of 6.8σ. This means that it’s highly unlikely to be a statistical fluctuation, as the 750 GeV diphoton bump appears to have been. Indeed, the conservative bet would be some not-understood systematic effect, akin to the 130 GeV Fermi γ-ray line.

The beryllium that cried wolf?

Some physicists are concerned that beryllium may be the ‘boy that cried wolf,’ and point to papers by the late Fokke de Boer spanning 1996 to 2001. de Boer made strong claims about evidence for a new 10 MeV particle in the internal pair creation decays of the 17.64 MeV beryllium-8 excited state. These claims didn’t pan out, and in fact the instrumentation paper by the Atomki experiment rules out that original anomaly.

The proposed evidence for “de Boeron” is shown below:


The de Boer claim for a 10 MeV new particle. Left: distribution of opening angles for internal pair creation events in an E1 transition of carbon-12. This transition has an energy splitting similar to the beryllium-8 17.64 MeV transition and shows good agreement with expectations, as seen in the flat “signal − background” in the bottom panel. Right: the same analysis for the M1 internal pair creation events from the 17.64 MeV beryllium-8 state. The “signal − background” now shows a broad excess across all opening angles. Adapted from de Boer et al., PLB 368, 235 (1996).

When the Atomki group studied the same 17.64 MeV transition, they found that including a key background component—subdominant E1 decays from nearby excited states, which were not included in the original de Boer analysis—dramatically improved the fit. This was the last nail in the coffin for the proposed 10 MeV “de Boeron.”

However, the Atomki group also highlights how their new anomaly in the 18.15 MeV state behaves differently. Unlike the broad excess in the de Boer result, the new excess is concentrated in a bump. There is no known way for additional internal pair creation backgrounds to conspire to produce a bump in the opening angle distribution; as noted above, all of these distributions are smoothly falling.

The Atomki group goes on to suggest that the new particle appears to fit the bill for a dark photon, a reasonably well-motivated copy of the ordinary photon that differs in its overall interaction strength and in having a non-zero (17 MeV?) mass.

Theory part 1: Not a dark photon

Once the Atomki result was published and peer reviewed in Physical Review Letters, the game was afoot for theorists to understand how it would fit into a theoretical framework like the dark photon. A group from UC Irvine, the University of Kentucky, and UC Riverside found that, actually, dark photons have a hard time fitting the anomaly simultaneously with other experimental constraints. In the visual language of this recent ParticleBite, the situation was this:


It turns out that the minimal model of a dark photon cannot explain the Atomki beryllium-8 anomaly without running afoul of other experimental constraints. Image adapted from this ParticleBite.

The main reason for this is that a dark photon with the mass and interaction strength needed to fit the beryllium anomaly would necessarily have been seen by the NA48/2 experiment. This experiment looks for dark photons in the decay of neutral pions (π0). These pions typically decay into two photons, but if there’s a 17 MeV dark photon around, some fraction of those decays would instead go into dark photon–ordinary photon pairs. The non-observation of these distinctive decays rules out the dark photon interpretation.

The theorists then decided to “break” the dark photon theory in order to try to make it fit. They generalized the types of interactions that a new photon-like particle, X, could have, allowing protons, for example, to have completely different charges than electrons rather than having exactly opposite charges. Doing this does gross violence to the theoretical consistency of a theory—but the goal was just to see what a new particle interpretation would have to look like. They found that if a new photon-like particle talked to neutrons but not protons—that is, if the new force were protophobic—then a theory might hold together.

Schematic description of how model-builders “hacked” the dark photon theory to fit the beryllium anomaly while remaining consistent with other experiments. This hack isn’t pretty—and indeed comes at the cost of potentially invalidating the mathematical consistency of the theory—but the exercise demonstrates how a complete theory might have to behave. Image adapted from this ParticleBite.

Theory appendix: pion-phobia is protophobia

Editor’s note: what follows is for readers with some physics background interested in a technical detail; others may skip this section.

How does a new particle that is allergic to protons avoid the neutral pion decay bounds from NA48/2? Pions decay into pairs of photons through the well-known triangle diagrams of the axial anomaly. The decay into photon–dark photon pairs proceeds through similar diagrams. The goal is then to make sure that these diagrams cancel.

A cute way to look at this is to assume that at low energies, the relevant particles running in the loop aren’t quarks, but rather nucleons (protons and neutrons). In fact, since only the proton can talk to the photon, one only needs to consider proton loops. Thus if the new photon-like particle, X, doesn’t talk to protons, then there’s no diagram for the pion to decay into γX. This would be great if the story weren’t completely wrong.


Avoiding NA48/2 bounds requires that the new particle, X, is pion-phobic. It turns out that this is equivalent to X being protophobic. The correct way to see this is on the left, making sure that the contribution of up-quark loops cancels the contribution from down-quark loops. A slick (but naively completely wrong) calculation is on the right, arguing that effectively only protons run in the loop.

The correct way of seeing this is to treat the pion as a quantum superposition of an up–anti-up and down–anti-down bound state, and then make sure that the X charges are such that the contributions of the two states cancel. The resulting charges turn out to be protophobic.
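To make that cancellation concrete, here is a quick worked version (my own shorthand; εq denotes the X charge of quark q in units of e). Since π⁰ ∝ (uū − dd̄)/√2, the anomaly diagrams give

A(π⁰ → γX) ∝ Nc (Qu εu − Qd εd).

Setting this to zero (pion-phobia) requires (2/3) εu = (−1/3) εd, i.e. εd = −2εu. The proton’s X charge is then εp = 2εu + εd = 0 (protophobic), while the neutron’s is εn = εu + 2εd = −3εu, which need not vanish.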

The fact that the “proton-in-the-loop” picture gives the correct charges, however, is no coincidence. Indeed, this was precisely how Jack Steinberger calculated the correct pion decay rate. The key here is whether one treats the quarks/nucleons linearly or non-linearly in chiral perturbation theory. The relation to the Wess-Zumino-Witten term—which is what really encodes the low-energy interaction—is carefully explained in chapter 6a.2 of Georgi’s revised Weak Interactions.

Theory part 2: Not a spin-0 particle

The above considerations focus on a new particle with the same spin and parity as a photon (spin-1, parity odd). Another result of the UCI study was a systematic exploration of other possibilities. They found that the beryllium anomaly could not be consistent with spin-0 particles. For a parity-even, spin-0 particle, one cannot simultaneously conserve angular momentum and parity in the decay of the excited beryllium-8 state. (Parity-violating effects are negligible at these energies.)


Parity and angular momentum conservation prohibit a “dark Higgs” (parity even scalar) from mediating the anomaly.
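Here is the quick version of that selection-rule argument, using the spin-parity assignments from the first figure above: the 18.15 MeV parent is JP = 1⁺ and the beryllium-8 ground state is 0⁺. For the decay 1⁺ → 0⁺ + X(0⁺), angular momentum conservation forces the final state to have orbital angular momentum L = 1, but then the final-state parity is (+1)(+1)(−1)¹ = −1, which cannot match the initial +1. A pseudoscalar (0⁻) passes this particular test, since (+1)(−1)(−1)¹ = +1, which is why it must instead be excluded by the axion-like particle bounds below.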

For a parity-odd pseudoscalar, the bounds on axion-like particles at 20 MeV suffocate any reasonable coupling. Measured in terms of the pseudoscalar–photon–photon coupling (which has dimensions of inverse GeV), this interaction is ruled out down to the inverse Planck scale.


Bounds on axion-like particles exclude a 20 MeV pseudoscalar with couplings to photons stronger than the inverse Planck scale. Adapted from 1205.2671 and 1512.03069.

Additional possibilities include:

  • Dark Z bosons, cousins of the dark photon with spin-1 but indeterminate parity. These are strongly constrained by atomic parity violation.
  • Axial vectors, spin-1 bosons with positive parity. These remain a theoretical possibility, though their unknown nuclear matrix elements make it difficult to write a predictive model. (See section II.D of 1608.03591.)

Theory part 3: Nuclear input

The plot thickens when one also includes results from nuclear theory. Recent results from Saori Pastore, Bob Wiringa, and collaborators point out a very important fact: the 18.15 MeV beryllium-8 state that exhibits the anomaly and the 17.64 MeV state which does not are actually closely related.

Recall (e.g. from the first figure at the top) that the 18.15 MeV and 17.64 MeV states are both spin-1 and parity-even. They differ in mass and in one other key aspect: the 17.64 MeV state carries isospin charge, while the 18.15 MeV state and the ground state do not.

Isospin is the nuclear symmetry that relates protons to neutrons and is tied to electroweak symmetry in the full Standard Model. At nuclear energies, isospin charge is approximately conserved. This brings us to the following puzzle:

If the new particle has mass around 17 MeV, why do we see its effects in the 18.15 MeV state but not the 17.64 MeV state?

Naively, if the emitted particle, X, carries no isospin charge, then isospin conservation prohibits the decay of the 17.64 MeV state through emission of an X boson. However, the Pastore et al. result tells us that the isospin-neutral and isospin-charged states actually mix quantum mechanically, so that the observed 18.15 and 17.64 MeV states are mixtures of iso-neutral and iso-charged states. In fact, this mixing is rather large, with a mixing angle of around 10 degrees!

The result of this is that one cannot invoke isospin conservation to explain the non-observation of an anomaly in the 17.64 MeV state. In fact, the only way out is to assume that the mass of the X particle is on the heavier side of the experimentally allowed range. The rate for emission goes like the 3-momentum cubed (see section II.E of 1608.03591), so a small increase in the mass can suppress the rate of emission from the lighter state by a lot.
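To get a feel for the numbers, here is a rough sketch of my own (naive two-body kinematics with nuclear recoil neglected, not the full nuclear rate formula):

import math

def p_X(E_transition, m_X):
    # Momentum (MeV) of an X of mass m_X carrying away (approximately) the full
    # transition energy: |p| = sqrt(E^2 - m^2).
    return math.sqrt(max(E_transition**2 - m_X**2, 0.0))

# Emission rate from the 17.64 MeV state relative to the 18.15 MeV state,
# using rate proportional to |p_X|^3:
for m in (16.7, 17.3):
    ratio = (p_X(17.64, m) / p_X(18.15, m))**3
    print(f"m_X = {m} MeV: relative suppression = {ratio:.2f}")
# ~0.51 for 16.7 MeV and ~0.25 for 17.3 MeV: nudging m_X up by half an MeV
# halves the relative rate from the lighter state.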

The UCI collaboration of theorists went further and extended the Pastore et al. analysis to include a phenomenological parameterization of explicit isospin violation. Independent of the Atomki anomaly, they found that including isospin violation improved the fit for the 18.15 MeV and 17.64 MeV electromagnetic decay widths within the Pastore et al. formalism. The results of including all of the isospin effects end up changing the particle physics story of the Atomki anomaly significantly:


The rate of X emission (colored contours) as a function of the X particle’s couplings to protons (horizontal axis) versus neutrons (vertical axis). The best fit for a 16.7 MeV new particle is the dashed line in the teal region. The vertical band is the region allowed by the NA48/2 experiment. Solid lines show the dark photon and protophobic limits. Left: the case for perfect (unrealistic) isospin. Right: the case when isospin mixing and explicit violation are included. Observe that incorporating realistic isospin happens to have only a modest effect in the protophobic region. Figure from 1608.03591.

The results of the nuclear analysis are thus that:

  1. An interpretation of the Atomki anomaly in terms of a new particle tends to push for a slightly heavier X mass than the reported best fit. (Remark: the Atomki paper does not do a combined fit for the mass and coupling, nor does it report the difficult-to-quantify systematic errors associated with the fit. This information is important for understanding the extent to which the X mass can be pushed to be heavier.)
  2. The effects of isospin mixing and violation are important to include, especially as one drifts away from the purely protophobic limit.

Theory part 4: towards a complete theory

The theoretical structure presented above gives a framework to do phenomenology: fitting the observed anomaly to a particle physics model and then comparing that model to other experiments. This, however, doesn’t guarantee that a nice—or even self-consistent—theory exists that can stretch over the scaffolding.

Indeed, a few challenges appear:

  • The isospin mixing discussed above means the X mass must be pushed to the heavier values allowed by the Atomki observation.
  • The “protophobic” limit is not obviously anomaly-free: simply asserting that known particles have arbitrary charges does not generically produce a mathematically self-consistent theory.
  • Atomic parity violation constraints require that the X couple in the same way to left-handed and right-handed matter. The left-handed coupling implies that X must also talk to neutrinos, which opens up new experimental constraints.

The Irvine/Kentucky/Riverside collaboration first notes the need for a careful experimental analysis of the actual mass range allowed by the Atomki observation, treating the new particle mass and coupling as simultaneously free parameters in the fit.

Next, they observe that protophobic couplings can be relatively natural. Indeed, the Standard Model Z boson is approximately protophobic at low energies—a fact well known to those hunting for dark matter with direct detection experiments. For exotic new physics, one can engineer protophobia through a phenomenon called kinetic mixing, where two force particles mix into one another. A tuned admixture of electric charge and baryon number, Q−B, is protophobic.

Baryon number, however, is an anomalous global symmetry—this means that one has to work hard to make a baryon-number boson that mixes with the photon (see 1304.0576 and 1409.8165 for examples). Another alternative is for the photon to kinetically mix not with baryon number but with the anomaly-free combination of “baryon minus lepton number,” Q−(B−L). This then forces one to apply additional model-building modules to deal with the neutrino interactions that come along with this scenario.

In the language of the ‘model building blocks’ above, the result of this process looks schematically like this:


A complete theory is mathematically self-consistent and satisfies existing constraints. The additional bells and whistles required for consistency make additional predictions for experimental searches. Pieces of the theory can sometimes be used to address other anomalies.

The theory collaboration presented examples of the two cases and pointed out how the additional ‘bells and whistles’ required may provide additional experimental handles to test these hypotheses. These are simple existence proofs for how complete models may be constructed.

What’s next?

We have delved rather deeply into the theoretical considerations of the Atomki anomaly. The analysis revealed some unexpected features with the types of new particles that could explain the anomaly (dark photon-like, but not exactly a dark photon), the role of nuclear effects (isospin mixing and breaking), and the kinds of features a complete theory needs to have to fit everything (be careful with anomalies and neutrinos). The single most important next step, however, is and has always been experimental verification of the result.

While the Atomki experiment continues to run with an upgraded detector, what’s really exciting is that a swath of experiments that are either ongoing or under construction will be able to probe the exact interactions required by the new particle interpretation of the anomaly. This means that the result can be independently verified or excluded within a few years. A selection of upcoming experiments is highlighted in section IX of 1608.03591:


Other experiments that can probe the new particle interpretation of the Atomki anomaly. The horizontal axis is the new particle mass; the vertical axis is its coupling to electrons (normalized to the electric charge). The dark blue band is the target region for the Atomki anomaly. Figure from 1608.03591, assuming a 100% branching ratio to electrons.

We highlight one particularly interesting search: recently, a joint team of theorists and experimentalists at MIT proposed a way for the LHCb experiment to search for dark photon-like particles with masses and interaction strengths that were previously unexplored. The proposal makes use of LHCb’s ability to pinpoint the production position of charged particle pairs and the copious amounts of D mesons that will be produced in Run 3 of the LHC. As seen in the figure above, the LHCb reach with this search thoroughly covers the Atomki anomaly region.

Implications

So where we stand is this:

  • There is an unexpected result in a nuclear experiment that may be interpreted as a sign for new physics.
  • The next steps in this story are independent experimental cross-checks; the threshold for a ‘discovery’ is whether another experiment can verify these results.
  • Meanwhile, a theoretical framework for understanding the results in terms of a new particle has been built and is ready-and-waiting. Some of the results of this analysis are important for faithful interpretation of the experimental results.

What if it’s nothing?

This is the conservative take—and indeed, we may well find that in a few years, the possibility that Atomki was observing a new particle will be completely dead. Or perhaps a source of systematic error will be identified and the bump will go away. That’s part of doing science.

Meanwhile, there are some important takeaways in this scenario. First is the reminder that the search for light, weakly coupled particles is an important frontier in particle physics. Second, for this particular anomaly, there are some neat lessons, such as a demonstration of how effective field theory can be applied to nuclear physics (see e.g. chapter 3.1.2 of the new book by Petrov and Blechman) and of how tweaking our models of new particles can avoid troublesome experimental bounds. Finally, it’s a nice example of how particle physics and nuclear physics are not-too-distant cousins and of how progress can be made in particle–nuclear collaborations—one of the Irvine group authors (Susan Gardner) is a bona fide nuclear theorist who was on sabbatical from the University of Kentucky.

What if it’s real?

This is a big “what if.” On the other hand, a 6.8σ effect is very unlikely to be a statistical fluctuation, and there is no known nuclear physics that would produce a new-particle-like bump given the analysis presented by the Atomki experimentalists.

The threshold for “real” is independent verification. If other experiments can confirm the anomaly, then this could be a huge step in our quest to go beyond the Standard Model. While this type of particle is unlikely to help with the Hierarchy problem of the Higgs mass, it could be a sign for other kinds of new physics. One example is the grand unification of the electroweak and strong forces; some of the ways in which these forces unify imply the existence of an additional force particle that may be light and may even have the types of couplings suggested by the anomaly.

Could it be related to other anomalies?

The Atomki anomaly isn’t the first particle physics curiosity to show up at the MeV scale. While none of these other anomalies are necessarily related to the type of particle required for the Atomki result (they may not even be compatible!), it is helpful to remember that the MeV scale may still have surprises in store for us.

  • The KTeV anomaly: The rate at which neutral pions decay into electron–positron pairs appears to be off from the expectations based on chiral perturbation theory. In 0712.0007, a group of theorists found that this discrepancy could be fit by a new particle with axial couplings. If one fixes the mass of the proposed particle to be 20 MeV, the resulting couplings happen to be in the same ballpark as those required for the Atomki anomaly. The important caveat here is that the parameters for an axial vector to fit the Atomki anomaly are unknown, and mixed vector–axial states are severely constrained by atomic parity violation.

The KTeV anomaly interpreted as a new particle, U. From 0712.0007.

  • The anomalous magnetic moment of the muon and the cosmic lithium problem: much of the progress in the field of light, weakly coupled forces comes from Maxim Pospelov. The anomalous magnetic moment of the muon, (g−2)μ, has a long-standing discrepancy from the Standard Model (see e.g. this blog post). While this may come from an error in the very, very intricate calculation and the subtle ways in which experimental data feed into it, Pospelov (and also Fayet) noted that the shift may come from a light (in the tens of MeV range!), weakly coupled new particle like a dark photon. Similarly, Pospelov and collaborators showed that a new light particle in the 1–20 MeV range may help explain another longstanding mystery: the surprising lack of lithium in the universe (APS Physics synopsis).

Could it be related to dark matter?

A lot of recent progress in dark matter has revolved around the possibility that in addition to dark matter, there may be additional light particles that mediate interactions between dark matter and the Standard Model. If these particles are light enough, they can change the way we expect to find dark matter, sometimes in surprising ways. One interesting avenue is called self-interacting dark matter and is based on the observation that these light force carriers can deform the dark matter distribution in galaxies in ways that seem to fit astronomical observations. A 20 MeV dark photon-like particle even fits the profile of what’s required by the self-interacting dark matter paradigm, though it is very difficult to make such a particle consistent with both the Atomki anomaly and the constraints from direct detection.

Should I be excited?

Given all of the caveats listed above, some feel that it is too early to be in “drop everything, this is new physics” mode. Others may take this as a hint that’s worth exploring further—as has been done for many anomalies in the recent past. For researchers, it is prudent to be cautious, and it is paramount to be careful; but so long as one does both, being excited about a new possibility is part of what makes our job fun.

For the general public, the tentative hopes of new physics that pop up—whether it’s the Atomki anomaly, the 750 GeV diphoton bump, a GeV bump from the galactic center, γ-ray lines at 3.5 keV and 130 GeV, or penguins at LHCb—these are the signs that we’re making use of all of the data available to search for new physics. Sometimes these hopes fizzle away; often they leave behind useful lessons about physics and directions forward. Maybe one of these days an anomaly will stick and show us the way forward.

Further Reading

Here is some of the popular-level press coverage of the Atomki result. See the references at the top of this ParticleBite for the primary literature.

UC Riverside Press Release
UC Irvine Press Release
Nature News
Quanta Magazine
Quanta Magazine: Abstractions
Symmetry Magazine
Los Angeles Times


What is “Model Building”?

Thursday, August 18th, 2016

Hi everyone! It’s been a while since I’ve posted on Quantum Diaries. This post is cross-posted from ParticleBites.

One thing that makes physics, and especially particle physics, unique in the sciences is the split between theory and experiment. The role of experimentalists is clear: they build and conduct experiments, take data, and analyze it using mathematical, statistical, and numerical techniques to separate signal from background. In short, they seem to do all of the real science!

So what is it that theorists do, besides sipping espresso and scribbling on chalk boards? In this post we describe one type of theoretical work called model building. This usually falls under the umbrella of phenomenology, which in physics refers to making connections between mathematically defined theories (or models) of nature and actual experimental observations of nature.

One common scenario is that one experiment observes something unusual: an anomaly. Two things immediately happen:

  1. Other experiments find ways to cross-check to see if they can confirm the anomaly.
  2. Theorists start to figure out the broader implications if the anomaly is real.

#1 is the key step in the scientific method, but in this post we’ll illuminate what #2 actually entails. The scenario looks a little like this:

An unusual experimental result (anomaly) is observed. One thing we would like to know is whether it is consistent with other experimental observations, but these other observations may not be simply related to the anomaly.

Theorists, who have spent plenty of time mulling over the open questions in physics, are ready to apply their favorite models of new physics to see if they fit. These are the models that they know lead to elegant mathematical results, like grand unification or a solution to the Hierarchy problem. Sometimes theorists are more utilitarian, and start with “do it all” Swiss army knife theories called effective theories (or simplified models) and see if they can explain the anomaly in the context of existing constraints.

Here’s what usually happens:

Usually the nicest models of new physics don’t fit! In the explicit example, the minimal supersymmetric Standard Model doesn’t include a good candidate to explain the 750 GeV diphoton bump.

Indeed, usually one needs to get creative and modify the nice-and-elegant theory to make sure it can explain the anomaly while avoiding other experimental constraints. This makes the theory a little less elegant, but sometimes nature isn’t elegant.

Candidate theory extended with a module (in this case, an additional particle). This additional model is “bolted on” to the theory to make it fit the experimental observations.

Now we’re feeling pretty good about ourselves. It can take quite a bit of work to hack the well-motivated original theory in a way that both explains the anomaly and evades all other known experimental constraints. A good theory can do a couple of other things:

  1. It points the way to future experiments that can test it.
  2. It can use the additional structure to explain other anomalies.

The picture for #2 is as follows:

A good hack to a theory can explain multiple anomalies. Sometimes that makes the hack a little more cumbersome. Physicists often develop their own sense of ‘taste’ for when a module is elegant enough.

Even at this stage, there can be a lot of really neat physics to be learned. Model-builders can develop a reputation for particularly clever, minimal, or inspired modules. If a module is really successful, then people will start to think about it as part of a pre-packaged deal:

A really successful hack may eventually be thought of as its own variant of the original theory.

Model-smithing is a craft that blends together a lot of the fun of understanding how physics works—which bits of common wisdom can be bent or broken to accommodate an unexpected experimental result? Is it possible to find a simpler theory that can explain more observations? Are the observations pointing to an even deeper guiding principle?

Of course—we should also say that sometimes, while theorists are having fun developing their favorite models, other experimentalists have gone on to refute the original anomaly.


Sometimes anomalies go away and the models built to explain them don’t hold together.


But here’s the mark of a really, really good model: even if the anomaly goes away and the particular model falls out of favor, a good model will have taught other physicists something really neat about what can be done within a given theoretical framework. Physicists get a feel for the kinds of modules that are out on the market (like an app store) and they develop a library of tricks to attack future anomalies. And if one is really fortunate, these insights can point the way to even bigger connections between physical principles.

I cannot help but end this post with one of my favorite physics jokes, courtesy of T. Tait:

A theorist and an experimentalist are having coffee. The theorist is really excited, she tells the experimentalist, “I’ve got it—it’s a model that’s elegant, explains everything, and it’s completely predictive.” The experimentalist listens to her colleague’s idea and realizes how to test those predictions. She writes several grant applications, hires a team of postdocs and graduate students, trains them, and builds the new experiment. After years of design, labor, and testing, the machine is ready to take data. They run for several months, and the experimentalist pores over the results.

The experimentalist knocks on the theorist’s door the next day and says, “I’m sorry—the experiment doesn’t find what you were predicting. The theory is dead.”

The theorist frowns a bit: “What a shame. Did you know I spent three whole weeks of my life writing that paper?”


The Post-Higgs Hangover: where’s the new physics?

Thursday, July 19th, 2012

Now that the good people at CERN have finished their Higgs-discovery champagne, many of us have found ourselves drawn to harder drinks. While the Higgs is the finishing touch on the elegant edifice of the Standard Model, it’s the culmination of theoretical physics from the 1960s. Where’s all the exciting new physics that we’d been expecting “just around the corner” at the terascale?

My generation of particle physicists entered graduate school expecting a cornucopia of supersymmetry and extra dimensions at the TeV scale just waiting for us to join the party—unfortunately, those hopes and dreams have so far come up short. While the book has yet to be written on whether or not the Higgs branching ratios are Standard Model-like, two recent experimental updates in collider and dark matter physics have also turned up empty.

No Z’ at 1 TeV

The first is the search for Z’ (“Z prime”) resonances: these are “smoking gun” signatures of a new particle which behaves like a heavy copy of the Z boson. Such particles are predicted by several models of new physics. There was some very cautious excitement after the 2011 data showed a 2σ bump in the dilepton channel around 1 TeV (both at CMS and ATLAS):

The horizontal axis is the mass of the hypothetical particle (measured via the momenta of the two leptons it supposedly decays to) in GeV, while the vertical axis is the rate at which these two-lepton events are seen. (The other lines are examples of what one would expect for a Z’ in different models; for our purposes we can ignore them.) A bump would be indicative of a new particle causing a resonance: an increased rate in the observation of two leptons with a given energy. You can see something that is beginning to “kinda-sorta” look like a bump around 1 TeV. Of course, 2σ signals come and go with statistics—and this is indeed what happened with this year’s data [CMS EXO-12-015]:

Bummer. (Again, one really doesn’t have much right to be disappointed—that’s just the way the statistics works.)

Still no WIMP dark matter

Another area where we have good reason to expect new physics is dark matter. Astrophysical observations have given very strong evidence that the dark matter that gravitationally seeds our galaxies is composed of some new particle that is not described by the Standard Model. One nice feature is that astrophysical and cosmological data tell us the dark matter density in our galaxy, from which we can deduce a relation between the dark matter mass and its interaction strength.

Physicists observed that one particularly interesting scenario is when the dark matter particle interacts via the weak force—the sector of the Standard Model that gets tied up with electroweak symmetry breaking and the Higgs. In this case, the dark matter mass should be right around a few hundred GeV, right in the ballpark of the LHC. To some, this is very suggestive evidence that dark matter may be related to electroweak physics. This class of models got a cute name: WIMPs, for weakly interacting massive particles. There are other types of dark matter, but until fairly recently WIMPs were king because they fit so nicely with models of new physics that were already modifying the electroweak scale.

Unfortunately, the flagship dark matter detector, XENON, recently released a sobering summary of its latest data at the Dark Attack conference in Switzerland. Yes, that’s really the conference name. XENON is a wonderful piece of detector technology that any particle physicist would be proud of. Their latest data-taking run found only two events (what’s expected from background). The result is the following plot:

How to read this plot: the horizontal axis is the mass of the WIMP particle. You get to pick this (or your model of new physics predicts it). The vertical axis is the cross section, which measures the number of dark matter–detector interactions that such a WIMP is expected to undergo. The large boomerang-shaped lines are the limits set by the experiment—as the red text says, for a mass of around 55 GeV, it rules out cross sections that are above a certain number. For “garden variety” interaction channels, this number is already much smaller than the ballpark estimate for the weak force.

The blob at the bottom right is some fairly arbitrary slice of the supersymmetry parameter space, but this is really just there for illustrative purposes and shouldn’t be interpreted as any kind of exclusion of supersymmetry. The other lines are other past experiments. The circles at the top left are slightly controversial ‘signals’ that have been ruled out within the WIMP paradigm by the last few direct detection experiments (XENON and CDMS).

The story is not necessarily as dour as the plot seems to indicate. There are many clever ways to get dark matter, not all of them WIMP-like. In fact, even the above plot is limited to the “spin-independent” coupling—an assumption about the particular way that dark matter interacts with nuclei. But these WIMP searches will eventually hit a brick wall around 2017: that’s when the XENON 1T (“one ton”) experiment will be sensitive to cross sections that are three orders of magnitude smaller than the current bounds. At that level of sensitivity, you end up with a lot of background noise from cosmic neutrinos which, as far as the detector is concerned, behave very much like dark matter. (They’re not.) Looking for a dark matter signal against this background is like looking for a needle in a stack of needles.

Where do we stand?

Between the infamous magnet quench of 2008 and the sobering exclusion plots of the last couple of years, an entire generation of graduate students and young postdocs is internalizing the idea that finding new physics will not be as simple as turning on the LHC, as some of us had believed as undergrads. Despite our youthful naivete, the LHC is also still in its infancy, with a 14 TeV run coming after its year-long shutdown. The above results are sobering, but they just mean that there wasn’t any low-hanging fruit for us to gobble up right away.


More Post-Higgs silliness

Friday, July 6th, 2012

I recently got to eavesdrop on a delightful and silly e-mail exchange between US LHC’s very own Burton and Aidan, both ATLAS physicists, after I pointed out that Wikipedia now mentions the ATLAS Higgs talk as a “notable use” of the infamous font Comic Sans. The quotes below are lifted directly from their e-mail exchange (with their permission), as illustrated by yours truly.

For more substantial physics discussion, check out Aidan and Seth’s Higgs postgame video and Anna’s ongoing posts from ICHEP.

Update [7/08]: the “4.9 sigma” comment below is a mistake; the actual “global significance” includes the ‘look elsewhere effect’ and is lower than this.



Photoshop the Higgs

Thursday, July 5th, 2012

Symmetry Breaking has a fun contest going on to photoshop the Higgs into interesting photos… by the way this is not how ATLAS and CMS do their data analysis.

Here are a few examples featuring familiar faces from the US LHC blog:

Many thanks to Aidan for his Higgs liveblog. He's now a certified Higgs-buster.


Shout out to Katie Yurkewicz, Fermilab office of communications director and former US LHC communicator


And a very special congrats to Kathryn Jepsen, US LHC communicator, who got married earlier this year!



What to look for: the Higgs-to-gamma-gamma branching ratio

Tuesday, July 3rd, 2012

There’s a lot of press building up to the Higgs announcement at CERN in just a few hours, and you’ll have Aidan’s live-blog for the play-by-play commentary. I just wanted to squeeze in more chatter about what to look for in the talks besides the usual “oh look how many sigmas we have.”

[caveat: the above cereal guy meme is purely hypothetical!]

Since we’re all friends here, I’ll be candid and say that many physicists have taken the existence of a 125 GeV-ish Higgs-like particle as a foregone conclusion—in large part because any alternative would be even more dramatic. (Recall: the Standard Model is begging for there to be a Higgs.) Whether the evidence for the Higgs is just above or just below the magic 5-sigma “discovery” threshold won’t change anything other than how much champagne Aidan will be drinking.

But that shouldn’t deter you from tuning into the 3am EST webcast. Besides getting a chance to see some famous faces in the audience, the thing to look for is any hint that there’s actually more to the Higgs than the Standard Model. As described very nicely at Resonaances, the 2011 LHC data presented last December suggested that the Higgs (if it’s there) decays into photons slightly more often than the Standard Model predicts. Could this be a hint that there’s exciting (and unexpected) new physics right around the corner?

Let’s back up a little bit. Before we can talk about how the Higgs decays, we have to talk about how it’s produced at the LHC. The two main mechanisms are called gluon fusion and vector boson fusion (where the vector boson V can be a Z or W):

The gluon fusion diagram dominates at the LHC since there are plenty of high energy gluons in a multi-TeV proton beam. Note that the loop of virtual top quarks is required since the Higgs has no direct coupling to gluons (it’s not colored); the top is a good choice since it has a large coupling to the Higgs (which is why the top is so heavy). As an exercise, use the Standard Model Feynman rules to draw other Higgs production diagrams.

Once you have a Higgs, you can look at the different ways it can decay. The photon-photon final state is very rare, but particularly intriguing because the Higgs doesn’t have electric charge and photons don’t have mass—so these particles don’t tend to talk to each other. In fact, such a Higgs-photon-photon interaction only occurs when mediated by virtual particles like the top and W:

Why these diagrams? These particles are heavy enough to have a large coupling to the Higgs and are also charged, so they can emit photons. (Exercise: draw the other W boson diagram contributing to h → γγ.) In fact, the W diagram is about 5 times larger than the top diagram.

The great thing about loop diagrams is that any particle (with electric charge and a coupling to the Higgs) can run in the loop. You can convince yourself that other Standard Model particles don’t make big contributions to h → γγ, but—and here’s the good part—if there are new particles beyond the Standard Model, they could potentially push the h → γγ rate above the Standard Model prediction. This is what we’re hoping for.

What to look for: keep an eye out for a measurement of the h → γγ cross section (a measurement of the rate). Cross sections are usually denoted by σ. Because we don’t care so much about the actual number but rather its difference from the Standard Model, what is usually presented is a ratio of the observed cross section to the Standard Model cross section: σ/σ(SM). If this ratio is one within uncertainty, then things look like the Standard Model, but otherwise (and hopefully) things are much more interesting.

The outlook on the eve of ‘the announcement’

[I thank my colleagues Jack, Mathieu, and Javi for sharing their insights on this.]

Given the assumption that there indeed is a particle at 125-ish GeV that does all the great things that the Standard Model Higgs should do, we would like to ask whether or not this is really the Standard Model (SM) Higgs, or whether it is some other Higgs-like state that may have different properties. In particular, is it possible that this particle talks to the rest of the Standard Model with slightly different strengths than the SM Higgs? And maybe, if we really want to push our luck, could this more exotic Higgs-like particle push the h → γγ rate to be larger than expected?

To answer this question, we don’t want to restrict ourselves to any one specific model of new physics, we’d rather be as general as possible. One way to do this is to use an “effective theory” that parameterizes all of the possible couplings of the “Higgs” to Standard Model particles. Here’s what one such effective theory looks like in sloppy handwriting:

Don’t worry, you don’t have to know what these all mean, but just for fun you can compare to this famous expression. The parameters here are the variables labelled a, b, c, and d. Of these, the two important ones to consider are a, which controls the Higgs coupling to two W bosons, and c, which controls the Higgs coupling to fermions (like top quarks). The Standard Model corresponds to a = c = 1.

Now we can start playing an interesting game:

  1. If we increase the coupling a of the Higgs to W bosons, then we increase the rate for h → γγ via the W loop above.
  2. If, on the other hand, we increase the coupling c of the Higgs to the top quark, then we increase the rate of h → γγ via the top quark loop above.

Thus the observation of a larger-than-expected rate for h → γγ could point to either a or c > 1 (or both). How would we distinguish between these? Well, note that (see the production diagrams above):

  1. If the a (Higgs to W) coupling were enhanced, then we would also expect an enhancement in the “vector boson fusion” rate for Higgs production. When the Higgs is produced this way, you can [with some efficiency] tag the quark remnants and say that the Higgs was produced through vector boson fusion.
  2. On the other hand, if the c (Higgs to top) coupling were enhanced, then we would also expect an enhancement in the “gluon fusion” rate for Higgs production.

Thus we have some handle for where we could fit new physics to explain a possible h → γγ excess. (Again, by “excess” we mean relative to the expected production in the Standard Model.)
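For concreteness, here is a toy numerical sketch of how the h → γγ rate scales with a and c (my own snippet; the loop amplitudes are approximate one-loop values for a 125 GeV Higgs, and their relative sign is why the interference matters):

# Toy estimate: Gamma(h -> gamma gamma) proportional to |a*A_W + c*A_t|^2, with
# approximate one-loop amplitudes for m_h = 125 GeV. Note the opposite signs:
# the W and top loops interfere destructively in the Standard Model.
A_W, A_t = -8.3, 1.8

def rate_ratio(a, c):
    # h -> gamma gamma rate relative to the Standard Model point a = c = 1.
    return (a * A_W + c * A_t)**2 / (A_W + A_t)**2

print(rate_ratio(1.0, 1.0))   # 1.0: the Standard Model
print(rate_ratio(1.2, 1.0))   # ~1.6: enhanced Higgs-W coupling
print(rate_ratio(1.0, -1.0))  # ~2.4: flipped-sign top coupling (constructive)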

Here’s a quick plot of where we stand currently, including recent results from Moriond, from 1202.3697; I refer experts to that paper for further details and plots:

(There are many similar plots out there—some by good friends of mine—I apologize for not providing a more complete reference list… the Higgs seminar is only a few hours away!) The green/yellow/gray blobs are the 1, 2, and 3 sigma confidence regions for the parameters a and c above. The red and blue lines are ATLAS and CMS exclusions. The reason why there are two green blobs is that there is a choice for the sign of c; this corresponds to whether the Higgs-top loops interfere constructively or destructively with the Higgs-W loops. For more details, see this Resonaances post.

The plot above includes the latest LHC data (Moriond, pre-ICHEP) as well as the so-called “electroweak precision observables” which tightly constrain the effects of virtual particles on the Standard Model gauge bosons. These are the blobs to keep an eye on—the lines indicate the Standard Model point a=c=1. If the blob continues to creep away from this point, then there will be good reason to expect exciting new physics beyond the Higgs… and that’s what makes it worth tuning in at 3am.


Tim Tait: “Why look for the Higgs?”

Tuesday, July 3rd, 2012

For those of you who are itching to learn more about the Higgs in anticipation of the Higgs announcement and Aidan’s liveblog, I encourage you to check out Tim Tait’s recent colloquium at SLAC titled, “Why look for the Higgs?” It’s an hour-long talk aimed at a non-physics audience (Tim says “engineers and programmers”).

Tim is a professor at UC Irvine whose enthusiasm and natural ability to explain physics carries through in his talk.

Last summer Tim was a co-director for the “Theoretical Advanced Study Institute in Elementary Particle Physics” summer school for graduate students. I heard that the students tried to get Tim’s portrait immortalized on the official school t-shirt.

For more SLAC colloquia and public lectures, see their channel on YouTube.


The Hierarchy Problem: why the Higgs has a snowball’s chance in hell

Sunday, July 1st, 2012

The Higgs boson plays a key role in the Standard Model: it is related to the unification of the electromagnetic and weak forces, explains the origin of elementary particle masses, and provides a weakly coupled way to unitarize longitudinal vector boson scattering.

As the particle physics community eagerly awaits CERN’s special seminar on a possible Higgs discovery (see Aidan’s liveblog), it’s a good time to review why the Higgs—the last piece of the Standard Model—is also one of the big reasons why we expect even more exciting physics beyond the Standard Model.

The main reason is called the Hierarchy problem. This is often ‘explained’ by saying that quantum corrections want to make the Higgs much heavier than we need it to be… say, 125-ish GeV. Before explaining what that means, let me put it in plain language:

The Higgs has a snowball’s chance in hell of having a mass in that ballpark.

This statement works as an analogy, not just an idiom. (This analogy is adapted from one originally by R. Rattazzi involving a low energy particle passing through a thermal bath. Edit: I’m told this analogy was by G. Giudice, thanks Duccio.)

If you put a glass of water in a really hot place, you expect it to also become really hot, maybe even to boil off into steam. It would be really surprising if we put an ice cube in a hot oven and 10 minutes later it had not melted. This is because the ambient thermal energy is expected to be transferred to the ice cube by the energetic air molecules bouncing off it. Sure, it is possible that the air molecules just happen to bounce in a way that doesn’t impart much thermal energy—but that would be ridiculously improbable, as we learn in thermodynamics.

The Higgs is very similar: we expect its mass to be around 125 GeV (not too far from W and Z masses), but ambient quantum energy wants to make its mass much larger through interactions with virtual particles. While it is possible that the Higgs stays light without any additional help, it’s ridiculously improbable, as we learn from quantum physics.

Remark: the relation between thermal/statistical uncertainty and quantum uncertainty is actually one that is deeply woven into their mathematical descriptions and is the reason why quantum (or statistical) field theory is the common language of both particle physics and condensed matter physics.

Quantum corrections: the analogy of the point electron

The phrase “quantum corrections” is somewhat daunting, so let’s appeal to a slightly more familiar problem (from H. Murayama) and draw some pictures. The analog of the Hierarchy problem in classical physics is the question of the electron self energy:

The electron has charge but is nearly point-like. It must have a very large charge density and thus have a very large self-energy (mass).

Self-energy here just means the contribution to the electron mass coming from repulsive electrostatic energy of one part of the electron from another. The problem thus reduces to: how can the electron mass be so small when we expect it to be large due to electrostatic energy? Yet another way to pose the question is to say that the electron mass has contributions from some ‘inherent mass’ (or ‘bare mass’) m0 and the electrostatic energy, ΔE:

mmeasured = m0 + ΔE

Since mmeasured is small while ΔE is large, it seems that m0 must be very specifically chosen to cancel out most of ΔE while still leaving the correct tiny leftover value for the electron mass. In other words, the ‘bare mass’ m0 must be chosen to uncomfortably high precision.

I walk through the numbers in a previous post (see also the last few pages of these lectures to undergraduates [pdf] from here), but here’s the main idea: the reason why there isn’t a huge electrostatic contribution to the electron mass is that virtual electron–positron pairs smear out the electric charge over a radius larger than the size of the electron:

In other words: current experimental bounds tell me that the electron is smaller than 10⁻¹⁷ cm, and the “electron hierarchy problem” arises when I calculate the energy associated with packing one unit of electric charge into that radius. The resolution is that even though the electron may be tiny, at a certain length scale quantum mechanics becomes relevant and you start seeing virtual electron–positron pairs which interact with the physical electron to smear out the charge over a larger distance (this is called vacuum polarization).
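For the impatient, here is the back-of-envelope number (my own sketch of the estimate in those linked notes, using ΔE ~ α·ħc/r for the electrostatic energy of a unit charge confined to radius r):

# Electrostatic self-energy of one unit of charge packed into radius r.
alpha = 1 / 137.0   # fine structure constant
hbar_c = 197.3      # MeV * fm
r = 1e-4            # fm, i.e. the 10^-17 cm experimental bound on the electron size

delta_E = alpha * hbar_c / r
print(delta_E)      # ~1.4e4 MeV: tens of GeV, versus the measured 0.511 MeV mass

Without the vacuum polarization smearing described below, the bare mass would have to cancel this down to half an MeV.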

The distance at which this smearing takes place is predicted by quantum mechanics—it’s the distance where the virtual particles have enough energy to become real—and when you plug in the numbers, it’s precisely where it needs to be to prevent a large electrostatic contribution to the electron mass. Since we’re now experts with Feynman diagrams, here’s what such a process looks like in that language:
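To make this concrete, here’s a back-of-the-envelope version of the numbers, sketched in Python. The formulas are the standard order-of-magnitude estimates (Coulomb energy ~ α ħc/r, smearing at the Compton wavelength), not the detailed calculation in the lectures linked above:

```python
# Back-of-the-envelope "electron hierarchy problem" (order of magnitude only)

ALPHA = 1 / 137.036      # fine-structure constant
HBARC = 197.327          # hbar * c in MeV * fm
M_E = 0.511              # electron rest energy in MeV

def coulomb_self_energy(radius_fm):
    """Electrostatic energy (MeV) of one unit of charge packed into
    a region of the given radius: E ~ alpha * hbar c / r."""
    return ALPHA * HBARC / radius_fm

# Naive classical estimate: charge packed into r < 10^-17 cm = 10^-4 fm
print(coulomb_self_energy(1e-4))       # ~14,000 MeV, vs. m_e = 0.511 MeV:
                                       # the bare mass would have to cancel
                                       # this to about 1 part in 10,000

# Quantum resolution: vacuum polarization smears the charge out at roughly
# the electron's (reduced) Compton wavelength, where virtual e+e- pairs
# have enough energy to become relevant
r_compton = HBARC / M_E                # ~386 fm
print(coulomb_self_energy(r_compton))  # ~0.004 MeV: a tiny correction,
                                       # no fine-tuning required
```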

Higgs: the petulant child of the Standard Model

The Hierarchy problem for the Higgs is the quantum version of the above problem. “Classically” the Higgs has a mass that comes from the following diagram (note the Higgs vev):

This diagram is perfectly well behaved. The problem occurs from contributions that include loops of virtual particles—these play the role of the electrostatic contribution to the electron mass in the above analogy:

As an exercise, use the Higgs Feynman rules to draw other contributions to the Higgs mass which contain a single loop; for our present purposes the one above is sufficient. Recall, further, that one of our rules for drawing diagrams was that momentum is conserved. In the above diagram, the incoming Higgs has some momentum (which has to be the same as the outgoing Higgs), but the virtual particle momenta (k) can be anything. What this means is that we have to sum over an infinite number of diagrams, each with a different momentum k running through the loop.

We’ll ignore the detailed mathematical expression that’s actually being summed, but suffice it to say that it is divergent—infinity. This is a good place for you to say, what?! the Higgs mass isn’t infinity… that doesn’t even make sense! That’s right—so instead of summing diagrams all the way up to infinite loop momentum, we should stop where we expect our model to break down. But without any yet undiscovered physics, the only energy scale at which we know our description must break down is the gravitational scale: M_Planck ~ 10^18 GeV. And thus, as a rough estimate, these loop diagrams want to push the Higgs mass up to 10^18 GeV… which is way heavier than we could ever hope to discover from a 14 TeV (= 14,000 GeV) LHC. (Recall that these virtual contributions to the Higgs mass are what were analogous to thermal energy in our “snowball in Hell” analogy.)
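For the curious, the schematic form of what’s being summed is simple enough to sketch. Lumping all couplings and numerical factors into a generic λ/16π² (so this is an estimate of the size, not a real calculation), the sum over loop momenta k becomes an integral that grows with whatever cutoff Λ we stop at:

$$ \delta m_h^2 \;\sim\; \frac{\lambda}{16\pi^2}\int^{\Lambda} \frac{d^4k}{k^2} \;\sim\; \frac{\lambda}{16\pi^2}\,\Lambda^2 . $$

Setting Λ = M_Planck is exactly what feeds a correction of order (10^18 GeV)^2 into the Higgs mass-squared.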

But here’s the real problem: the Standard Model really, really wants the Higgs to be around the 100 GeV scale. This is because it needs something to “unitarize longitudinal vector boson scattering.” It needs to have some Higgs-like state accessible at low energies to explain why certain observed particle interactions are well behaved. Thus if the Higgs indeed has a mass around 125 GeV, then the only way to make sense of the 10^18 GeV mass contribution from the loop diagram above is if the “classical” (or “tree”) diagram has a value which precisely cancels that huge number to leave only a 125 GeV mass. This is the analog of choosing m_0 in the electron analogy above.

Unlike the electron analogy above, we don’t know what kind of physics can explain this 10^16 ‘fine-tuning’ of our Standard Model parameters. For this reason, we expect there to be some kind of new physics accessible at TeV energies to explain why the Higgs should be right around that scale rather than being at the Planck mass.
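To get a feel for the size of the required cancellation, here’s a toy numerical sketch. The 1/16π² loop factor is the schematic one from above; the precise coefficient and sign are model-dependent, so treat this as an order-of-magnitude illustration:

```python
# Toy illustration of how precisely the tree-level Higgs mass-squared
# must cancel the loop contribution (schematic formulas only)
import math

M_PLANCK = 1e18   # GeV, where we cut off the loop momenta
M_HIGGS = 125.0   # GeV, the observed mass

delta_m2 = M_PLANCK**2 / (16 * math.pi**2)  # schematic loop piece, GeV^2
m2_tree = M_HIGGS**2 - delta_m2             # tree value needed to compensate

print(f"loop contribution:   {delta_m2:.2e} GeV^2")
print(f"tree value required: {m2_tree:.2e} GeV^2")
print(f"allowed mismatch:    {M_HIGGS**2 / delta_m2:.1e}")
# -> the two terms must agree to about 1 part in 10^30 in mass-squared,
#    i.e. roughly 1 part in 10^15-10^16 in the mass itself
```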

Outlook on the Hierarchy

The Hierarchy problem has been the main motivation for new physics at the TeV scale for over two decades. There are a few obvious questions that you may ask.

1. Is it really a problem? Maybe some number just has to be specified very precisely.

Indeed—it is possible that the Higgs mass is 125 GeV due to some miraculous almost-cancellation that set it to be in just the right ballpark to unitarize longitudinal vector boson scattering. But such miracles are rare in physics when there is no a priori explanation for them. The electron mass is an excellent example: what looked like a miraculous cancellation was eventually explained by new physics, namely vacuum polarization. There are some apparent (and somewhat controversial) counter-examples: the cosmological constant problem is a much more severe ‘fine-tuning’ problem which may be explained anthropically rather than through more fundamental principles.

2. I can draw loop diagrams for all of the Standard Model particles… why don’t they all have Hierarchy problems?

If you’ve asked this question, then you get an A+. Indeed, based on the arguments in this post, it seems like any diagram with a loop gives a divergence when you sum over the possible intermediate momenta so that we would expect all Standard Model particles to have Planck-scale masses due to quantum corrections. However, the important point was that we never wrote out the mathematical form of the thing that we’re summing.

It turns out that the Hierarchy problem is unique for scalar particles like the Higgs. Loop contributions to fermion masses are not so sensitive to the ‘cutoff’ scale where the theory breaks down. This is manifested in the mathematical expression for the fermion mass and is ultimately due to the chiral structure of fermions in four dimensions. Gauge boson masses are also protected, but from a different mechanism: gauge invariance. More generally, particles that carry spin are very picky about whether they’re massive or massless, whereas scalar particles like the Higgs are not, which makes the Higgs susceptible to large quantum corrections to its mass.
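For reference, here are the schematic one-loop forms (standard results, quoted without derivation, with y and λ standing for the relevant couplings):

$$ \delta m_f \;\sim\; \frac{y^2}{16\pi^2}\, m_f \log\frac{\Lambda}{m_f}, \qquad \delta m_h^2 \;\sim\; \frac{\lambda}{16\pi^2}\,\Lambda^2 . $$

The fermion correction comes with a factor of m_f itself (this is the chiral protection at work), so it grows only logarithmically with the cutoff; the scalar correction has no such factor and grows like Λ².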

3. What are the possible ways to solve the Hierarchy problem?

There are two main directions that most people consider:

  1. Supersymmetry. Recall in our electron analogy that the solution to the ‘electron mass hierarchy problem’ was that quantum mechanics doubled the number of particles: in addition to the electron, there was also a positron. The virtual electron–positron contributions solved the problem by smearing out the electric charge. Supersymmetry is an analogous idea where once again the set of particles is doubled, and in doing so the loop contributions of one particle to the Higgs are cancelled by the loop contributions of its super-partner (a toy version of this cancellation is sketched just after this list). Supersymmetry has deep connections to an extension of space-time symmetry since it relates matter particles to force particles.
  2. Compositeness/extra dimensions. The other solution is that maybe our description of physics breaks down much sooner than the Planck scale. In particular, maybe at the TeV scale the Higgs no longer behaves like a scalar particle, but rather like a bound state of two fermions. This is precisely what happens with the mesons: even though the pion is a scalar, there is no pion ‘hierarchy problem’ because as you probe smaller distances, you realize the pion is actually a bound state of a quark and an antiquark and it starts behaving as such. One of the beautiful developments of theoretical physics in the 1990s and early 2000s was the realization that this is precisely what is being described by theories of extra dimensions through the so-called holographic principle.
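Here is the toy numerical version of the supersymmetric cancellation promised in item 1. The leading one-loop terms below are the schematic ones quoted in standard SUSY reviews; the couplings are illustrative placeholders:

```python
# Toy sketch of the supersymmetric cancellation of quadratic divergences.
# Schematic leading one-loop terms:
#   fermion loop:        delta m_h^2 = -|lam_f|^2 * Lambda^2 / (8 pi^2)
#   each scalar partner: delta m_h^2 = +lam_s    * Lambda^2 / (16 pi^2)
import math

LAMBDA = 1e18  # GeV, cutoff

def fermion_loop(lam_f):
    return -abs(lam_f)**2 * LAMBDA**2 / (8 * math.pi**2)

def scalar_loops(lam_s, n_scalars=2):
    # each Dirac fermion comes with two complex scalar superpartners
    return n_scalars * lam_s * LAMBDA**2 / (16 * math.pi**2)

lam_f = 0.94              # roughly a top-quark-like Yukawa coupling
lam_s = lam_f**2          # relation enforced by unbroken supersymmetry

print(fermion_loop(lam_f) + scalar_loops(lam_s))  # -> 0.0: the Lambda^2
# pieces cancel exactly, leaving only mild (logarithmic) corrections
```

The point is not the specific numbers but the structure: supersymmetry ties the scalar coupling to the square of the fermion coupling, so the dangerous Λ² pieces cancel no matter how large Λ is.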

So there you have it—while you’re celebrating the [anticipated] Higgs discovery with fireworks on July 4th, also take a moment to appreciate that this isn’t the end of a journey culminating in the Standard Model, but the beginning of an expedition for exciting new physics at the terascale.


An experiment: Feynman Diagrams for Undergrads

Thursday, May 31st, 2012

The past couple of weeks I’ve been busy juggling research with an opportunity I couldn’t pass up: the chance to give lectures about the Standard Model to Cornell’s undergraduate summer students working on CMS.

The local group here has a fantastic program which draws motivated undergrads from the freshman honors physics sequence. The students take a one-credit “research in particle physics” course and spend the summer learning programming and analysis tools to eventually do CMS projects. Since the students are all local, some subset of them stay on and continue to work with CMS during their entire undergraduate careers. Needless to say, those students end up with fantastic training in physics and are on a trajectory to be superstar graduate students.

Anyway, I spent some time adapting my Feynman diagram blog posts into a series of lectures. In case anyone is interested, I’m posting them publicly here, along with some really nice references at the appropriate level.

There are no formal prerequisites except for familiarity with particle physics at the popular science/Wikipedia level, though they’re geared towards enthusiastic students who have been doing a lot of outside [pop-sci level] reading and have some sophistication with freshman level math and physics ideas.

The whole thing is an experiment for me, but the first lecture earlier today seems to have gone well.


Name these brands/plants? Name these particles!

Tuesday, April 17th, 2012

I don’t know the original source, but there’s an image that has gone semi-viral over the past year which challenges the reader to identify several brand names based on their logos versus plant names based on their leaves. (Here’s a version at Adbusters.) The point is to contrast consumerism to the outdoors-y/science-y education that kids would get if they just played outside.

This isn’t the place to discuss consumerism, but I don’t agree with the idea that the ability to identify plant names carries any actual educational value. Here’s my revision to the image:

Adapted from the original “Name these brands/plants” image (original source unknown).

On the right we’ve encoded all of the particles in the Standard Model in a notation based on representation theory. In fact, this is almost all of the information you need to know to write down all of the Feynman rules in the Standard Model (more on this below).

Tables like the one above are a compact way to describe the particle content of a model because the information in the table specifies all of the properties of each particle. And that’s the point: whether we name a particle the “truth quark” or the “top quark” doesn’t matter—what matters is the physics behind these names, and that’s captured succinctly in the table. Science isn’t about classification, it’s about understanding. I leave you with this quote from Feynman (which you can watch in his own words here):

You can know the name of a bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird… So let’s look at the bird and see what it’s doing — that’s what counts. I learned very early the difference between knowing the name of something and knowing something.


Addendum: naming those particles

For those who want to know, the particles in the table are, from top down:

  1. The left-handed quark doublet, containing the left-handed up quark and left-handed down quark
  2. The anti-right-handed-up quark
  3. The anti-right-handed-down quark
  4. The left-handed lepton doublet, containing the left-handed electron and left-handed neutrino
  5. The anti-right-handed electron (a.k.a. the left-handed positron)
  6. The anti-right-handed neutrino
  7. The Standard Model Higgs

SU(3), SU(2), and U(1) refer to the strong force, weak force, and hypercharge. Upon electroweak symmetry breaking, the weak force and hypercharge combine into electromagnetism and the heavy W and Z bosons. Here’s how to read the funny notation:

  1. Under SU(3): particles with a box come in three colors (red, green, blue). Particles with a barred box come in three anti-colors (anti-red, anti-green, anti-blue). Particles with a ‘1’ are not colored.
  2. Under SU(2): particles with a box have two components, an upper and a lower component. That is to say, a box means that there are actually two particles being represented. More on this below. Particles with a ‘1’ do not carry weak charge and do not talk to the W boson.
  3. Under U(1): this is the “hypercharge” of the particle.
  4. The electric charge of a particle is given by adding to the hypercharge +1/2 if it’s the upper component of an SU(2) box, -1/2 if it’s the lower component of an SU(2) box, or 0 if it is not an SU(2) box (just ‘1’).

As a consistency check, you can convince yourself that both the left- and right-handed neutrinos carry zero electric charge. Note, also, the fact that we’ve written out left-handed and right-handed particles differently. This is a reflection of the fact that the Standard Model is a chiral theory.
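Since the table itself is an image, here’s a small code sketch of the same charge rule. The hypercharge values are the standard ones in the convention where electric charge = hypercharge + the ±1/2 from the SU(2) slot (an assumption filled in here, since they live in the image); everything else is just item 4 above:

```python
# Electric charge from the table, following rule 4 above:
# Q = hypercharge + (+1/2 for upper SU(2) slot, -1/2 for lower, 0 if no box)

T3 = {"upper": +0.5, "lower": -0.5, None: 0.0}

# (particle, hypercharge, SU(2) slot)
particles = [
    ("left-handed up quark",          +1/6, "upper"),
    ("left-handed down quark",        +1/6, "lower"),
    ("anti-right-handed up quark",    -2/3, None),
    ("anti-right-handed down quark",  +1/3, None),
    ("left-handed neutrino",          -1/2, "upper"),
    ("left-handed electron",          -1/2, "lower"),
    ("anti-right-handed electron",    +1.0, None),
    ("anti-right-handed neutrino",     0.0, None),
]

for name, hypercharge, slot in particles:
    print(f"{name:30s} Q = {hypercharge + T3[slot]:+.2f}")
# Consistency check: both neutrino lines print Q = +0.00
```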

Finally, I said above that the table of particles almost specifies the structure of the Standard Model completely. The additional pieces of information required are:

  1. Which of the above particles are fermions and which are scalars (the gauge bosons are implied)
  2. Write down the most general ‘renormalizable’ theory (we write only the simplest interaction vertices)
  3. Specify the pattern of electroweak symmetry breaking (the Higgs)
  4. Specify the flavor symmetries (three of each type of matter particle)

From this one can write the complete mathematical expressions for the Standard Model. One then just has to fill in the observed numerical values to be able to calculate concrete predictions for actual processes.
