Graduate Student Research Archive

Below, you will find graduate student research profiles from previous years. Enjoy…



Matthew Becker

B.S., University of Michigan, 2007 (Physics)
B.S., University of Michigan, 2007 (Mathematics)
M.S., University of Chicago, 2008 (Physics)
Ph.D. (2013), Dept. of Physics
Research: Astrophysics & Cosmology
Awards: GAANN Fellow (Dept. of Ed), Illinois Space Grant Consortium Graduate Fellow, Sugarman Award (EFI)
Research Advisor: Andrey Kravtsov

Since the discovery of dark energy and the acceleration of the Universe in 1998 by two teams studying high-redshift Type Ia supernovae, Cosmic Microwave Background experiments and low-redshift observations of structure formation from large area sky surveys have converged on a standard cosmological model, LambdaCDM. While this model provides convincing explanations for the formation of galaxies and large scale structure, the matter from which we are made, along with a small amount of known relativistic particles, composes only ~5% of the total mass-energy density of the Universe. The rest comprises two of the great mysteries in cosmology - dark matter and dark energy - which account for ~25% and ~70%, respectively, of the total mass-energy density of the Universe.
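In standard density-parameter notation (a textbook convention, not used explicitly in this profile), the energy budget just quoted reads:

```latex
\Omega_{\rm b} \approx 0.05, \qquad
\Omega_{\rm dm} \approx 0.25, \qquad
\Omega_{\Lambda} \approx 0.70, \qquad
\Omega_{\rm b} + \Omega_{\rm dm} + \Omega_{\Lambda} \approx 1 ,
```

where the sum to unity expresses a spatially flat universe.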

Cosmology is now entering the era of large area sky surveys which will cover a quarter of the sky or more at unprecedented sensitivity, observing in multiple wavelengths of light.  These surveys, like the South Pole Telescope, the Dark Energy Survey, and the Large Synoptic Survey Telescope, will give us a new view of the Universe at the age when dark energy came to be the dominant fraction of the total mass-energy density and thus will provide strong constraints on the nature of dark energy.  Two of the most promising techniques to study dark energy and the growth of large scale structure in the Universe with these surveys rely on an effect called weak gravitational lensing (the tiny deflections of photons from background galaxies by large clumps of matter along the line-of-sight between us and distant galaxies).  The first technique is called cosmic shear and involves directly measuring and cross-correlating these weak lensing signals.  The second technique involves counting the number of galaxy clusters, the largest gravitationally bound objects in the Universe, as a function of their mass.  Weak lensing measurements are one of the best ways to determine the masses of galaxy clusters and are thus crucial to using counts of galaxy clusters to constrain the properties of dark energy.
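The line-of-sight character of weak lensing can be made concrete with the standard Born-approximation expression for the lensing convergence of sources at comoving distance chi_s (a textbook formula supplied here for context; it does not appear in the original profile):

```latex
\kappa(\hat{n}) \;=\; \frac{3 H_0^2 \Omega_m}{2 c^2}
\int_0^{\chi_s} d\chi \;
\frac{\chi\,(\chi_s - \chi)}{\chi_s}\,
\frac{\delta(\chi \hat{n}, \chi)}{a(\chi)} ,
```

where delta is the matter overdensity and a the scale factor; the cosmic shear field is obtained from second derivatives of the associated lensing potential, so cross-correlating shear probes the integrated matter distribution.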

Unfortunately, the upcoming measurements of cosmic shear and the counts of galaxy clusters are expected to be dominated by systematic rather than random, statistical errors.  Understanding these errors requires, in part, the use of large supercomputer N-body simulations which model the formation of the largest-scale structures in the Universe and the effects of dark energy on this process.  I use these simulations to study both cosmic shear and weak lensing techniques that determine the total mass contained in galaxy clusters.  As a graduate student in the Kavli Institute for Cosmological Physics, I have developed and implemented new, specialized methods for calculating the expected weak lensing signals from these N-body simulations for large area sky surveys.  These weak lensing calculations, which on a single computer would take years to complete, now finish for the first time in a few days to a week, enabling the production of ensembles of N-body simulations with weak lensing.

With the Dark Energy Survey, I have been working to build a large ensemble of N-body simulations of different phenomenological dark energy models with these self-consistently calculated weak lensing signals.  These weak lensing calculations are being used by the Dark Energy Survey Collaboration to study systematic effects in both cosmic shear signals and also weak lensing galaxy cluster mass estimates in order to test and understand how accurately the properties of dark energy can be constrained.  In addition to their use by the Dark Energy Survey, they have also been used by the South Pole Telescope Collaboration for studies of a different, but related effect called Cosmic Microwave Background lensing.

In addition to the numerical work described above, I have worked on a new family of methods for analyzing cosmic shear data from these large area sky surveys.  This new family of methods for cosmic shear accounts for observational effects in cosmic shear measurements, like the binning of the data.  It also allows for the clean separation of systematic signals, called B-modes, from the signals due to General Relativity, called E-modes, in the presence of complicated survey window functions and biased sampling of the cosmic shear field.  Additionally, this new family of methods is significantly simpler to use and implement than previous methods, which should make them broadly applicable to upcoming cosmic shear surveys.

Finally, I also work on continuing analyses of the data from the Sloan Digital Sky Survey.  These analyses provide consistency tests of the current cosmological model and key constraints on the amplitude of matter fluctuations in the low redshift Universe.  These constraints, along with the experience and understanding gained by working with low redshift data, will allow us to make the most of the data from upcoming large area sky surveys.  The next decade will be an exciting time for cosmology with an enormous amount of data from new surveys and potentially new discoveries about the nature of dark energy.


  • Cosmic Shear E/B-mode Estimation with Binned Correlation Function Data. M. R. Becker 2012, MNRAS, submitted [arXiv:1208.0068]
  • Cosmological Constraints from the Large Scale Weak Lensing of SDSS MaxBCG Clusters. Y. Zu, D. H. Weinberg, E. Rozo, E. S. Sheldon, J. L. Tinker, M. R. Becker 2012, MNRAS, submitted [arXiv:1207.3794]
  • A High Throughput Workflow Environment for Cosmological Simulations. B. M. S. Erickson, R. Singh, A. E. Evrard, M. R. Becker, M. T. Busha, A. V. Kravtsov, S. Marru, M. Pierce, R. H. Wechsler 2012, in Proceedings of the 1st Conference of the Extreme Science and Engineering Discovery Environment: Bridging from the eXtreme to the campus and beyond, XSEDE 12 (New York, NY, USA: ACM), 34:1-34:8
  • A Measurement of the Correlation of Galaxy Surveys with CMB Lensing Convergence Maps from the South Pole Telescope. L. E. Bleem, A. van Engelen, G. P. Holder, K. A. Aird, R. Armstrong, M. L. N. Ashby, M. R. Becker, et al. 2012, ApJ, 753, L9 [arXiv:1203.4808]
  • Cosmological Constraints from Galaxy Clustering and the Mass-to-Number Ratio of Galaxy Clusters. J. L. Tinker, E. S. Sheldon, R. H. Wechsler, M. R. Becker, E. Rozo, Y. Zu, D. H. Weinberg, I. Zehavi, M. Blanton, M. Busha, B. P. Koester 2012, ApJ, 745, 16 [arXiv:1104.1635]
  • On the Accuracy of Weak Lensing Cluster Mass Reconstructions. M. R. Becker & A. V. Kravtsov 2011, ApJ, 740, 25 [arXiv:1011.1681]



Shanthanu Bhardwaj

M.Sc., Indian Institute of Technology-Kanpur, 2007 (Physics)
Ph.D. (2015), Dept. of Physics
Research: Condensed Matter Theory
Awards: Michelson Fellow, Wentzel Research Prize (Dept. of Physics)
Research Advisors: Ilya Gruzberg, Paul Wiegmann

My research interests lie broadly in theoretical condensed matter physics.  The idea that systems composed of well understood constituents can still behave in ways which are difficult to predict was perhaps most famously expressed in Phil Anderson's "More is Different" mantra.  Systems made up of electrons and nucleons, with all their physics governed by electromagnetism, nevertheless display a remarkable variety of electrical behaviour: metals, insulators, and semiconductors, as well as more interesting phases like superconductors and topological insulators.  This variety proves that more is indeed different.

One of the most interesting such systems is the critical state of the Integer Quantum Hall Effect.  Although the initial prediction of the quantization of transverse conductivity in the presence of a magnetic field is nearly forty years old, the exact nature of the quantum Hall state and the associated critical exponents is still unknown.  My research has largely focused on understanding this problem and, hopefully, connecting it to a conformal field theory that can predict these critical exponents.  There are several different approaches to this problem; the one I have been working on attempts to describe the behaviour of these systems using 'network models'.  These simple discrete models - based on Chalker and Coddington's network model describing the quantum Hall transition at strong magnetic field - capture the essential physics of the system.  We attempt to construct a discrete model that captures all the relevant behaviour of the system we wish to understand, and use it to guide us to the appropriate conformal field theory (CFT) describing the transition at the critical point.

Several of the mathematical techniques involved in studying the Quantum Hall Effect are also applicable to other systems of interest in 2-dimensions, and that is one of the future avenues of interest for me.



Eric Feng

B.S., University of California, Berkeley, 2003 (Engineering Physics)
B.A., University of California, Berkeley, 2003 (Mathematics)
M.S., University of Chicago, 2007 (Physics)
Ph.D. (2012), Dept. of Physics
Research: Experimental High-Energy Physics
Awards: Robert Sachs Fellow (Physics), Robert McCormick Fellow (Physics), GAANN Fellow (Dept. of Ed), US LHC Award (NSF), Best Poster Prize (USLUO), Gaurang & Kanwal Yodh Prize (Physics)
Research Advisor: James Pilcher

My research involves the investigation of fundamental interactions between elementary particles at the world's highest man-made energies -- or equivalently, the shortest distances -- using data collected by the ATLAS experiment at the Large Hadron Collider (LHC). I am resident at the European Center for Particle Physics (CERN) in Geneva, Switzerland, where the LHC is located. The LHC is a proton-proton collider with the highest center-of-mass energy (7 TeV) in the world; it began operation in 2008 and produced its first collisions at 900 GeV in 2009.

The primary goal of my research is to probe quantum chromodynamics (QCD) as predicted by perturbative calculations in the Standard Model, as well as to search for deviations from QCD that may arise due to new physical phenomena. As a member of the ATLAS Collaboration, I have played a leading role in the world's first cross-section measurements at a center-of-mass energy of 7 TeV of inclusive jet and dijet production, which involve final states containing at least one or two jets, respectively. Each "jet" is the result of a quark or gluon that hadronizes due to quark confinement, such that only the spray of hadrons (particles made of quarks bound together by the strong force) constituting the "jet" can be measured. Our jet measurements form the foundation for future precision tests of QCD at the LHC, including the precise measurement of the strong coupling constant, the determination of parton distribution functions (which describe the density of partons within hadrons), and constraints on non-perturbative QCD, where the strong coupling constant becomes large and observables cannot be calculated perturbatively.
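As a point of reference for the remark about the strong coupling becoming large, the standard one-loop running of the coupling (textbook QCD, not taken from this profile) is

```latex
\alpha_s(Q^2) \;=\; \frac{12\pi}{(33 - 2 n_f)\,\ln\!\left(Q^2/\Lambda_{\rm QCD}^2\right)} ,
```

where n_f is the number of active quark flavors: the coupling shrinks at large momentum transfer Q (asymptotic freedom, the regime of the jet measurements described here) and grows as Q approaches Lambda_QCD, where perturbation theory breaks down.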

By performing searches using the invariant mass and angular distribution in dijet final states, we have also set the world's best limit on the possible existence of dijet resonances arising from excited quarks, as well as the best limit for contact interactions that may arise from quark compositeness. These analyses probe QCD in a new kinematic regime -- at high jet transverse momentum and large dijet mass -- that has never been investigated before, yielding sensitivity to exotic physics scenarios that may appear at these very short distance scales.

My hardware work has included substantial responsibilities for optimizing the performance, simulation, and operation of photo-electronics for the Minimum Bias Trigger Scintillator (MBTS) system, which was used to trigger the vast majority of the 2009 data that was collected and analyzed. I have also been involved in studies using both cosmic ray muons and collision data to commission the Tile Calorimeter, which measures hadronic energy depositions for jet measurements. In addition, I pioneered software for the remote monitoring system that is now used globally by the experiment, allowing collaborators worldwide to take remote shifts for detector operation and data quality.

To improve the detector performance, I have investigated a technique to remediate calorimeter failures using tracks reconstructed from charged particles passing through the detector. I have also studied a scheme to calibrate the absolute jet energy scale (JES) of the calorimeter using transverse momentum balance between a photon and a jet in the final state. The latter issue arises primarily due to the non-compensation of the hadronic calorimeter and is of critical importance for jet analyses, where the JES uncertainty is usually the dominant systematic uncertainty.


  • ATLAS Collaboration. "Measurement of inclusive jet and dijet production in pp collisions at sqrt(s)=7 TeV using the ATLAS detector." arXiv:1112.6297 [hep-ex]. Submitted to Phys. Rev. D.
  • ATLAS Collaboration. "Jet energy measurement with the ATLAS detector in proton-proton collisions at sqrt(s)=7 TeV in 2010." arXiv:1112.6426 [hep-ex]. Submitted to Eur. Phys. J. C.
  • ATLAS Collaboration. "Observation of a centrality-dependent dijet asymmetry in lead-lead collisions at sqrt{s_NN}=2.76 TeV with the ATLAS detector at the LHC." Phys. Rev. Lett. 105, 252303 (2010). Cover of Vol. 105, Issue 25.
  • ATLAS Collaboration. "Measurement of inclusive jet and dijet cross sections in proton-proton collisions at 7 TeV centre-of-mass energy with the ATLAS detector." Eur. Phys. J. C 71, 1512 (2011). Cover of Vol. 71, Issue 2.
  • ATLAS Collaboration. "Search for Quark Contact Interactions in Dijet Angular Distributions in pp Collisions at sqrt(s) = 7 TeV Measured with the ATLAS Detector." Phys. Lett. B 694, 327-345 (2011).
  • ATLAS Collaboration. "Search for New Particles in Two-Jet Final States in 7 TeV Proton-Proton Collisions with the ATLAS Detector at the LHC." Phys. Rev. Lett. 105, 161801 (2010).
  • E. Feng (for the ATLAS Collaboration). "Observation of Energetic Jet Production in pp Collisions at sqrt(s) = 7 TeV using the ATLAS Experiment at the LHC." Proceedings of Physics at the LHC 2010, DESY, 241-245 (2010).



Samuel Gralla

B.S., Yale University 2005 (Physics)
B.S., Yale University 2005 (Mathematics)
Ph.D. (2011), Dept. of Physics
Research: General Relativity
Awards: NSF Fellow (NSF), GAANN Fellow (Dept. of Education), Bloomenthal Fellow (Physics), Blue Apple Awards (Midwest Relativity)
Research Advisor: Robert Wald

My research concerns the problem of motion in general relativity and related theories. In general relativity, bodies move on geodesics to lowest order--but what are the corrections? In particular, what is the influence of a body's own self-field on its motion? Determining this "self-force" or "radiation reaction" correction is more than a theoretical enterprise: knowledge of the motion at this accuracy is necessary for producing data analysis templates for use with the planned space-based gravitational-wave detector LISA.

In graduate school I have worked primarily on a formalism for deriving the motion of bodies in relativistic theories. Working with the case of general relativity, the basic ideas are as follows. To take a rigorous approach to the perturbative description of a small body in general relativity, one must consider a one-parameter family of solutions to Einstein's equation that contains a body that shrinks to zero size. But the body must shrink to zero mass as well, or a black hole (which is a finite-size object) will be formed before the zero-size limit can be reached. The body thus completely shrinks and disappears in the limit, but it leaves behind a preferred worldline (the place where it "disappeared to") characterizing its lowest-order motion. We develop a formalism to build this behavior into an assumed one-parameter family, and then prove that the worldline must be geodesic. We then compute all first-order corrections to that worldline, which include gravitational self-force. We also applied this approach to classical electromagnetism and (at lowest order) to an arbitrary second-order tensor theory that follows from a diffeomorphism-covariant Lagrangian.
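Schematically (standard notation, not the authors' precise formalism), the result described above is that the lowest-order worldline obeys the geodesic equation, with the self-force entering as a perturbative correction of higher order in the body's mass m:

```latex
\frac{d^2 x^\mu}{d\tau^2}
+ \Gamma^{\mu}{}_{\alpha\beta}\,
\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = 0
\quad \text{(lowest order)},
\qquad
m\,a^\mu = F^{\mu}_{\rm self}, \quad F^{\mu}_{\rm self} = \mathcal{O}(m^2) ,
```

where a^mu is the covariant acceleration and F_self is sourced by the body's own metric perturbation.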

An interesting side-project was an analysis of "bobbing" effects in relativistic systems. Numerical simulations of binary black holes with spin show some bizarre behavior: if the spins are (anti)aligned just right, the whole system undergoes an up-and-down "bobbing" motion in phase with the orbit. Then, if the phase at merger is right, the merged black hole receives a tremendous kick in the direction of bobbing! We looked at whether similar effects could be found in analogous--but simpler--systems. We found that the bobbing effect is in fact ubiquitous, occurring whenever two spinning bodies are held in orbit by any sort of force. For example, two spinning balls connected by a string will display this behavior. The kick, however, is more special and can only occur for systems that possess field momentum which can be radiated to infinity. We conclude that bobbing and kicks are basically unrelated phenomena, which can nevertheless appear correlated for spinning black holes because the spin parameter happens to control both the bobbing and the kick.


  • S.E. Gralla, A.I. Harte, and R.M. Wald, Bobbing and Kicks in Electromagnetism and Gravity, Phys. Rev. D 81, 104012 (2010).
  • S.E. Gralla, Motion of Small Bodies in Classical Field Theory, Phys. Rev. D 81, 084060 (2010).
  • S.E. Gralla, A.I. Harte, and R.M. Wald, A Rigorous Derivation of Electromagnetic Self-force, Phys. Rev. D 80, 024031 (2009).
  • S.E. Gralla and R.M. Wald, A Rigorous Derivation of Gravitational Self-force, Class. Quantum Grav. 25, 205009 (2008).


Chen-Lung Hung

B.S., National Taiwan University 2003 (Physics)
M.S., University of Chicago 2006 (Physics)
Ph.D. (2011), Dept. of Physics
Research: Experimental Atomic Physics
Awards: Harper Dissertation Fellowship, Physical Sciences Division
Research Advisor: Cheng Chin

In collaboration with Prof. Cheng Chin, I built a new experiment that uses cesium Bose-Einstein condensates (BECs) to study the universality of few- and many-body physics and the quantum phase transitions of atomic quantum gases in optical lattices. By changing the atomic interactions using magnetic Feshbach resonances, and applying precisely controlled lattice potentials formed by laser standing waves, we constructed a clean and highly controllable many-body system. This provides exciting opportunities to explore fundamental phenomena traditionally studied in the context of condensed matter or nuclear physics.

My research over the past few years has focused on the efficient production of cesium Bose-Einstein condensates and the realization of the superfluid to Mott insulator transition of ultracold atoms in optical lattices, which are described below:

Efficient evaporative cooling of cesium atoms to Bose-Einstein condensation: Despite the rich collision properties of ultracold cesium atoms, which allow us to tune atomic interactions over a wide range, making a cesium BEC has been considered difficult due to inelastic collision losses at high densities. Evaporative cooling in an optical dipole trap has become a necessity for condensing cesium atoms polarized in their lowest hyperfine ground state. We developed a simple scheme to achieve fast, runaway evaporative cooling of atoms to Bose-Einstein condensation by tilting an optical dipole trap with a magnetic force. This technique overcomes speed limitations in conventional dipole trap cooling, and can produce a BEC with a large number of atoms within 2-4 seconds.

In-situ observation of the superfluid-to-Mott-insulator transition in optical lattices: We developed a novel scheme to load BEC atoms into a monolayer two-dimensional optical lattice, and thus realized the superfluid (SF) to Mott insulator (MI) transition in 2D, simulating the Bose-Hubbard model. High resolution imaging normal to the lattice plane provides in-trap density measurements and sensitive detection of thermal and quantum density fluctuations. We directly observe the "wedding cake" density structure of a trapped gas, reflecting the coexistence of incompressible Mott-insulator, compressible superfluid, and normal gas phases in equilibrium. A precise determination of these phase boundaries is made possible by careful study of the density profile, local compressibility, and density fluctuations within the framework of a local density approximation.
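The Bose-Hubbard model referred to above has the standard Hamiltonian (textbook form, not reproduced from this profile):

```latex
\hat{H} \;=\; -t \sum_{\langle i,j \rangle}
\left( \hat{b}_i^{\dagger} \hat{b}_j + \text{h.c.} \right)
\;+\; \frac{U}{2} \sum_i \hat{n}_i \left( \hat{n}_i - 1 \right)
\;-\; \sum_i \left( \mu - V_i \right) \hat{n}_i ,
```

where t is the tunneling amplitude between neighboring lattice sites, U the on-site interaction, mu the chemical potential, and V_i an optional trap potential; the superfluid phase dominates for t >> U, while the Mott insulator appears for U >> t at integer filling, which is the transition realized in the experiment.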

We developed a density-profile-based thermometry to extract the temperature and chemical potential of the atomic sample in the optical lattice throughout the SF-MI transition regime. When a finite temperature superfluid is adiabatically converted into a Mott insulator without strong external compression, entropy conservation suggests significant cooling of the sample during the loading process. By shortening the lattice ramp and increasing the initial density of the cloud, we observed evidence of even lower temperatures than expected near the Mott core, implying limited entropy flow against the direction of the mass flow.

In addition to studying quantum phase transitions and quantum criticality using in-situ density profiles and fluctuations, our system also provides promising prospects to access exotic quantum phases by controlling atomic interactions, and to explore quantum magnetism by introducing multiple internal states. Our system is also fully capable of studying physics in low dimensions, such as Berezinskii-Kosterlitz-Thouless physics in two dimensions and the Tonks-Girardeau gas in one dimension.


  • Chen-Lung Hung, Xibo Zhang, Nathan Gemelke and Cheng Chin, “Accelerating Evaporative Cooling of Atoms into Bose-Einstein Condensation in Optical Traps”, Physical Review A 78, 011604(R) (2008).
  • Nathan Gemelke, Xibo Zhang, Chen-Lung Hung and Cheng Chin, “In-situ Observation of Incompressible Mott-Insulating Domains of Ultracold Atomic Gases”, Nature 460, 995-998 (2009).
  • Chen-Lung Hung, Xibo Zhang, Nathan Gemelke and Cheng Chin, “Density Profile-Based Thermometry in Optical Lattices across the Superfluid-Mott Insulator Transition”, in preparation.



Imai Jen-La Plante

B.S., University of Washington 2005 (Physics)
M.S., University of Chicago 2006 (Physics)
Ph.D. (2011), Dept. of Physics
Research: Experimental High-energy Physics
Awards: LHC Graduate Student Award (NSF), Gaurang & Kanwal Yodh Prize (Dept. of Physics), Nathan Sugarman Award (Enrico Fermi Institute)
Research Advisor: James Pilcher

The start of proton-proton collisions at the Large Hadron Collider (LHC) has opened a new era in particle physics. My research with Prof. James Pilcher uses the ATLAS detector to look at these collisions, which have the highest center-of-mass energies ever produced in a laboratory. Together with 3000 collaborators from all over the world, we reconstruct particles from data collected with the 7000 ton detector located in Geneva, Switzerland.

Among the first things to be measured at the LHC are properties of the W± bosons, mediators of the weak force in the Standard Model of particle physics. Observing these particles verifies our understanding of the detector performance and physics modeling at the new collision energies. They often decay to a lepton, such as an electron, and a neutrino, which passes through the ATLAS detector without depositing measurable energy. This gives a clear event signature, as the neutrino is identified by an imbalance of energy in the plane perpendicular to the direction of the colliding protons.
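The transverse-energy imbalance used to identify the neutrino is conventionally defined as the negative vector sum of the transverse momenta of all reconstructed objects (a standard collider definition, not quoted from the text):

```latex
\vec{E}_T^{\rm miss} \;=\; -\sum_i \vec{p}_{T,i} ,
\qquad
E_T^{\rm miss} \;=\; \left| \vec{E}_T^{\rm miss} \right| ,
```

where the sum runs over the energy deposits and particles measured in the plane perpendicular to the beam; a neutrino escaping the detector shows up as a large E_T^miss.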

Finding such missing transverse energy particularly relies on the ATLAS calorimeters. The calorimeters are massive layers of the detector that stop electromagnetic and hadronic particles and measure their energy. During my early years in graduate school, I worked to prepare the hadronic calorimeter, especially by calibrating the front-end readout electronics, which were designed and built at the University of Chicago.

My current focus is to measure the associated production of W± bosons with quarks or gluons. The additional particles are mainly detected in the calorimeters as narrow sprays of energetic particles called jets. Measuring the production rates and properties of the jets in these events gives a precise test of Standard Model predictions that rely on sophisticated theoretical and numerical techniques. These predictions have not yet been tested at LHC energies and are essential to understanding our observations there.

One exciting possibility is that the data could contain evidence of new physics beyond the Standard Model. Many models of new physics predict particles that escape from the detector like neutrinos and can only be observed through missing transverse energy. Events where W± bosons decay to a lepton and neutrino are a key background in such models. Measuring the rate of these events and using them to understand the ATLAS detector and physics at the LHC is an important step toward potential discoveries.


  • ATLAS Collaboration, Measurement of the production cross section for W-bosons in association with jets in pp collisions at sqrt(s)=7 TeV with the ATLAS detector, Phys. Lett. B 698, 325-45 (2011).
  • I. Jen-La Plante for the ATLAS and CMS Collaborations, QCD Studies with W and Z Measurements at the LHC. PoS (EPS-HEP 2009) 305.


Nathan Keim

B.S., Haverford College 2004 (Physics)
Ph.D. (2010), Dept. of Physics
Research: Experimental Condensed Matter Physics
Awards: Michelson Fellow, McCormick Fellow, Sachs Fellow, Grainger Fellow, Dept. of Physics
Research Advisor: Sidney Nagel

I am interested in the formation of singularities: points in space or time where one or more physical quantities grow infinitely large. Static and dynamic singularities appear in many branches of physics — for example: relativity (black holes), statistical mechanics (critical phase transitions), astrophysics (star formation), nuclear physics (fission), or non-linear physics (fluid shape changes). Normally one expects that a singularity is so strong that it controls all the dynamics that lead up to it. Thus singularities ought to be universal, in that they should behave the same independent of initial or boundary conditions. In contrast, our experiments have shown that there is another class of singularity where the initial conditions affect the entire evolution.

Specifically, my experiments in Professor Sidney Nagel’s laboratory study the pinch-off of air bubbles from a submerged nozzle. As the air pinches off, the neck connecting the bubble with the nozzle must collapse down to a very small radius until it breaks into two pieces. As the neck radius approaches zero, the speed of the collapse diverges, producing a singularity. By taking videos at over 180,000 frames per second, I observe how the singularity changes when I perturb the bubble in various ways. As shown in the two images, using a nozzle shaped like a slot instead of a circle can produce dramatic results.
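Near break-up, the collapse is often described by a power law in the time remaining until pinch-off, tau = t0 - t (a standard parametrization; the approximate exponent quoted is a known result for gas bubbles in water, not taken from this profile):

```latex
r_{\rm neck}(\tau) \;\propto\; \tau^{\alpha},
\qquad
\dot{r}_{\rm neck} \;\propto\; \tau^{\alpha - 1}
\;\xrightarrow[\tau \to 0]{}\; \infty
\quad (\alpha < 1),
```

for an air bubble in water alpha is close to 1/2 (with slowly varying corrections), so the collapse speed diverges at pinch-off, which is the singularity probed by the high-speed videos.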


Caption: Bubbles pinching off from a submerged nozzle. Left: a bubble slowly emitted from a circular nozzle, in the process of pinching off. The bright spots inside the bubble are due to the back-lighting. Right: a bubble is rapidly ejected from a slot-shaped nozzle. The image is a close-up of the neck region, showing an irregular “tearing” break-up instead of a universal, symmetric singularity.

We have quantified and analyzed these effects and compared them with the theoretical analysis of Professor Wendy Zhang and her former student Laura Schmidt. They predicted that the information about the initial conditions would be encoded via a novel type of vibration on the collapsing neck. The collaboration of our two groups has discovered that a large class of perturbations is remembered as rapid vibrations of the neck shape, which disrupt the universal evolution of the system toward a singularity.


  • Nathan C. Keim, Peder Møller, Wendy W. Zhang, Sidney R. Nagel: "Breakup of air bubbles in water: Memory and breakdown of cylindrical symmetry". Physical Review Letters, 97:144503, 2006.
  • Laura E. Schmidt, Nathan C. Keim, Wendy W. Zhang, Sidney R. Nagel: "Memory-encoding vibrations in a disconnecting air bubble". Nature Physics, 5:343–346, 2009.
  • Nathan W. Krapf, Thomas A. Witten, Nathan C. Keim: "Chiral sedimentation of extended objects in viscous media." Physical Review E, 79:056307, 2009.


Ryan Keisler

B.S., University of Texas at Austin 2005 (Physics)
B.A., University of Texas at Austin 2005 (Plan II)
Ph.D. (2011), Dept. of Physics
Research: Observational Cosmology
Awards: Nathan Sugarman Award (Enrico Fermi Institute)
Research Advisor: John Carlstrom

I work in observational cosmology, a field that is increasingly driven by rich datasets. One important source of data is the cosmic microwave background, or CMB. The CMB is relic light from the early universe, when photons, normal matter, and dark matter were coupled and undergoing oscillations. Eventually the universe expanded, cooled, and became transparent to photons. Today we see these photons as the CMB. Measurements of the statistical properties of the CMB, as exemplified by the WMAP satellite, contain a wealth of information about the composition of the universe. While it's difficult to overstate the impact that WMAP has had on modern cosmology, there is still plenty to learn from the CMB. More specifically, the angular resolution of WMAP is about twenty minutes of arc, roughly twenty times worse than the human eye, and there is much to be learned from higher resolution images of the CMB. Higher resolution requires few-meter diffracting apertures, which are very expensive to launch into space. For this reason there are a number of new, ground-based, large-aperture CMB telescopes. One such telescope is the South Pole Telescope (SPT), a project led by John Carlstrom here at UChicago, and which is the focus of my graduate research.

The SPT is a 10-meter telescope located about 1 km from the geographic South Pole. The South Pole sits upon the high, dry Antarctic plateau and is one of the best sites in the world for millimeter-wave observations. I've been involved with the SPT project since 2005. In 2006 I was part of a team which traveled to the South Pole to deploy the entire instrument over the course of three months, a very exciting time. While I've been back to the South Pole twice to help upgrade the instrument, most of my work is done from Chicago. Broadly speaking, I've helped to monitor and characterize the instrumental performance and have contributed software to our "pipeline": the body of code which converts our raw data into images of the CMB. As a concrete example, I've written software which determines where the telescope is pointing on the celestial sphere at each moment of time.

More recently I've worked on a project to use SPT data to measure the statistical properties --- namely, the angular power spectrum --- of the CMB (in fact this is the focus of my dissertation). This work will characterize the CMB with unprecedented resolution and sensitivity, and will, among other things, likely provide strong evidence that the CMB photons are gravitationally lensed by intervening matter as they travel to us. There are quite a number of ongoing scientific projects using SPT data which I won't describe here, but I should mention that the key project is to characterize the dark energy by discovering massive, distant galaxy clusters using the Sunyaev-Zel'dovich effect. To summarize, my work with the SPT has been an enjoyable mix of hardware, software, and science, and I think this is true for most students that work in observational cosmology.
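The angular power spectrum mentioned above is defined in the standard way from the spherical-harmonic coefficients of the temperature map (textbook notation, not taken from this profile):

```latex
a_{\ell m} = \int d\hat{n}\; T(\hat{n})\, Y_{\ell m}^{*}(\hat{n}),
\qquad
\hat{C}_{\ell} = \frac{1}{2\ell + 1} \sum_{m=-\ell}^{\ell} \left| a_{\ell m} \right|^2 ,
```

where larger multipole ell corresponds to smaller angular scales; the SPT's arcminute resolution extends the measured C_ell to much higher ell than WMAP could reach.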

Back to Top

Nathan Krapf

B.A., University of Chicago 2005 (Physics)
B.S., University of Chicago 2005 (Mathematics with Specialization in Computer Science)
Ph.D. (2012), Dept. of Physics
Research: Theoretical Condensed Matter Physics
Awards: McCormick Fellow, Sachs Fellow, Dept. of Physics; GAANN Teaching Fellow, Dept. of Education
Research Advisor: Tom Witten

I am currently investigating force propagation in granular systems at the jamming transition. At this point, the system is isostatic, meaning one can uniquely solve for all inter-grain contact forces. In such systems, a proposed null-stress condition gives rise to a hyperbolic equation for the stress tensor. This predicts that on average, a point force on a single bead will propagate through the pack much like a light cone through space-time, where going in the "down" direction corresponds to going forward in time. Such behavior has been numerically verified in systems built sequentially from the floor up. However, such systems have a preferred direction throughout their entire creation history. We expect the null stress condition to hold regardless of the details of how the packing is made, but if we create it with no such preferred direction and then add a floor later, how can the system know which way "down" is? That is, can the behavior in the bulk be influenced by what we have done "in the future" at the boundary? We have found exponentially decaying modes with underdetermined forces at the bottoms of such packings and overdetermined modes at the tops, and are trying to learn how these modes affect force distribution in the bulk. In more general terms, we want to know what aspects of the packing creation determine the behavior of the force response.
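For concreteness, the two-dimensional version of the null-stress argument can be sketched as follows (this follows the standard presentation in the granular-media literature; the constant η and the 2D restriction are illustrative choices, not necessarily the conventions used in this work):

```latex
% 2D sketch of the null-stress closure: the horizontal normal stress is
% tied to the vertical one by a material constant \eta,
\sigma_{xx} = \eta\,\sigma_{zz},
% which, combined with force balance,
\partial_x \sigma_{xx} + \partial_z \sigma_{xz} = 0,
\qquad
\partial_x \sigma_{xz} + \partial_z \sigma_{zz} = \rho g,
% gives, after eliminating \sigma_{xx} and \sigma_{xz}, a wave equation
% in which depth z plays the role of time:
\partial_z^2 \sigma_{zz} = \eta\, \partial_x^2 \sigma_{zz}.
% A point force applied at the top surface therefore propagates along the
% characteristics x = \pm\sqrt{\eta}\,z --- the "light cone" of the text.
```

The hyperbolic character of the final equation is what makes the problem an initial-value problem in depth, and hence what makes the question of which way "down" is physically meaningful.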

Before starting my work on granular systems, my advisor and I looked at the low-Reynolds-number sedimentation of arbitrarily shaped objects. In general, such objects twist as they sink, and we can interpret this as an expression of inherent chirality. We showed that in the limit where the internal hydrodynamic interactions between different parts of the object are weak, the object follows a helical path while rotating at constant angular velocity about a fixed axis. Even though there can be no such chiral response in the absence of hydrodynamic interactions, the angular velocity reaches a fixed nonzero limit as the interaction strength approaches zero. We then empirically characterized how this chirality depends on the shape of the objects and found various scaling laws governing the angular velocity.

In collaboration with some members and graduates of the Booth School of Business, the Harris School of Public Policy, and the University of Chicago Law School, I am looking at intermittency in Illinois wind speeds to evaluate a recently enacted law setting statewide renewable energy standards. I am also helping my advisor continue the work of a former undergraduate student looking at binding energies and singularities in clusters of charged metal nanoparticles.


  • Nathan W. Krapf, Thomas A. Witten, Nathan C. Keim: "Chiral sedimentation of extended objects in viscous media." Physical Review E, 79:056307, 2009.

Back to Top

Ying Li

B.S., Peking University 2005 (Physics)
M.S., University of Chicago 2006 (Physics)
Ph.D. (2011) Dept. of Physics
Research: Theoretical Biological Physics
Awards: Robert Sachs Fellow (Dept. of Physics)
Research Advisor: Aaron Dinner

Many simple biological functions are well understood through experiments. However, it is still challenging to turn our knowledge about the key molecular players in a complex system into a system-level understanding capable of making reliable predictions. The first aim of my studies is therefore to develop and apply computational models to understand how complex biological behaviors arise from physical and chemical features. Another feature of biological systems is that they are out of equilibrium (irreversible). Although many theorems have been developed for systems at equilibrium, far fewer exist for non-equilibrium systems. The second aim of my studies is to improve our understanding of non-equilibrium theories through studies of biological systems. My research is in collaboration with Prof. Aaron Dinner.

Force transmission by focal adhesion

The cytoskeleton is a dynamic structure with important functions in maintaining cell shape and enabling cellular motion, intracellular transport, and cell division. Actin filaments, one component of the cytoskeleton, lie beneath the cell membrane and undergo retrograde flow. Structurally, actin filaments are connected to the extracellular matrix (ECM) through assemblies of proteins called focal adhesions (FAs). Stresses are generated in this structure by the relative motion between the actin filaments and the ECM. My study aimed to understand how actin flow affects the traction stress on the ECM. In the computational model, I simplified the structure into three layers (actin filaments, FAs, and ECM from top to bottom) and represented the molecular bonds between layers as springs. In steady state, the traction stress first increases and then decreases with actin flow speed, consistent with experimental observations. The underlying physics is a competition between a decrease in the number of bound proteins and an increase in the stress per bond. A further extension to a multiple-layer model predicted two scenarios of collective motion. At small actin flows, the structure moves as a whole, with proteins moving at progressively slower speeds from the actin end to the ECM end. At large actin flows, breakage occurs in the structure: proteins above the breakage move at the same speed as the actin filaments, while those below it are immobile. The corresponding experiments were performed by Prof. Gardel's group in the Physics Department.
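The competition between bond number and force per bond can be reproduced by a deliberately minimal mean-field model. The sketch below is my own illustration, not the actual multi-layer model of this work: it treats a single layer of "slip" bonds with arbitrary made-up rate constants (k_on, k0, kappa, Fb are all illustrative choices), yet it shows the same biphasic traction:

```python
import numpy as np

def _trap(y, x):
    """Trapezoidal integral of y(x) on a (possibly nonuniform) grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def traction(v, k_on=10.0, k0=1.0, kappa=1.0, Fb=1.0):
    """Steady-state traction (arbitrary units) transmitted by a layer of
    'slip' bonds sheared by a steady actin flow of speed v.

    While attached, a bond is stretched at the flow speed (x = v*t), feels
    a spring force kappa*x, and detaches at the force-accelerated rate
    k0*exp(kappa*x/Fb); detached bonds reattach at rate k_on.
    """
    # Survival probability S(t) of an attached bond, computed in log form;
    # the exponent is capped where S has already decayed to ~0.
    t = np.concatenate(([0.0], np.geomspace(1e-9, 50.0 / k0, 6000)))
    z = np.minimum(kappa * v * t / Fb, 60.0)
    S = np.exp(-(k0 * Fb / (kappa * v)) * np.expm1(z))
    tau = _trap(S, t)                        # mean attached lifetime
    n = k_on * tau / (1.0 + k_on * tau)      # steady-state bound fraction
    x_mean = v * _trap(t * S, t) / tau       # mean stretch of bound bonds
    return n * kappa * x_mean                # (bonds engaged) x (force each)

# Traction rises with flow speed while bonds load up, then collapses when
# bonds rupture faster than they can rebind.
low, mid, high = (traction(v) for v in (0.01, 10.0, 1e4))
```

At slow flows nearly all bonds are engaged but barely stretched; at fast flows each bond carries a large force but almost none stay attached, so the product is maximized in between.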

Cell fates in the immune system

B cells are an essential component of the adaptive immune system. Their principal function is to make antibodies against antigens, and this capability depends on the cells' affinities for those antigens. B cells differentiate into antibody-secreting cells either directly or through an intermediate state, in which they mutate intensively and modify their affinities. My study aimed to explain how a gene regulatory network enables a B cell to select between these two competing pathways to becoming an antibody-secreting cell. Five key proteins and their interactions were identified at the gene level, e.g., activation or repression of the expression of one protein by another. Ordinary differential equations with noise terms were used to model the production and degradation of proteins at the single-cell level, and kinetic Monte Carlo was used to model behaviors at the population level, e.g., division, death, and mutation. The key discovery was the ghosting effect: a control parameter (the initial affinity for antigen) determines the time for the system to pass through a particular region of phase space (the intermediate state). The biological rationale is that B cells whose initial responses to antigen are poor need to edit their surface receptors to improve their effectiveness (affinity) in eliminating antigens. The ghosting effect also enabled me to distinguish between two similar mechanisms (dynamic control vs. bistability), which is beyond the reach of steady-state analyses such as bifurcation diagrams. This work was in collaboration with a recent physics graduate in my group and Prof. Singh in Immunology.
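The ghosting effect has a textbook one-variable caricature: near a saddle-node bifurcation, the equation dx/dt = a + x^2 (with small a > 0) has no fixed point, but trajectories linger for a time ~ pi/sqrt(a) near the "ghost" of the vanished fixed point. The sketch below is that generic normal form, not the actual five-protein network of this work; it shows how a control parameter sets the passage time through a region of phase space:

```python
def passage_time(a, x0=-5.0, x1=5.0, dt=1e-3):
    """Time for dx/dt = a + x**2 to travel from x0 to x1 (forward Euler).

    For small a > 0 the trajectory slows dramatically near x = 0, the
    'ghost' of the saddle-node fixed point that exists for a < 0.
    """
    x, t = x0, 0.0
    while x < x1:
        x += (a + x * x) * dt
        t += dt
    return t

# Shrinking the control parameter a by 100x lengthens the passage time by
# roughly sqrt(100) = 10x, the characteristic pi/sqrt(a) scaling.
t_fast = passage_time(1e-2)
t_slow = passage_time(1e-4)
```

In the B-cell analogy, the control parameter plays the role of the initial antigen affinity and the slow region plays the role of the intermediate (mutating) state.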

Single molecule trajectories of RNA folding

This part of my research focuses on non-equilibrium dynamics. The plan was to periodically drive the system and observe its responses. Experimentally, our collaborators fluorescently labeled two positions in RNA molecules, put them in a magnesium solution whose concentration varied over time in a controlled fashion, and recorded trajectories of the fluorescence resonance energy transfer (FRET) efficiency. FRET reports the distance between the two labels and thus the conformational changes (folding). There are two challenges: 1) only distances along one or two coordinates are recorded, so the observed dynamics are usually non-Markovian due to the projection from high dimensions; 2) the non-equilibrium nature of the measurement limits the choice of theoretical tools.

I studied the problem from two directions: the microscopic scheme that controls transitions between RNA folding states, and a fluctuation theorem for a projected system. In the first direction, I represented the stable folding states as wells in phase space and transitions between states as barrier crossings. The magnesium ions change the relative positions and chemical potentials of the stable folding states, as well as the friction of motion in the hidden dimensions. I developed a hybrid approach that modeled the motion in the observed dimension as a discrete stochastic process, using a discrete master equation, and the motion in the unobserved dimensions as a continuous stochastic process, using a Langevin equation. This phenomenological model attributed the non-Markovian dynamics and a wiggly relaxation to an approximately oscillatory, slow motion in the hidden dimensions driven by the changing magnesium concentration. In the second direction, I extended the study to derive a general fluctuation theorem for non-equilibrium systems that are both stochastic and projected. By the second law of thermodynamics, irreversible processes result in an increase in entropy; however, microscopic events can deviate from the ensemble expectation and consume rather than produce entropy. Fluctuation theorems constrain the statistics of observing such events and provide a general framework for describing processes arbitrarily far from equilibrium, including those in living systems. Fluctuation theorems had previously been derived for systems in steady states or stable limit cycles, but in those works all the microscopic states are observed (no projection). Understanding how projection of the dynamics affects the application of fluctuation theorems is therefore of interest for interpreting experiments.
My study has shown that the entropies of single trajectories can change sign under projection, and that projection also makes systems appear closer to equilibrium, to an extent determined by the dimensions of the driving. That is, if the driving affects transitions in the hidden dimensions, the system appears more equilibrated because the driving is washed out by the projection.
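The statement that single trajectories can consume entropy, while the ensemble obeys the integral fluctuation theorem ⟨exp(−ΔS)⟩ = 1, can be demonstrated on a toy system. The example below is my own illustration (a driven three-state ring with made-up rates, simulated with the Gillespie algorithm), not the projected RNA dynamics of this work; with uniform rates the stationary distribution is uniform, so the entropy produced by a trajectory is just ±ln(kf/kb) per jump:

```python
import math
import random

def entropy_production(T=2.0, kf=2.0, kb=1.0, rng=None):
    """Total entropy produced (in units of k_B) by one Gillespie trajectory
    of a 3-state ring driven out of equilibrium (forward rate kf > kb).

    The total escape rate kf + kb is the same in every state, so we only
    need the jump times and directions, not the state labels.
    """
    rng = rng or random
    t, dS = 0.0, 0.0
    while True:
        t += rng.expovariate(kf + kb)          # waiting time to next jump
        if t > T:
            return dS
        if rng.random() < kf / (kf + kb):      # forward jump
            dS += math.log(kf / kb)
        else:                                  # backward jump
            dS -= math.log(kf / kb)

rng = random.Random(42)
samples = [entropy_production(rng=rng) for _ in range(50000)]
mean_dS = sum(samples) / len(samples)                    # positive on average
ift = sum(math.exp(-s) for s in samples) / len(samples)  # ~ 1 (the IFT)
```

Individual trajectories with negative entropy production do occur (net backward cycles), but they are exponentially weighted in exactly the way the theorem requires, so the average of exp(−ΔS) stays at unity.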


  • A network architecture that translates signal strength into gene expression duration to diversify a cellular state. Sciammas, R.*, Warmflash, A.*, Li, Y.*, Dinner, A.R. and Singh, H., submitted to Cell (* equal contribution).
  • Model for how retrograde actin flow regulates adhesion traction stresses. Li, Y., Bhimalapuram, P. and Dinner, A.R., J. Phys.: Condens. Matter, 22, 194113 (2010).
  • Models of single-molecule experiments with periodic perturbations reveal hidden dynamics in RNA folding. Li, Y., Qu, X., Ma, A., Smith, G.J., Scherer, N.F. and Dinner, A.R., J. Phys. Chem. B, 113, 7579 (2009).
  • How focal adhesion size depends on integrin affinity. Zhao, T., Li, Y. and Dinner, A.R., Langmuir, 25, 1540 (2009).
  • How the nature of an observation affects single-trajectory entropies. Li, Y., Zhao, T., Bhimalapuram, P. and Dinner, A.R., J. Chem. Phys., 128, 074102 (2008).

Back to Top

Jonathan Logan

B.S., University of Florida 2004 (Physics)
M.S., University of Chicago 2006 (Physics)
Ph.D. (2013), Dept. of Physics
Research: Experimental Condensed Matter Physics
Awards: Laboratory-Graduate Research Appointment (Argonne)
Research Advisor: Eric Isaacs

The microscopic structure and dynamics of magnetic domains underlie many properties of materials important for both fundamental science and technology. In the research group of Professor Eric Isaacs I have studied the physics of antiferromagnetic domain walls in both bulk and thin film Chromium. Antiferromagnetic domain dynamics are of great interest because they are implicated in basic problems in condensed matter physics such as high temperature superconductivity and ‘heavy’ fermions. Additionally, as antiferromagnets begin to find applications in areas such as pinning layers in spintronics, there is an increasing need for a more thorough understanding of the properties of their domains.

Chromium is an elemental antiferromagnet that displays magnetic and charge order common to considerably more complex materials. Below its Néel temperature of 311 K, bulk Chromium exhibits an incommensurate spin-density wave characterized by a spin polarization vector S and a propagation wave vector Q. We have investigated the slow domain dynamics naturally present in bulk Chromium even at low temperatures. Quantum dynamics have emerged in recent years as playing a critical role in the ground-state properties of many modern condensed matter systems such as high-Tc superconductors, spin glasses, and CMR manganites. Time-resolved coherent x-ray diffraction can be used to measure spin and charge dynamics in bulk materials with sensitivity to mesoscale dimensions. When microscopic spin or charge domains are present in the sample, coherent x-ray diffraction produces a speckle pattern that serves as a “fingerprint” of the particular domain wall configuration.

We performed coherent x-ray speckle measurements of the slow dynamics of domain walls separating microscopic regions with different orientations of the spin- (charge-) density waves in bulk Chromium samples [1]. By following the time evolution of the speckle pattern, our measurements reveal a cross-over from thermally assisted domain wall motion to quantum tunnelling of domain walls below a temperature of 40 K. The dynamic behaviour provides insight into the free-energy landscape of domain wall configurations and reveals that even at the lowest temperatures quantum fluctuations provide a path for the system to continue to explore alternate ground states.

To facilitate more precise measurements on individual antiferromagnetic domain walls, we have also devised a method for producing artificial domains of predefined size, number, and location [2]. This method uses a proximity effect of ferromagnetic layers to rotate Q in predetermined locations of Chromium thin-film samples. We grew high-quality single-crystal Cr films covered by a layer of Fe. By combining photolithography and wet etching techniques, desired parts of the Fe cap layer are selected and etched away to expose the underlying Cr film. When the process is complete, Q lies parallel to the film plane in the Fe-covered areas and perpendicular to the film in the Fe-etched areas. We then have a single film with Q domain boundaries at the borders marking the presence or absence of the Fe cap layer. X-ray diffraction was performed on the uncapped and the Fe-capped regions of the Chromium film, confirming the creation of the antiferromagnetic domain boundary. We also performed an x-ray microprobe experiment with a submicron beam and showed that the artificial domain boundary has a width of less than our step size of 1 micron. The ability to engineer and control well-defined and temporally stable antiferromagnetic domains is an important step forward for future studies of their physical properties as well as for the viability of their technological applications.


  1. O. G. Shpyrko, E. D. Isaacs, J. M. Logan, Y. Feng, G. Aeppli, R. Jaramillo, H. C. Kim, T. F. Rosenbaum, P. Zschack, M. Sprung, S. Narayanan and A. R. Sandy. Direct measurement of antiferromagnetic domain fluctuations. Nature 447, 68–71 (2007).
  2. J. M. Logan, H. C. Kim, D. Rosenmann, Z. Cai, R. Divan and E. D. Isaacs. Antiferromagnetic Domain Wall Engineering in Chromium Thin Films. (to be published).

Back to Top

Samuel Meehan

B.S., University of New Hampshire, 2009 (Physics)
M.S., University of Chicago, 2010 (Physics)
Ph.D. (2014), Dept. of Physics
Research: Experimental High-energy Physics
Awards: Elsevier Best Poster Award (EPS2013), Nathan Sugarman Graduate Research Award (EFI), Robert Millikan Fellowship (Dept. of Physics), Arts-Sciences Graduate Collaboration Award, Robert G. Sachs Fellowship (Dept. of Physics)
Research Advisor: Mark Oreglia

My research interests lie broadly in the field of experimental high-energy particle physics. During my time at the university I have been involved in the ATLAS collaboration, and so have focused on the physics of the Large Hadron Collider (LHC), based in Geneva, Switzerland. The LHC has been used to produce proton-proton collisions at energies four times greater than those at the Tevatron, and at a rate that, during one year, delivered twice the amount of data accumulated over the lifetime of the Tevatron. At such high energies, the collisions that occur are between the underlying quarks and gluons within the proton, and with such a large dataset we can study in great detail fundamental physics of the Standard Model that we think we understand, and search for physics beyond the Standard Model that may or may not exist.

My work focuses on the latter, and I have been involved in two main projects during my time at the university. The first was an analysis of the data collected in 2011, when the collider ran at a 7 TeV center-of-mass energy. We used the dataset to search for heavy fourth-generation up- and down-type vector-like quarks that couple to the Standard Model through the W and Z bosons. This type of new particle appears in many new-physics scenarios, including extra dimensions, and in the absence of supersymmetry such heavy quarks can help stabilize the Higgs mass against perturbative corrections. The second project in which I am involved is a search for massive resonances that couple to pairs of massive bosons, whether W's or Z's. This is well motivated by beyond-the-Standard-Model scenarios involving Randall-Sundrum gravitons produced in warped extra dimensions, and by grand unified theories that posit heavy partners of the W boson, but the approach we take is agnostic in that it focuses on the distinct experimental signature left by the decays of such particles. Namely, for very massive (~1 TeV) new particles, the intermediate bosons produced are so energetic that when one decays to a quark-antiquark pair, the resulting energy deposit (called a "jet") is, to first order, indistinguishable from that of a single quark or gluon entering the detector. However, by exploiting the underlying distribution of energy within the jet, one can identify such energetic decays and become more sensitive to such massive particles. During the course of this project, I have contributed to understanding these techniques and used them to search for new physics at mass scales above 1 TeV.

In addition to performing data analysis to search for new physics, I have been involved in the operation and calibration of the hadronic tile calorimeter in ATLAS. This is a detector used to measure the energy of strongly interacting particles and thus reconstruct the observable jets that serve as proxies for quarks and gluons when interpreting our measurements in terms of physics. During my time at CERN, I was involved in the day-to-day operation of the detector during the 2012 data-taking period that led to the discovery of the Higgs boson, and also contributed to maintaining the calibration of the front-end readout electronics designed by the Chicago group.

Beyond my research within high-energy physics, during my time at Chicago I have also taken an interest in science teaching and outreach. I am currently the instructor for a yearlong program through the KICP called Space Explorers, which recruits students from under-represented groups in Chicago and mentors them in a number of academic disciplines, including physics. Throughout the academic year we work through a number of lab-based activities, culminating in a week-long summer institute based at the Yerkes Observatory in Wisconsin. In addition to being a great way to learn and practice teaching pedagogy in a very hands-on fashion, working with the students is simply tons of fun.


  • Search for heavy vector-like quarks coupling to light quarks in proton-proton collisions at √s = 7 TeV with the ATLAS detector, Phys Lett B 712, 22 (2012).
  • Search for Resonant ZZ Production in the ZZ → ℓℓqq Channel with the ATLAS Detector Using 7.2 fb-1 of √s = 8 TeV pp Collision Data, ATLAS-CONF-2012-150, Nov. 2012.

Back to Top


Eric Oberla

B.S., Ohio State University, 2008 (Physics, Summa cum laude)
M.S., University of Chicago, 2009 (Physics)
Ph.D. (2015), Dept. of Physics
Research: Experimental High-energy Physics / Instrumentation
Awards: Robert Millikan Fellowship (Dept. of Physics), Grainger Graduate Fellowship (Dept. of Physics), Nathan Sugarman Graduate Research Award (Enrico Fermi Institute), Best Talk Young Speaker (TIPP2014 Conference, Amsterdam)
Research Advisor: Henry Frisch

Many of the big open questions in particle physics, such as the matter-antimatter asymmetry in our present day universe or the nature of the neutrino mass, necessitate the building of larger, higher sensitivity, and more cost-effective particle detectors. In most of these detectors, some, if not all, of the detected signal is simply visible light (photons) that is created in various processes (Cherenkov, scintillation, etc) after a particle interaction. The pattern and timing of the photons collected by photo-sensors are analyzed to reconstruct and understand these rare interactions.

My graduate research has been broadly focused around the development of advanced photo-sensors as part of the Large Area Picosecond Photo-Detector (LAPPD) collaboration. Our diverse, interdisciplinary group is made up of particle physicists, electrical engineers, material scientists, and detector physicists. We are developing large-area micro-channel plate (MCP) detectors, which allow for the detection of individual photons with timing and spatial resolutions of ~50 picoseconds and a few millimeters, respectively. These properties, incorporated with new technologies that make these LAPPD MCP photo-sensors more cost effective than similar commercial products, open up new possibilities for detectors in particle physics and related fields.

My thesis project is the demonstration of a new type of particle detector relevant for neutrino physics, which takes advantage of the fine time and spatial resolutions of MCP photo-sensors. We call it the 'Optical Time Projection Chamber' (OTPC). The idea is somewhat similar to the liquid argon TPC, in which a neutrino-nucleon interaction creates ionization electrons that drift along field lines towards readout anode planes. The time-projection of these electrons onto the anode, drifting at a velocity of a few mm/microsecond, allows for amazing detail and 3D tracks of the particles created in the neutrino interaction. What if a non-cryogenic, water-based detector could reconstruct particle tracks using 'drifted' optical photons, which travel at a few hundred thousand mm/microsecond? That is the idea of the OTPC: reconstructing 3D tracks of relativistic particles using the emitted Cherenkov photons. I've built a small-scale OTPC water detector that uses several MCPs with some minimal reflecting optics, and we will test its operation at an upcoming test-beam run at the Meson Test Beam Facility at nearby Fermi National Accelerator Laboratory. A successful demonstration of the OTPC technology will be an important first step toward scaling up to larger detectors capable of 'real physics', including short-baseline neutrino detectors, neutrinoless double beta decay, and even medical physics applications in positron emission tomography.

Lastly, I've spent a large part of my graduate career taking advantage of the industry-class design tools and electronics engineering expertise at the Electronics Design Group in the Enrico Fermi Institute. I led the design of an 'oscilloscope on-a-chip' Application Specific Integrated Circuit (ASIC), which is now the readout chip for the LAPPD photo-sensors and the OTPC detector. Named 'PSEC4', it was designed in 0.13 micron CMOS and its specific application is the digitization of waveforms sampled at up to 15 Gigasamples-per-second (~60 picosecond sampling steps). PSEC4 has been adopted by several other HEP groups around the country, and has been used as the readout ASIC for ground-penetrating radar in a civil engineering application, as well as for x-ray spectroscopy at Sandia National Lab.


  • E. Oberla, et al "A 15 GSa/s, 1.5 GHz bandwidth waveform digitizing ASIC", Nucl.Instrum.Meth. A735 (2014) 452-461. [arXiv:1309.4397].
  • B. Adams, et al, "Measurements of the gain, time resolution, and spatial resolution of a 20×20 cm2 MCP-based picosecond photo-detector," Nucl. Instrum. Meth. A732 (2013) 392-396.
  • B. Adams, et al, "A test-facility for large-area microchannel plate detector assemblies using a pulsed sub-picosecond laser", Rev. Sci. Instrum. 84, 061301 (2013).
  • M. Cooney, et al, "Multipurpose Test Structures and Process Characterization using 0.13 μm CMOS: The CHAMP ASIC", Physics Procedia 37 (2012) 1699-1706.

Back to Top


Callum Quigley

B.Sc., University of Toronto, 2003 (Physics)
B.Sc., University of Toronto, 2003 (Mathematics)
M.Sc., University of British Columbia, 2005 (Physics)
M.Sc., University of Toronto, 2006 (Mathematics)
Ph.D. (2013), Dept. of Physics
Research: String Theory
Awards: Robert R. McCormick Fellow, Robert G. Sachs Fellow, NSERC Postgraduate Scholar, Gregor Wentzel Research Prize, Sidney Bloomenthal Fellow
Research Advisor: Savdeep Sethi

My research at the University of Chicago has focused on string theory, which is a branch of physics that begins with the simple question, “What if, instead of point-like particles, everything was made of tiny, one-dimensional, vibrating strings?” By demanding self-consistency of the theory, this one simple assumption leads to a spectacular framework that unifies all the ingredients one might expect in a fundamental theory of Nature, including non-Abelian gauge theories coupled to chiral fermions (as we have in the Standard Model), and most amazingly a quantum-mechanical theory of gravity.

One curious requirement that the consistency of the theory imposes is that there must exist six spatial dimensions beyond the three that we observe. Including time, this means string theory must live in a ten-dimensional spacetime. So what happens to all those extra dimensions? The most common answer goes back nearly a hundred years (well before the advent of string theory) to Kaluza and Klein, who suggested that extra dimensions may be curled up into a compact manifold so small we cannot detect them. To understand this idea better, it helps to think of a garden hose: we know that the surface of the hose has two dimensions, but from very far away you will not be able to resolve its circumference and you might think it only has extent in one direction. In this simple analogy, the “extra” dimension of the hose has been compactified into a small circle. In string theory the same idea applies, but the geometry of the six-dimensional space is usually much more non-trivial. All of this would seem an unnecessary complication if not for the fact that the geometry of the compactified space actually determines the number, masses and interactions of the particles we observe in our four-dimensional (non-compact) spacetime.

Unfortunately the simplest, and best studied, solutions with compactified extra dimensions have a fatal flaw: for each inequivalent way one can deform the geometry of the internal space there is a corresponding massless scalar particle, in the four-dimensional spectrum. These scalar fields are called moduli. This is a phenomenological disaster, as we have observed precisely zero massless scalar particles in Nature. Fortunately, string theory offers a remedy, which is the following. In addition to gravity, string theory contains generalizations of electromagnetic fields, as well. Turning on non-trivial fluxes of these generalized magnetic fields can lead to solutions where parts of the geometry can no longer be deformed. Thus the troublesome moduli fields are eliminated. These flux compactifications currently offer the best hope of connecting string theory solutions with real world particle physics and cosmology.

One powerful approach to studying the dynamics of a string is to consider the quantum field theory (QFT) that lives on its two-dimensional worldsheet, which is analogous to a point-particle’s one-dimensional worldline. This works extremely well in simple situations like flat spacetime, but for more involved backgrounds, such as flux compactifications, it is essentially impossible to carry out directly. A large part of my research with Prof. Sethi and collaborators focuses on developing tools to study the worldsheet theories of flux compactifications that bypass these difficulties. The basic idea is to find a simpler two-dimensional QFT that reduces, in a certain limit, to the one you are interested in. It is then possible to carry out computations in the simple description and take the appropriate limit to extract information about the flux compactification. With this tool we have been able to build new classes of flux backgrounds and have begun to study their properties. One feature we have discovered so far is that these models do indeed have fewer moduli than their counterparts without flux, a reassuring confirmation of what we expected. Much remains to be discovered in these models, and they are still under intense investigation.

Rather than pulling everything back to the worldsheet, it is also possible to study string theory directly in its ten-dimensional spacetime. In principle this would require a string field theory, which would capture the infinite tower of vibrational modes of the string at each point in spacetime. However at low energies it suffices to consider only the lowest excitations of the string, which are all massless states. Some of these include the graviton (the force carrier for gravity, analogous to the photon for E&M), the generalized electromagnetic fields used in flux compactifications, and the moduli fields (if present). The effective description of these massless states is captured by a ten-dimensional QFT called supergravity, which is a supersymmetric extension of Einstein’s theory of General Relativity. String theory corrects the supergravity action by an infinite set of higher-derivative terms, which are all suppressed by the Planck scale. This expansion is valid so long as we restrict to low energy phenomena, and the curvatures of spacetime remain small.

Another aspect of my research uses this expansion to investigate how supergravity results are modified once the leading higher-derivative corrections are taken into account. Our most recent result on this topic, together with Prof. Martinec and another graduate student, is an application of this idea to cosmology. It has been known for decades that supergravity alone cannot lead to accelerated expansion of the Universe, contrary to what we observe. We asked if higher-derivative interactions might improve the situation. We found that if we ignore the dynamics of everything but the four-dimensional metric, effectively freezing all the other supergravity fields at some fixed values, then accelerated expansion is not possible. This rules out the possibility of finding de Sitter solutions in a wide class of string backgrounds where the supergravity approximation is valid. This strongly indicates that in order to build de Sitter vacua, and make contact with the real world, we cannot truncate string theory to just the massless sector. Instead we need a complete description of the theory, such as the two-dimensional approach discussed above.


  • C. Quigley, S. Sethi, M. Stern, “Novel Branches of (0,2) Theories,” JHEP 1209 (2012) 064, [arXiv:1206.3228].
  • S. R. Green, E. J. Martinec, C. Quigley, S. Sethi, “Constraints on String Cosmology,” Class. Quant. Grav. 29 (2012) 075006, [arXiv:1110.0545].
  • C. Quigley, S. Sethi, “Linear Sigma Models with Torsion,” JHEP 1111 (2011) 034, [arXiv:1107.0714].
  • L. Anguelova, C. Quigley, “Quantum Corrections to Heterotic Moduli Potentials,” JHEP 1102 (2011) 113, [arXiv:1007.5047].
  • L. Anguelova, C. Quigley, S. Sethi, “The Leading Quantum Corrections to Stringy Kahler Potentials,” JHEP 1010 (2010) 065, [arXiv:1007.4793].

Back to Top

Andrew Royston

B.S., University of Cincinnati 2002 (Physics with High Honors)
B.A., University of Cincinnati 2003 (Mathematics with Honors)
M.S., University of Chicago 2005 (Physics)
Ph.D. (2010), Dept. of Physics
Research: String Theory
Awards (undergrad): Dean's List, Cincinnatus Foundation Fellowship, Procter & Gamble Co. Scholarship
Awards (graduate): Gregor Wentzel Prize, Dept. of Physics, GAANN Teaching Fellowship, Dept. of Education
Research Advisor: Jeffrey Harvey

I have conducted my Ph.D. research in string theory, focusing on the connections between four-dimensional field theories and branes. String theory is a description of nature in which the fundamental objects are not particles, but tiny one-dimensional strings. The various vibrational modes of the string give rise to objects resembling particles of different masses and spin. It was realized in the mid-1990s that string theory also contains membranes, or “branes” for short. These higher dimensional cousins of the string come in many varieties, from the two-dimensional membrane to a nine-dimensional brane that fills all the spatial directions available in string theory. Branes are interesting objects because open strings, i.e. strings with endpoints, must have those endpoints stuck to a brane, and this effectively localizes open strings to the hypersurfaces spanned by branes. This should be contrasted with closed strings, i.e. loops of string without endpoints, which can propagate freely in the ten-dimensional “bulk” spacetime of string theory.

The connection between branes and field theories—in particular gauge theories such as quantum electrodynamics—is the following. At energy scales well below the masses of excited string states, it is reasonable to focus only on the interactions of the lightest string modes, which are usually massless or nearly massless. In this limit it turns out that the theory of open strings reduces to a gauge theory! (In complete analogy, the interactions of closed strings reproduce general relativity, Einstein’s theory of gravity, in the low energy limit). The type of gauge theory obtained from open strings depends on the type of branes involved and how they are embedded in the bulk. There are nearly endless possibilities. A major area of current research is to engineer a brane system that reproduces exactly the Standard Model at low energies. While this is difficult to achieve precisely, it now appears that string theory may have many “standard model-like” vacua. This raises a couple of questions.

Firstly, why bother with strings, branes, and ten dimensions if the only goal is to reproduce the standard model, a theory that we’ve understood quite well without all these notions for over thirty years? The answer is that the standard model is only an effective theory, valid at the energy scales we’ve probed thus far. It’s fully expected that we’ll see physics beyond the standard model at the upcoming Large Hadron Collider (LHC) experiments in Geneva, Switzerland. Arguably the most anticipated discovery will be supersymmetry, a hypothesized approximate symmetry of nature, whose main observable consequence is that every particle should have a supersymmetric partner particle. String theory has supersymmetry built in, so the discovery of supersymmetry would be strong evidence in favour of string theory, though it certainly wouldn’t prove it. One way that string theory might be distinguished from other supersymmetric extensions of the standard model is the way in which supersymmetry is broken in the theory. This could lead to testable predictions for string theory at the LHC, but more work needs to be done to understand the mechanisms of supersymmetry breaking in string theory.

A second question concerns the issue of vacuum selection. If string theory contains many states with different brane configurations, why did we end up in the one we did, described by our standard model at low energies? A most satisfying, but difficult to establish, answer would be that the dynamics of string theory in the early universe drove us to the vacuum we are in now. This idea is referred to as dynamical vacuum selection.

In research with Professor David Kutasov and collaborators, I addressed the issues of supersymmetry breaking and dynamical vacuum selection in a particular class of brane constructions. The brane configurations we considered may play a role in supersymmetric extensions of the standard model. Within these configurations, we showed how to describe a set of phenomenologically interesting supersymmetry breaking vacua, previously understood only in the string theory/brane picture, from the low energy field theory point of view. The system contains many other vacua as well, both supersymmetric and supersymmetry breaking. We found that early universe dynamics naturally drives the system to the most phenomenologically interesting of all these vacua. We are currently trying to adapt the techniques we developed in this class of systems to other systems, which may provide a more natural description of the compactification of string theory’s ten dimensions down to four.

Another area of tremendous activity in string theory goes under the headings “AdS/CFT correspondence” or “gauge/gravity duality.” It began with a conjecture, motivated by studying brane systems in string theory, that certain scale invariant gauge theories known as conformal field theories (CFT) have mathematically equivalent, or “dual” descriptions in terms of string theory in particular geometric backgrounds. (In the prototypical example, the background geometry is anti-de Sitter (AdS) space). These backgrounds have one additional dimension beyond the dimensions of the gauge theory. For example, a four-dimensional CFT would be dual to string theory in a particular five-dimensional background. At low energies, the string theory is well described by gravity, hence gauge/gravity duality. There is mounting evidence that the duality is not restricted to CFTs. Even ordinary gauge theories such as quantum chromodynamics—the theory of quarks and gluons—may have a dual string theory description. The usefulness of the duality is that it is a strong/weak relationship. When the gauge theory is strongly coupled (hard to compute with), the string theory will be weakly coupled (easy to compute with) and vice versa.

In previous work with my advisor, Professor Jeff Harvey, I studied gauge/gravity duality in a particular brane system. The system we chose has some features in common with QCD, but has other simplifying features that afforded us analytic control over the analysis. We found significant evidence for the conjectured duality, as well as several novel features. In particular, the field theory itself was in a curved background and the dual string theory in a different curved background. This particular model is fascinating from a theoretical viewpoint, and teaches us about the mathematical workings of the duality, but it is not intended to provide a description of real world phenomena. Currently, with Prof. Harvey and a fellow student, I am working on a more hands-on model, which provides a dual description of the low lying meson spectrum in QCD. This is an energy regime where QCD is strongly coupled and it is impossible to use the field theory description directly. We are using experimentally measured decay rates to constrain the parameters of our model.


  • D. Kutasov, O. Lunin, J. McOrist and A. B. Royston, “Dynamical Vacuum Selection in String Theory,” [arXiv:0909.3319].
  • A. Giveon, D. Kutasov, J. McOrist and A. B. Royston, “D-terms and Supersymmetry Breaking from Branes,” Nucl. Phys. B822 (2009) 106, [arXiv:0904.0459].
  • J. A. Harvey and A. B. Royston, “Gauge/gravity duality with a chiral N=(0,8) string defect,” JHEP 08 (2008) 006, [arXiv:0804.2854].
  • J. A. Harvey and A. B. Royston, “Localized modes at a D-brane—O-plane intersection and heterotic Alice strings,” JHEP 04 (2008) 018, [arXiv:0709.1482].

Back to Top

Satomi Shiraishi

B.A., University of Chicago, 2007 (Physics with Honors)
B.S., University of Chicago 2007 (Mathematics)
Ph.D. (2013), Dept. of Physics
Research: Accelerator Physics
Awards: Robert R. McCormick Fellow, Robert G. Sachs Fellow, Gaurang & Kanwal Yodh Prize (Dept. of Physics)
Research Advisors: Young-Kee Kim and Wim Leemans

State-of-the-art particle accelerators are used in many scientific disciplines, including biology, materials science, and particle physics. Improved understanding of plasma physics and advances in laser technology have opened up a new field of R&D for the next generation of accelerators. Under the faculty guidance of Professor Young-Kee Kim and the research supervision of Dr. Wim Leemans, I study laser-plasma accelerators (LPAs) at the LOASIS Program at Lawrence Berkeley National Laboratory.

Experimental investigation of LPAs is a topic pushing the limits of physics and technology. In 1979, Tajima and Dawson proposed using the fourth state of matter, plasma, as a medium for converting electromagnetic energy into the kinetic energy of charged particles. In principle, this novel concept offered the potential to reduce the size of accelerators by a factor of a thousand: plasma can sustain electric fields thousands of times larger than a conventional radio-frequency cavity. Experimental investigation of LPAs has advanced rapidly since the 1990s, following the invention of the chirped-pulse amplification (CPA) technique for producing ultra-high-intensity laser pulses. Employing a hydrogen-filled capillary waveguide, high-quality electron beams (e-beams) with GeV energies have been produced within a few centimeters. Today, improved understanding of plasma physics and rapidly advancing laser technologies are producing higher and higher intensity laser pulses and continue to expand the realm of LPA experiments.
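The thousand-fold field advantage quoted above can be made concrete with the standard cold, nonrelativistic wave-breaking field from plasma physics, E0 = c·m_e·ω_p/e ≈ 96·√(n0 [cm⁻³]) V/m. The short sketch below (the density chosen is an illustrative textbook value, not a number from these experiments) evaluates it at a typical LPA plasma density:

```python
import math

def wave_breaking_field(n0_cm3):
    """Cold, nonrelativistic wave-breaking field E0 = c*m_e*w_p/e,
    approximately 96 * sqrt(n0 [cm^-3]) volts per meter."""
    return 96.0 * math.sqrt(n0_cm3)  # V/m

# An illustrative LPA plasma density of 1e18 cm^-3:
E0 = wave_breaking_field(1e18)
print(f"E0 ~ {E0 / 1e9:.0f} GV/m")  # ~96 GV/m, vs ~0.01-0.1 GV/m in RF cavities
```

Gradients of tens of GV/m are what allow GeV electron beams to be produced within a few centimeters, compared with the tens of MV/m typical of conventional radio-frequency structures.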

Currently, I participate in experiments using 100 TW-class, 40 fs laser pulses to understand and diagnose laser propagation in plasma, with application to future high-energy accelerators. My thesis topic, referred to as staging of LPAs, involves adding a second, independently driven laser acceleration stage to boost the electron energy from the primary stage. So far, LPA experiments have used only a single driving laser pulse. Staging is necessary because, by exciting plasma wakes, a driving laser pulse transfers energy to the plasma and eventually becomes too weak to excite wakefields. The staging experiment involves precise control of two high-intensity short laser pulses and plasma modules, coupled in a very compact manner using a disposable plasma mirror. This experiment represents a milestone in the development of LPAs and makes them even stronger candidates for next-generation accelerator technology.

An understanding of laser-plasma interaction using the laser profile and its spectra is another of my research topics. Plasma is a dynamic accelerating structure that changes with time as well as with the shape and the strength of the driving laser pulse. This dynamic nature of the plasma structure represents a freedom and also a challenge in controlling the particle acceleration, and a single-shot diagnostic of the plasma wave is critical. By comparing how the driving laser pulse changes in shape and color before and after the interaction with plasma, we intend to deduce the properties of the excited wakefield structure. In particular, we learn about the energy transfer from laser to plasma and obtain a measure of electric field amplitude. Along with simulation studies, the shape of accelerating structures can be inferred. Understanding the laser-plasma interaction and the development of a single-shot diagnostic using laser spectra are complex topics but critical for the successful development of LPAs.


    • S. Shiraishi, C. Benedetti, A. J. Gonsalves, K. Nakamura, B. H. Shaw, T. Sokollik, J. van Tilborg, C. G. R. Geddes, C. B. Schroeder, Cs. Toth, E. Esarey, and W. P. Leemans, "Laser red shifting based characterization of wakefield excitation in a laser-plasma accelerator," Phys. Plasmas 20, 063103 (2013).
    • G. R. Plateau, C. G. R. Geddes, D. B. Thorn, M. Chen, C. Benedetti, E. Esarey, A. J. Gonsalves, N. H. Matlis, K. Nakamura, C. B. Schroeder, S. Shiraishi, T. Sokollik, J. van Tilborg, Cs. Toth, S. Trotsenko, T. S. Kim, M. Battaglia, Th. Stoehlker, and W. P. Leemans, "Low-Emittance Electron Bunches from a Laser-Plasma Accelerator Measured using Single-Shot X-Ray Spectroscopy," Phys. Rev. Lett. 109, 064802 (2012).
    • C. Lin, J. van Tilborg, K. Nakamura, A. J. Gonsalves, N. H. Matlis, T. Sokollik, S. Shiraishi, J. Osterhoff, C. Benedetti, C. B. Schroeder, Cs. Toth, E. Esarey, and W. P. Leemans, "Long-Range Persistence of Femtosecond Modulations on Laser-Plasma-Accelerated Electron Beams," Phys. Rev. Lett. 108, 094801 (2012).
    • A. J. Gonsalves, K. Nakamura, C. Lin, D. Panasenko, S. Shiraishi, T. Sokollik, C. Benedetti, C. B. Schroeder, C. G. R. Geddes, J. van Tilborg, J. Osterhoff, E. Esarey, C. Toth, W. P. Leemans, "Tunable Laser Plasma Accelerator based on Longitudinal Density Tailoring," Nature Physics 7, 862 (2011).
    • A. J. Gonsalves, K. Nakamura, C. Lin, J. Osterhoff, S. Shiraishi, C. B. Schroeder, C. G. R. Geddes, Cs. Toth, E. Esarey, W. P. Leemans, "Plasma Channel Diagnostic Based on Laser Centroid Oscillations," Phys. Plasmas 17, 056706 (2010).
    • G. R. Plateau, N. H. Matlis, C. G. R. Geddes, A. J. Gonsalves, S. Shiraishi, C. Lin, R. A. van Mourik, and W. P. Leemans, "Wavefront-sensor-based electron density measurements for laser-plasma accelerators," Rev. Sci. Instrum. 81 (3), 033108 (2010).

Back to Top

Mikhail Solon

B.S., Univ. of the Philippines, 2009 (Physics, Summa cum laude)
M.S., University of Chicago, 2010 (Physics)
Ph.D. (2014), Dept. of Physics
Research: Theoretical High-energy Physics
Awards: Bloomenthal Fellowship, Robert R. McCormick Fellowship, Sachs Fellowship (Dept. of Physics), Oblation Scholar, Gawad Chancellor Outstanding Student, BPI-DOST Best Project of the Year (U-Philippines)
Research Advisor: Richard Hill

My research concerns particle physics that involves rich quantum field theory structures such as symmetries and the interplay of multiple scales through effective field theory. This includes the development of tools for controlled theoretical calculations, and their application for making robust predictions within and beyond the Standard Model. These activities are aimed at complementing the wealth of data from experimental frontiers, but often require new understanding of basic ideas in quantum field theory.

Effective field theory is the description of physics in terms of its underlying symmetries, its relevant degrees of freedom, and a power counting expansion based on the scales of the system. These simple ingredients lead to a framework for efficient and precise calculations that is particularly useful in identifying universal features of a physical process, i.e., in factorizing a system into physics at different scales.

One framework I have helped develop is the effective field theory for heavy or nonrelativistic particles. The scale separation between heavy and light degrees of freedom underlies the universality of heavy-particle interactions, as echoed in the predictions of heavy-quark systems, nonrelativistic atomic spectra, and the scattering of heavy dark matter off a nucleon. An important question is how to construct such field theories without matching to a microscopic theory. This is relevant to applications where the microscopic theory is unknown as in the case of dark matter, or may not even exist as in the case of a bound state arising from strong dynamics. The key is in understanding how the symmetries of spacetime are implemented. In work with Prof. Richard Hill and collaborators, I showed that the usual finite dimensional representations of the Lorentz group are not applicable to the case of heavy particles, and one must instead use induced representations. Employing the time-like class of such representations, I developed the formalism for constructing heavy particle effective Lagrangians with constraints enforcing Lorentz invariance. This opened up new questions such as the relation of induced representations to nonlinearly realized subgroups, and the possibility of applying the light-like class of induced representations towards a rigorous analysis of soft-collinear effective theory.

At a practical level, these formal developments have led to new applications of heavy-particle methods for studying properties of nucleons and the interaction of dark matter with the Standard Model. With Prof. Richard Hill and collaborators, I developed high-order nonrelativistic QED. This provides the rigorous framework for a range of phenomenological analyses, such as computing radiative corrections to low-energy lepton-nucleon scattering, analyzing generalized electromagnetic moments of a nucleon, and understanding a sharp discrepancy in proton charge radius measurements through scrutinizing proton structure effects in atomic bound states. In a series of papers with Prof. Richard Hill, I identified universal behavior in the scattering of heavy, weakly interacting dark matter on nuclear targets. The universality emerges when the dark matter is much heavier than the electroweak-scale particles, and is motivated in part by the absence thus far of new states at the LHC. The recent determination of the Higgs boson mass and improvements in lattice studies of nucleon matrix elements allow for definite predictions in the heavy dark matter limit, but demand a robust analysis of dark matter-nucleon interactions. The complete treatment of this phenomenon requires effective field theory to link physics at different scales and to provide a systematically improvable method of computation. The resulting cross section targets have minimal model dependence and may be probed in underground search experiments such as XENON and LUX.

Selected Publications:

      • R. J. Hill and M. P. Solon, "WIMP-nucleon scattering with heavy WIMP effective theory," arXiv:1309.4092 [hep-th].
      • R. J. Hill, G. Lee, G. Paz, and M. P. Solon, "The NRQED lagrangian at order 1/M^4," Phys. Rev. D 87, 053017 (2013).
      • J. Heinonen, R. J. Hill, and M. P. Solon, "Lorentz invariance in heavy particle effective theories," Phys. Rev. D 86, 094020 (2012).

Back to Top

Cacey Stevens

B.S., Southern University-Baton Rouge, 2008 (Physics with Honors)
M.S., University of Chicago, 2010 (Physics)
Ph.D. (2015) Dept. of Physics
Research: Experimental Soft Condensed Matter
Awards: Robert Millikan Fellowship (Dept. of Physics), Chairman's Award for Distinguished Service (Dept. of Physics), NSF Graduate Research Fellowship, Chancellor's Award (Southern U), Willie H. Moore Scholarship (NSBP), Minority Scholarship (APS)
Research Advisor: Sidney Nagel

I am a Ph.D. candidate in experimental soft condensed matter physics working with Prof. Sidney Nagel. The Nagel group studies far-from-equilibrium phenomena on a macroscopic scale such as how liquids break apart or coalesce, how drops behave on a really hot surface, how sand flows and how things become jammed.

My research investigates the following question: How does a liquid drop behave when it impacts a dry solid surface? If it hits at a sufficiently high velocity, we expect the drop to splash, breaking apart into many smaller droplets. The splash, of course, depends on surface roughness and liquid properties such as surface tension or viscosity. For many years, researchers only considered these control parameters when studying drop impact. However, there is another surprisingly essential parameter for creating a splash: the ambient gas pressure. Our group has recently shown that one can prevent a drop from splashing by decreasing the ambient pressure below a threshold value that depends on liquid and surface properties. Using high-speed imaging, we have investigated how the surrounding air influences the splash dynamics.

For one of my projects, we developed a criterion for when a low-viscosity liquid drop will splash on smooth, dry glass. Even for this seemingly simple occurrence, it is difficult to define the onset of splashing in terms of all parameters. We determined the splash threshold pressure as a function of impact speed, liquid viscosity, and drop size. We found that by rescaling the axes in terms of dimensionless variables, we could collapse all our data onto a single master curve.
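The data-collapse procedure described above can be illustrated with synthetic numbers. The power law and prefactor below are invented purely for illustration (they are not the scaling found in this work); the point is only that rescaling one axis by the right combination of parameters makes data sets for different drop sizes fall on a single master curve:

```python
# Pretend the threshold pressure follows P_t = A * u0**-1 * R**-0.5,
# an invented form used only to demonstrate the collapse.
A = 50.0
speeds = [2.0, 4.0, 6.0, 8.0]            # impact speeds u0 (m/s)
for R in (1.0e-3, 2.0e-3):               # two drop radii (m)
    thresholds = [A * u0**-1 * R**-0.5 for u0 in speeds]  # "measured" P_t
    rescaled = [P * R**0.5 for P in thresholds]           # remove R dependence
    # After rescaling, both radii trace the same master curve A / u0:
    assert all(abs(y - A / u0) < 1e-9 for y, u0 in zip(rescaled, speeds))
```

In the actual experiments the rescaling is done with dimensionless groups built from viscosity, surface tension, drop size, and impact speed, but the logic is the same: a good choice of variables removes the explicit dependence on each control parameter.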

If the drop viscosity is increased to over three times that of water, the splash evolves more slowly than that of very low viscosity liquids. In this regime we can see the splash evolution more clearly. As the drop spreads, a thin sheet of liquid is ejected and then breaks into droplets. We have shown that decreasing the air pressure acts to delay sheet ejection until, below a critical value, splashing stops entirely.

If the surface is rough, with an average roughness of a few microns, the impacting drop no longer ejects a thin sheet, as it does on a smooth surface, but promptly ejects droplets at contact. This allows a range of intermediate surface roughness for which no splash occurs. However, splashes on rough surfaces are still influenced by air pressure; as the pressure is lowered, droplet ejection is suppressed for prompt splashing. Thus, air pressure effects are robust for drop impact on dry surfaces. My research projects represent only a few directions that our group has taken to understand splashing phenomena.

While in graduate school, I have also been involved in several education and outreach activities. As a student coordinator of University of Chicago MRSEC Science club, an after-school program at a public elementary school near the University, I teach science concepts to young students through simple experiments and demonstrations. The students meet a diverse group of scientists and are encouraged to pursue careers in science. I have also gained experience engaging children in science as a guest lecturer for the Junior Science Cafes of the Museum of Science and Industry. Because of my interest in education, I am also involved in a research project that, using analysis methods from fluid mechanics, allows one to visualize patterns in educational data. In this work, I represent school performance scores on Illinois math and science exams as flow vectors and extract information about educational progress from the flow patterns. These flow charts could ultimately be used to show the most effective math and science educational programs in public schools.


  • C. S. Stevens, A. Latka, and S. R. Nagel, "Comparison of Splashing in High and Low Viscosity Liquids," Phys. Rev. E 89, 063006 (2014).
  • C. S. Stevens, "Scaling of the Splash Threshold for Low Viscosity Fluids," EPL 106, 24001 (2014).
  • A. Latka, A. Strandburg-Peshkin, M. M. Driscoll, C. S. Stevens, and S. R. Nagel, "Creation of Prompt and Thin-Sheet Splashing by Varying Surface Roughness or Increasing Air Pressure," Phys. Rev. Lett. 109, 054501 (2012).
  • M. M. Driscoll, C. S. Stevens, and S. R. Nagel, "Thin Film Formation During Splashing of Viscous Liquids," Phys. Rev. E 82, 036302 (2010).

Back to Top

Kyle Story

B.A., Cornell University, 2007 (Physics) 
B.A., Cornell University, 2007 (Mathematics) 
M.S., University of Chicago, 2009 (Physics) 
Ph.D. (2015) Dept. of Physics 
Research: Experimental Cosmology
Awards: Grainger Graduate Fellowship, Robert R. McCormick Fellowship (Dept. of Physics), William Rainey Harper Dissertation Fellowship (Physical Sciences Division), NSF Graduate Fellowship Honorable Mention
Research Advisor: John Carlstrom

As physicists, we live in a remarkable time in which we can quantitatively study our entire observable universe. Over the past few decades, physicists have made striking progress towards a robust standard model of cosmology; we know we live in a universe that continues to expand from a hot, dense origin, and we have a conceptual framework that successfully describes phenomena from the scale of single atoms (such as the formation of the light elements) up to the largest scales in the observable universe. Yet much remains mysterious. This standard model posits that only ~5% of the total energy density of the universe consists of ordinary matter, while poorly understood dark matter and dark energy contribute the remaining ~25% and ~70%, respectively. According to this model, the very early universe underwent a period of super-luminal expansion known as "inflation." We know that neutrinos have mass but have not measured the absolute mass, and there are tantalizing hints that neutrino physics could be more complicated than the standard 3-flavor picture.

Advances in the field of cosmology are driven by measurements and observations. One of the richest sources of information is the Cosmic Microwave Background (CMB). Serendipitously discovered in 1964, the CMB is thermal radiation left over from the hot, dense early universe. The universe cooled as it expanded from its hot, dense origin, and after ~380,000 years the ambient temperature dropped below the ionization energy of hydrogen. At this transition, the universe rapidly became neutral as most of the free electrons and protons paired off to form hydrogen, and the universe quickly became transparent to light. The photons that last scattered at this time have been streaming through the universe ever since, and comprise the CMB. Thus, observations of the CMB can give physicists an incredibly informative snap-shot picture of the infant universe.
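A quick back-of-the-envelope check on this picture: the CMB temperature scales with redshift as T(z) = T0(1 + z), so at the commonly quoted recombination redshift z ≈ 1100 (a textbook value, not a number taken from this profile) the ambient temperature was roughly 3000 K, the scale at which hydrogen can remain neutral:

```python
# Temperature of the universe at recombination, using T(z) = T0 * (1 + z).
T0 = 2.725       # present-day CMB temperature (K)
z_rec = 1100     # approximate redshift of recombination (textbook value)
T_rec = T0 * (1 + z_rec)
print(f"T at recombination ~ {T_rec:.0f} K")  # ~3000 K
```

The same scaling is why the radiation, emitted as visible/infrared light at ~3000 K, reaches us today redshifted into the microwave band.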

In my research, I observe and study the CMB to understand the basic physics of how the universe works. Our group built and continues to operate the South Pole Telescope (SPT), a millimeter-wave telescope located at - can you guess it? - the South Pole in Antarctica. In the austral summer of 2011-2012, I helped deploy a new polarization-sensitive camera for the SPT, called SPTpol. As a part of this research I have traveled to the South Pole four times, with my fifth trip scheduled for this January.

In my graduate career, I have used data collected with the SPT to study several different science topics. As CMB photons travel through the universe, some will traverse clusters of galaxies and scatter off the intra-cluster gas, distorting the primary CMB spectrum in a process known as the Sunyaev-Zel'dovich effect (SZE). We use these distortions in SPT data to find and study these clusters of galaxies. Clusters of galaxies are informative since they trace large-scale dark matter structures and are sensitive to the composition and expansion history of the universe. In 2011 the Planck satellite published its first catalog of SZE-selected clusters; I led observations and an analysis which used the SPT to confirm the five previously unconfirmed clusters in the southern hemisphere from that catalog.

In 2012, I led an analysis in which we used data from the full 2500 square-degree SPT-SZ survey to measure the power spectrum of the CMB. The power spectrum is a powerful way to quantify the statistical properties of the anisotropy in the CMB. This anisotropy arises from - and therefore probes - a wealth of interesting physics including the energy composition and expansion history of the universe, particle interactions at early times, and potential gravitational waves from inflation. Additionally, effects closer to today imprint signals in the CMB, including the SZE and gravitational lensing by large-scale dark matter structures. In a pair of papers resulting from the analysis I led, we published the most precise measurement of the CMB power spectrum over angular scales between ~0.06 and 0.25 degrees, and discussed the resulting constraints on models of cosmology.
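For readers used to multipole language, the angular scales quoted above can be translated with the standard rule of thumb ℓ ≈ 180°/θ (an approximation added here for orientation, not a formula from the papers themselves):

```python
def multipole(theta_deg):
    """Approximate multipole l probed by an angular scale theta in degrees,
    using the rule of thumb l ~ 180 / theta."""
    return 180.0 / theta_deg

# Angular scales of ~0.25 down to ~0.06 degrees:
print(round(multipole(0.25)), round(multipole(0.06)))  # 720 3000
```

So the measurement covers the high-ℓ "damping tail" of the power spectrum, roughly ℓ ~ 700-3000.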

For my thesis work, I am now focusing on the signal of gravitational lensing in the measurements taken with SPTpol. Gravitational lensing bends the paths of CMB photons as they travel from the surface of last scattering to Earth, thus distorting the anisotropy pattern of the CMB. The strength of this gravitational lensing signal is sensitive to the structure of ordinary and dark matter, cosmic acceleration (dark energy), neutrino physics, and the nature of gravity itself. Finally, gravitational lensing distorts the polarization patterns in the CMB, creating odd-parity swirl-patterns. Similar swirl-patterns could have been created by gravity waves from inflation in the very early universe; thus understanding and removing the lensing signal will be important in the search for these signals of inflation. The information we will gain by studying gravitational lensing in the CMB with SPTpol will shed light on all of these topics.

Selected Publications:

  • Hou et al., "Constraints on Cosmology from the Cosmic Microwave Background Power Spectrum of the 2500-square degree SPT-SZ Survey," arXiv:1212.6267 (2012).
  • Story et al., "A Measurement of the Cosmic Microwave Background Damping Tail from the 2500-square degree SPT-SZ Survey," arXiv:1210.7231 (2012).
  • Story et al., "South Pole Telescope software systems: control, monitoring, and data acquisition," Proceedings of SPIE 8451 (2012); arXiv:1210.4966 (2012).
  • Story et al., "South Pole Telescope Detections of the Previously Unconfirmed Planck Early Sunyaev-Zel'dovich Clusters in the Southern Hemisphere," Astrophys. J. Lett. 735, L36 (2011).

Back to Top

Ibrahim Sulai

B.S., Allegheny College 2004 (Physics)
M.S., University of Chicago 2006 (Physics)
Ph.D. (2011), Dept. of Physics
Research: Experimental Atomic & Nuclear physics
Awards: Nathan Sugarman Award (Enrico Fermi Institute), David W. Grainger Fellowship (Dept. of Physics)
Research Advisor: Zheng-Tian Lu

Working under the supervision of Zheng-Tian Lu, I have been involved in studies whereby the tools of precision atomic physics are applied to atoms with interesting nuclei in order to test nuclear structure theories and to search for the possible violation of discrete symmetries.

For the first couple of years, I worked on a project to measure the nuclear charge radius of the helium-8 isotope. This is a very neutron-rich system with a so-called neutron halo. Current advances in microscopic nuclear structure theories allow for the description of such few-body nuclear systems with increasing precision. An equally precise determination of the charge radius therefore serves as a test of these theories.

Because of the short half-life of helium-8 (119 ms), a traditional probe of the nuclear charge distribution using electron scattering on a fixed target could not be readily applied. Instead, our approach relied on determining the charge radius by performing precision atomic spectroscopy such that, in effect, the bound electrons probed the nucleus, yielding information about the finite nuclear size. The measurements were made on single helium atoms trapped in a magneto-optical trap. This was performed at GANIL, a cyclotron facility in Normandy, France, where the isotopes were produced.

Back in Chicago, at Argonne National Laboratory, I worked on laser cooling and trapping of radium atoms in preparation for a search for the permanent electric dipole moment (EDM) of radium-225. A permanent EDM necessarily vanishes if the discrete symmetries of parity (P) and time reversal (T) hold. A non-zero EDM would therefore signify the violation of these two symmetries. Radium-225 is believed to be particularly sensitive to interactions which are odd under P and T. This experiment is still underway.


      • Trimble et al. Phys. Rev. A 80, 054501 (2009).
      • Holt et al. Nucl. Phys. A 844, 53c (2010).
      • Sulai et al. Phys. Rev. Lett. 101, 173001 (2008).
      • Mueller et al. Phys. Rev. Lett. 99, 252501 (2007).

Back to Top

Arun Thalapillil

B.E., Birla Institute of Technology and Science - Pilani 2005 (Electrical & Electronics Engineering)
M.Sc., Birla Institute of Technology and Science - Pilani 2005 (Physics)
M.S., University of Chicago 2006 (Physics)
Ph.D. (2012), Dept. of Physics
Research: Elementary Particle Theory
Awards: Sidney Bloomenthal Fellowship (Dept. of Physics), Subrahmanyan Chandrasekhar Memorial Fellowship (Dept. of Physics)
Research Advisor: Jonathan L. Rosner

My research interests lie broadly in theoretical particle physics. All phenomena we have encountered to date in nature may ultimately be reduced to four fundamental interactions: gravitational, electromagnetic, weak, and strong. Particle physics deals mainly with the last three of these interactions. We currently have a very successful theory of elementary particles and their interactions, prosaically called the ‘Standard Model’ (SM). It is based on quantum field theory and has been well tested experimentally over many years. In spite of its remarkable success, though, there are compelling reasons to suspect that it is incomplete. The generation of particle masses, the hierarchy among those masses, the matter-antimatter asymmetry of the universe, and the nature of dark matter are among the open questions. My time in graduate school has been spent preparing for the next generation of collider and non-collider experiments, where some of these questions will be probed.

The Large Hadron Collider (LHC) at CERN, Geneva is the world’s highest-energy particle accelerator, one aim of which is to discover the Higgs boson, which is believed to give mass to all other elementary particles. The production and decay mechanisms of the Higgs boson at a collider have been extensively studied in the context of the SM and the Minimal Supersymmetric Standard Model (MSSM). But if the Higgs boson is relatively light and has exotic decays, for instance to four jets, the backgrounds would completely swamp the signal, and detection at the LHC would be almost impossible. My collaborators and I recently studied such a case of a relatively light Higgs boson decaying into four jets, and we were able to show that jet-substructure techniques can reduce the background sufficiently to enable detection.

Another aim of the LHC is to look for hints of new physics beyond the SM. A promising candidate in this direction is Supersymmetry, which predicts ‘superpartners’ for all particles in the SM (squarks for quarks, gluinos for gluons, etc.). My collaborators and I studied search strategies for associated squark and gluino production at the LHC, using jet-shape variables, in the case where the squark is heavy. Discovery in such a scenario is complicated because heavy squarks decay primarily into a jet and a boosted gluino, yielding a dijet-like topology with missing energy (MET) pointing along the direction of the second-hardest jet. As a result, many signal events are removed by standard jet/MET anti-alignment cuts designed to guard against jet-mismeasurement errors. We showed that replacing these anti-alignment cuts with a measurement of jet substructure can significantly extend the reach of this channel while still removing much of the background.

The possibility of light scalar or pseudoscalar particles in the GeV mass range has received renewed attention recently in the context of certain experimental anomalies and dark matter searches. We explored the consequences of a fermiophobic (i.e., not coupling to fermions) sector for bound states and astrophysics/cosmology. To make our treatment as general and comprehensive as possible, we considered fermiophobic Unparticles (which, in the limit of the scaling dimension tending to 1, reduce to scalar and axion-like particles). Apart from working out theoretical aspects of the Unparticle Uehling potential, energy-level ordering, and astrophysical constraints, we noted that if the nuclear/QED theory of high-Z muonic atoms improves, then muonic-atom spectroscopy can potentially complement collider-based searches. This is especially pertinent given the many upcoming and proposed experiments to look for coherent muon-electron conversion (lepton-flavor violation) in muonic atoms.

Recently, a novel parametrization and framework for studying gauge mediation (GM) models, termed general gauge mediation (GGM), was introduced in the literature; it showed that the actual space of possibilities in GM is larger than once thought. We examined features (for instance, the NLSP topography) and constraints of the MSSM/NMSSM with GGM in the context of low-energy observables such as the muon's anomalous magnetic moment and flavor observables. We found that these low-energy observables place strong constraints on the GGM parameter space, along with interesting relations among the various quantities.


      • J. Fan, D. Krohn, P. Mosteiro, A. M. Thalapillil, and L. T. Wang, Heavy Squarks at the LHC, JHEP 1103, 07 (2011) [arXiv:1102.0302 [hep-ph]].
      • A. M. Thalapillil, Low-energy Observables and General Gauge Mediation in the MSSM and NMSSM, JHEP 1106, 059 (2011) [arXiv:1012.4829 [hep-ph]].
      • A. Falkowski, D. Krohn, L. T. Wang, J. Shelton and A. Thalapillil, Unburied Higgs, [arXiv:1006.1650 [hep-ph]].
      • A. M. Thalapillil, Bound states and fermiophobic Unparticle oblique corrections to the photon, Phys. Rev. D 81, 035001 (2010) [arXiv:0906.4379 [hep-ph]].

Back to Top

Scott Waitukaitis

B.S., University of Arizona 2007 (Physics)
Ph.D. (2013), Dept. of Physics
Research: Experimental Soft-Condensed Matter Physics
Awards (U-Arizona): Outstanding Senior, Outstanding Research Presentation, Honors Transfer Scholarship
Awards (grad): Bruce Winstein Prize for Instrumentation, Robert A. Millikan Fellow, Robert R. McCormick Fellow, Robert G. Sachs Fellow (Dept. of Physics), Best Speaker (Electrostatics Society of America)
Research Advisor: Heinrich Jaeger

I work in experimental soft-condensed matter physics. As its name suggests, soft-condensed matter physics deals with materials that are in some sense "softer" than those studied in hard-condensed matter. Whereas the latter is primarily interested in crystalline solids, we study things like glasses, fluids, foams, gels, and granular matter. In a more abstract way, our field is soft in the sense that there is no one solid foundation on which its study is based.

My graduate research in soft matter has parallels to this description. Rather than being tied to a single thesis topic, I have been able to chase a number of my creative interests, ultimately tying together three major projects. While these three projects are all quite different on the surface, they are united by the common theme of shedding light on the physics of phenomena with which everyone has some familiarity yet little understanding.

The first project I worked on sought to explain a puzzling observation: grains of sand, when flowing out as a stream from a small hole, will undergo an instability to form droplets, just like water slowly flowing from a faucet and tip-tapping on the bottom of the kitchen sink. This phenomenon is totally unexpected given that macroscopic grains, unlike fluids, have typically been thought to lack surface tension. With the aid of a "freely-falling laboratory", atomic force microscope measurements detailing the nanoscale forces between these grains, and large-scale molecular dynamics simulations, my labmates and I were able to show that streams of grains do in fact have a microscopic surface tension. Although it is orders of magnitude smaller than that of common fluids, it works in conjunction with gravitational stretching to cause the stream to break up.

In my next project, I became interested in a dark little secret that physicists have known about for a long time. We all know that rubbing two different materials together leads to tribocharging, i.e. the exchange of electrical charge. Surprisingly, if you rub two identical materials together, more often than not they also exchange charge, and often in a systematic way. I studied this phenomenon in the context of a large ensemble of grains, a system where charging can lead to intense displays of lightning in volcanic clouds, huge electric fields in dust storms and cyclones, and terrible disasters such as silo explosions. By building a high-tech, ultra-sensitive version of the Millikan oil drop experiment for macroscopic grains, I was able to measure the charge and size of such grains simultaneously. Our work shows conclusively that the charging of grains is related to grain size; in a binary-sized system, large grains charge positively and small grains negatively. As a last step toward understanding this curious phenomenon, we are working on proving or disproving a longstanding model involving the non-equilibrium transfer of trapped-state electrons.
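The force balance behind such a measurement can be illustrated with a minimal sketch (the function names, density value, and example numbers below are illustrative assumptions, not the actual analysis pipeline): once imaging gives a grain's size, and hence its mass, the charge follows from Newton's second law applied to the grain's acceleration in a known electric field.

```python
import math

# Minimal sketch: infer a grain's charge from its tracked acceleration
# in a known electric field, Millikan-style. All numbers are illustrative.

def grain_mass(radius_m, density_kg_m3=2500.0):
    """Mass of a spherical grain; the glass-like density is an assumption."""
    return density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3

def grain_charge(radius_m, accel_m_s2, e_field_v_m, density_kg_m3=2500.0):
    """Charge q from q * E = m * a (air drag neglected for simplicity)."""
    return grain_mass(radius_m, density_kg_m3) * accel_m_s2 / e_field_v_m

# Example: a 150-micron grain accelerating at 0.5 m/s^2 in a 100 kV/m field
q = grain_charge(150e-6, 0.5, 1e5)  # charge in coulombs
```

In the real experiment, drag and gravity also act on a falling grain, so the analysis is more involved; the sketch keeps only the electrostatic term to show why measuring size and acceleration together pins down the charge.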

My final project found its inspiration on YouTube, where a short search reveals that a pool of cornstarch grains mixed with water creates an amazing substance. This material is fluid-like when perturbed lightly, but hardens when driven strongly, allowing people to run across it as if they were walking on water. Although this type of shear-thickening fluid has been studied extensively in a purely rheological context, I wanted to know exactly how someone can run across its surface. I built a simple experiment to test this. By shooting a rod into a large vat of this oobleck and using an arsenal of high-tech equipment to study the impact, I was able to show that this phenomenon is driven by the dynamic solidification of the cornstarch suspension. This impact-activated solidification is totally counterintuitive. Whereas most granular materials are fluidized by a sudden input of energy, this granular suspension is instead solidified. The momentum of the impacting object is quickly taken away as it causes this solid to grow and move, making these types of materials ideal for stress-response applications.


      • Waitukaitis, S.R. et al. Direct measurement of size-dependent, same-material tribocharging in insulating grains, in preparation.
      • Waitukaitis, S.R., Castillo, G.M., & Jaeger, H.M. A Granular Electrometer: Measuring the Charge Distribution of a Pile of Grains, in preparation.
      • Waitukaitis, S.R. & Jaeger, H.M. Solidificación de una suspensión de maicena y agua [Solidification of a cornstarch and water suspension], Revista Cubana de Física (2012).
      • Waitukaitis, S.R. & Jaeger, H.M. Impact-activated solidification of dense suspensions via dynamic jamming fronts, Nature 487, 205-209 (2012).
      • Waitukaitis, S.R. et al. Droplet and cluster formation in freely falling granular streams, Phys. Rev. E 83, 051302 (2011).
      • Royer, J.R. et al. High-speed tracking of rupture and clustering in freely falling granular streams, Nature 459, 1110-1113 (2009).
      • des Jardins, A.C. et al. Reconnection in three dimensions: The role of spines in three eruptive flares, Astrophys. J. 693, 1628-1636 (2009).
      • Carr, A. et al. Cover slip external cavity diode laser, Rev. Sci. Instrum. 78, 106108 (2007).

Back to Top

Christopher Williams

B.S., Ohio State University 2008 (Physics)
B.S., Ohio State University 2008 (Astronomy)
M.S., University of Chicago 2009 (Physics)
Ph.D. (2013), Dept. of Physics
Research: Astroparticle Physics
Awards: Michelson Fellow, Robert G. Sachs Fellow, Eugene & Niesje Parker Fellow, Gaurang & Kanwal Yodh Prize (Dept. of Physics), Price Prize (Ohio State)
Research Advisor: Paolo Privitera

At the highest energies, the sources of cosmic rays should be among the most powerful accelerators in the universe, but even after a century of observation their origin and composition remain a mystery. Large observatories have revealed a flux suppression above a few times 10^19 eV, similar to the expected effect of the interaction of ultrahigh-energy cosmic rays (UHECRs) with the cosmic microwave background. To answer the question of the origin of UHECRs, this flux suppression must be overcome with even larger instrumented areas, yielding a large sample of high-quality data.

Our group at the University of Chicago, as a member of the largest of these cosmic ray observatories, the Pierre Auger Observatory, has helped to measure the largest sample of cosmic-ray-induced extensive air showers at the highest energies. Auger instruments an area of 3000 square kilometers in Mendoza Province, Argentina with an array of surface detectors and fluorescence telescopes. These data have been used to make a precise measurement of the energy spectrum, find hints of spatial anisotropy, and discover a surprising change in the chemical composition at the highest energies.

My research at the University of Chicago has focused on new techniques for detecting extensive air showers that will lead to a larger sample of high-quality UHECR data. We are developing new radio detectors which promise 100% duty cycle and measurement quality similar to the fluorescence detection technique. This would allow for collection of much greater amounts of data that could be used to understand the origin and composition of cosmic rays. By combining electronics from high-energy physics with commercially sourced radio components, we have built and tested the MIcrowave Detection of Air Showers (MIDAS) experiment, a prototype wide-field-of-view imaging camera deployed on the roof of the Kersten Physics Teaching Center on campus. The imaging camera operates in the commercial C-band (3.4 to 4.2 GHz), covering a 20 degree by 10 degree field of view, and is instrumented with RF power detectors and 20 MHz flash analog-to-digital converters. The digitized signal is passed through a field-programmable gate array that forms a trigger by looking for topological patterns matching the microsecond-long tracks expected from extensive air showers crossing the field of view. With 61 days of live-time data from this setup we were able to set new limits on isotropic microwave emission from extensive air showers, ruling out the putative power-flux and coherence values at greater than five sigma. The MIDAS detector has now been deployed at the Pierre Auger Observatory, where it runs in coincidence with both the surface detector and the fluorescence detector, continuing the search for microwave emission from extensive air showers.
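The trigger idea can be illustrated with a toy model (a simplified software sketch, not the actual FPGA firmware; the pixel geometry, thresholds, and adjacency criterion are assumptions): a first level flags camera pixels whose signal rises above a running baseline, and a second level fires when enough flagged pixels form a connected, track-like cluster.

```python
# Toy two-level topological trigger for an imaging camera. Level 1 flags
# pixels above a baseline threshold; level 2 fires when a connected
# cluster of flagged pixels is long enough to resemble a crossing track.

def level1_flags(frame, baseline, threshold):
    """Return the set of (row, col) pixels exceeding baseline + threshold."""
    flags = set()
    for r, row in enumerate(frame):
        for c, val in enumerate(row):
            if val > baseline[r][c] + threshold:
                flags.add((r, c))
    return flags

def level2_trigger(flags, min_track_len=4):
    """Fire if min_track_len flagged pixels form a chain of adjacent
    (including diagonal) pixels -- a crude track-topology criterion."""
    remaining = set(flags)
    while remaining:
        # grow a connected cluster outward from an arbitrary seed pixel
        stack = [remaining.pop()]
        size = 0
        while stack:
            r, c = stack.pop()
            size += 1
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (r + dr, c + dc) in remaining:
                        remaining.remove((r + dr, c + dc))
                        stack.append((r + dr, c + dc))
        if size >= min_track_len:
            return True
    return False
```

A diagonal line of bright pixels fires the trigger, while an isolated noisy pixel does not; the real detector additionally exploits the time ordering of the track across the field of view.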

Our group has also carried out test-beam measurements at Argonne National Laboratory (ANL) seeking to detect, in a laboratory setting, isotropic microwave emission from particles that simulate an extensive air shower. The Microwave Air Yield Beam Experiment (MAYBE) passed 3 MeV electrons from the Van de Graaff accelerator of the Chemistry Division of ANL through an RF anechoic chamber to measure the microwave emission. With these tests we measured the flat spectral nature of the emission from 1 to 15 GHz and observed that the emission is unpolarized; these properties had been speculated about but never previously measured. The emission was also observed to scale linearly with the energy deposit. The results of this experiment will guide the design of microwave detectors for UHECRs.


      • J. Alvarez-Muñiz et al. “Search for microwave emission from ultrahigh energy cosmic rays”. Phys. Rev. D 86, 051104 (2012), arXiv:1205.5785.
      • J. Alvarez-Muñiz et al. “The MIDAS telescope for microwave detection of ultra-high energy cosmic rays”. Submitted to Astroparticle Physics (2012), arXiv:1208.2734.

Back to Top