Dr. David Campbell
Department of Physics
FPU and the Birth of Experimental Mathematics
In the summer of 1953, at Los Alamos Scientific Laboratory, Enrico Fermi, John Pasta, and Stanislaw Ulam initiated a series of studies on the MANIAC-1 digital computer. These studies were aimed at exploring how simple, multi-degree-of-freedom nonlinear mechanical systems obeying reversible deterministic dynamics evolve in time, presumably to an equilibrium state describable by statistical mechanics. FPU’s goal was to gain insight into the fundamental question of “the arrow of time.” Their expectation was that the approach to equilibrium would occur by mixing behavior among the many linear modes. Their intention was then to study more complex nonlinear systems, with the eventual hope of modeling turbulence computationally.
The results of this first study of the so-called “Fermi-Pasta-Ulam (FPU) problem,” which were published in 1955 and characterized by Fermi as a “little discovery,” showed instead of the expected mixing of linear modes a striking series of (near) recurrences of the initial state and no evidence of equipartition. This work heralded the beginning of both computational physics and (modern) nonlinear science. In particular, the work marked the first systematic study of a nonlinear system by digital computers (“experimental mathematics”). I will review the consequences of this remarkable numerical experiment and show how it remains of active interest today, more than sixty years later.
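The FPU experiment is simple enough to reproduce on any modern computer. The sketch below is an illustrative reconstruction, not FPU's original code: it integrates the quadratic ("alpha") FPU chain with a velocity-Verlet scheme, starting with all energy in the lowest normal mode, and reports how the harmonic energy distributes among the modes. The chain size N = 32 matches one of FPU's runs; the coupling, time step, and run length here are arbitrary choices for a short demonstration.

```python
import math

def accelerations(x, alpha):
    """Force on each mass of the alpha-FPU chain (fixed ends at zero)."""
    N = len(x)
    a = []
    for j in range(N):
        left = x[j - 1] if j > 0 else 0.0
        right = x[j + 1] if j < N - 1 else 0.0
        dl = x[j] - left      # stretch of the spring on the left
        dr = right - x[j]     # stretch of the spring on the right
        a.append((dr - dl) + alpha * (dr * dr - dl * dl))
    return a

def mode_energy(x, v, k):
    """Harmonic energy in normal mode k of the fixed-end linear chain."""
    N = len(x)
    s = math.sqrt(2.0 / (N + 1))
    A = s * sum(x[j] * math.sin(math.pi * k * (j + 1) / (N + 1)) for j in range(N))
    Adot = s * sum(v[j] * math.sin(math.pi * k * (j + 1) / (N + 1)) for j in range(N))
    omega = 2.0 * math.sin(math.pi * k / (2 * (N + 1)))
    return 0.5 * (Adot ** 2 + (omega * A) ** 2)

def run_fpu(N=32, alpha=0.25, dt=0.05, steps=2000):
    """Excite the lowest mode, then integrate with velocity Verlet."""
    x = [math.sin(math.pi * (j + 1) / (N + 1)) for j in range(N)]
    v = [0.0] * N
    a = accelerations(x, alpha)
    for _ in range(steps):
        x = [x[j] + dt * v[j] + 0.5 * dt * dt * a[j] for j in range(N)]
        a_new = accelerations(x, alpha)
        v = [v[j] + 0.5 * dt * (a[j] + a_new[j]) for j in range(N)]
        a = a_new
    return x, v

x, v = run_fpu()
energies = [mode_energy(x, v, k) for k in range(1, 33)]
```

Over this short run the lowest mode keeps most of the energy; run it for thousands of time units and one sees the energy flow into a few low modes and then (nearly) return to mode 1 — the recurrence that surprised Fermi, Pasta, and Ulam.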
1-Our Chaotic and Fractal World
For centuries after the time of Newton, the image of a “clockwork” universe, as exemplified by our solar system with its planets moving smoothly through space and time, dominated our view of the world. In the celebrated words of Laplace, “An intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it—an intelligence sufficiently vast to submit these data to analysis—for it, nothing would be uncertain and the future, as the past, would be present to its eyes.” Such was the strict determinism of the clockwork universe.
Beginning in the late 19th century, with insights from Maxwell and Poincaré, a series of developments revealed that this notion of rigid deterministic predictability is in fact false for most classical mechanical systems. Even simple systems for which Newton’s laws apply and all forces are known (such as the three-body problem in celestial mechanics) can exhibit seemingly random, unpredictable behavior. Such behavior is now known as “deterministic chaos,” or more commonly, simply “chaos.”
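The fingerprint of chaos — sensitive dependence on initial conditions — is easy to demonstrate. The sketch below uses the logistic map x → 4x(1 − x), a standard discrete-time toy model (not one of the mechanical systems above; the starting points and perturbation size are arbitrary choices), to show two trajectories that begin one part in ten billion apart and end up completely uncorrelated.

```python
def logistic_orbit(x0, n, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x) for n steps."""
    orbit = [x0]
    for _ in range(n):
        x0 = r * x0 * (1.0 - x0)
        orbit.append(x0)
    return orbit

a = logistic_orbit(0.2, 60)
b = logistic_orbit(0.2 + 1e-10, 60)   # perturbed by one part in 10^10
gaps = [abs(u - v) for u, v in zip(a, b)]
```

The separation roughly doubles with each iteration (the Lyapunov exponent is ln 2), so some thirty-odd steps are enough to erase ten digits of agreement. Determinism survives — rerunning either orbit reproduces it bit for bit — but long-term prediction does not.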
In this broad overview of chaos, I will recall this early history and then trace the developments that have led to our current understanding of this phenomenon, emphasizing the key roles played by computer studies and visualization techniques. I will describe the remarkable advances in the past fifty years in our understanding of chaos and its many applications in the real world. I will close with a discussion of future directions of research in chaos and with some speculations about the philosophical implications of the death of classical determinism.
2-From the Red Spot of Jupiter to Pulses in Optical Fibers: Coherent Structures and Solitons in Nonlinear Systems
From the Red Spot of Jupiter through clumps of electromagnetic radiation in the ionosphere and traveling water waves in canals to temporally localized pulses in optical fibers, localized, wave-like excitations abound in nonlinear systems. These nonlinear “coherent structures” reflect a surprising order in the midst of otherwise complex behavior. Their ubiquitous role in both natural nonlinear phenomena and the corresponding mathematical models has caused coherent structures to emerge as one of the central paradigms of nonlinear science.
In this overview of coherent structures in nonlinear systems, I will show how these objects typically represent the natural “modes” for understanding the time evolution of spatially extended nonlinear systems and often dominate the long-time behavior of the motion. I will then introduce the concept of “solitons,” which can be viewed as the paragons of coherent structures in that they preserve their shapes and velocities in interactions with each other, despite being nonlinear objects. I will reveal the elegant mathematical structure that underlies this remarkable behavior and show that, despite their seeming fragility, solitons can explain a host of natural phenomena in diverse fields. I will close with important practical applications of solitons and coherent structures.
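A concrete example: the Korteweg–de Vries (KdV) equation, u_t + 6 u u_x + u_xxx = 0, which describes shallow-water waves in canals, has the exact one-soliton solution u(x, t) = (c/2) sech²(√c (x − ct)/2) — a pulse whose speed c is proportional to its amplitude c/2. The sketch below checks this claim numerically with central finite differences (the grid spacing, sample points, and speed c are arbitrary illustrative choices).

```python
import math

def soliton(x, t, c):
    """One-soliton solution of KdV: u = (c/2) sech^2(sqrt(c)*(x - c*t)/2)."""
    s = math.sqrt(c) * (x - c * t) / 2.0
    return (c / 2.0) / math.cosh(s) ** 2

def kdv_residual(x, t, c, h=1e-2):
    """Finite-difference evaluation of u_t + 6*u*u_x + u_xxx at (x, t).

    Should vanish (to truncation error) if soliton() really solves KdV.
    """
    u = soliton(x, t, c)
    u_t = (soliton(x, t + h, c) - soliton(x, t - h, c)) / (2 * h)
    u_x = (soliton(x + h, t, c) - soliton(x - h, t, c)) / (2 * h)
    u_xxx = (soliton(x + 2 * h, t, c) - 2 * soliton(x + h, t, c)
             + 2 * soliton(x - h, t, c) - soliton(x - 2 * h, t, c)) / (2 * h ** 3)
    return u_t + 6.0 * u * u_x + u_xxx

residuals = [abs(kdv_residual(x, 0.3, 1.5)) for x in (-1.0, 0.0, 0.7, 2.0)]
```

The residual is tiny at every sample point, while the individual terms are of order one: the dispersive term u_xxx exactly balances the nonlinear steepening 6 u u_x, which is why the pulse travels without changing shape.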
3-Flatland Redux: Graphene and other Two-Dimensional Electronic Materials
Graphene is a remarkable material, consisting of a single layer of carbon atoms bonded to each other in a hexagonal pattern extending in principle into an infinite, flat two-dimensional membrane. Its structure and basic properties were modeled theoretically already in the 1940s, but its isolation in truly single-layer form in 2004 by Geim and Novoselov using the “scotch tape” method ignited an explosion of interest and led to the awarding of the 2010 Nobel Prize in Physics to these researchers. Ideal graphene is intrinsically nearly 200 times stronger than steel by weight and has higher electron mobility—essentially a measure of how well graphene conducts electricity—than any other known material at room temperature. This unique combination of physical characteristics makes it a promising candidate for many applications in advanced electronics. Of course, being only one atom thick, isolated graphene is very fragile and is typically supported on a substrate, which can alter its properties substantially. Nonetheless, industrial laboratories around the world are working to create nanoscale electronic devices utilizing graphene.
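The 1940s-era theory referred to above is a nearest-neighbor tight-binding model on the honeycomb lattice, which gives two bands at E(k) = ±t |1 + e^{ik·a₁} + e^{ik·a₂}|. The sketch below (with the hopping energy t and lattice constant set to 1 purely for illustration) evaluates this dispersion and confirms that the two bands touch at a corner of the Brillouin zone — the “Dirac point” behind graphene’s unusual electronic behavior.

```python
import cmath, math

def band_energy(kx, ky, t=1.0, a=1.0):
    """t*|f(k)| for nearest-neighbor tight binding on the honeycomb lattice.

    a1 = a*(3/2,  sqrt(3)/2) and a2 = a*(3/2, -sqrt(3)/2) are the
    primitive lattice vectors; the two bands sit at +/- t*|f(k)|.
    """
    phase1 = kx * 1.5 * a + ky * math.sqrt(3) / 2 * a   # k . a1
    phase2 = kx * 1.5 * a - ky * math.sqrt(3) / 2 * a   # k . a2
    f = 1.0 + cmath.exp(1j * phase1) + cmath.exp(1j * phase2)
    return t * abs(f)

gamma = band_energy(0.0, 0.0)                        # zone center: bands split by 2*3t
K = (2 * math.pi / 3, 2 * math.pi / (3 * math.sqrt(3)))
dirac = band_energy(*K)                              # zone corner: bands touch at zero
```

Near that touching point the bands are linear in k, so the electrons behave like massless relativistic particles — the source of graphene’s exceptional mobility.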
But graphene is only one of a wide class of novel two-dimensional electronic materials, many of which have recently been isolated in single layer form, also using the “scotch tape” method. In this talk, I will review this emerging field of two-dimensional electronic membranes, beginning with a description of their exotic physical properties and concluding with a discussion of the prospects for their use in future electronic devices.