By applying machine learning to detect anomalies, predict faults, and cut radiation, Jefferson Lab is future-proofing its particle accelerator to deliver more stable experiments and maximize scientific discoveries.

Scientists are developing artificial intelligence and machine learning tools to improve the operation of particle accelerators, such as Jefferson Lab’s Continuous Electron Beam Accelerator Facility. Credit: Jefferson Lab photo/Aileen Devlin
One of the things that makes the main particle accelerator at the U.S. Department of Energy’s Thomas Jefferson National Accelerator Facility unique is that it was the first linear accelerator to deliver a continuous stream of electrons. Today, the accelerator’s efficiency and stability are critical to groundbreaking experiments by nuclear physicists from around the world to probe the tiniest building blocks of matter.
But when anomalies arise in the accelerator, they can cause these continuous electron beams to switch off automatically, just like flipping a circuit breaker. That’s a costly consequence when every moment of scheduled experiment time, called beamtime, is precious. Now, scientists are using advanced computing techniques to help identify these anomalies early, and perhaps even before the beam shuts off. The end goal is more beamtime for experiments and less time spent tracking down issues.
Enter artificial intelligence
One of the biggest contributors to this downtime is issues that may arise with the underlying technology that powers the accelerator: superconducting radiofrequency (SRF) cavities. SRF cavities propel the powerful electron beam used to reveal the interior world of the nucleus.
And thanks to its more than 400 SRF cavities, the lab’s Continuous Electron Beam Accelerator Facility (CEBAF) is extremely efficient. However, the technology does have the potential to suffer from unique issues that can limit that efficiency.
Recently, Jefferson Lab scientists wrapped up three research projects that demonstrate ways in which artificial intelligence (AI) and machine learning (ML) could be used to make SRF particle accelerators even more efficient. The first project continued a line of research to use ML to identify SRF anomalies in real time so they can be addressed quickly. Another focused on the possibility of predicting the anomalies in advance. And the final project was aimed at lowering the levels of a damaging type of radiation that can develop inside the cavities during operations.
“Each project addresses a different challenge associated with operating a large-scale SRF accelerator like CEBAF,” said Chris Tennant, a senior staff scientist at Jefferson Lab. Tennant is exploring applications of artificial intelligence for CEBAF and served as the overall principal investigator for the three projects.
“Collectively,” he said, “this research presents a path toward a more stable and efficient accelerator, ensuring Jefferson Lab can maximize scientific output and maintain its leadership in nuclear physics research.”
The research was backed by a grant issued through a Funding Opportunity Announcement (FOA) from the DOE Office of Science. Such announcements, with awards based on merit review and other criteria, are now known as Notices of Funding Opportunity (NOFO).
These projects build on work that a small team at Jefferson Lab began back in 2018 to explore how machine learning techniques could help classify cavity fault data. Their initial work proved so successful that the team developed a proposal not just to classify faults after they occurred, but to predict them in advance.
Tennant’s Jefferson Lab colleagues Dennis Turner, Riad Suleiman, and Adam Carpenter strengthened that proposal by contributing two additional research concepts aligning with the broader objective of optimizing SRF operations through AI.
The projects kicked off in the fall of 2020, persevered through the challenges of remote work, supply chain issues and rising hardware costs, and recently culminated in results published in three peer-reviewed journals, papers that Tennant said represent “a fitting capstone to the project.”
Detecting unstable cavities
CEBAF, a DOE Office of Science user facility, was the world’s first large-scale application of SRF technology. It uses a pair of SRF linear accelerators, or linacs, configured like an underground racetrack, to deliver a high-energy beam of polarized electrons. The beam shoots at near light-speed through super-cooled cryomodules — each containing eight SRF cavities — to blast into a chosen target in one of four experimental halls.
Scientists learn more about the structure of the nucleus by studying the fundamental particles that cascade downstream from the collisions.
Maintaining stability in SRF cavities is a never-ending challenge. Even when instability doesn’t cause the beam to trip, it can have other adverse impacts across the linacs.
“Their erratic behavior can disrupt accelerator operations, reducing beamtime and affecting the quality of research outcomes,” said Hal Ferguson, a graduate student at Old Dominion University (ODU) in Norfolk and Turner’s project assistant for this particular project. “Such events account for approximately 15% of operational time and cause delays in research activities and data collection. An unstable cavity can result in trips that occur several times per hour until identified.”
Ferguson's expertise lies in using machine learning models to detect anomalies in cybersecurity systems. For this project, he developed machine learning techniques to focus on detecting instabilities and anomalous behavior.
As a part of this effort, Jefferson Lab's engineering staff developed a speedier high-frequency data acquisition system that samples cavity behavior at 5 kHz, or 5,000 times per second — significantly faster than the traditional 1 Hz sampling rate of once per second. This allowed them to capture transient events and subtle anomalies in real time that would otherwise go unnoticed.
They then applied principal component analysis, an unsupervised machine learning technique, to analyze this data and help identify anomalous behaviors by learning the normal operational patterns of cavities.
“The big picture is that we use a machine learning model to learn what ‘normal’ looks like for each cavity,” Ferguson said. “Then, by continuously comparing new data to that baseline, the system can identify when a cavity is behaving normally or not.
“The success of our system suggests a strong case for incorporating high-frequency data acquisition and real-time ML analytics into future accelerator design,” Ferguson said.
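The article names the core technique: principal component analysis learns what "normal" looks like for each cavity, and new data is flagged when it deviates from that baseline. Below is a minimal sketch of that idea using reconstruction error, with synthetic waveforms standing in for the 5 kHz cavity signals; the window length, component count, and threshold are illustrative assumptions, not the team's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" cavity data: 500 windows of 200 samples each,
# a noisy sinusoid standing in for the real 5 kHz RF signals.
t = np.linspace(0, 1, 200)
normal = np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal((500, 200))

# Fit PCA on normal data only: center, then keep the top components.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:5]  # top 5 principal components (illustrative choice)

def reconstruction_error(window):
    """Project a window onto the PCA subspace and measure what is lost.

    Windows resembling the training data reconstruct well (low error);
    anomalous windows leave a large residual outside the subspace.
    """
    centered = window - mean
    recon = centered @ components.T @ components
    return float(np.sum((centered - recon) ** 2))

# Set the alarm threshold from errors on normal training data alone.
errors = [reconstruction_error(w) for w in normal]
threshold = np.percentile(errors, 99)

# A distorted waveform (an extra harmonic) should stand out clearly.
anomaly = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 23 * t)
print(reconstruction_error(anomaly) > threshold)  # → True: flagged as anomalous
```

Because the model is trained only on normal behavior, it needs no labeled fault examples, which suits continuous monitoring of hundreds of cavities where faults are rare and varied.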
The system was deployed and used in CEBAF, but it was exercised for only a few weeks during the last operational run in spring 2024. It’s now online for the 2025 scheduled experimental runs.
Their paper, “Detecting Anomalous SRF Cavity Behavior with Unsupervised Learning,” was recently published in Physical Review Accelerators and Beams.
Predicting faults
The ability to predict a cavity fault offers the invaluable chance to intervene and launch mitigation strategies to prevent beam downtime in the first place.
Md Monibor Rahman, a doctoral candidate in the Vision Lab of the Department of Electrical and Computer Engineering at ODU, worked with Tennant on proof-of-concept modeling for this second project.
According to Rahman, successfully predicting faults is tricky. It involves analyzing several different signals from each SRF cavity to distinguish between normal operation and a fault. The signatures of a pre-fault can vary from one cavity to the next, so each monitored cavity must have its own unique model. While some faults develop gradually, others pop up suddenly with no obvious warning signs, so the team had to focus on predicting slow-developing faults. Because mistakenly predicting a fault can trigger unnecessary interventions, it’s critical to strike the right balance on what constitutes a pre-fault and what doesn’t.
Their project involved collecting and curating large datasets of normal and pre-fault cavity signals, preprocessing the data, and training a model to distinguish between stable and pre-fault conditions.
Tennant indicated that the system isn’t deployable at CEBAF currently because of fundamental hardware limitations. For instance, the SRF cavities aren’t designed to provide the type of real-time streaming data that would be needed to use the system during operations.
Instead, the team used data from two different periods of CEBAF running to create a separate dataset intended to mimic what the model would see if deployed in CEBAF.
The results were a triumph.
“The fault prediction model was able to correctly predict 80% of slowly developing SRF cavity faults while maintaining a 99.99% accuracy in identifying normal operating conditions,” said Rahman. “This validated the model’s ability to distinguish between normal and faulty conditions in a highly imbalanced dataset, proving its potential for real-time deployment.”
Managing field emission
One major limiting factor in operating SRF cavities is field emission. Field emission consists of electrons that are inside the accelerator but are not part of the tightly controlled electron beam itself.
These rogue electrons can trigger adverse radiation inside the machine, which may interfere with the electron beam, damage particle accelerator components or infrastructure, or create hot spots that continue to emit radiation. For instance, activation from neutron radiation generated by field-emitted electrons may cause hazards for days, weeks or months after SRF operations have ceased.
A major source of field emission is electrons originating at the SRF cavity walls. When CEBAF is running, operators control how much voltage is provided to each cavity for accelerating the beam. Field-emitted electrons may originate from a cavity when its voltage is raised too high. The voltage that triggers field emission in a cavity varies based on the cavity’s unique history and characteristics.
Currently, operations staff manage field emission by manually adjusting cavity voltages across all 416 cavities in CEBAF and watching the radiation levels. Once they identify which cavities are causing radiation, they lower the voltage in the offending cavity and raise voltages across other cavities to compensate for the energy loss. This continues throughout a scheduled accelerator run, as field emission may pop up unexpectedly at any time.
Carpenter and Suleiman headed this project. Carpenter is an expert in computer science and machine learning who has been with the lab’s Accelerator Operations group for nearly 15 years, while Suleiman is a staff scientist in Jefferson Lab’s Center for Injectors and Sources.
They brought in Steven Goldenberg, now in his third year as a postdoctoral fellow in the lab’s Data Science department focusing on machine learning with applications in accelerators and particle physics.
The team tackled the problem by setting different voltage levels across the linac and measuring the radiation response. They then trained a collection of machine learning surrogate models, one for each radiation detector, and used an offline optimization algorithm to determine voltage settings that reduce radiation while still giving experimenters the beam they needed. Those settings were then put into action on the linac.
The models were a success.
In one proof-of-concept demonstration, the team found they could lower radiation by as much as 45% from where CEBAF was currently operating — evidence that well-established machine learning tools can be harnessed to model and minimize field emission without continuous manual intervention.
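The workflow described above, probe the machine at varied voltage settings, fit a surrogate per detector, then search offline for quieter settings that preserve the delivered energy, can be sketched as follows. The linear surrogates, cavity and detector counts, and the random-search optimizer are all simplifying assumptions; the actual CEBAF models and optimizer are not specified here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 8 cavities, 3 radiation detectors. The detectors'
# true response to cavity voltages is hidden; we only probe it.
n_cav, n_det, n_samples = 8, 3, 200
true_sens = rng.uniform(0, 1, (n_det, n_cav))  # hidden sensitivities

def measure_radiation(voltages):
    """Stand-in for real detector readings, with measurement noise."""
    return true_sens @ voltages + 0.01 * rng.standard_normal(n_det)

# 1) Collect training data: random voltage settings and their responses.
V = rng.uniform(0.5, 1.5, (n_samples, n_cav))
R = np.array([measure_radiation(v) for v in V])

# 2) Train one linear surrogate per detector (least-squares fit).
surrogates, *_ = np.linalg.lstsq(V, R, rcond=None)  # shape (n_cav, n_det)

# 3) Offline optimization: among candidate settings delivering the same
# total accelerating voltage (the beam experimenters need), pick the one
# the surrogates predict produces the least total radiation.
target_total = 8.0
candidates = rng.uniform(0.5, 1.5, (5000, n_cav))
candidates *= target_total / candidates.sum(axis=1, keepdims=True)
predicted = candidates @ surrogates          # predicted detector readings
best = candidates[np.argmin(predicted.sum(axis=1))]

baseline = np.full(n_cav, 1.0)  # uniform settings with the same total
print("optimized radiation:", float((true_sens @ best).sum()))
print("baseline radiation: ", float((true_sens @ baseline).sum()))
```

The key design point is that the expensive step, measuring radiation on the real machine, happens only during data collection; the search over voltage settings runs entirely against the cheap surrogates, replacing the manual lower-one-cavity, raise-the-others loop described above.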
They are now determining how best to incorporate their work into particle accelerator operations.