Physics

Tuning accelerators using machine learning

A computer-generated image based on a generative diffusion process shows a 2D projection of a particle accelerator beam. Starting from pure noise, signals from the accelerator adaptively guide the process, so each successive version becomes a little easier to interpret. Credit: Alexander Scheinker, Los Alamos National Laboratory

Computer screens are stacked two and three high along the walls. The screens are covered with numbers and graphs that are incomprehensible to laypeople, but they tell a story to the operators who staff the particle accelerator control room. The numbers describe how the accelerator boosts tiny particles to high energies so they can collide with targets and other particles.

However, even the best operator cannot completely track the minute changes over time that affect the accelerator’s mechanics. Scientists are studying how to use computers to make the fine adjustments needed to keep particle accelerators running at peak performance.

Researchers use accelerators to better understand materials and the particles that make them up. Chemists and biologists use them to study ultrafast processes such as photosynthesis. Nuclear and high-energy physicists smash protons and other particles to learn more about the building blocks of the universe.

Small accelerators are particularly useful for a wide range of applications in society. Medical scientists and doctors use accelerators to treat cancer, and manufacturers use them to make semiconductors for electronics. Other applications include sterilizing medical equipment, analyzing historical artifacts, and curing lightweight materials for automobiles.

Unfortunately, the performance of particle accelerators tends to fluctuate over time. They have hundreds of thousands of components. Some of these components are incredibly complex. External influences such as vibrations and temperature changes can affect the functionality of the machine.

As individual components drift, they create a domino effect on the components downstream. By the time the accelerator produces a beam of particles, many small changes can add up to a large one, much as a few slowing cars can trigger a traffic jam. Over time, the beam degrades until it is no longer usable.

To resolve this issue, the operator must “retune” the accelerator to optimal parameters. These readjustment periods limit the amount of time scientists can use the accelerator. Additionally, engineers cannot adjust the accelerator in real time while scientists are acquiring experimental data.

On top of that, beams are incredibly complex. They exist in spaces that scientists cannot measure quickly or even directly. Operators are often limited to a one-dimensional view of the beam's position. Given that the beam actually occupies a six-dimensional phase space (three position coordinates plus the momentum along each), operators miss a great deal of information.

To address these issues, scientists have developed sophisticated controls and diagnostics. Special algorithms adapt how the particle accelerator operates to compensate for changes over time. Many systems use these algorithms, including the LCLS (a DOE Office of Science user facility located at SLAC National Accelerator Laboratory). However, these methods have a major drawback: because the algorithms rely only on local feedback from the accelerator, they can get "stuck" at a good-but-not-best setting without ever finding the true optimum.
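The "stuck" behavior is easy to see in a toy example. The sketch below is not the actual LCLS control code; the `beam_quality` function and every number in it are invented for illustration. It models a greedy feedback loop that nudges one tuning knob and keeps a move only if a simulated quality signal improves.

```python
import numpy as np

def beam_quality(setting):
    # Hypothetical quality signal with two peaks: the global optimum sits at
    # setting = 2.0, and a weaker local optimum sits at setting = -1.0.
    return np.exp(-(setting - 2.0) ** 2) + 0.5 * np.exp(-(setting + 1.0) ** 2)

def greedy_feedback_tune(setting, step=0.05, iters=200):
    # Naive feedback loop: try nudging the knob in each direction and keep
    # the move only if the measured quality improves; otherwise stay put.
    for _ in range(iters):
        for delta in (step, -step):
            if beam_quality(setting + delta) > beam_quality(setting):
                setting += delta
                break
    return setting

x_stuck = greedy_feedback_tune(-1.5)  # starts near the weaker peak
x_good = greedy_feedback_tune(1.0)    # starts near the true optimum
```

Started near the weaker peak, the loop climbs to the local optimum at -1.0 and stays there; only the luckier starting point reaches the true optimum near 2.0. That failure mode is exactly what motivates more global, model-based tuning.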


At Los Alamos National Laboratory, physicist Alexander Scheinker is developing new ways to use machine learning to improve the performance of particle accelerators. Credit: Alexander Scheinker, Los Alamos National Laboratory

Machine learning, a type of artificial intelligence, has the potential to help. Using machine learning, computers could act as “virtual observers” to support human technicians. Machine learning applications search for patterns in data and make predictions. Scientists “teach” machine learning applications by feeding them a set of training data.


From this data, the application learns to relate inputs to outcomes. While human operators recognize problems based on past experience, machine learning applications recognize them based on what they have "seen" in training data. Some accelerators at CERN, the European particle physics laboratory near Geneva, already use this type of application.

However, the performance of a machine learning application is only as good as its training data, and that data reflects the accelerator's original characteristics. When the accelerator's behavior drifts, the data no longer matches reality. To compensate, scientists must continually retrain their models, which defeats the purpose: they end up facing another version of the original problem.

The best solution may lie in combining the two approaches. Researchers and engineers at DOE's Los Alamos National Laboratory and Lawrence Berkeley National Laboratory are developing new machine learning techniques for compact particle accelerators. The technique uses real-time data from accelerator diagnostics to continuously tune the model. This data then guides an advanced generative AI process known as diffusion, which is detailed on the arXiv preprint server.
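As a deliberately tiny caricature of how live measurements can steer a generative process (this is not the authors' method; the profile, the diagnostic, and every number are invented), the sketch below starts from pure noise and alternates a smoothing step, standing in for a learned denoiser, with a guidance step that keeps the evolving profile consistent with a simulated diagnostic reading.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D beam profile and a crude diagnostic that reports only
# its total charge (the sum). All values are invented for illustration.
true_profile = np.array([0.0, 1.0, 4.0, 1.0, 0.0])
measured_total = true_profile.sum()

# Start from pure noise, then alternate:
#   1) a smoothing "prior" step (playing the role of a learned denoiser)
#   2) a guidance step correcting the profile to match the live measurement
x = rng.normal(size=5)
for _ in range(200):
    x = np.convolve(x, [0.25, 0.5, 0.25], mode="same")   # denoise / smooth
    x += (measured_total - x.sum()) / x.size             # measurement guidance
```

The result is a smooth profile whose simulated diagnostic exactly matches the measurement. A real diffusion model replaces the smoothing step with a trained neural denoiser and conditions on many diagnostics at once.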

This process creates a virtual view of the accelerator's beam as it changes over time. Some machine learning tools can take extremely complex, high-dimensional inputs, compress them into simpler internal representations, and then produce complex outputs that reflect the state of the system.
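The compress-then-reconstruct idea can be illustrated with a linear stand-in for an autoencoder (the published method uses a variational autoencoder, per the arXiv reference below; this PCA-style sketch with simulated data only shows the encode/decode round trip). Here, 500 invented 50-dimensional "diagnostic readings" that secretly vary along just two hidden directions are compressed to a two-number code and reconstructed almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated diagnostics: 500 readings of 50 values each, secretly generated
# from only 2 hidden degrees of freedom (all numbers invented).
hidden = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 50))
data = hidden @ mixing
data -= data.mean(axis=0)

# Linear "encoder"/"decoder": project onto the top 2 principal directions.
_, _, vt = np.linalg.svd(data, full_matrices=False)
encode = lambda readings: readings @ vt[:2].T   # 50 numbers -> 2-number code
decode = lambda code: code @ vt[:2]             # 2-number code -> 50 numbers

max_error = np.abs(decode(encode(data)) - data).max()
```

Because the data truly has only two degrees of freedom, the two-number code loses essentially nothing: `max_error` sits at floating-point noise level. A deep autoencoder does the same thing nonlinearly for real beam data.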

These methods can be applied not only to small accelerators but also to large ones such as SLAC's FACET-II. For the FACET-II system, the model generated 15 different two-dimensional projections (one for each pair of the six phase-space dimensions) of the beam at five different locations. Just thinking about that scale is taxing for the human brain, but machine learning systems handle it readily. This data allows the system to learn not only how the beam may change over time, but also the relationships between those changes and the underlying physics.

In a study published in Scientific Reports, the scientists also demonstrated the adaptability of this approach by showing that the same generative diffusion method works at the European X-ray Free-Electron Laser. They used the method to create a virtual, megapixel-resolution view of a powerful electron beam.

So far, the method looks promising. The accelerators allow operators to make complex measurements of the beam during operation, and the researchers have been collecting this data. The next step is to compare the model's predictions against those measurements and use the differences to train the model further.

In the future, human operators of particle accelerators may get some assistance from machine learning counterparts. That support should enable scientists to make more, and better, discoveries than ever before.

More information: Alexander Scheinker, cDVAE: Multimodal generative conditional diffusion guided by variational autoencoder latent embedding for virtual 6D phase space diagnostics, arXiv (2024). DOI: 10.48550/arXiv.2407.20218

Alexander Scheinker, Conditional guided generative diffusion for particle accelerator beam diagnostics, Scientific Reports (2024). DOI: 10.1038/s41598-024-70302-z

Provided by the U.S. Department of Energy

Citation: Tuning accelerators using machine learning (November 18, 2024), retrieved November 18, 2024 from https://phys.org/news/2024-11-adjusting-machine.html

This document is subject to copyright. No part may be reproduced without written permission, except in fair dealing for personal study or research purposes. Content is provided for informational purposes only.
