In:
Communications Physics, Springer Science and Business Media LLC, Vol. 7, No. 1 (2024-03-11)
Abstract:
Large language models, like transformers, have recently demonstrated immense powers in text and image generation. This success is driven by the ability to capture long-range correlations between elements in a sequence. The same feature makes the transformer a powerful wavefunction ansatz that addresses the challenge of describing correlations in simulations of qubit systems. Here we consider two-dimensional Rydberg atom arrays to demonstrate that transformers reach higher accuracies than conventional recurrent neural networks for variational ground state searches. We further introduce large, patched transformer models, which consider a sequence of large atom patches, and show that this architecture significantly accelerates the simulations. The proposed architectures reconstruct ground states with accuracies beyond state-of-the-art quantum Monte Carlo methods, allowing for the study of large Rydberg systems in different phases of matter and at phase transitions. Our high-accuracy ground state representations at reasonable computational costs promise new insights into general large-scale quantum many-body systems.
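To illustrate the idea described in the abstract (this is a hypothetical NumPy sketch, not the authors' implementation): a "patched" transformer ansatz treats groups of atoms as sequence tokens and factorizes the wavefunction probability autoregressively, p(config) = Π_i p(patch_i | patch_<i), with the conditionals produced by causal self-attention. Below, the grid size, patch size, model width, and all weights are illustrative assumptions with random initialization.

```python
import numpy as np

# Hypothetical sketch: a 2x2 grid of 2x2 atom patches (4 patches total),
# each patch taking one of 2**4 = 16 basis states. A single causal
# self-attention layer with random weights parameterizes the
# autoregressive conditionals p(patch_i | patch_<i).

rng = np.random.default_rng(0)
n_patches, patch_dim, d = 4, 16, 8   # patches, states per patch, model width

W_embed = rng.normal(0, 0.1, (patch_dim, d))
W_q, W_k, W_v = (rng.normal(0, 0.1, (d, d)) for _ in range(3))
W_out = rng.normal(0, 0.1, (d, patch_dim))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def conditionals(patches):
    """Causal self-attention over sampled patches -> next-patch distributions."""
    x = W_embed[patches]                       # (t, d) token embeddings
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(d)
    mask = np.tril(np.ones((len(patches), len(patches)), dtype=bool))
    scores = np.where(mask, scores, -np.inf)   # causal mask: no peeking ahead
    h = softmax(scores) @ v
    return softmax(h @ W_out)                  # (t, patch_dim), rows sum to 1

def sample_config():
    """Autoregressively sample one configuration and its probability."""
    patches, prob = [0], 1.0                   # state 0 doubles as start token
    for _ in range(n_patches):
        p = conditionals(np.array(patches))[-1]
        s = rng.choice(patch_dim, p=p)
        prob *= p[s]
        patches.append(s)
    return patches[1:], prob
```

Because sampling is exact and sequential, such an ansatz yields uncorrelated configurations directly, in contrast to Markov-chain-based quantum Monte Carlo; grouping atoms into patches shortens the sequence and is what accelerates the simulation in the patched variant.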
Type of Medium:
Online Resource
ISSN:
2399-3650
DOI:
10.1038/s42005-024-01584-y
Language:
English
Publisher:
Springer Science and Business Media LLC
Publication Date:
2024
ZDB ID:
2921913-9