In:
Journal of Instrumentation, IOP Publishing, Vol. 18, No. 04 (2023-04-01), p. P04034
Abstract:
The rapid development of general-purpose computing on graphics processing units (GPGPU) allows the implementation of highly parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speedup of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
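
Illustration (not from the record): the abstract describes Python functions JIT-compiled into CUDA kernels with Numba, with one GPU kernel computing the current induced on many pixels in parallel. The following is a minimal sketch of that pattern only; the function name, the toy response matrix, and all array shapes are illustrative assumptions, not the simulator's actual API. It requires a CUDA-capable GPU and the numba package.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def induced_current(charges, responses, currents):
        # One thread per (pixel, charge-sample) pair.
        ipix, iq = cuda.grid(2)
        if ipix < currents.shape[0] and iq < charges.shape[0]:
            # Atomic add: many charge samples may contribute to the
            # same pixel, so concurrent updates must not race.
            cuda.atomic.add(currents, ipix,
                            charges[iq] * responses[ipix, iq])

    # Toy inputs: 10^3 pixels, 10^4 charge samples (shapes are assumptions).
    n_pixels, n_charges = 1_000, 10_000
    charges = cuda.to_device(np.random.rand(n_charges).astype(np.float32))
    responses = cuda.to_device(
        np.random.rand(n_pixels, n_charges).astype(np.float32))
    currents = cuda.to_device(np.zeros(n_pixels, dtype=np.float32))

    # Launch a 2D grid covering all (pixel, charge) pairs.
    threads_per_block = (16, 16)
    blocks = ((n_pixels + 15) // 16, (n_charges + 15) // 16)
    induced_current[blocks, threads_per_block](charges, responses, currents)
    print(currents.copy_to_host()[:5])

Mapping one thread to each (pixel, charge) pair is what lets the channel count work in the GPU's favor: the per-pair arithmetic is independent, and only the final accumulation needs synchronization, here via an atomic add.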
Type of Medium:
Online Resource
ISSN:
1748-0221
DOI:
10.1088/1748-0221/18/04/P04034
Language:
English
Publisher:
IOP Publishing
Publication Date:
2023
ZDB ID:
2235672-1