Abstract
The Finite-Difference Time-Domain (FDTD) method is a popular numerical modelling technique in computational electromagnetics. The volumetric nature of the FDTD technique means simulations often require extensive computational resources (both processing time and memory). The simulation of Ground Penetrating Radar (GPR) is one such challenge, where the GPR transducer, subsurface/structure, and targets must all be included in the model, and must all be adequately discretised. Additionally, forward simulations of GPR can require hundreds of models with different geometries (A-scans) to be executed. These demands grow by an order of magnitude when solving the inverse GPR problem or when using forward models to train machine learning algorithms.
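To make the resource scaling concrete, here is a rough back-of-the-envelope estimate with illustrative numbers that are not drawn from the paper: a Yee cell stores six field components, so in single precision a cubic model of $500^3 \approx 125$ million cells needs at least

$$125\times10^{6}\ \text{cells}\times 6\ \text{field components}\times 4\ \text{bytes}\approx 3\ \text{GB}$$

of field storage alone, before material and boundary (e.g. PML) arrays are counted. Even at the 3405 Mcells/s throughput quoted below, such a model takes roughly 37 ms per time step, so a B-scan built from hundreds of A-scans of thousands of time steps each quickly runs to hours.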
We have developed one of the first open-source GPU-accelerated FDTD solvers specifically focussed on modelling GPR. We designed optimal kernels for GPU execution using NVIDIA's CUDA framework. Our GPU solver achieved performance throughputs of up to 1194 Mcells/s and 3405 Mcells/s on NVIDIA Kepler and Pascal architectures, respectively. This is up to 30 times faster than our parallelised (OpenMP) CPU solver achieves on a commonly used desktop CPU (Intel Core i7-4790K). We found the cost-performance benefit of the NVIDIA GeForce-series Pascal-based GPUs, which are targeted towards the gaming market, to be especially notable, potentially allowing many individuals to benefit from this work using commodity workstations. We also note that the equivalent Tesla-series P100 GPU, targeted towards data-centre usage, demonstrates significant overall performance advantages due to its use of high-bandwidth memory. The performance benefits of our GPU-accelerated solver were demonstrated in a GPR environment by running a large-scale, realistic simulation of a buried anti-personnel landmine scenario, including dispersive media, rough surface topography, and a detailed antenna model.
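For readers unfamiliar with how an FDTD update maps onto CUDA, the sketch below shows the general shape of such a kernel: one thread per Yee cell, updating a single E-field component (Ex) from the curl of the neighbouring H-field samples. This is a minimal illustration under stated assumptions, not the authors' actual gprMax kernel; the names `update_ex`, the scalar coefficients `ca`/`cb`, the fixed grid dimensions, and the linear memory layout are all simplifications made here for brevity (the real solver handles per-material coefficients, all six field components, dispersive media, and PML boundaries).

```cuda
#include <cuda_runtime.h>

// Illustrative grid size and linear (i, j, k) -> flat index mapping.
// These are assumptions for this sketch, not values from the paper.
#define NX 128
#define NY 128
#define NZ 128
#define IDX(i, j, k) ((i) * NY * NZ + (j) * NZ + (k))

// Minimal single-component Yee update: dEx/dt ~ dHz/dy - dHy/dz.
// ca/cb are lumped update coefficients; dy/dz are cell sizes.
__global__ void update_ex(float *ex, const float *hy, const float *hz,
                          float ca, float cb, float dy, float dz)
{
    // One thread per cell, recovered from a 1D launch index.
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int i = idx / (NY * NZ);
    int j = (idx % (NY * NZ)) / NZ;
    int k = idx % NZ;

    // Skip out-of-range threads and boundary cells; boundary
    // conditions (e.g. PML) would be handled by separate kernels.
    if (i >= NX || j < 1 || j >= NY || k < 1 || k >= NZ) return;

    ex[IDX(i, j, k)] = ca * ex[IDX(i, j, k)]
                     + cb * ((hz[IDX(i, j, k)] - hz[IDX(i, j - 1, k)]) / dy
                           - (hy[IDX(i, j, k)] - hy[IDX(i, j, k - 1)]) / dz);
}

int main(void)
{
    size_t n = (size_t)NX * NY * NZ;
    float *ex, *hy, *hz;
    cudaMalloc(&ex, n * sizeof(float));
    cudaMalloc(&hy, n * sizeof(float));
    cudaMalloc(&hz, n * sizeof(float));
    cudaMemset(ex, 0, n * sizeof(float));
    cudaMemset(hy, 0, n * sizeof(float));
    cudaMemset(hz, 0, n * sizeof(float));

    // Placeholder coefficients/cell sizes, purely for illustration.
    int threads = 256;
    int blocks = (int)((n + threads - 1) / threads);
    update_ex<<<blocks, threads>>>(ex, hy, hz, 1.0f, 0.5f, 0.002f, 0.002f);
    cudaDeviceSynchronize();

    cudaFree(ex); cudaFree(hy); cudaFree(hz);
    return 0;
}
```

Note that each cell update reads several neighbouring H-field values but performs only a handful of floating-point operations, so kernels of this shape are memory-bandwidth bound; this is consistent with the abstract's observation that the high-bandwidth-memory Tesla P100 shows significant overall performance advantages.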
Original language | English
---|---
Pages (from-to) | 208–218
Journal | Computer Physics Communications
Volume | 237
Early online date | 22 Nov 2018
DOIs |
Publication status | Published - Apr 2019
Keywords
- CUDA
- Finite-Difference Time-Domain
- GPR
- GPGPU
- GPU
- NVIDIA
- gprMax