Hi everyone,
I am simulating the behavior of a travelling-wave tube amplifier (TWTA) using CST Studio Suite 2020. Since simulations were taking quite a while (more than 24 h in some cases), I decided to get an acceleration token together with an NVIDIA K6000 GPU.
At first everything worked great, achieving over 4x acceleration compared to an Intel Core i9-9900K @ 3.6 GHz. The problem came when I increased the total number of mesh cells in my model (to approximately 8 million): the simulations would crash before finishing, reporting insufficient memory. I checked, and the application was reaching close to 60 GB of memory usage (my machine has 64 GB of RAM).
The weird part is that when I run the same simulation without GPU acceleration, memory usage is 3 GB at most.
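In case it helps anyone reproduce the measurement, this is roughly how I tracked memory while the solver was running (a minimal sketch; the 5-second interval and log file name are arbitrary, and it assumes `nvidia-smi` is on the PATH, which it normally is with the NVIDIA driver installed):

```shell
#!/bin/sh
# Log host RAM and GPU memory usage every 5 s while the solver runs.
# Stop with Ctrl-C once the simulation finishes or crashes.
while true; do
    date '+%F %T'
    # Third field of the "Mem:" row of `free -m` is used memory in MiB.
    free -m | awk '/^Mem:/ {printf "host RAM used: %d MiB\n", $3}'
    # GPU memory currently allocated on the card.
    nvidia-smi --query-gpu=memory.used --format=csv,noheader
    sleep 5
done >> mem_log.txt
```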
Has anyone else experienced this kind of problem? Is there any reason why CST would use more memory with GPU acceleration enabled, or is this a bug in the software with some kind of memory leak?
Thanks,
Marcos Martinez