For a small (output files under ~25 MB) high-frequency time-domain simulation, the limiting performance factor appears to be disk write speed. Is there a way to remove that limit by having CST work in memory until the end of the run?
The attached image charts the same simulation run on different platforms. The model is built in real time with Python before the simulation is run. All steps of the process are affected by disk speed, whereas CPU count and memory size show very little correlation with performance.
Launch = opening CST and creating the project via the CST Studio Suite Python library
Build = adding the modeling steps to the history list and saving the project via the Python library
Solve = meshing + HF time-domain solver
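For reference, the per-step timings in the chart were gathered with a simple wall-clock wrapper around each phase. A minimal sketch of that instrumentation is below; the CST Studio Suite calls themselves are replaced with placeholder comments, since the exact API calls (project creation, history-list updates, solver start) depend on the installed CST version:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results):
    """Record the wall-clock duration of one phase into `results`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[label] = time.perf_counter() - start

results = {}

# Placeholders for the actual CST Studio Suite Python library calls.
with timed("Launch", results):
    pass  # e.g. open CST and create the project
with timed("Build", results):
    pass  # e.g. add modeling steps to the history list and save
with timed("Solve", results):
    pass  # e.g. mesh + run the HF time-domain solver

print(results)  # per-phase durations in seconds
```

On a disk-bound run, the Build and Launch numbers grow with slower storage while Solve stays comparatively flat, which is the pattern the chart shows.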