Flow Simulation gets slower over time

Hello,
I have a persistent issue that I can’t find a solution for. I’ll describe the computer specifications and failed troubleshooting attempts further below.

Problem description:
The problem occurs regardless of whether the flow simulation is steady-state or time-dependent. Meshing works without any issues, all boundary conditions can be defined without problems, and the goals/objectives can also be set normally.

The simulation starts and everything seems fine. However, as the simulation progresses, it becomes increasingly slow. CPU utilization gradually but continuously decreases for no apparent reason: an initial load of around 80% per processor drops to only about 40% per processor after 24 hours of computation (dual-CPU system).

I’m not hitting any power limits, thermal throttling, or anything else hardware-related, so I suspect the issue lies in the software.

What I’ve tried so far:

  • Logged all temperatures — no abnormalities.
  • Placed the computer directly under an open window in cold outside temperatures — no change.
  • The issue occurred with every combination: Win10 + 2024 Student, Win11 + 2024 Student, Win11 + 2025 Student Edition — always the same behaviour.
  • Completely reinstalled the PC with Win11 and 2025 Student — no change.
  • As a test, I ran other software 24/7 that demands far more power (Prime95) — ran without any problems.
  • Ran a comparison simulation with Autodesk CFD — no performance degradation observed.
  • Pausing the calculation, shutting down the PC, and continuing the simulation the next day doesn’t help either; it resumes at the same reduced speed as at the end of the previous day.

System:

  • Motherboard: Asus Z10PE-D8
  • CPU: 2× Xeon E5-2667 v4
  • CPU Cooling: each with a 240 mm water radiator
  • RAM: 8× 32 GB DDR4 ECC 2400 MHz (256 GB total)
  • Boot SSD: WD Black SN850 1 TB NVMe
  • Simulation SSD: Samsung 980 1 TB NVMe
  • GPU: Nvidia RTX A4000
  • PSU: Corsair HX1000i

I’m running out of ideas, and this problem has existed for about a year now. I can’t run long-term time-dependent simulations, because with every timestep the remaining compute time to reach the solution grows rather than shrinks. In some cases, the time per iteration increases by a factor of 7–8 within 24 hours. Below is a screenshot with logging over close to 6 hours for the large charts; the small ones show around 2 hours.

(Top to bottom, left to right)
- Core temperature CPU0
- CPU temperature CPU0
- Cache temperature CPU0
- Core temperature CPU1
- CPU temperature CPU1
- Cache temperature CPU1
- CPU usage CPU0
- CPU usage CPU1
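In case anyone wants to reproduce the measurement, here is a minimal sketch (Python, standard library only) of how I quantify the time-per-iteration drift. It assumes a solver/monitor log with an ISO timestamp at the start of each line — that format is my assumption, so adapt the parsing to whatever your solver actually writes:

```python
from datetime import datetime

def iteration_durations(lines):
    """Return the seconds elapsed between consecutive timestamped lines."""
    # Assumed format: each non-empty line starts with an ISO 8601 timestamp
    stamps = [datetime.fromisoformat(line.split()[0])
              for line in lines if line.strip()]
    return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]

# Example with made-up timestamps: the gap grows from 10 s to 15 s
log = [
    "2025-05-01T00:00:00 iteration 1",
    "2025-05-01T00:00:10 iteration 2",
    "2025-05-01T00:00:25 iteration 3",
]
print(iteration_durations(log))  # → [10.0, 15.0]
```

Plotting those durations against wall-clock time makes the slowdown (and the factor 7–8 over 24 hours) easy to show in one chart.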

I would appreciate any ideas, and if you need additional information, please let me know!

Thanks!

Best regards 😊