Calibration App Performance


Performance of the material model calibration app

General comments: 

The number of test datasets will affect performance.  In R2023x and earlier versions, it is a best practice to import only those datasets needed for calibration; importing many repeats of the same test just to review the data will degrade performance.  In R2024x FD01, we added a new feature to 'deactivate' datasets.  One may import all repeats of a test and then use this 'deactivate' feature for those datasets not needed for follow-on calibration activities.

The number of test data points in any test dataset will also affect performance.  Try to keep the number of test data points to those needed to describe the shape of the stress-strain curve.  Datasets with multiple thousands of points (or more) will often show noticeable performance degradation.
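As an illustration, a dense stress-strain curve can be thinned before import.  The sketch below is not part of the app; it is a minimal, generic Python example of keeping roughly evenly spaced points while always retaining the first and last point of the curve:

```python
def downsample_curve(strain, stress, max_points=200):
    """Thin a dense stress-strain curve to at most max_points,
    always keeping the first and last data points so the ends
    of the curve are preserved."""
    n = len(strain)
    if n <= max_points:
        return list(strain), list(stress)
    # Evenly spaced indices across the full range, endpoints included
    step = (n - 1) / (max_points - 1)
    idx = sorted({round(i * step) for i in range(max_points)})
    return [strain[i] for i in idx], [stress[i] for i in idx]

# Example: 5000 synthetic points reduced to 200
strain = [i / 4999 for i in range(5000)]
stress = [2.0 * e for e in strain]      # toy linear response
s2, t2 = downsample_curve(strain, stress)
print(len(s2))   # 200
```

For curves with sharp features (e.g., an initial stiff region), a shape-aware thinning (denser sampling where curvature is high) would preserve the curve better than uniform decimation.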

DMA test data:  Many test points may degrade performance.  When choosing the "conversion parameters", the user should be mindful that the time series created may contain many more points.  The "Create Time Series" feature can be used to display the time series that will be used and to check its number of points.  This time series should be deleted if not needed for follow-on calibrations.

Analytical Mode:  Analytical mode is often so fast that performance degradation is not noticeable.

Numerical Mode:  In numerical mode, the user can set the number of cpus to be used.  If cpus=1, all the test datasets and their responses will be calculated in a single A/Std run.  If cpus=2, two A/Std runs will be used, with the test datasets split between the two runs.  For small datasets and simple material models, using multiple cpus may actually slow things down because of the per-run overhead.  For larger datasets and more complex material models (more time spent in A/Std), the performance gains become larger.  See the testing example below.
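The trade-off above can be sketched with a toy model: the datasets are split evenly across parallel A/Std runs, and each run pays a fixed launch overhead.  The numbers below are illustrative assumptions, not measured app behavior:

```python
def estimated_wallclock(total_work_s, n_cpus, launch_overhead_s=5.0):
    """Toy model of numerical-mode wall-clock time: datasets are
    split evenly across n_cpus parallel A/Std runs, and each run
    pays a fixed launch overhead.  The 5 s overhead is an assumed
    value for illustration only."""
    return launch_overhead_s + total_work_s / n_cpus

# Small total solve time: overhead dominates, little gain past 2 cpus.
# Large total solve time: splitting across cpus pays off.
for work in (10.0, 600.0):
    for cpus in (1, 2, 4, 8):
        t = estimated_wallclock(work, cpus)
        print(f"work={work:>6}s  cpus={cpus}:  ~{t:.1f}s")
```

In this model, 10 s of total work goes from 15 s on 1 cpu to only 6.25 s on 8 cpus (a 2.4x speedup, far short of 8x), while 600 s of work scales much closer to ideally, which matches the qualitative behavior described above.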

FE Mode:  The FE models used in the calibration app should be kept small and fast running.  The user should run these models on their own before using them in the calibration app.  Runtimes in the tens to hundreds of seconds are reasonable; beyond that, the app's calibration times will degrade and the app becomes far less interactive.  The calibration app will run all FE models associated with a calibration simultaneously.

There are two locations in the app where the user can set the number of cpus to be used:

Optimization controls: only for Numerical mode.

During import of an FE model: only for specific FE models.

Fast Evaluations in Numerical mode:  All fast evaluations are performed in a single A/Std run and therefore do not take advantage of multiple cores.

Performance testing example, using numerical mode and multiple cpus:

I started this example with our elastomer class data.  It consisted of 4 datasets, but I duplicated each dataset to bring the total to 8.  I then tested the performance of a Yeoh calibration on 1, 2, 4, and 8 cpus.  After the initial testing (left columns in the image below), I used the regularize feature to make each dataset larger by a factor of 10.  The testing with these larger datasets is shown on the right side of the image below.

Take-away #1: When using multiple test datasets in numerical mode, there is an appreciable performance speedup from using 2 cpus, even with small datasets.

Take-away #2: For small and medium-sized datasets, when calibrating a simple material model like the Yeoh model, there are no further performance gains as one moves to more cpus.


This testing was performed on the public cloud, R2023x HotFix 3.11.

The zip file below contains two 3dxml files and a summary Excel file.

I also did some numerical mode testing with 9 DMA storage modulus datasets, calibrating a hyper + Prony + TRS material model.  Those DMA datasets were larger than the ones used above, and in that testing there were performance gains at cpus=9 (on my laptop, which has 6 physical cores and 12 logical cores).


Update, May 29, 2024:  I have rerun this performance study on the public cloud, R2024x HotFix 3.16.  This helps to double-check the calibration app's performance over time.  Some runs were a bit faster, some a bit slower.


The zip file below is a superset of the zip file above. 


Back to:  Material Modeling and Calibration - An Overview and Curriculum