Convergence Tolerance in Nonlinear Simulations

Hello all,

I am curious if someone can explain the mathematical role of the “convergence tolerance” parameter, specifically with regard to the sparse solver during a nonlinear, static simulation with large strains and displacements (though perhaps the simulation type isn't necessary for an explanation)?

My novice understanding is that the higher the convergence tolerance, the larger the allowable error when solving the stiffness equations at each iteration. However, I am curious whether anyone has a sounder interpretation, or better yet a mathematical definition of the parameter.

I am interested in precisely how this parameter influences the accuracy of the solution. For instance, if I increase the tolerance by an order of magnitude, by what factor can I expect the solution accuracy to change, if at all? I imagine this is predominantly application-specific, and I may need to simply change the parameter incrementally and observe the resulting changes in the solution.
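To make my question concrete, here is a toy sketch of how I *imagine* the tolerance works inside a Newton-Raphson equilibrium loop. This is just my guess, not the actual SOLIDWORKS implementation: I am assuming the tolerance is compared against a relative residual (out-of-balance force) norm, and all function names here are my own.

```python
import numpy as np

def newton_solve(residual, jacobian, u0, tol, max_iter=50):
    """Newton-Raphson loop: iterate until the residual (out-of-balance
    force) norm drops below tol times the initial residual norm.
    In a real FE code the linear solve would use the sparse solver."""
    u = np.asarray(u0, dtype=float)
    r = residual(u)
    r0_norm = np.linalg.norm(r)
    for it in range(max_iter):
        if np.linalg.norm(r) <= tol * r0_norm:
            return u, it  # converged: solution and iteration count
        du = np.linalg.solve(jacobian(u), -r)  # tangent-stiffness solve
        u = u + du
        r = residual(u)
    raise RuntimeError("did not converge within max_iter iterations")

# Toy 1-DOF nonlinear "stiffness": k(u)*u - f with k(u) = 1 + u^2, f = 2
residual = lambda u: np.array([(1.0 + u[0] ** 2) * u[0] - 2.0])
jacobian = lambda u: np.array([[1.0 + 3.0 * u[0] ** 2]])

u_loose, it_loose = newton_solve(residual, jacobian, [0.5], tol=1e-2)
u_tight, it_tight = newton_solve(residual, jacobian, [0.5], tol=1e-8)
```

If this picture is right, loosening the tolerance mainly stops the iteration earlier, so the leftover equilibrium error is bounded by the tolerance rather than scaling with it in any fixed ratio. That is exactly the part I would like someone to confirm or correct.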

Thanks for any insight you may have!

-Andy

SolidworksSimulation