I have been running some gas adsorption simulations in Sorption, and have found some odd behaviour with the calculated isosteric heats.
When I do a Henry constant simulation I get a heat of adsorption a few kcal/mol lower (about half, in relative terms) than if I do a fixed pressure loading at only 1 kPa or a similarly low pressure. The gas loading from the fixed pressure simulation is well predicted by the Henry constant, showing that adsorption is still in the Henry region for this material.
What I suspect is happening is that the "isosteric heat" that Sorption calculates in the Henry constant simulation is a straight average of the energy over all of the allowed gas positions sampled. If so, then this number is physically meaningless, as it depends on the atomic radii used to define the exclusion zone in the material. Including high energy adsorption positions near the atoms raises the average energy, and so lowers the apparent heat of adsorption, meaning the exact average depends on which gas positions you leave out.
If you use a thermally weighted average (i.e. average E using Boltzmann weights of exp[-E/kT]), this gives a number much closer to the isosteric heat reported by the fixed pressure simulation. This number should not depend on the exclusion zone, as every high-energy gas configuration near that zone has an effective weight of essentially 0.
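To make the point concrete, here is a small sketch (with made-up energies, not output from Sorption) showing that adding high-energy configurations near the framework atoms shifts a straight average by several kcal/mol, while a Boltzmann-weighted average barely moves:

```python
import math
import random

kT = 0.593  # kcal/mol at ~298 K

random.seed(1)
# Favourable adsorption sites, energies around -8 kcal/mol (hypothetical)
energies = [random.gauss(-8.0, 0.5) for _ in range(1000)]

def straight_avg(es):
    return sum(es) / len(es)

def boltzmann_avg(es, kT):
    # Weight each configuration by exp(-E/kT); shift by min(E) to avoid overflow
    e0 = min(es)
    ws = [math.exp(-(e - e0) / kT) for e in es]
    return sum(w * e for w, e in zip(ws, es)) / sum(ws)

base_straight = straight_avg(energies)
base_weighted = boltzmann_avg(energies, kT)

# Now include repulsive positions near the exclusion-zone boundary
energies_with_wall = energies + [random.gauss(5.0, 1.0) for _ in range(500)]

print(straight_avg(energies_with_wall) - base_straight)    # shifts by several kcal/mol
print(boltzmann_avg(energies_with_wall, kT) - base_weighted)  # essentially unchanged
```

The high-energy points get weights of order exp(-13/0.6) and contribute nothing to the weighted average, which is why it is insensitive to exactly where the exclusion-zone cutoff is drawn.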
Has anyone else noticed this problem, and can anyone tell me whether the average isosteric energy in the Henry constant simulation is a straight average or a weighted average? If the Henry constant simulation uses a straight average, is there any way to recover a physically meaningful number from that data?
Thanks.
Bradw.