Given the prior discussions about the benchmark scores, some of which didn't seem to line up, I took a closer look at the "detailed" data. Here are several points that I noticed:
- The OS is always listed as x86 even when SW is listed as x64, which is not a valid combination (the kind of row a screening pass like the one sketched after the table below would flag).
- The processor speed, which is obviously a critical component, is not always captured. For example, see gTest1, DGraham_HP9400, and purCAB1. They list the model, family, and stepping, but those do not uniquely identify the processor frequency. Additionally, reporting the info just from the CPUID will not properly reflect any overclocked machines (see the clock-capture sketch after the table below). I have attached the partial output from PassMark for reference.
- RichardB's score of 34393 has to be incorrect (or there is something very wrong with this machine). This type of data needs to be deleted.
- Is it possible that resr123, with a 3 GB machine that was introduced in 2006, has a score that is 33% less than the next best?
- Only gTest1 seems to have any render results. Burke, Ed Laptop, and LynnC have timings of 0.1. I have yet to figure out how to turn this option on.
- Consider the following "anomalies":
  NAME        SW BENCHMARK   WEI
  Burke       359            5.8
  purCAB-1    369            6.6    14% greater WEI (assuming that this is a linear measure) yields a 3% reduction in benchmark
  resr123     164            7.1
  SolidMuse   246            7.6    7% greater WEI yields a 50% reduction in benchmark
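To make those percentages explicit, here is a quick arithmetic check of the two pairs above. It assumes the SW BENCHMARK figures are run times (lower is better, so a larger number means a slower run) and that WEI scales roughly linearly; those are my reading of the data, not something the export states.

```python
# Rough check of the WEI-vs-benchmark ratios from the table above.
# Benchmark values are taken as run times (lower is better); WEI higher is better.
pairs = [
    ("Burke",   359, 5.8, "purCAB-1",  369, 6.6),
    ("resr123", 164, 7.1, "SolidMuse", 246, 7.6),
]

for name_a, bench_a, wei_a, name_b, bench_b, wei_b in pairs:
    wei_gain = (wei_b - wei_a) / wei_a * 100            # how much higher the WEI is
    bench_change = (bench_b - bench_a) / bench_a * 100  # how much slower the run is
    print(f"{name_b} vs {name_a}: WEI +{wei_gain:.0f}%, benchmark time +{bench_change:.0f}%")
```

That reproduces the 14%/3% and 7%/50% figures: in both pairs the machine with the better WEI turns in the slower time, which is exactly backwards.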
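On the processor-speed point: a minimal sketch, assuming Windows and Python, of reading the clock the machine actually booted at from the registry instead of inferring it from the CPUID family/model/stepping. The "~MHz" value is measured at startup, so it should reflect a base-clock overclock; a tool like PassMark that samples the live clock is still the better reference.

```python
# Minimal sketch: read the measured CPU clock from the Windows registry.
# This reflects the speed the machine actually boots at, unlike the
# family/model/stepping reported from CPUID.
import winreg

def measured_cpu_mhz(core: int = 0) -> int:
    key_path = rf"HARDWARE\DESCRIPTION\System\CentralProcessor\{core}"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        mhz, _ = winreg.QueryValueEx(key, "~MHz")
        return mhz

if __name__ == "__main__":
    print(f"Measured clock: {measured_cpu_mhz()} MHz")
```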
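And on the bad rows (the x86/x64 mix-up and scores like 34393): a rough screening pass, sketched against a hypothetical CSV export with made-up column names (name, os_arch, sw_arch, benchmark), that would flag both kinds of problem before they land in the comparison.

```python
# Rough screening pass over a hypothetical CSV export of the results.
# Column names are assumptions; adjust to whatever the real export uses.
import csv
from statistics import median

def screen(path: str, outlier_factor: float = 5.0) -> None:
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    times = [float(r["benchmark"]) for r in rows]
    typical = median(times)

    for r in rows:
        # x64 software cannot run on an x86 OS, so this combination is bad data.
        if r["sw_arch"] == "x64" and r["os_arch"] == "x86":
            print(f'{r["name"]}: x64 software reported on an x86 OS -- not a valid combination')
        # A time wildly out of line with the rest is almost certainly a bad entry.
        if float(r["benchmark"]) > outlier_factor * typical:
            print(f'{r["name"]}: time {r["benchmark"]} is more than {outlier_factor}x the median -- probably bad data')
```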
For the benchmarks to have real value, the underlying data must be squeaky clean.
Ken