fredmarlton
Hi all,
I'm wondering if it's possible to work out the preference for a particular refinement solution? I think another way of putting this is that I want to know if it's possible to work out the curvature of chi2 with respect to a parameter (in the example below: composition)?
For example, consider a 2-phase refinement. Say the refinement produces the lowest Rwp for a 30%/70% mix. Now, what if you repeated the refinement with various fixed compositions (incrementing by 5% between the two phases in a batch script) and found that the Rwp was still lowest for the 30/70 mix, but was almost the same for each of them (i.e. a plot of Rwp vs comp% would be almost a flat line)? From this you could say there's no real preference for the best solution (not sure if "preference" is the correct term or not...).
Now let's say for another 2-phase refinement you go through the exact same process, but this time there actually is a dip (an upside-down peak) when you plot Rwp vs comp%. Hence, you can confidently say that the result with the minimum Rwp is the best result.
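Something like this is what I have in mind (only a rough sketch, assuming launch mode with num_runs; the file name, the zmv placeholder values and the scale constraint are made up, and the str blocks are skeletal):

num_runs 13 ' 13 runs: w1 fixed at 0.20, 0.25, ... 0.80
prm !w1 = 0.20 + 0.05 Run_Number; ' fixed weight fraction of phase 1 for this run
prm !zmv1 1000 ' cell mass x cell volume of phase 1 (placeholder value)
prm !zmv2 1500 ' cell mass x cell volume of phase 2 (placeholder value)
str
   phase_name "phase_1"
   ' cell, space group and sites as usual - omitted
   scale sc1 0.0001
str
   phase_name "phase_2"
   ' cell, space group and sites as usual - omitted
   ' tie the second scale to the first so the weight fraction stays fixed at w1
   scale = sc1 * (zmv1 / zmv2) * ((1 - w1) / w1);
out "rwp_vs_comp.txt" append
   Out(w1, " %8.3f")
   Out(Get(r_wp), " %11.5f")
   Out_String("\n")

A plot of the second column against the first would then be the Rwp vs comp% curve I'm describing.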
Feel free to correct my terminology.
Thanks,
Fred
rowlesmr
Do you mean: let's get some data, get a model, fix the scale factors such that the answer is 70/30, do a full minimisation of all other parameters, and get the Rwp? And then change the scale factors so that the answer is 75/25, do a full minimisation again, and so on?
That is sort of already accounted for in the C matrix. If you look at the parameter errors: a prm value of 10 +/- 0.01 has a strong preference for being 10, whereas a value of 10 +/- 15000 doesn't have a very strong preference for being 10.
The method you're talking about is one way of deriving prm errors in a non-linear model when using linear minimisation. TOPAS employs non-linear minimisation, and so this is all done at once.
See:
Hughes, I. G., and T. P. A. Hase. 2013. Measurements and Their Uncertainties: A Practical Guide to Modern Error Analysis. Oxford University Press.
Bevington, P. R., and D. K. Robinson. 2003. Data Reduction and Error Analysis for the Physical Sciences. 3rd ed. McGraw-Hill.
Taylor, J. R. 1997. An Introduction to Error Analysis. 2nd ed. University Science Books.
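For what it's worth, the link between the esd and the curvature of chi2 that Fred asked about is the standard one from those books. Roughly, ignoring correlations between parameters (the C matrix handles the general case),

\chi^2(p) \approx \chi^2_{\min} + \frac{(p - \hat{p})^2}{\sigma_p^2}
\quad\Longrightarrow\quad
\sigma_p^2 \approx 2 \left( \frac{\partial^2 \chi^2}{\partial p^2} \right)^{-1}

so a nearly flat Rwp (or chi2) vs composition curve is the same information as a large esd on the composition, and a sharp minimum corresponds to a small esd.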
fredmarlton
Hi Matthew,
Thanks for the response. Yes, you're on the right track with that method. And essentially I want to avoid doing that batch fitting process. Another example (which I think I've seen somewhere in the TOPAS manual or wiki) is varying occupancy. So, for a particular site you vary a fixed occupancy between 0 and 1 and compare the fits. Now, let's say hypothetically that the plot of Rwp vs occ is approximately a flat line. Would you say that looking at the occ error vs occ would then be the best indication of the best result?
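Roughly what I mean, as a sketch (the site, atom and file names are made up, and the rest of the str is omitted):

num_runs 11 ' occupancy fixed at 0.0, 0.1, ... 1.0
prm !occ_fix = 0.1 Run_Number; ' fixed occupancy for this run
str
   ' cell, space group and other sites as usual - omitted
   site Na1 x 0 y 0 z 0 occ Na = occ_fix; beq 1
out "rwp_vs_occ.txt" append
   Out(occ_fix, " %6.2f")
   Out(Get(r_wp), " %11.5f")
   Out_String("\n")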
Also, regarding what you said about "a prm value of 10 +/- 0.01, then that parameter has a strong preference for being 10. If it were 20 +/- 15000, then there isn't a very strong preference for being 20" (I changed the second number): I completely agree with this, but what I want to know is the difference in that preference. I want to be able to say either "this is the best option and the rest are terrible", or "this is the best option, but the rest are also OK". This might not make much sense, but I'm trying to get some sort of "confidence measure" out of TOPAS.
Thanks,
Fred
alancoelho
Hi Fred
>I'm wondering if it's possible to work out the preference for a particular refinement solution?
As Matthew indicated, the correlation matrix, together with the parameter errors, is what you want.
The bootstrap method of error determination (see the keyword bootstrap_errors in the Technical_Reference.PDF) is another means of looking at the same thing. Typically bootstrapping does the same as do_errors.
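That is just something like the following (200 is only an example number of bootstrap refinements, and I think do_errors also needs to be defined for the esds to be reported):

do_errors
bootstrap_errors 200 ' esds taken from the spread of the bootstrap refinements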
You could also use the following to look at it manually:
chi2_convergence_criteria 0.01 ' in other words, changes in Rwp of less than 0.01 will be ignored
continue_after_convergence
out_prm_vals_on_convergence SOME_FILE
With those included, then put in some val_on_continue keywords to randomize the refinement a little, i.e.
a @ 8.58 val_on_continue = 8.58 + Rand(-0.05, 0.05);
b @ 4.58 val_on_continue = 4.58 + Rand(-0.05, 0.05);
etc.
In the end you would be doing what bootstrapping does.
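Put together, it might sit in the INP something like this (the data and output file names, and the +/- 0.05 windows, are just examples):

chi2_convergence_criteria 0.01
continue_after_convergence
out_prm_vals_on_convergence rand_starts.txt
xdd "mydata.xy"
   ' background, instrument and peak definitions as usual - omitted
   str
      a @ 8.58 val_on_continue = 8.58 + Rand(-0.05, 0.05);
      b @ 4.58 val_on_continue = 4.58 + Rand(-0.05, 0.05);
      c @ 6.12 val_on_continue = 6.12 + Rand(-0.05, 0.05);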
cheers
alan
fredmarlton
Thanks for that Alan! Looks like there are some good things to try.
Fred