Re: errors (uncertainties) in non-linear least-squares fitting parameters

Rafael Guerra

Re: errors (uncertainties) in non-linear least-squares fitting parameters

Hi Heinz,

 

For the regression errors, I am not an expert, but based on Wikipedia and the reference below, I would risk the following code (use at your own peril):

https://pages.mtu.edu/~fmorriso/cm3215/UncertaintySlopeInterceptOfLeastSquaresFit.pdf

 

// Note: for degrees of freedom >= 6, the two-sided 95% t-factor is taken as ~2
// (sig is the residual standard deviation returned by reglin below)
N = length(MW);
mx = mean(MW);
SSxx = sum((MW - mx).^2);
Ea = diag(2*sig/sqrt(SSxx))              // take Ea diagonals; slope 95% confidence
Eb = diag(2*sig*sqrt(1/N + mx^2/SSxx))   // take Eb diagonals; intercept 95% confidence

 

Concerning the least-squares regression part, it seems the code can be written more compactly using reglin:

 

[a,b,sig] = reglin(MW',Y')           // simple least-squares linear regression
GG = a.*.xx' + repmat(b,size(xx'))   // fitted lines, one per row
plot(xx,GG','LineWidth',1);
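For readers who do not have the original MW, Y and xx variables, here is a self-contained version of the same flow on synthetic data (the shapes and values below are assumptions made for illustration, in the spirit of Denis's test data further down):

// synthetic test data (assumed shapes: MW is Nx1 of x-values, Y is Nx3, one line per column)
MW = (1:10)';
Y  = (MW + 4)*[0.9 1 1.2] + 0.1*rand(10,3);     // three noisy lines through (-4,0)
xx = linspace(min(MW), max(MW), 50)';           // plotting grid

[a,b,sig] = reglin(MW',Y')                      // slopes a, intercepts b, residual std dev sig
GG = a.*.xx' + repmat(b,size(xx'))              // fitted lines, one per row
plot(xx,GG','LineWidth',1);

// approximate 95% confidence half-widths, as in the snippet above
N  = length(MW);
mx = mean(MW);
SSxx = sum((MW - mx).^2);
Ea = diag(2*sig/sqrt(SSxx))                     // slope half-widths (take the diagonal values)
Eb = diag(2*sig*sqrt(1/N + mx^2/SSxx))          // intercept half-widths (take the diagonal values)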

 

 

Regards,

Rafael

 


Heinz Nabielek-3

Re: errors (uncertainties) in non-linear least-squares fitting parameters

There is a little misunderstanding here (my fault: I had not explained it).
I want all three straight lines to pass simultaneously through one point on the negative x-axis. This is why I had to use a non-linear least-squares fit.
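
For illustration, a minimal sketch of how such a constrained fit can be set up with Scilab's lsqrsolve, parameterizing the model by the common root x0 and one slope per line (this is not Heinz's original code; MW is assumed Nx1 and Y is assumed Nx3, as above):

// model: column j of Y ~ slope_j*(x - x0), with the same root x0 for all three lines
// parameter vector: p = [x0; slope1; slope2; slope3]
function r = resid(p, m)
    // MW and Y are read from the calling environment (Scilab allows this)
    r = matrix(Y - (MW - p(1))*p(2:4)', -1, 1);   // stack the Nx3 residual matrix
endfunction

p0 = [-4; 1; 1; 1];                               // rough initial guess (assumed values)
[popt, v] = lsqrsolve(p0, resid, size(Y,"*"));    // v holds the residuals at the optimum
x0 = popt(1), slopes = popt(2:4)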

Heinz




Rafael Guerra

Re: errors (uncertainties) in non-linear least-squares fitting parameters

In that case, the code can be simplified using backslash (left matrix division):

 

// Fixed point (-4,0) solution:
a = (MW+4)\Y;                             // constrained slopes, one per column of Y
b = a*4;                                  // intercepts implied by the fixed point
GG = a'.*.xx' + repmat(b',1,size(xx,1));
plot(xx,GG','LineWidth',1);
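
If the uncertainties of these constrained slopes are also wanted (the original subject of the thread), one rough option is the single-parameter least-squares formula for a regression forced through a known point, sketched below (an addition for illustration, not part of Rafael's code):

// approximate 95% half-widths for the constrained slopes (t-factor taken as ~2)
res = Y - (MW+4)*a;                 // residuals, one column per line
n   = size(MW,1);
s2  = sum(res.^2,"r")/(n-1);        // residual variance per line (one parameter fitted)
Ea  = 2*sqrt(s2/sum((MW+4).^2))     // ~95% half-width of each slope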

 

Regards,

Rafael


Denis Crété

Re: errors (uncertainties) in non-linear least-squares fitting parameters

Hello,

If the fixed point has to be optimized as well, it is possible to keep a linear treatment, although the solution I have found is tedious:

First, notice that because of the common fixed point, and because the set of xk is the same for the 3 lines, all Y coordinates are proportional, i.e.:

- y2(xk) = P2/P1 * y1(xk)

- y3(xk) = P3/P1 * y1(xk)

It is probably easy to fit the datasets y2 and y3 as a function of y1 to find r = P2/P1 and s = P3/P1. It might even be possible to use r = sum(y2)/sum(y1) and s = sum(y3)/sum(y1)… but the exact least-squares solution is r = sum(y2.*y1)/sum(y1.*y1), s = sum(y3.*y1)/sum(y1.*y1).

Then the full dataset of the 3 functions y1, y2/r and y3/s can be fitted to the same function p1*x + A (e.g. using reglin).

However, I have not written the code yet (a rough sketch of the steps is given below)… There may be a more elegant solution…
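
A rough, untested sketch of the steps just described (not Denis's code; y1, y2, y3 are assumed to be 1xN row vectors sampled at the same x):

// proportionality factors, using the exact least-squares expressions above
r = sum(y2.*y1)/sum(y1.*y1);
s = sum(y3.*y1)/sum(y1.*y1);

// pool the rescaled datasets and fit a single line p1*x + A
[p1, A, sig] = reglin([x, x, x], [y1, y2/r, y3/s]);
x0 = -A/p1                          // estimated common root on the x-axis
slopes = p1*[1, r, s]               // slopes of the three original lines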

I understand this is not the focus of the initial question, but it may help anyway.

 

Denis

NB: a more compact algorithm is to fit, for i = 1…3, yi/sum(yi.*y1) = f(x).

 

Denis Crété

Re: errors (uncertainties) in non-linear least-squares fitting parameters

Hello,

Just to finish my suggestion, here is the code, taking Y(2,:) as the "reference":

// prepare "noisy" data
slope = [0.9;1;1.2]; X = 1:10; a = 4;
Y = slope*(X+a) + 0.1*rand(3,10);

// solve problem: rescale each dataset so all three collapse onto one line
Z = matrix(Y'*inv(diag(Y(2,:)*Y')), -1, 1);
[p,q,sig] = reglin([X,X,X], Z')

// compare results with a and slope
q/p, p*Y(2,:)*Y'                   // should approximate a and the three slopes

 

It should work as long as the same values of X are used for all datasets. As written, it also assumes that the datasets contain the same number of points. This restriction could be lifted by using, at each distinct xk, the average of the Yk values taken there, with a weight equal to the number of points averaged for that xk.

And “sig” should give you some information on errors…
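
For completeness, sig can be turned into rough 95% half-widths for p and q using the same formulas Rafael quoted at the start of the thread (a sketch, not part of Denis's code):

Xp   = [X,X,X];                                   // pooled x-values used in the fit
SSxx = sum((Xp - mean(Xp)).^2);
Ep = 2*sig/sqrt(SSxx)                             // ~95% half-width of the slope p
Eq = 2*sig*sqrt(1/length(Xp) + mean(Xp)^2/SSxx)   // ~95% half-width of the intercept q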

 

HTH

Denis

 

 

Dang Ngoc Chan, Christophe

Re: {EXT} Re: errors (uncertainties) in non-linear least-squares fitting parameters

In reply to this post by Heinz Nabielek-3
Hello Heinz,

> From: Heinz Nabielek
> Sent: Monday, 24 August 2020, 23:59
>
> I want all three straight lines to go simultaneously through one point at the negative x-axis.
> This is why I had to use a non-linear least-squares fit.

The calculation runs quite fast, so it might not be worth the effort of optimising it.
But if you ever had more points and longer computation times, you might try first performing a linear regression on each set of data to set the initial guess of the parameters p0.
The non-linear regression should then converge faster.
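
A possible sketch of that initial-guess step, feeding the result into the non-linear fit outlined earlier in the thread (the variable names and shapes are assumptions):

// independent per-line fits to build the starting point of the non-linear fit
[a0, b0] = reglin(MW', Y');             // slopes and intercepts, one per line
x0_guess = -mean(b0./a0);               // average root of the three fitted lines
p0 = [x0_guess; a0(:)];                 // [common root; three slopes]
[popt, v] = lsqrsolve(p0, resid, size(Y,"*"));   // resid as in the earlier sketch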

Regards.

--
Christophe Dang Ngoc Chan
Mechanical calculation engineer
