David Chèze
Hi,
I'm running Scilab 6.0 on WIN7_64 and I failed to run an optimisation based on the 'optim' family of functions (leastsq), while fminsearch performed well. The calls in my code are summarized below:

-----
function RMSE=evaloptiM1D(X)
    ... // calls to other macros to calculate the "cost" = RMSE returned value
endfunction

x0 = [1.27 ; 0.8]
[fopt, xopt] = leastsq(evaloptiM1D, x0)       // fails
[x, fval] = fminsearch(evaloptiM1D, x0', opt) // runs OK
----------

fminsearch returns successfully with X = 1.22 0.77, while leastsq starts and then fails with:

User function 'costf' have not been setted.

Any idea what this means and where it could come from?

Thanks for your advice,
David
Hello David,

Do the examples work? I need to know whether the problem is related to your scripts or to leastsq. Could you please reduce it to a minimal reproducible use-case and report it on the Bugzilla?

Regards,
Paul

On 03/20/2017 11:48 AM, David Chèze wrote:
> [...]

--
Paul BIGNIER
Development engineer
Scilab Enterprises
143bis rue Yves Le Coz - 78000 Versailles, France
Phone: +33.1.80.77.04.68
http://www.scilab-enterprises.com

_______________________________________________
users mailing list
http://lists.scilab.org/mailman/listinfo/users
David Chèze
Hi Paul,
the leastsq examples run properly on my machine, as do other tests with simple functions inside.

I looked at leastsq first because of its minimal requirements: a very simple way to express the problem, f = costf(X), with functions that are not easily vectorizable (otherwise I would have used datafit). I saw in lsqrsolve that we need to provide the number of equations to run the solver; maybe it's similar for leastsq? In my case the costf function evaluates the model over 7 tests, and each test is 6 to 7 evaluations of different configurations. I'm limiting the calibration to 2 unknowns in my model, so the problem can be solved a priori, but leastsq is not informed of that a priori.

David
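The difference in calling conventions mentioned here can be sketched as follows, going by the Scilab help pages: leastsq takes only the residual function (it minimizes sum(fun(x).^2) and infers the residual count itself), while lsqrsolve additionally needs m, the number of residuals, and calls its function as fct(x, m). The residual function below is a made-up placeholder, not the actual model from this thread.

```scilab
// Sketch of the two calling conventions; residuals are placeholders.

function r = residuals(x)
    t = (1:7)';                      // e.g. 7 tests, as in the problem above
    r = x(1)*exp(-x(2)*t) - 0.5;     // hypothetical residual vector
endfunction

function r = residualsm(x, m)
    r = residuals(x);                // lsqrsolve variant: m is passed in
endfunction

x0 = [1.27; 0.8];
[fopt, xopt] = leastsq(residuals, x0);        // residual count inferred
[xsol, v]    = lsqrsolve(x0, residualsm, 7);  // m = 7 given explicitly
```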
Tim Wescott
On Mon, 2017-03-20 at 08:53 -0700, David Chèze wrote:
> [...]

Hey David:

I think Paul is concerned that something broke in the transition from 5.x to 6.0.

I haven't used fminsearch, but it looks like it uses a significantly different algorithm than leastsq and optim (I believe that optim uses Newton's method, or perhaps a combination of Newton's method and gradient descent). So it could just be that the underlying algorithm in fminsearch works better for your problem than the one in leastsq.

Looking at the relevant Wikipedia pages may suggest something:

https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method (fminsearch)
https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization (optim)

--
Tim Wescott
www.wescottdesign.com
Control & Communications systems, circuit & software design.
Phone: 503.631.7815
Cell: 503.349.8432
paul.carrico
Hi All
I've always thought that fminsearch uses the Nelder-Mead algorithm, i.e. it is an order-0 method (no gradient is needed to determine the local extremum). This is a personal approach, but I use neldermead (with bounds and restarts) instead of fminsearch for a limited number of variables.

Nota bene: I'm still using the Scilab 5.5.2 release.

Paul

Le 2017-03-20 18:12, Tim Wescott a écrit :
> [...]
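The "Nelder-Mead with bounds and restarts" setup Paul describes can be sketched with Scilab's neldermead object. The option names below are taken from the neldermead help pages as I recall them; the cost function, starting point, and bounds are illustrative assumptions only, not from this thread.

```scilab
// Sketch of a bounded Nelder-Mead search with restarts, using the
// neldermead object API. Bound constraints use Box's variant of the
// algorithm ("-method", "box"). Cost function and bounds are made up.

function [f, index] = costf(x, index)
    f = (x(1) - 1.22)^2 + (x(2) - 0.77)^2;   // placeholder cost
endfunction

nm = neldermead_new();
nm = neldermead_configure(nm, "-numberofvariables", 2);
nm = neldermead_configure(nm, "-function", costf);
nm = neldermead_configure(nm, "-x0", [1.27; 0.8]);
nm = neldermead_configure(nm, "-method", "box");     // bound-constrained variant
nm = neldermead_configure(nm, "-boundsmin", [0; 0]);
nm = neldermead_configure(nm, "-boundsmax", [10; 10]);
nm = neldermead_configure(nm, "-restartflag", %t);   // allow automatic restarts
nm = neldermead_configure(nm, "-restartmax", 3);
nm = neldermead_search(nm);
xopt = neldermead_get(nm, "-xopt");
fopt = neldermead_get(nm, "-fopt");
nm = neldermead_destroy(nm);
```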
David Chèze
In reply to this post by paul
Hi all,
I worked further on this issue and found that it's a new (unwanted) behavior in Scilab 6, since the script performs well when run under Scilab 5.5.1 on the same WIN7_64 machine.

In Scilab 6 it fails with the error message: User function 'costf' have not been setted.

while in Scilab 5.5.1:

[fopt, xopt]=leastsq(iprint,evaloptiM1D,x0')

***** enters -qn code- (without bound cstr)
dimension=  2, epsq= 0.2220446049250313E-15, verbosity level: imp= 2
max number of iterations allowed: iter= 100
max number of calls to costf allowed: nap= 100
------------------------------------------------
iter num   1, nb calls=  1, f= 0.4607E+05
iter num   2, nb calls=  3, f= 0.2185E+05
iter num   3, nb calls=  6, f= 0.2138E+05
iter num   4, nb calls=  7, f= 0.2130E+05
iter num   5, nb calls=  8, f= 0.2128E+05
iter num   6, nb calls=  9, f= 0.2128E+05
iter num   7, nb calls= 10, f= 0.2128E+05
iter num   8, nb calls= 11, f= 0.2128E+05
iter num   9, nb calls= 12, f= 0.2128E+05
iter num  10, nb calls= 13, f= 0.2128E+05
iter num  11, nb calls= 14, f= 0.2128E+05
iter num  11, nb calls= 22, f= 0.2128E+05
***** leaves -qn code-, gradient norm= 0.4410737793778105E-02
Fin de l'optimisation. [End of the optimization.]

xopt =
   1.2213257   0.7702730
fopt =
   21284.818

So I guess it's not a limitation in the algorithm itself but rather something at the interface that changed in Scilab 6, especially since the error message is issued early in the execution, about 1 s after launching the script, while the successful optimization takes at least 10-20 s: probably only one full iteration, or not even one, is performed before Scilab 6 issues the escape message.

I can't provide the details of the function to optimize, but I could provide you with a more detailed log from the execution.

Regards,

David
David Chèze
Hi Paul(s) and all,
I looked further into the specific failures I reported for Scilab 6 with leastsq, lsqrsolve and datafit, and it looks like they are caused by the fsolve call inside my cost function, which Scilab 5.5 handled well. I reported a complete test case that shows the bug in bug report 15117. The title of bug 15117 mentions only lsqrsolve, but it also covers the leastsq and datafit failures.

Thanks for curing the patient!

David
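For readers hitting the same wall, the failing pattern can be reduced to a sketch like the following. The inner equation and residuals here are placeholders, not the actual model from this thread; bug report 15117 contains the real test case.

```scilab
// Minimal sketch of the reported pattern: the cost function passed to
// leastsq itself calls fsolve. Under Scilab 5.5 this nesting works;
// under Scilab 6.0 it reportedly aborts with
// "User function 'costf' have not been setted."

function y = inner(z)
    y = z^2 - 2;                  // hypothetical inner equation
endfunction

function r = costf(x)
    z = fsolve(1, inner);         // nested solver call: the suspect
    r = [x(1) - z; x(2) - 1];     // placeholder residual vector
endfunction

x0 = [1.27; 0.8];
[fopt, xopt] = leastsq(costf, x0);  // OK in 5.5.1, fails in 6.0
```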
In reply to this post by David Chèze
Hello David,

As the documentation examples work, I can't do much with this info unless I get a reproducible use-case. Maybe you could send us a minimal failing example? Also, could you please file a bug for this so we can fix it for Scilab 6.0.1?

Thanks and regards,
Paul

On 2017-04-04 16:11, David Chèze wrote:
> [...]