It has been more than 2 weeks since I began working on the project titled
'Machine learning features in Scilab' for Google Summer of Code 2018 and I
think that this is a good time to share my progress with the community.
The coding effort was divided into two streams, namely:
1. *Development* : Initiative to create a standalone machine learning
toolbox written completely in Scilab
2. *Experimentation* : Initiative to run machine learning scripts already
written in Python using a feeder-subscriber mechanism, which a user can
invoke with scripts residing on a server. It also includes any other effort
to make machine learning easier to do in Scilab apart from the standalone
toolbox.
The standalone machine learning toolbox presently contains a first set of
algorithms. The following pre-processing methods have also been added:
- Scaling (Zero mean, unit variance)
- Train Test split
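The toolbox itself is written in Scilab, but the two pre-processing steps can be sketched in Python as follows (the function names and signatures here are illustrative, not the toolbox's actual API):

```python
import random

def scale(xs):
    # zero mean, unit variance (population standard deviation);
    # assumes the data is not constant
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    std = var ** 0.5
    return [(x - mean) / std for x in xs]

def train_test_split(data, test_ratio=0.25, seed=0):
    # shuffle indices with a fixed seed, then split off the last
    # test_ratio fraction as the test set
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    cut = int(len(data) * (1 - test_ratio))
    train = [data[i] for i in idx[:cut]]
    test = [data[i] for i in idx[cut:]]
    return train, test

scaled = scale([1.0, 2.0, 3.0, 4.0])
print(round(sum(scaled), 6))  # mean of the scaled data is 0.0
train, test = train_test_split(list(range(8)), test_ratio=0.25)
print(len(train), len(test))  # 6 2
```

Scaling keeps features on comparable ranges before training, and the split reserves held-out data for evaluation.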
The work under the experimentation stream began with the setup of a GCP
server running an IPython (Jupyter) server, with only a specific set of keys
able to log into the machine. This machine acts as our server and does the
computation for the Python scripts that have the machine learning algorithms
pre-written. Our client logs into the machine, starts up a kernel and copies
the kernel configuration file to its local machine. The scripts for this can
be found in the GitHub sub-repository. This can then be integrated with the
approach used in last year's project to run the script as an interim step
within a larger Scilab code.
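The kernel configuration (connection) file that the client copies is a small JSON document holding the ports and HMAC key needed to talk to the running kernel. A minimal sketch of reading one, with all values below made up:

```python
import json
import os
import tempfile

# a sample Jupyter kernel connection file; a real one is written by the
# kernel under Jupyter's runtime directory (values here are invented)
sample = {
    "shell_port": 53794, "iopub_port": 53795, "stdin_port": 53796,
    "control_port": 53797, "hb_port": 53798,
    "ip": "127.0.0.1", "key": "secret-hmac-key",
    "transport": "tcp", "signature_scheme": "hmac-sha256",
    "kernel_name": "python3",
}

path = os.path.join(tempfile.mkdtemp(), "kernel-1234.json")
with open(path, "w") as f:
    json.dump(sample, f)

# the client loads this file to learn where and how to connect
with open(path) as f:
    conn = json.load(f)
print(conn["transport"], conn["ip"], conn["shell_port"])  # tcp 127.0.0.1 53794
```

Once copied to the local machine, this file is what lets a client attach to the remote kernel without starting one of its own.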
The next step was to add an authentication mechanism so that a user does not
have permission to do anything other than run a kernel and copy its
configuration file. How to identify which kernel a user has started still
eludes us, but using the `command` option in the OpenSSH `authorized_keys`
file we were able to lock down a user's ability to execute arbitrary
commands on the server.
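For reference, a restricted `authorized_keys` entry along these lines forces a single fixed command and disables the other SSH channels (the command, key material and comment below are placeholders, not our exact setup):

```shell
# one line in ~/.ssh/authorized_keys on the server:
# whatever command is forced here runs instead of anything the client requests
command="jupyter kernel --kernel=python3",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... user@client
```

With an entry like this, any login with that key can only start the forced command, which is what confines the user to running a kernel.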
Any advice on how to tag a kernel with a user under the present setup or any
other suggestions would be welcome.