This is Diganta Dutta from India. I was part of GSoC 2017 with Scilab
until the second evaluation. I could not complete the parts as proposed,
and had been working under my mentors, Clement David and Dhruv Khattar.
So, I would like to continue my work on:
1) Embedding the Google Analytics API in Scilab
2) Fixing bugs reported by Coverity
Do you want to continue your work on the same projects? The Coverity issue project is now much more
targeted at Java issues; the remaining C++ ones will probably not be enough for a complete GSoC. As
in previous years, please fix a bug / Coverity issue to (re)discover our validation process.
On Tuesday, 13 February 2018 at 00:12 -0700, Diganta Dutta wrote:
I had been busy with college projects, so I am a bit late. I will be
solving a bug / Coverity issue in Scilab, as is the rule.
As for the GSoC projects, I would then like to continue my work on the
Google Analytics Embedding API module and one more project, which I will
describe in a follow-up message within a day.
So, I have decided to continue my work on the implementation of the Google
Analytics API that I was doing last year, and also to take up the "Providing
Machine Learning Functions" project, as I have done two major projects using
the Binary Search Tree classification algorithm and the Naive Bayes algorithm.
That work is also available on my GitHub profile. I think that would be enough
for a complete GSoC.
Therefore, I will be submitting a Draft Proposal by tonight.
I am waiting for your approval and a brief on what is being prioritized
for the Machine Learning part.
I went through your proposal for the Machine Learning project. I just have a
few doubts that need clarification.
You have mentioned "creating the Machine Learning functions based on the
Decision Binary Tree Model and the Random Forest Model using Python and Scilab,
which will be integrated into the Scilab system". I wanted to know how you are
planning to do the integration.
Is it going to be through a Jupyter integration, or through plain Scilab code
for the algorithms mentioned?
Also, it would be useful if you could elaborate on your approach to integrating
the "R libraries for Naive-Bayes and Support Vector Models".
I will be following up on the Machine Learning procedure that I want to
use within an hour or so. Just one doubt before that: is there going to be a
completely new module for this, or do I commit into one of the
existing modules' directories? If so, can you please specify which one?
I am planning to use plain Scilab code for the Decision Tree model.
To be honest, I would have chosen Java or C++ for my approach, but
since I have been using Scilab for the past three years, it would be much
more comfortable for me to create the Machine Learning model in plain Scilab
code. The reason I would otherwise prefer Java for this model is that I would
be using almost the same code for the Google Analytics Embedding module,
which is a continuation of last year's GSoC work.
For the Random Forest model, I am planning to use Python and integrate it
through Jupyter.
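For illustration, a minimal sketch of what the Python side of the Random Forest work could look like is below. The scikit-learn class names are real; the dataset and parameter choices are my own assumptions, not part of the proposal.

```python
# Minimal sketch: training a Random Forest with scikit-learn inside a
# Jupyter kernel. The iris dataset and the parameters are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 100 trees is scikit-learn's default; shown explicitly for clarity.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on the held-out split
```

From a Jupyter-based integration, the Scilab side would only need to pass data in and read the fitted model's predictions back.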
Coming to the integration of the "R libraries for Naive-Bayes and Support
Vector Models" part: the SVM module is going to be an extension of the
libSVM module, with the addition of SVM libraries and functionalities to the
module. For this, I am going to use the Scikit-Learn and/or
TensorFlow libraries, and a Jupyter notebook for the integration.
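As a sketch of how Scikit-Learn already exposes libSVM-backed SVMs (its `SVC` class wraps libSVM internally), something like the following could run in the Jupyter notebook. The dataset and kernel settings are illustrative assumptions on my part.

```python
# Minimal sketch: scikit-learn's SVC, which is built on top of libSVM.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBF kernel with the default regularization; illustrative choices only.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # held-out accuracy
```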
The Naive-Bayes part will follow a similar approach, except that I will create a
whole new library using Python.
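A from-scratch Gaussian Naive-Bayes classifier, of the kind such a standalone Python library might contain, could be sketched like this. The class name and the toy data are illustrative assumptions, not part of the proposal.

```python
import numpy as np

class GaussianNaiveBayes:
    """Tiny from-scratch Gaussian Naive Bayes (illustrative sketch)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_, self.means_, self.vars_ = {}, {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.priors_[c] = len(Xc) / len(X)
            self.means_[c] = Xc.mean(axis=0)
            self.vars_[c] = Xc.var(axis=0) + 1e-9  # avoid division by zero
        return self

    def predict(self, X):
        preds = []
        for x in X:
            # log P(c) plus the sum of log Gaussian likelihoods per feature
            scores = {
                c: np.log(self.priors_[c])
                   - 0.5 * np.sum(np.log(2 * np.pi * self.vars_[c]))
                   - 0.5 * np.sum((x - self.means_[c]) ** 2 / self.vars_[c])
                for c in self.classes_
            }
            preds.append(max(scores, key=scores.get))
        return np.array(preds)

# Tiny usage example with two well-separated clusters.
X = np.array([[1.0, 1.1], [1.2, 0.9], [8.0, 8.2], [7.9, 8.1]])
y = np.array([0, 0, 1, 1])
model = GaussianNaiveBayes().fit(X, y)
print(model.predict(np.array([[1.1, 1.0], [8.1, 8.0]])))  # → [0 1]
```

Keeping the library dependency-light like this (only NumPy) would make it easier to call from the Jupyter-based integration.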