Re: data augmentation with scilab for neural networks
From your description, the "labelled" image sounds like a mask image that indicates the ROI of the original image for training the NN.
If this is the case, I think the labelled image must go through the same transform as the original image.
However, you might want to consider the type of network you're going to apply. For example, for CNNs used in image classification such as AlexNet or GoogLeNet, the target is just the class label indicating the object, regardless of what transformation/augmentation you applied. (A bird is still a bird no matter how you rotate it.) :)
On the other hand, networks for object detection such as YOLO or SSD need labels in the form of, e.g., bounding boxes, and that label vector must be adjusted according to the transform.
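To make the bounding-box point concrete, here is a minimal sketch (pure Python, hypothetical helper name) of how a box must move with the image under a horizontal shift:

```python
# Hypothetical helper: when an image is shifted horizontally by dx pixels,
# every bounding box must be shifted by the same dx, clipped to the image.

def shift_box(box, dx, width):
    """Shift a bounding box (x_min, y_min, x_max, y_max) by dx pixels,
    clipping the x coordinates to [0, width]."""
    x_min, y_min, x_max, y_max = box
    clip = lambda x: max(0, min(width, x))
    return (clip(x_min + dx), y_min, clip(x_max + dx), y_max)

box = (10, 20, 50, 60)
shifted = shift_box(box, 15, 100)   # box moves right together with the image
```

The same reasoning applies to rotation: the box corners must be mapped through the same rotation as the pixels.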
Just my 2 cents.
---- On Wed, 24 Jun 2020 22:27:30 +0800 P M <[hidden email]> wrote ----
Probably not the correct forum to ask this question, but I do not know of a better one.
Working on neural networks and image segmentation, I wrote some Scilab code for data augmentation.
This is to increase my training data set (= more images).
Data augmentation (for now) is done by:
- image rotation
- image horizontal shifting
Now a basic question:
- Does one apply the data augmentation only to the input images and keep the label images unchanged?
- Or does one also rotate/shift the label images?
A "label image" in this context is a binary image (black/white).
white --> corresponds to pixels in the input image which are of interest
black --> corresponds to pixels in the input image which are of no interest
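As a sketch of the answer suggested above (in pure Python rather than Scilab, with illustrative function names), the point is that the identical transform is applied to the input image and its binary label image, so the two stay pixel-aligned:

```python
# Illustrative sketch: apply the SAME geometric transform to the input
# image and to the binary label (mask) image, so labels stay aligned.

def rot90(img):
    """Rotate a 2D list of rows 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def shift_right(img, dx, fill=0):
    """Shift each row dx pixels to the right, padding with `fill`."""
    return [[fill] * dx + row[:len(row) - dx] for row in img]

image = [[1, 2], [3, 4]]
mask  = [[1, 0], [0, 1]]   # white=1 marks pixels of interest

# Augment both with the identical transform.
aug_image = shift_right(rot90(image), 1)
aug_mask  = shift_right(rot90(mask), 1)
```

For a segmentation network the augmented mask is the training target for the augmented image; using the original, untransformed mask would teach the network wrong pixel locations.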