Melanoma, also known as malignant melanoma, is a type
of cancer that develops from the pigment-containing cells known
as melanocytes. Melanomas typically occur on the skin, but may rarely
occur in the mouth, intestines, or eye. The primary cause of
melanoma is ultraviolet (UV) light exposure in those with low levels of
skin pigment. The UV light may come from either the sun or from other
sources, such as tanning devices. About 25% of melanomas develop from moles.
Read more at -> wikipedia.org/melanoma.
This repo holds the source code for the Melanoma-Detection application. The project structure is given below:
.
| Main.py
| dataset.npz
| testcase.npz
| README.md
|---featext
| |---physical
| | | __init__.py
| | | Gabor.py
| |---texture
| | | __init__.py
| | | Haralick.py
| | | King.py
| | | Tamura.py
| | __init__.py
|---images
| |---benign
| | | 'img_number'.jpg
| |---malignant
| | | 'img_number'.jpg
| |---negative
| | | 'img_number'.jpg
|---mlmodels
| | Classifiers.py
| | DecisionSurfacePlotter.py
| | Mel_DTC.pkl
| | Mel_DTR.pkl
| | Mel_LinSVM.pkl
| | Mel_LinSVR.pkl
| | Mel_MLPC.pkl
| | Mel_MLPR.pkl
| | Mel_NuSVM.pkl
| | Mel_NuSVR.pkl
| | Mel_RFC.pkl
| | Mel_RFR.pkl
| | Mel_SVM.pkl
| | Mel_SVR.pkl
| | __init__.py
|---preprocessing
| | Prep.py
| | __init__.py
|---results
| |---dataset
| | |---benign
| | | |---'numbers'
| | | | | 'images'.jpg
| | |---malignant
| | | |---'numbers'
| | | | | 'images'.jpg
| | |---negative
| | | |---'numbers'
| | | | | 'images'.jpg
| |---testset
| | |---benign
| | | |---'numbers'
| | | | | 'images'.jpg
| | |---malignant
| | | |---'numbers'
| | | | | 'images'.jpg
| | |---negative
| | | |---'numbers'
| | | | | 'images'.jpg
|---temp
| | 'img_number'.jpg
|---test
| | 'img_number'.jpg
|---util
| Util.py
| __init__.py
This application does not contain any fancy UI; it is basically a
modular console program written in Python3. Anyone with some basic
programming knowledge will be able to run this app easily.
Simply put, this console app tries to predict the nature of a 'skin-lesion'
image served as an input.
To keep things simple, we have trained our machine-learning models to
classify the input image as one of three types:
- NEGATIVE (-1) - represents a skin-lesion that is not melanoma.
- BENIGN (0) - represents a skin-lesion that is an early-stage melanoma.
- MALIGNANT (1) - represents a skin-lesion that is highly cancerous.
The application consists of five core modules, namely:
- Main.py (Driver module for the entire application).
- featext ('quantified-features' extraction module for the input_image).
- physical ('physical-features' extraction sub-module for the input_image).
- Gabor.py (Extracts "Gabor's" physical-features from the input_image).
- texture ('textural-features' extraction module for the input_image).
- Haralick.py (Extracts "Haralick's" texture-features from the input_image).
- King.py (Extracts "King's" texture-features from the input_image).
- Tamura.py (Extracts "Tamura's" texture-features from the input_image).
- mlmodels (input_image classification/regression module).
- Classifiers.py (Predicts the class of the input_image).
- DecisionSurfacePlotter.py (Plots the decision surfaces of the various classifiers/regressors, based on the selected features).
- Mel_DTC.pkl (Persistently stores the trained 'Decision Tree Classifier' object).
- Mel_DTR.pkl (Persistently stores the trained 'Decision Tree Regressor' object).
- Mel_LinSVM.pkl (Persistently stores the trained 'Linear-Support Vector Machine Classifier' object).
- Mel_LinSVR.pkl (Persistently stores the trained 'Linear-Support Vector Machine Regressor' object).
- Mel_MLPC.pkl (Persistently stores the trained 'Multi-Layer Perceptron Classifier' object).
- Mel_MLPR.pkl (Persistently stores the trained 'Multi-Layer Perceptron Regressor' object).
- Mel_NuSVM.pkl (Persistently stores the trained 'Nu-Support Vector Machine Classifier' object).
- Mel_NuSVR.pkl (Persistently stores the trained 'Nu-Support Vector Machine Regressor' object).
- Mel_RFC.pkl (Persistently stores the trained 'Random Forest Classifier' object).
- Mel_RFR.pkl (Persistently stores the trained 'Random Forest Regressor' object).
- Mel_SVM.pkl (Persistently stores the trained 'Support Vector Machine Classifier' object).
- Mel_SVR.pkl (Persistently stores the trained 'Support Vector Machine Regressor' object).
- preprocessing (input_image preprocessing module).
- Prep.py (Performs generic image pre-processing operations).
- util (General library utility module).
- Util.py (Performs routine data-structure operations such as insertion, searching, sorting etc.).
This application works in the following stages:
- First, a 'training-set' is generated from a collection of skin-lesion images placed in their respective
  class folders, i.e., 'images/benign', 'images/malignant' and 'images/negative'. These images are pre-processed and
  a set of quantified features is extracted from them, which together form the 'training-set' data.
- Next, the generated training data is passed on to the various classifier/regressor objects for training.
- The trained models are then saved persistently as pickled Python objects, in individual '.pkl' files.
- Next, a set of input images in need of classification is placed in the 'temp' folder.
- The program pre-processes each input image and extracts the necessary features from it.
- The features extracted from the pre-processed input images are passed on to the various
  machine-learning models, which in turn predict the nature of each input image. Since the learning process here is supervised, a 'prediction-accuracy' is generated for each model.
- Finally, the results from the model with the highest 'prediction-accuracy' are selected; a minimal sketch of this flow is given below.
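The sketch below illustrates the training flow in a hypothetical form, using the constructors documented later in this README; the image paths, the stub extractor and the exact feature ordering are placeholders, and the real wiring lives in Main.py.

```python
import numpy as np
from mlmodels.Classifiers import Classifiers

def extract_features(img_path):
    # Placeholder for the featext pipeline described later in this README:
    # pre-process the image with Prep, then collect the 34 quantified features
    # from the Haralick, Tamura, King and Gabor extractors.
    return np.zeros(34)

# Hypothetical training run: one 34-feature row per training image,
# with supervised labels -1 (negative), 0 (benign) and 1 (malignant).
featureset = np.array([extract_features(p) for p in ('images/negative/0.jpg',
                                                     'images/benign/0.jpg',
                                                     'images/malignant/0.jpg')])
target = np.array([-1, 0, 1])
Classifiers(featureset=featureset, target=target, mode="train")  # persists the trained '.pkl' models
```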
To run this application, you will need the following:
- Python3:
  [] About 'Python3' -> wikipedia.org/PythonProgrammingLanguage.
  [] How to install 'Python3'? -> python.org/BeginnersGuide.
  [] Official 'Python3' documentation -> docs.python.org/Python3.
  [] Get 'Python3' -> python.org/downloads.
- Python package manager (any one of the below applications will suffice):
  - pip -> comes along with the executable 'python-installation' package.
    [] About 'pip' -> wikipedia.org/pip_packagemanager.
    [] How to install packages using 'pip'? -> docs.python.org/installingPythonPackagesUsingPip.
  - anaconda -> anaconda.com/downloads.
    [] About 'anaconda' -> wikipedia.org/conda.
    [] How to install 'anaconda'? -> docs.anaconda.com/installingconda.
    [] How to install 'python-packages' with 'conda'? -> docs.anaconda.com/packages.
    [] Official 'anaconda' documentation -> docs.anaconda.com/official.
- IDE (optional):
  - PyCharm -> jetbrains.com/getPycharm.
    [] About 'PyCharm' -> wikipedia.org/Pycharm.
    [] How to use 'PyCharm'? -> jetbrains.com/PycharmGuide.
- Python library dependencies:
  - 'NumPy'.
    [] About 'numpy' -> wikipedia.org/numpy.
    [] Official 'numpy' manuals -> docs.scipy.org/numpyManuals.
    (Note - to install NumPy through pip, type `python -m pip install --user numpy`.)
  - 'MatPlotLib'.
    [] About 'matplotlib' -> wikipedia.org/matplotlib.
    [] Official 'matplotlib' docs -> matplotlib.org/docs.
    (Note - to install MatPlotLib through pip, type `python -m pip install --user matplotlib`.)
  - 'SciPy'.
    [] About 'scipy' -> wikipedia.org/scipy.
    [] Official 'scipy' documentation -> scipy.org/docs.
    (Note - to install SciPy through pip, type `python -m pip install --user scipy`.)
  - 'OpenCV'.
    [] About 'opencv' -> wikipedia.org/opencv.
    [] Official 'opencv-python' online tutorial -> opencv-python.org/tutorials.
    [] Official 'opencv-python' documentation -> docs.opencv.org/python.
    (Note - to install OpenCV through pip, type `python -m pip install --user opencv-python`.)
  - 'Scikit-Learn'.
    [] About 'scikit-learn' -> wikipedia.org/Scikit-Learn.
    [] Official 'scikit-learn' documentation -> scikit-learn.org/docs.
    (Note - to install Scikit-Learn through pip, type `python -m pip install --user sklearn`.)
  - 'Joblib'.
    [] Official 'Joblib' documentation -> joblib.io/docs.
    (Note - to install Joblib through pip, type `python -m pip install --user joblib`.)
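Optionally, you can run the following quick sanity check (a minimal sketch, not part of this repo) to confirm that the dependencies are importable:

```python
# Minimal dependency check; prints the installed version of each library.
import numpy, scipy, matplotlib, cv2, sklearn, joblib

for mod in (numpy, scipy, matplotlib, cv2, sklearn, joblib):
    print(mod.__name__, getattr(mod, '__version__', 'unknown'))
```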
- Before you download this application, make sure you have installed 'Python3' along with the dependency modules listed above.
- If you are using an integrated development environment like 'PyCharm', you can simply clone this repository into your
  project directory using the git command-line tools, by typing `git clone https://github.com/Tejas07PSK/Melanoma-Detection.git`.
- As this is a 'console/command-line/terminal' application, you can also download this repository as a compressed
  `Melanoma-Detection.zip` file and extract it accordingly. Then navigate to the extracted folder in a terminal
  and run the program by typing `python Main.py`.
- As you run this application, you will be greeted with the following text, as shown in the screenshot.
- Now, if you select option '1', you will go through the following phases as shown in the screenshots.
- Next, if you select option '2', you will get the following output as shown in the screenshot.
- Next, if you select option '3', the operation phase will be very similar to option '1', as shown in the following screenshots.
- Next, if you select option '7', the existing ML models will be iteratively trained with the new test images
  placed in the '/temp' folder; see this screenshot.
- Now, if you select option '5', you will get the following output, as shown in the screenshots.
- If you select option '8', you'll get the following output as shown in the screenshots.
- Option '9' is used for getting a list of files present in any valid project directory; see the screenshot.
- Option '6' is used for plotting the decision-surfaces of the various classifiers/regressors as shown below in the screenshots.
- Option '4' is used for classifying the types of the input-images. For explanation we have considered the following
  input images; now look at the screenshots.
- Option '10' is used for displaying the 'R-G-B' color-plates of a selected color-image, as shown in the screenshots.
During the pre-processing phase, the input-image is first converted to 'gray-scale' and then this gray-image is inverted.
Next, an 'automatic-thresholding' operation is performed on the 'inverted-gray-scale-image' using 'Otsu's Method'.
A 'threshold-level' is generated, which is then used to binarize the 'inverted-gray-scale-image', i.e., segmentation of the
input-image occurs. Now, some morphological operations are performed on the binarized image, to remove holes and contrast inconsistencies.
Finally, a set of variations of the original input-image is produced at the end of this phase.
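The following is a minimal OpenCV sketch of that sequence, for illustration only; it is not the exact code in Prep.py, and the function and variable names are hypothetical.

```python
import cv2
import numpy as np

def preprocess(path):
    # Read the colour image and convert it to gray-scale.
    color = cv2.imread(path)
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    # Invert the gray-scale image.
    inv_gray = cv2.bitwise_not(gray)
    # Otsu's method picks the threshold level automatically and binarizes the image.
    otsu_level, binary = cv2.threshold(inv_gray, 0, 255,
                                       cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological closing/opening to remove small holes and noise.
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_OPEN, kernel)
    # Segment the original images using the binary mask.
    seg_gray = cv2.bitwise_and(gray, gray, mask=cleaned)
    seg_color = cv2.bitwise_and(color, color, mask=cleaned)
    return otsu_level, seg_gray, seg_color
```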
For example, if we consider the following image as the input, then the pre-processing phase will produce the resulting images shown below.
The 'Otsu' threshold-level for the input-image above is 132.
Read more about Otsu's Thresholding Method here -> wikipedia.org/Otsu's_Method and at -> labbookpages.co.uk/OtsuThresholding.
Read more about Image Segmentation here -> wikipedia.org/Image_Segmentation.
Read more about Gray-Scale Images here -> wikipedia.org/grayscaleimages.
Read more about RGB Color Images here -> wikipedia.org/rgb_colorimages.
Read more about Binary Images here -> wikipedia.org/binaryimages.
Read more about Morphological Operations here -> wikipedia.org/morphological_operations.
Read more about Image Thresholding here -> wikipedia.org/image_thresholding.
For prediction purposes, this application extracts a total of 34 quantified features from the input-image, each having its own computational method.
- Haralick's texture features;
Haralick introduced the Gray Level Co-occurrence Matrix (GLCM), from which various statistical and differential texture-features of an image are extracted. These features are primarily based on the 'thermodynamical' aspects of a textured image. The GLCM records how often a pixel with one gray level occurs at a fixed geometric offset from a pixel with another gray level.
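As a rough illustration (not the exact implementation in Haralick.py, whose names and conventions may differ), a GLCM and a few of the features derived from it can be computed as follows:

```python
import numpy as np

def glcm_features(gray_img, dx=0, dy=1, levels=256):
    # Build the GLCM: count how often gray level j occurs at offset (dx, dy)
    # from gray level i, then normalize to joint probabilities.
    mat = np.zeros((levels, levels), dtype=np.float64)
    h, w = gray_img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                mat[gray_img[y, x], gray_img[y2, x2]] += 1
    mat /= mat.sum()
    # A few of the Haralick-style features listed below (one common set of definitions).
    asm = np.sum(mat ** 2)                                   # Angular Second Moment
    energy = np.sqrt(asm)                                    # Energy
    entropy = -np.sum(mat[mat > 0] * np.log(mat[mat > 0]))   # Entropy
    i, j = np.indices(mat.shape)
    contrast = np.sum(((i - j) ** 2) * mat)                  # Contrast
    return mat, asm, energy, entropy, contrast
```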
Listed below are the set of Haralick's texture features,
|| Angular Second Moment(ASM).
|| Energy.
|| Entropy.
|| Contrast.
|| Homogeneity.
|| Directional-Moment(DM).
|| Correlation.
|| Haralick-Correlation.
|| Cluster-Shade.
|| Cluster-Prominence.
|| Moment-1.
|| Moment-2.
|| Moment-3.
|| Moment-4.
|| Differential-ASM(DASM).
|| Differential-Mean(DMEAN).
|| Differential-Entropy(DENTROPY).
(More articles regarding GLCM and Haralick's features can be found here,
-> hindawi.com/haralick_biolung.
-> wikipedia.org/coocurmat.
-> ucsd.edu/glcm.pdf.
-> uio.no/statistics_glcm.
-> github.com/cerr/Cerr/har_feat_fromulas.
-> shodhganga.ac.in/texture_features.)
- Tamura's texture features;
Tamura's texture features are based on the human visual perception of images. Various probabilistic methods are applied to the discrete gray-level intensities to extract further mathematical quantities as texture-features from the digitized image. This improves on Haralick's features, but is computationally more expensive.
Listed below are the set of Tamura's texture features,
|| Tamura-Coarseness.
|| Tamura-Contrast.
|| Tamura-Kurtosis.
|| Tamura-Linelikeness.
|| Tamura-Directionality.
|| Tamura-Regularity.
|| Tamura-Roughness.
(You can read more about Tamura's features here -> drive.google.com/melanoma_tamura_feats.)
- King's texture features;
King's texture features are also based on the human visual perception of images. The method is computationally more efficient and extracts additional features. King's method introduces the notion of the NGTDM (Neighbourhood Gray-Tone Difference Matrix).
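As an illustrative, unoptimized sketch of the NGTDM idea (assuming a 3x3 neighbourhood; this is not the exact code in King.py):

```python
import numpy as np

def ngtdm(gray_img):
    # s[i] accumulates |i - average of the 8 neighbours| over every pixel
    # whose gray tone is i (border pixels are skipped for simplicity).
    s = np.zeros(256, dtype=np.float64)
    h, w = gray_img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            i = int(gray_img[y, x])
            neigh = gray_img[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
            avg = (neigh.sum() - i) / 8.0   # mean of the 8 neighbours
            s[i] += abs(i - avg)
    return s
```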
Listed below are the set of King's texture features,
|| King's-Coarseness.
|| King's-Contrast.
|| King's-Busyness.
|| King's-Complexity.
|| King's-Strength.
(You can read more about King's features here -> drive.google.com/melanoma_kings_feats.)
- Gabor's physical features;
The Gabor filter is a linear filter, which is used to analyze the existence of a frequency content in a specific direction, in the given digitized image. Basically, it tries to quantify the physical features of a digital image, namely shape, color, edges etc.
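A minimal OpenCV-based sketch of how such shape descriptors can be derived from the segmented lesion mask (illustrative only; the helper name and the exact formulas are not taken from Gabor.py):

```python
import cv2
import numpy as np

def shape_features(binary_mask):
    # Find the lesion contours in the binary (segmented) image (OpenCV 4.x return signature).
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    lesion = max(contours, key=cv2.contourArea)     # assume the largest contour is the lesion
    area = cv2.contourArea(lesion)
    perimeter = cv2.arcLength(lesion, True)
    # Covering rectangle and bounding circle of the lesion.
    x, y, w, h = cv2.boundingRect(lesion)
    mean_edge = (w + h) / 2.0
    (_, _), radius = cv2.minEnclosingCircle(lesion)
    diameter = 2.0 * radius
    # One simple compactness-style index; the repo's exact formulas may differ.
    compact_index = (perimeter ** 2) / (4.0 * np.pi * area) if area > 0 else 0.0
    return mean_edge, radius, diameter, compact_index
```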
Listed below are the set of Gabor physical features,
|| Mean-Edge of covering rectangle.
|| Bounded-Circle radius.
|| Asymmetry-Index of lesion.
|| Fractal-Dimension of lesion.
|| Diameter of lesion.
|| Color-Variance of lesion.
(More articles regarding Gabor physical features can be found here,
-> drive.google.com/melanoma_gabor_feats.
-> wikipedia.org/gabor_filter.)
In machine learning and statistics, classification is the problem of identifying to which of a set of categories
a new observation belongs, on the basis of a training set of data containing observations (or instances) whose
category membership is known.
Read more here -> wikipedia.org/statistical_classification.
In statistical-modeling and machine-learning, regression analysis is a set of statistical processes for estimating
the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps one
understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables
is varied, while the other independent variables are held fixed.
Read more here -> wikipedia.org/regression_analysis.
- Support Vector Machine(SVM) classifier/regressor ;
In machine learning, support vector machines (SVMs, also support vector networks) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis.
(More articles regarding SVMs can be found here,
-> wikipedia.org/SVM.
-> scikit-learn.org/SVM.)
- Multi Layer Perceptron (MLP or Neural-Network) classifier/regressor ;
A multilayer perceptron (MLP) is a class of feedforward artificial neural network. An MLP consists of at least three layers of nodes. Except for the input nodes, each node is a neuron that uses a nonlinear activation function.
(More articles regarding MLPs and Neural-Networks can be found here,
-> wikipedia.org/MLP.
-> wikipedia.org/ANN.
-> scikit-learn.org/MLP.)
- Decision Tree (DT) classifier/regressor ;
Decision tree learning uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modelling approaches used in statistics, data mining and machine learning.
(More articles regarding DTs can be found here,
-> wikipedia.org/DT.
-> scikit-learn.org/DT.)
- Random Forest (RF) classifier/regressor ;
Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of over-fitting to their training set.
(More articles regarding RFs can be found here,
-> wikipedia.org/RF.
-> scikit-learn.org/RF.)
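For illustration, the snippet below shows how such models are typically trained and persisted with scikit-learn and joblib; it is a sketch under the assumption that the repo's '.pkl' files are produced in a similar way, and the hyper-parameters are not those used in Classifiers.py.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
import joblib

# Hypothetical training data: one 34-feature row per image, labels in {-1, 0, 1}.
X = np.random.rand(30, 34)
y = np.random.choice([-1, 0, 1], size=30)

rfc = RandomForestClassifier(n_estimators=10).fit(X, y)
svm = SVC(kernel='rbf').fit(X, y)

# Persist the trained models, similar in spirit to the Mel_*.pkl files.
joblib.dump(rfc, 'Mel_RFC.pkl')
joblib.dump(svm, 'Mel_SVM.pkl')

# Later: reload a model and score it to obtain a 'prediction-accuracy'.
rfc = joblib.load('Mel_RFC.pkl')
print(rfc.score(X, y))
```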
By default, this application outputs the result of the learning model having the highest prediction accuracy. In our experimental case of 30 training-images and 12 prediction-images, the 'Random Forest Classifier(RFC)' gave the best results with an accuracy of 91.66%.
The application's classes and modules are briefly described below.
class preprocessing.Prep.Prep(path) :-
- Constructor Parameters :
- path - string text indicating the location of the source color-image.
- Methods :
getColorPlates(src_clrimg, plate)
,- Arguments :
- src_clrimg - 3-dimensional numpy array representing a color-image.
- plate - required color plate code as a char.
(Possible values are,
'R' for red.
'G' for green.
'B' for blue.)
- Returns :
- temp_img - resultant image consisting of the required color-plate, as a 3-dimensional numpy array.
getActImg()
,- Arguments : <'None'>
- Returns :
- self.__img - 3-dimensional numpy array representing a color-image.
getGrayImg()
,- Arguments : <'None'>
- Returns :
- self.__imgray - 2-dimensional numpy array representing a gray-scale image.
getInvrtGrayImg()
,- Arguments : <'None'>
- Returns :
- self.__invimgray - 2-dimensional numpy array representing an inverted gray-scale image.
getBinaryImg()
,- Arguments : <'None'>
- Returns :
- self.__binimg - 2-dimensional numpy array representing a binarized gray-scale image.
getOtsuThresholdLevel()
,- Arguments : <'None'>
- Returns :
- self.__ottlvl - python primitive number type representing the 'Otsu' threshold-level for binarization.
getSegColImg()
,- Arguments : <'None'>
- Returns :
- self.__seg_col - 3-dimensional numpy array representing a color-image.
getSegGrayImg()
,- Arguments : <'None'>
- Returns :
- self.__seg_gray - 2-dimensional numpy array representing a segmented gray-scale image.
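A brief, hypothetical usage example of this class (the image path and variable names are illustrative):

```python
from preprocessing.Prep import Prep

prep = Prep('temp/0.jpg')                                 # hypothetical input image
print(prep.getOtsuThresholdLevel())                       # 'Otsu' threshold-level used for binarization
seg_gray = prep.getSegGrayImg()                           # segmented gray-scale image (2-D numpy array)
red_plate = prep.getColorPlates(prep.getActImg(), 'R')    # red colour-plate of the original image
```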
class featext.texture.Haralick.HarFeat(img, offset) :-
- Constructor Parameters :
- img - 2-dimensional numpy array representing a gray-scale image.
- offset - a python tuple of two numbers, representing the x and y offsets respectively (default offset = (0, 1)).
(Here, offset = (x, y) where [0 <= x <= width_img] & [0 <= y <= height_img].)
- Methods :
getGLCM()
,- Arguments : <'None'>
- Returns :
- self.__glcm - 2-dimensional numpy array representing the 'Gray-Level Co-occurrence Matrix(GLCM)'.
getAngularSecondMomentASM()
,- Arguments : <'None'>
- Returns :
- self.__asm - python primitive number type representing 'ASM'.
getEnergy()
,- Arguments : <'None'>
- Returns :
- self.__energy - python primitive number type representing 'Energy'.
getEntropy()
,- Arguments : <'None'>
- Returns :
- self.__entropy - python primitive number type representing 'Entropy'.
getContrast()
,- Arguments : <'None'>
- Returns :
- self.__contrast - python primitive number type representing 'Contrast'.
getHomogeneity()
,- Arguments : <'None'>
- Returns :
- self.__idm_homogeneity - python primitive number type representing 'Homogeneity'.
getDm()
,- Arguments : <'None'>
- Returns :
- self.__dm - python primitive number type representing 'DM'.
getCorrelation()
,- Arguments : <'None'>
- Returns :
- self.__correlation - python primitive number type representing 'Correlation'.
getHarCorrelation()
,- Arguments : <'None'>
- Returns :
- self.__har_correlation - python primitive number type representing 'Haralick-Correlation'.
getClusterShade()
,- Arguments : <'None'>
- Returns :
- self.__cluster_shade - python primitive number type representing 'Cluster-Shade'.
getClusterProminence()
,- Arguments : <'None'>
- Returns :
- self.__cluster_prominence - python primitive number type representing 'Cluster-Prominence'.
getMoment1()
,- Arguments : <'None'>
- Returns :
- self.__moment1 - python primitive number type representing '1st Moment'.
getMoment2()
,- Arguments : <'None'>
- Returns :
- self.__moment2 - python primitive number type representing '2nd Moment'.
getMoment3()
,- Arguments : <'None'>
- Returns :
- self.__moment3 - python primitive number type representing '3rd Moment'.
getMoment4()
,- Arguments : <'None'>
- Returns :
- self.__moment4 - python primitive number type representing '4th Moment'.
getDasm()
,- Arguments : <'None'>
- Returns :
- self.__dasm - python primitive number type representing 'D-ASM'.
getDmean()
,- Arguments : <'None'>
- Returns :
- self.__dmean - python primitive number type representing 'D-Mean'.
getDentropy()
,- Arguments : <'None'>
- Returns :
- self.__dentropy - python primitive number type representing 'D-Entropy'.
class featext.texture.Tamura.TamFeat(img) :-
- Constructor Parameters :
- img - 2-dimensional numpy array representing a gray-scale image.
- Methods :
getCoarseness()
,- Arguments : <'None'>
- Returns :
- self.__coarseness - python primitive number type representing 'Tamura-Coarseness'.
getContrast()
,- Arguments : <'None'>
- Returns :
- self.__contrast - python primitive number type representing 'Tamura-Contrast'.
getKurtosis()
,- Arguments : <'None'>
- Returns :
- self.__kurtosis - python primitive number type representing 'Tamura-Kurtosis'.
getLineLikeness()
,- Arguments : <'None'>
- Returns :
- self.__linelikeness - python primitive number type representing 'Tamura-LineLikeness'.
getDirectionality()
,- Arguments : <'None'>
- Returns :
- self.__directionality - python primitive number type representing 'Tamura-Directionality'.
getRegularity()
,- Arguments : <'None'>
- Returns :
- self.__regularity - python primitive number type representing 'Tamura-Regularity'.
getRoughness()
,- Arguments : <'None'>
- Returns :
- self.__roughness - python primitive number type representing 'Tamura-Roughness'.
getPrewittHorizontalEdgeImg()
,- Arguments : <'None'>
- Returns :
- self.__img_hor_x - 2-dimensional numpy array representing a gray-scale image.
getPrewittVerticalEdgeImg()
,- Arguments : <'None'>
- Returns :
- self.__img_vert_y - 2-dimensional numpy array representing a gray-scale image.
getCombinedPrewittImg()
,- Arguments : <'None'>
- Returns :
- (self.__delg_img).astype(np.uint8) - 2-dimensional numpy array representing a gray-scale image.
class featext.texture.King.KingFeat(img) :-
- Constructor Parameters :
- img - 2-dimensional numpy array representing a gray-scale image.
- Methods :
getNGTDM()
,- Arguments : <'None'>
- Returns :
- self.__ngtdm - 2-dimensional numpy array representing the 'Neighbourhood Gray Tone Difference Matrix(NGTDM)'.
getKingsCoarseness()
,- Arguments : <'None'>
- Returns :
- self.__coarseness - python primitive number type representing 'King's-Coarseness'.
getKingsContrast()
,- Arguments : <'None'>
- Returns :
- self.__contrast - python primitive number type representing 'King's-Contrast'.
getKingsBusyness()
,- Arguments : <'None'>
- Returns :
- self.__busyness - python primitive number type representing 'King's-Busyness'.
getKingsComplexity()
,- Arguments : <'None'>
- Returns :
- self.__complexity - python primitive number type representing 'King's-Complexity'.
getKingsStrength()
,- Arguments : <'None'>
- Returns :
- self.__strength - python primitive number type representing 'King's-Strength'.
class featext.physical.Gabor.Gabor(img, corr_colimg) :-
- Constructor Parameters :
- img - 2-dimensional numpy array representing a gray-scale image.
- corr_colimg - 3-dimensional numpy array representing a color-image.
- Methods :
getGaussianBlurredImage()
,- Arguments : <'None'>
- Returns :
- self.__gblurimg - 2-dimensional numpy array representing a 'gaussian-blurred' gray-scale image.
getListOfContourPoints()
,- Arguments : <'None'>
- Returns :
- self.__contours - python list type, containing numpy arrays which represent the corresponding contours detected in the image. Each array contains a list of [[x, y]] co-ordinates representing the specific contour.
getHierarchyOfContours()
,- Arguments : <'None'>
- Returns :
- self.__hierarchy - multi-dimensional numpy array representing hierarchical relationships between different contours detected in the image.
getListOfMomentsForCorrespondingContours()
,- Arguments : <'None'>
- Returns :
- self.__momLstForConts - python list of dictionaries, wherein each dictionary contains the various moments corresponding to various contours detected in the image.
getListOfCentroidsForCorrespondingContours()
,- Arguments : <'None'>
- Returns :
- self.__centroidLstForConts - python list of tuples denoting the (x, y) co-ordinates of the centroids of the detected contours in the image.
getListOfAreasForCorrespondingContours()
,- Arguments : <'None'>
- Returns :
- self.__arLstForConts - python list of numbers denoting the areas of various contours in the image.
getListOfPerimetersForCorrespondingContours()
,- Arguments : <'None'>
- Returns :
- self.__periLstForConts - python list of numbers denoting the perimeters of the various contours in the image.
getSelectedContourImg()
,- Arguments : <'None'>
- Returns :
- self.__selecCntImg - 3-dimensional numpy array, representing a color-image.
getBoundingRectImg()
,- Arguments : <'None'>
- Returns :
- self.__imgcovrect - 3-dimensional numpy array, representing a color-image.
getMeanEdgeOfCoveringRect()
,- Arguments : <'None'>
- Returns :
- self.__meanEdge - python primitive number type, representing the edge-length of a rectangle.
getBoundedCircImg()
,- Arguments : <'None'>
- Returns :
- self.__imgcovcirc - 3-dimensional numpy array, representing a color-image.
getBoundedCircRadius()
,- Arguments : <'None'>
- Returns :
- self.__rad - python primitive number type, representing the radius of a circle.
getAsymmetryIndex()
,- Arguments : <'None'>
- Returns :
- self.__asyidxofles - python primitive number type, representing 'Asymmetry-Index'.
getCompactIndex()
,- Arguments : <'None'>
- Returns :
- self.__cmptidx - python primitive number type, representing 'Compact-Index'.
getFractalDimension()
,- Arguments : <'None'>
- Returns :
- self.__fracdimen - python primitive number type, representing 'Fractal-Dimension'.
getDiameter()
,- Arguments : <'None'>
- Returns :
- self.__diameter - python primitive number type, representing the diameter of a circular-region in the image.
getColorVariance()
,- Arguments : <'None'>
- Returns :
- self.__colorvar - python primitive number type, representing 'Color-Variance'.
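A short, hypothetical example tying the extractor classes together (the getters shown are only a sample of the 34 features, and the image path is illustrative):

```python
from preprocessing.Prep import Prep
from featext.texture.Haralick import HarFeat
from featext.texture.Tamura import TamFeat
from featext.texture.King import KingFeat
from featext.physical.Gabor import Gabor

prep = Prep('temp/0.jpg')
gray, color = prep.getSegGrayImg(), prep.getSegColImg()

har, tam, king, gab = HarFeat(gray), TamFeat(gray), KingFeat(gray), Gabor(gray, color)
print(har.getEntropy(), har.getContrast())           # Haralick texture features
print(tam.getCoarseness(), tam.getRoughness())       # Tamura texture features
print(king.getKingsBusyness())                       # King texture feature
print(gab.getAsymmetryIndex(), gab.getDiameter())    # Gabor physical features
```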
class mlmodels.Classifiers.Classifiers(featureset, target, mode) :-
- Constructor Parameters :
- featureset - 2-dimensional numpy array representing the list of features extracted for each training image.
- target - numpy array representing the class of training images, due to supervision.
- mode - python string type representing the mode of operation, i.e., either mode = "train" or mode = "predict". The default mode is "predict", in which case the 'featureset' and 'target' parameters are not needed and default to 'None'.
- Methods :
predicto(extfeatarr, supresults)
,- Arguments :
- extfeatarr - 2-dimensional numpy array representing the list of features extracted for each input image.
- supresults - numpy array representing the class of input images, due to supervision.
- Returns :
python dictionary type, representing the results of prediction, for each classifier and regressor.
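A hypothetical prediction call based on the signature above (the feature values and labels are made up for illustration; exact behaviour may differ):

```python
import numpy as np
from mlmodels.Classifiers import Classifiers

clf = Classifiers(mode="predict")               # loads the persisted '.pkl' models
extfeatarr = np.zeros((2, 34))                  # 34 extracted features for each of 2 input images
supresults = np.array([0, 1])                   # supervised labels, used for the accuracy report
results = clf.predicto(extfeatarr, supresults)  # dict with per-classifier/regressor predictions
print(results)
```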
module mlmodels.DecisionSurfacePlotter :-
- Methods :
plotForAll(X, Y, ftup, feats)
,- Arguments :
- X - 2-dimensional numpy array representing features extracted for each input image.
- Y - numpy array representing the supervised classes of the input images.
- ftup - python list of indexes, for the features to be plotted.
- feats - python list of string names, for the features to be plotted.
- Returns : <'None'>
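A hypothetical call, assuming plotForAll is a module-level function (the feature indexes and names here are illustrative):

```python
import numpy as np
from mlmodels import DecisionSurfacePlotter as dsp

X = np.random.rand(30, 34)                       # extracted features, one row per image
Y = np.random.choice([-1, 0, 1], size=30)        # supervised classes
dsp.plotForAll(X, Y, [0, 1], ['ASM', 'Energy'])  # plot decision surfaces over two chosen features
```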
module util.Util :-
- Methods :
search(arr, ins_val, low, high)
,- Arguments :
- arr - numpy array, to be searched.
- ins_val - python primitive number type representing the key to be searched in the numpy array.
- low - python primitive number type representing the lower index of the numpy array.
- high - python primitive number type representing the upper index of the numpy array.
- Returns :
- fnd_idx - python primitive number type representing the index at which the key is found in the numpy array.
quickSort(arr, low, high)
,- Arguments :
- arr - numpy array, to be sorted.
- low - python primitive number type representing the lower index of the numpy array.
- high - python primitive number type representing the upper index of the numpy array.
- Returns : <'None'>
getArrayOfGrayLevelsWithFreq(gray_img)
,- Arguments :
- gray_img - 2-dimensional numpy array, representing a gray-scale image.
- Returns :
- aryoflst - numpy array of python tuples, representing the various gray-levels present in the image, along with their corresponding frequency of occurrence in the image.
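A hypothetical usage example, assuming these are module-level functions in Util.py:

```python
import numpy as np
from util import Util

arr = np.array([7, 2, 9, 4], dtype=np.uint8)
Util.quickSort(arr, 0, arr.size - 1)               # sorts 'arr' in place
idx = Util.search(arr, 7, 0, arr.size - 1)         # binary search on the sorted array
gray = np.array([[0, 1], [1, 3]], dtype=np.uint8)
levels = Util.getArrayOfGrayLevelsWithFreq(gray)   # (gray-level, frequency) tuples
print(idx, levels)
```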
module Main :-
- Methods :
main_menu()
,- Arguments : <'None'>
- Returns : <'None'>
This application is still experimental and is moderately accurate in predicting the type of
skin lesion present in the image. However, as the number of training-set images increases, the
prediction accuracy of the various machine-learning models used in this application will
improve significantly. Most importantly, before you start using this application, run
option '2' first if you want to test the default experimental data; otherwise the '.pkl' files
might generate errors depending on your system architecture. Lastly, it is recommended that for
testing purposes you use images with smaller dimensions, preferably '120x120' pixels.
Good luck!!!