Abstract— Skin lesions are significant in determining dermatological conditions worldwide. Early diagnosis of malignant melanoma through dermoscopy imaging considerably increases the survival rate. In this paper, we present deep learning-based approaches to two problems in skin lesion analysis using a dermoscopic image containing the tumor. Estimates of these biomarkers provide insight while detecting cancerous cells and classifying the lesion as either benign or malignant. This paper presents groundwork for the detection of skin lesions with cancerous inclination by segmentation and subsequent application of a convolutional neural network (CNN) to dermoscopy images. The proposed models are trained and evaluated on standard benchmark datasets from the International Skin Imaging Collaboration (ISIC) 2016 challenge, which consists of 2000 training samples and 600 testing samples. Images with skin lesions were segmented based on individual channel intensity thresholding. The resulting images were fed into a CNN for feature extraction, and the extracted features were then classified by an artificial neural network (ANN) classifier. Previously, many approaches have been applied to this diagnostic task with varying degrees of success. Compared to a previous best of 97%, the methodology presented in this paper yielded an accuracy of 98.32%.
Keywords— skin lesion, segmentation, convolutional neural network (CNN), artificial neural network (ANN), ReLU, sensitivity, specificity, accuracy
I. Introduction

Melanoma is a type of skin cancer that has proven to be quite fatal. It is responsible for 75% of the deaths caused by skin-related diseases, and these numbers are worsening with the passage of time. It is estimated that five million people are affected by skin cancer annually in the U.S. [2, 3], and 9,000 lives are claimed by skin cancer each year, making it a serious threat and a cause for rising concern. Diagnosing malignant melanoma in its early stages is the key to preventing the disease; otherwise, it becomes life-threatening if not treated early. This characteristic emphasizes the importance of early and correct diagnosis of skin cancer. Although malignant melanoma shows visual symptoms on the skin and can be identified by a dermatologist without expensive, large machines, the diagnosis depends on the dermatologist's expertise, and an inexperienced dermatologist may confuse melanoma with scars or non-lethal skin diseases. Another side of the problem is the world's increasing population and the shrinking number of dermatologists per capita. Increasing the ratio of experienced dermatologists per capita is a far-fetched task; by comparison, introducing an automated software-based technique to help in this fight against skin cancer appears far more viable.
As pigmented lesions occurring on the surface of the skin, melanoma is amenable to early detection by expert visual inspection. It is also amenable to automated detection with image analysis. Given the widespread availability of high-resolution cameras, algorithms that can improve our ability to screen and detect troublesome lesions can be of great value. Dermoscopy is an imaging technique that eliminates the surface reflection of skin; by removing surface reflection, visualization of deeper levels of skin is enhanced. Prior research has shown that, when used by expert dermatologists, dermoscopy provides improved diagnostic accuracy in comparison to standard photography. In dermoscopy, the affected region of skin is enlarged and illuminated to ensure clarity and to discard any skin reflection. However, the method still depends on human vision and experience to detect disease, introducing human error that has led to poor efficiency in melanoma detection. The efficiency of the dermoscopy imaging technique can be improved by adding an automated tool to identify skin anomalies [7, 8]. Several works have tried to boost the effectiveness of image segmentation techniques to enhance dermoscopy images. Multispectral imaging and confocal microscopy are also widely used to address melanoma detection, but these machines are very expensive and large, and special training is needed to use them; notably, only well-trained and experienced dermatologists obtain good results from these methods.
Segmentation of the affected skin image is a crucial step for many detection algorithms, and accurate segmentation is the primary key to high accuracy in the succeeding steps of the process. Many works have tried to extract the lesion portion from the images. Garnavi et al. worked on segmentation of images using optimal color channels and a hybrid thresholding technique for skin lesion analysis. Schaefer developed segmentation of the lesion area by an auto border detection technique, and extracted features (i.e., color, shape, and texture) were used for the detection of melanoma. Codella et al. combined a support vector machine (SVM) and a convolutional neural network (CNN) for the identification of skin cancer. The proposed methodology combines automated segmentation with a CNN module: it focuses on improving the segmentation of the image and then applies the CNN specifically to enhance skin cancer detection. We use three different strategies for automatic segmentation based on thresholding, morphology functions, and active contours. In the proposed system, the segmentation and classification of a skin lesion as cancerous or normal are based on texture features. The proposed segmentation framework is tested by comparing its lesion segmentation and melanoma classification results with those of other state-of-the-art algorithms; it achieves higher segmentation accuracy than all other tested algorithms.
Figure 1. Implemented system flow.
II. Dataset

The dataset used for this study was obtained from the International Skin Imaging Collaboration (ISIC). 900 images (1024×767 pixels) acquired from ISIC 2016 were used for training. Further, a dataset of 379 images was short-listed and labeled for testing. These images were additionally classified, on the basis of their characteristics, into three types: Melanoma, Seborrheic keratosis, and Nevus. A melanoma image contains symptoms of melanoma cancer and is classified in the malignant class, whereas a seborrheic keratosis image shows a non-lethal skin disorder and a nevus image shows a birthmark; the last two are listed in the benign class. Classifying lesions as malignant or benign helps the dermatologist simplify and support the deduction of the results.
Figure 2: Examples of lesion images from ISIC 2016 and their masks. The first row shows the original images of different lesions. The second row shows the segmentation masks. The third row shows the superpixel mask for dermoscopic feature extraction. The scales for the lesion images are 1022 pixels × 767 pixels, 3008 pixels × 2000 pixels and 1504 pixels × 1129 pixels, respectively.
III. Proposed Methodology
Automated image-processing tools were used to tackle the given problem. These tools generally work in the following steps:
- Accurate Segmentation
- Feature Extraction
- Classification of Lesion
The flow of the overall methodology is shown in Fig. 1. The images were accurately segmented for subsequent steps, and a CNN was then used for feature extraction and classification. The CNN can be divided into two parts: the convolution layer, which extracts features, and the ANN classifier, which classifies an image. These steps are discussed in detail in the following sections.
The dataset contains multiple images of malignant and benign pigmented skin lesions. A pigmented skin lesion, as seen in dermatoscopy, is a small abnormal area of skin that is typically darker in tone and has a distinguishable texture compared to the surrounding normal skin. The Generalized Gaussian Distribution (GGD) is the technique used for image segmentation. All training images were divided into their R, G, and B color channels to determine, channel by channel, the extent of each channel's involvement in a malignant lesion.
Intensities (I) of the malignant area were obtained, and a GGD model was established. Initially, the 900 training images were processed using equations (1) and (2) to obtain the GGD model.
TABLE I GGD STATISTICS.
Table I shows the statistics used to develop the GGD model. These values were substituted into (3) to obtain the Generalized Gaussian Distribution (GGD). Fig. 3 shows the distributions of the GGD models of the R, G, and B channels.
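Equations (1)-(3) are not reproduced in this text. As a sketch only, the standard generalized Gaussian density that such a per-channel GGD model fits can be written as follows; the parameter names µ (location), α (scale), and β (shape) are the conventional ones and are assumed here, not taken from the paper's own formulation:

```python
import math

def ggd_pdf(x, mu, alpha, beta):
    """Generalized Gaussian density with location mu, scale alpha, shape beta."""
    coeff = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coeff * math.exp(-((abs(x - mu) / alpha) ** beta))

# With beta = 2 and alpha = sigma * sqrt(2), the GGD reduces to the ordinary
# normal density, which gives a quick sanity check of the formula.
sigma = 1.0
gaussian_peak = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
print(abs(ggd_pdf(0.0, 0.0, sigma * math.sqrt(2.0), 2.0) - gaussian_peak) < 1e-12)  # → True
```

In practice, µ, α, and β would be estimated per color channel from the statistics in Table I.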
Figure 3. Proposed GGD model.
After applying the GGD model to an image, morphological operations have to be performed to remove unwanted components. Given that the image of a skin lesion is darker in tone than the surrounding normal skin, and with the mean (µ) and standard deviation (σ) of the latter known, a generated mask should satisfy equation (4):

µ − I ≥ σ      (4)
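Equation (4) can be applied per color channel as a simple thresholding sketch; the NumPy implementation below is illustrative, not the authors' code:

```python
import numpy as np

def lesion_mask(channel):
    """Mask pixels whose intensity I satisfies mu - I >= sigma, i.e. pixels
    at least one standard deviation darker than the channel mean (eq. 4)."""
    mu = channel.mean()
    sigma = channel.std()
    return (mu - channel) >= sigma

# Toy 3x3 "channel": the single dark pixel should be flagged as lesion.
channel = np.array([[200, 200, 200],
                    [200,  10, 200],
                    [200, 200, 200]], dtype=float)
mask = lesion_mask(channel)
print(mask[1, 1], mask[0, 0])  # → True False
```

Morphological opening/closing (e.g. to remove small spurious components) would then be applied to the boolean mask.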
The original training set contains 2000 skin lesion images of different resolutions. The resolutions of some lesion images are above 1000 × 700, which require a high cost of computation. It is necessary to rescale the lesion images for the deep learning network. As directly resizing images may distort the shape of the skin lesion, we first cropped the center area of lesion image and then proportionally resize the area to a lower resolution. The size of the center square was set to be 0.8 of the height of the image, and automatically cropped with reference to the image center. This approach not only enlarges the lesion area for feature detection, but also maintains the shape of the skin lesion.
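The center-cropping step above can be sketched as follows; only the crop is shown (a proportional resize, e.g. with PIL's `Image.resize`, would follow), and the function name and use of NumPy are illustrative, not the authors' implementation:

```python
import numpy as np

def center_crop_square(img, frac=0.8):
    """Crop a square of side frac * image height around the image centre,
    as described in the preprocessing step (frac = 0.8 is the stated value)."""
    h, w = img.shape[:2]
    side = int(h * frac)
    top = (h - side) // 2
    left = (w - side) // 2
    return img[top:top + side, left:left + side]

img = np.zeros((767, 1022, 3), dtype=np.uint8)  # a typical ISIC resolution
crop = center_crop_square(img)
print(crop.shape)  # → (613, 613, 3)
```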
1) Data Augmentation
The dataset contains three categories of skin lesion, i.e., Melanoma, Seborrheic keratosis, and Nevus. As the number of images varies widely across categories, we rotated the images of the different categories accordingly to balance the dataset.
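The exact rotation scheme is not specified; as a sketch under that caveat, a minority class can be padded with 90-degree rotations of its existing images until it reaches a target count:

```python
import numpy as np

def augment_by_rotation(images, target_count):
    """Balance a class by appending successive 90-degree rotations of the
    existing images until target_count samples are available."""
    augmented = list(images)
    k = 1  # number of quarter-turns for this augmentation pass
    while len(augmented) < target_count:
        for img in images:
            if len(augmented) >= target_count:
                break
            augmented.append(np.rot90(img, k))
        k += 1
    return augmented

minority = [np.arange(12).reshape(3, 4) for _ in range(3)]  # 3 toy "images"
balanced = augment_by_rotation(minority, 10)
print(len(balanced))  # → 10
```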
B. Convolutional Neural Networks
CNNs have proven to be quite effective in image classification problems. A CNN is an excellent tool for learning local and global information, combining simple features such as edges and curves into more complicated features such as corners and shapes. A CNN was implemented for detecting malignant melanoma. Since a melanoma image has no single distinct feature, a deep-layer CNN cannot perform well for melanoma detection due to overfitting. This problem arises when the model is trained too well on the training data and consequently starts to have a harmful effect on the results. It is suggested that a shallower CNN design is more appropriate for distinguishing texture-based images and can avoid overfitting problems.
CNN Architecture: The layout of the network employed in this study is illustrated in Fig. 5. The RGB-channel input of the skin image was normalized to zero mean and unit variance. This normalized matrix was fed into the convolution layer, the first layer, which convolves 16 different kernels of 7×7 pixels to produce 16 output channels. The extracted feature channels were fed into a pooling layer to reduce their dimensions (also referred to as down-sampling). These sampled channels were used as inputs for the following layers, referred to as fully connected layers.
We used a three-layer fully connected model for image classification, in which each consecutive layer reduces the number of neurons (i.e., 100, 50, and 5, respectively). In contrast to a deep CNN (DCNN), we used a single convolution layer: since there are few features to be learned, this reduces the complexity of the CNN and avoids the overfitting problem. A summary of each CNN layer is given as follows.
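The layer sizes implied above (a single convolution of 16 kernels of 7×7, 2×2 pooling, then fully connected layers of 100, 50, and 5 neurons) can be walked through as a sketch; the 128×128 input resolution and 'valid' padding are assumptions not stated in the text:

```python
def conv_out(size, kernel, stride=1):
    """Output size of a 'valid' convolution or pooling along one dimension."""
    return (size - kernel) // stride + 1

h = w = 128  # assumed input resolution after preprocessing
# Convolution layer: 16 kernels of 7x7 over the RGB input.
h, w, channels = conv_out(h, 7), conv_out(w, 7), 16
# Pooling layer: 2x2 max pooling halves each spatial dimension.
h, w = conv_out(h, 2, stride=2), conv_out(w, 2, stride=2)
flat = h * w * channels
# Fully connected layers of 100, 50 and 5 neurons, as stated above.
layers = [flat, 100, 50, 5]
print(layers)  # → [59536, 100, 50, 5]
```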
1) Classification of Skin Lesions Based on Convolutional Neural Networks:
In this section, the individual CNN methods used to classify skin lesions are presented. CNNs can be used to classify skin lesions in two fundamentally different ways. On the one hand, a CNN pre-trained on another large dataset, such as ImageNet, can be applied as a feature extractor; in this case, classification is performed by another classifier, such as k-nearest neighbors, support vector machines, or artificial neural networks. On the other hand, a CNN can directly learn the relationship between the raw pixel data and the class labels through end-to-end learning. In contrast with the classical workflow typically applied in machine learning, feature extraction becomes an integral part of classification and is no longer considered a separate, independent processing step. If the CNN is trained by end-to-end learning, the research can be further divided into two approaches. A basic requirement for the successful training of deep CNN models is that sufficient training data labeled with the classes are available; otherwise, there is a risk of overfitting the neural network and, as a consequence, an inadequate generalization property of the network for unknown input data. Only a very limited amount of data is publicly available for the classification of skin lesions.
2) Convolution Based Feature Extraction:
The convolution layer is the most significant layer in a CNN and is commonly used for feature extraction from the image. One or more 2D channels are treated as inputs to the convolution layer and are convolved with different kernels. Every kernel has its own weights and represents a local feature extractor. The kernel is used to extract output features whose dimensions may or may not match those of the inputs. The feature outputs
Figure 4. Segmentation methodology: illustrates the proposed process for the segmentation of images.
contain the required features of the input image. The pooling layer plays a necessary role in reducing the size of the feature maps. We applied a pooling layer with a kernel of 2×2 pixels; this kernel down-samples the input by selecting the maximum value from each consecutive 2×2-pixel block. The output channels then have half the samples in each dimension, and the computation becomes easier. Fully connected layers consist of neurons that connect every neuron from the previous layer to each neuron in the next layer. In this way, since every neuron is connected to each result in the previous layer, we obtain a collective assessment of every feature extracted from the image.
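The 2×2 max pooling described above can be sketched in NumPy (illustrative code, not the authors' implementation):

```python
import numpy as np

def max_pool_2x2(channel):
    """2x2 max pooling: keep the maximum of each non-overlapping 2x2 block,
    halving both spatial dimensions (the down-sampling described above)."""
    h, w = channel.shape
    h, w = h - h % 2, w - w % 2          # drop any odd trailing row/column
    blocks = channel[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 3],
              [4, 5, 6, 7]])
print(max_pool_2x2(x))  # → [[4 8]
                        #    [9 7]]
```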
3) ANN Classification: The CNN does not need any further classifier, such as an SVM or KNN, since three fully connected layers were used to train the classification model. A three-layer ANN classifier was employed in our methodology. This type of classification brings its own distinctive advantages; for example, it is feasible to use a back-propagation algorithm, which adjusts the parameters of the neurons in all layers to obtain a better classification model. For neuron activation, nonlinear functions were employed in the ANN.
In this CNN model, the non-linear ReLU was used as the activation function. ReLU is a simple function, as shown in equation (5). Compared to alternative activation functions such as sigmoid and tanh, ReLU does not suffer from the vanishing-gradient problem, which is an important issue to consider in a gradient-dependent machine learning method such as the one implemented in this study. The rectified linear unit (ReLU), owing to its simplicity and gradient preservation, improved both the learning speed and the performance of our CNN by 2.3%. CNNs are neural networks with a specific architecture that have been shown to be very powerful in areas such as image recognition and classification.
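Equation (5) is the standard ReLU, f(x) = max(0, x); a minimal sketch:

```python
import numpy as np

def relu(x):
    """Rectified linear unit of equation (5): f(x) = max(0, x)."""
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))  # → [0.  0.  0.  1.5]
```

Its gradient is 1 for positive inputs and 0 otherwise, which is why it avoids the vanishing gradients of sigmoid and tanh.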
IV. Results and Discussion

The performance of the proposed methodology was evaluated in terms of sensitivity (6), specificity (7), and accuracy (8). The ISIC 2016 testing dataset was used for this purpose, in which 379 images were available for testing.
TP = True Positive, TN = True Negative, FP = False Positive, FN = False Negative

TABLE II COMPARISON WITH OTHER LATEST TECHNIQUES.
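Equations (6)-(8) are not reproduced in the text, but these metrics have standard definitions in terms of the confusion counts above; a minimal sketch (the counts shown are hypothetical, for illustration only):

```python
def sensitivity(tp, fn):
    """Eq. (6): proportion of malignant images correctly classified."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Eq. (7): proportion of benign images correctly rejected."""
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    """Eq. (8): proportion of all images correctly classified."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion counts, for illustration only.
tp, tn, fp, fn = 45, 50, 2, 3
print(round(sensitivity(tp, fn), 4),
      round(specificity(tn, fp), 4),
      round(accuracy(tp, tn, fp, fn), 4))  # → 0.9375 0.9615 0.95
```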
Figure 5. Proposed CNN architecture
As shown in Table II, the proposed methodology achieved higher results on the selected criteria compared with previous methodologies [16-18]. The proposed model has a sensitivity of 98.15% in classifying malignant images and a specificity of 98.41%, representing the correct rejection of benign images. The overall accuracy achieved by the proposed methodology is 98.32%, which is higher than the previous attempts [16-18]. This may be because the methodology proposed in this study emphasizes both segmentation and CNN accuracy: the accuracy of the whole methodology rests on accurate image segmentation, and, as a result, an increase in classification accuracy by the CNN is obtained.
V. Conclusion

In this paper, a methodology was proposed to detect melanoma cancer using a CNN architecture. The dataset acquired from ISBI 2016 was divided into two categories (melanoma and non-melanoma images). Custom automated segmentation was applied for this specific problem, and a new approach was devised for the implementation of the CNN methodology: the CNN was used to extract image features, and an ANN consisting of three fully connected layers was used to classify the extracted features. The proposed methodology yielded a sensitivity of 98.15%, a specificity of 98.41%, and an accuracy of 98.32%, an improvement over previous methodologies. The higher results in this study were obtained because the proposed methodology emphasizes both segmentation and CNN accuracy; the accuracy of the whole methodology rests on accurate image segmentation, which leads to high classification accuracy. The Lesion Feature Network, a CNN-based framework trained on patches extracted from the dermoscopic images, was proposed to address the task of dermoscopic feature extraction. To the best of our knowledge, no previous work is available for this task; hence, this work may become a benchmark for subsequent related research.
VI. References
- A. F. Jerant, J. T. Johnson, C. Demastes Sheridan, and T. J. Caffrey, “Early detection and treatment of skin cancer.” American family physician, vol. 62, no. 2, 2000.
- H. W. Rogers, M. A. Weinstock, S. R. Feldman, and B. M. Coldiron, “Incidence estimate of nonmelanoma skin cancer (keratinocyte carcinomas) in the US population, 2012,” JAMA Dermatology, vol. 151, no. 10, pp. 1081–1086, 2015.
- R. L. Siegel, K. D. Miller, S. A. Fedewa, D. J. Ahnen, R. G. Meester, A. Barzi, and A. Jemal, “Colorectal cancer statistics, 2017,” CA: a cancer journal for clinicians, vol. 67, no. 3, pp. 177–193, 2017.
- H. Kittler, A. A. Marghoob, G. Argenziano, C. Carrera, C. Curiel-Lewandrowski, R. Hofmann-Wellenhof, J. Malvehy, S. Menzies, S. Puig, H. Rabinovitz et al., “Standardization of terminology in dermoscopy/dermatoscopy: Results of the third consensus conference of the international society of dermoscopy,” Journal of the American Academy of Dermatology, vol. 74, no. 6, pp. 1093–1106, 2016.
- J. L. G. Arroyo and B. G. Zapirain, “Detection of pigment network in dermoscopy images using supervised machine learning and structural analysis,” Computers in Biology and Medicine, vol. 44, pp. 144–157, 2014.
- C. Barata, J. S. Marques, and J. Rozeira, “A system for the detection of pigment network in dermoscopy images using directional filters,” IEEE Transactions on Biomedical Engineering, vol. 59, no. 10, pp. 2744–2754, 2012.
- L. Bi, J. Kim, E. Ahn, D. Feng, and M. Fulham, “Automatic melanoma detection via multi-scale lesion-biased representation and joint reverse classification,” in Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on. IEEE, 2016, pp. 1055–1058.
- E. Ahn, J. Kim, L. Bi, A. Kumar, C. Li, M. Fulham, and D. D. Feng, “Saliency-based lesion segmentation via background detection in dermoscopic images,” IEEE journal of biomedical and health informatics, vol. 21, no. 6, pp. 1685–1693, 2017.
- J. March, M. Hand, A. Truong, and D. Grossman, “Practical application of new technologies for melanoma diagnosis: Part II. Molecular approaches,” Journal of the American Academy of Dermatology, vol. 72, no. 6, pp. 943–958, 2015.
- R. Garnavi, M. Aldeen, M. E. Celebi, G. Varigos, and S. Finch, “Border detection in dermoscopy images using hybrid thresholding on optimized color channels,” Computerized Medical Imaging and Graphics, vol. 35, no. 2, pp. 105–115, 2011.
- M. E. Celebi, H. Iyatomi, G. Schaefer, and W. V. Stoecker, “Lesion border detection in dermoscopy images,” Computerized medical imaging and graphics, vol. 33, no. 2, pp. 148–153, 2009.
- G. Schaefer, B. Krawczyk, M. E. Celebi, and H. Iyatomi, “An ensemble classification approach for melanoma diagnosis,” Memetic Computing, vol. 6, no. 4, pp. 233–240, 2014.
- N. Codella, J. Cai, M. Abedini, R. Garnavi, A. Halpern, and J. R. Smith, “Deep learning, sparse coding, and svm for melanoma recognition in dermoscopy images,” in International Workshop on Machine Learning in Medical Imaging. Springer, 2015, pp. 118–126.
- D. Gutman, N. C. Codella, E. Celebi, B. Helba, M. Marchetti, N. Mishra, and A. Halpern, “Skin lesion analysis toward melanoma detection: A challenge at the International Symposium on Biomedical Imaging (ISBI) 2016, hosted by the International Skin Imaging Collaboration (ISIC),” arXiv preprint arXiv:1605.01397, 2016.
- A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
- Pathan, P. Siddalingaswamy, L. Lakshmi, and K. G. Prabhu, “Classification of benign and malignant melanocytic lesions: A CAD tool,” in Advances in Computing, Communications and Informatics (ICACCI), 2017 International Conference on. IEEE, 2017, pp. 1308–1312.
- F. K. Nezhadian and S. Rashidi, “Melanoma skin cancer detection using color and new texture features,” in Artificial Intelligence and Signal Processing Conference (AISP), 2017. IEEE, 2017, pp. 1–5.
- Z. Ge, S. Demyanov, B. Bozorgtabar, M. Abedini, R. Chakravorty, A. Bowling, and R. Garnavi, “Exploiting local and generic features for accurate skin lesions classification using clinical and dermoscopy imaging,” in Biomedical Imaging (ISBI 2017), 2017 IEEE 14th International Symposium on. IEEE, 2017, pp. 986–990.
- Z. Ma and J. Tavares, “A novel approach to segment skin lesions in dermoscopic images based on a deformable model,” IEEE J. Biomed. Health Inform., vol. 20, pp. 615–623, 2017, doi: 10.1109/JBHI.2015.2390032.
- L. Yu, H. Chen, Q. Dou, J. Qin, and P. A. Heng, “Automated melanoma recognition in dermoscopy images via very deep residual networks,” IEEE Trans. Med. Imaging, vol. 36, pp. 994–1004, 2017, doi: 10.1109/TMI.2016.2642839.
- M. E. Celebi, H. A. Kingravi, B. Uddin, H. Iyatomi, Y. A. Aslandogan, W. V. Stoecker, and R. H. Moss, “A methodological approach to the classification of dermoscopy images,” Comput. Med. Imaging Graph., vol. 31, pp. 362–373, 2007, doi: 10.1016/j.compmedimag.2007.01.003.