Identifying Salt Segmentation Using Deep Convolutional Neural Network



The Earth is rich in oil and natural gas and also holds huge salt deposits below the surface. Automatically and accurately recognizing whether a subsurface target is salt is therefore of vital importance to oil drilling. Unfortunately, obtaining the exact location of large salt deposits is difficult: professional seismic imaging still requires the interpretation of salt bodies by experts, which leads to highly subjective, highly variable renderings and, more alarmingly, to potentially dangerous situations for drillers in oil and gas companies. In this paper, Squeeze-Excitation Feature Pyramid Networks (referred to as Se-FPN) is proposed to tackle the task of image segmentation of salt deposits. Specifically, we use SENet as the backbone, which implicitly learns to suppress irrelevant regions of an input image while highlighting salient features useful for the task. Considering the importance of multi-scale information, we propose an improved FPN to integrate information at different scales. To further fuse information across scales, a Hypercolumn module is inserted at the end of the network. The proposed Se-FPN has been applied to the TGS Salt Identification Challenge and achieves high-quality segmentation, with a Mean Intersection over Union of 0.86.


The seismic analysis of salt deposits has been recognized as a problem for over a hundred years and even influenced the development of the reflection seismic method. Salt analysis is considered especially interesting because of the close contact of salt with hydrocarbon deposits, which causes additional problems in the exploration and extraction process. Since the surface of salt deposits is rather chaotic, the salt segmentation problem is complicated and remains relevant today. The first approach to this problem was manual interpretation of seismic images by geophysics specialists. Over the years, some numerical methods were developed to automate this process; however, their accuracy, especially in complex cases, was not sufficient, so hybrid techniques were introduced.

In recent years, research on computer vision has continued to grow. In particular, the rise of convolutional neural networks (CNNs) has added great momentum to the field, and CNNs have achieved remarkable success in a variety of computer vision tasks. Reliable image segmentation is one of the significant tasks in computer vision, and accurately locating salt deposits plays a vital role in the exploitation of oil and gas. With the development of science and technology, seismic imaging can render salt and other rock formations beneath oil and natural gas fields into images, clarifying which rock masses can more effectively trap oil and natural gas. However, at this stage, these images must be manually annotated by experts. Automatically and precisely distinguishing whether a subsurface target is salt would undoubtedly bring great convenience to the processing of geological images, and has far-reaching significance for the exploitation of oil and natural gas. For geological salt-deposit images, the irregularity of the salt-layer distribution, the diversity of the surrounding geological layers and the differences in salt-layer depth all affect the position of the salt deposits in the seismic image, which is the key challenge for segmentation of salt-deposit images. In view of this challenge, we propose our solution.

The main contributions of this paper are summarized as follows. Considering the unique properties of geological salt deposits, this paper proposes an efficient data-splitting strategy. The method guarantees that each fold of data accounts for both depth and mask-pixel distribution, which in turn facilitates training of the model. The paper proposes Squeeze-Excitation Feature Pyramid Networks (referred to as Se-FPN) to identify whether a subsurface target is salt. We applied Se-FPN to the TGS Salt Identification Challenge and achieved high-quality segmentation.


Mikhail Karchevskiy et al. [1] demonstrate the strong performance of several novel deep learning techniques merged into a single neural network, which achieved 27th place (top 1%) in the mentioned competition. Using a U-Net with a ResNeXt-50 encoder pre-trained on ImageNet as the base architecture, they implemented Spatial-Channel Squeeze & Excitation, Lovász loss, CoordConv and Hypercolumn methods. Their approach showed the high efficiency of deep learning methods: the predictions of even a single model were enough to reach 27th place. Several novel techniques such as CoordConv and Squeeze-and-Excitation networks showed great performance on real-world problems, as did ResNeXt-like architectures. Additionally, several optimization and tuning tricks were presented.

Yunzhi Shi et al. [2] designed a data generator that extracts randomly positioned sub-volumes from a large-scale 3D training data set, applies data augmentation, and then feeds a large number of sub-volumes into the network, using salt/non-salt binary labels generated by thresholding the velocity model as ground truth. They test the model on validation data sets and compare the blind-test predictions with the ground truth. Their results indicate that the method is capable of automatically capturing subtle salt features from the 3D seismic image with little or no manual input. They further test the model on a field example to demonstrate the generalization of this deep CNN method across data sets. The model can take multichannel input with multiple seismic attributes as additional information. They design a general method to randomly generate training samples according to the receptive-field size with several data augmentation methods, and train the proposed network with effectively 10,000 random sub-volumes. The prediction results show that the trained model generalizes not only to the synthetic validation data set but also to the field data set with noisy salt boundaries, demonstrating the potential of this efficient and effective tool for automatic geo-body interpretation.

Aleksandar Milosavljević [3] notes that locating salt bodies requires professional seismic imaging. These images are analyzed by human experts, which leads to very subjective and highly variable renderings. To motivate automation and increase the accuracy of this process, TGS-NOPEC Geophysical Company (TGS) sponsored a Kaggle competition held in the second half of 2018. The competition was very popular, gathering 3221 individuals and teams. Data for the competition included a training set of 4000 seismic image patches and corresponding segmentation masks; the test set contained 18,000 seismic image patches used for evaluation (all images are 101 x 101 pixels). Depth information of the sample location was also provided for every patch. The method presented in that paper is based on the author's participation and relies on training a deep convolutional neural network (CNN) for semantic segmentation. The architecture of the proposed network is inspired by the U-Net model in combination with ResNet and DenseNet architectures. To better understand the properties of the proposed architecture, a series of experiments was conducted applying standardized approaches within the same training framework. The results showed that the proposed architecture is comparable to, and in most cases better than, the baseline segmentation models.

Benjamin Graham et al. [4] introduced new sparse convolutional operations designed to process spatially sparse data more efficiently, and used them to develop spatially sparse convolutional networks. They demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SSCNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, their models outperform all prior state of the art on the test set of a recent semantic segmentation competition. SSCNs enable efficient processing of high-dimensional, sparse input data. The empirical evaluation shows that SSCN networks outperform a range of state-of-the-art approaches for this problem, both when identifying parts within an object and when recognizing objects in a larger scene. Moreover, SSCNs are computationally efficient compared with alternative approaches.

Mohamed Samy et al. [5] illustrate that semantic segmentation of satellite images is one of the most challenging problems in computer vision, as it requires a model capable of capturing both local and global information at each pixel. Current state-of-the-art methods are based on fully convolutional neural networks (FCNNs) with two main components: an encoder, a pre-trained classification model that gradually reduces the input spatial size, and a decoder that transforms the encoder's feature map into a predicted mask of the original size. They change this conventional architecture to a model that makes use of full-resolution information. NU-Net is a deep FCNN that captures a wide field of view of global information around each pixel while maintaining localized full-resolution information throughout the model. They evaluate the model on the Land Cover Classification and Road Extraction tracks of the DeepGlobe competition.



In this section, we describe our complete encoder-decoder network architecture, designed for the task of image segmentation of salt deposits. As can be seen from the figure, the overall structure of the network includes the encoder module, the decoder module and a deep supervised optimization module.

We use SENet-154 to extract dense features, then apply an improved top-down pathway of FPN to recover precise pixel predictions and localization details, followed by a Hypercolumn module. In the figure, the black and green lines represent the downsampling and upsampling operators, respectively; the yellow arrow at the top of rectangle C5 represents the global average pooling operation. The deep supervised optimization is composed of six losses.

A. Encoder

Previous U-Net variants directly used ordinary convolution operations to extract features from the input. However, considering the specific character of geological salt-deposit images, we include SENet blocks, with their strong representational capacity, in the encoder. Our intuition is that this block yields more representative features across channels, which can improve overall segmentation accuracy. The Se-FPN network uses the deep network SENet-154 at the encoder stage, which includes a total of five downsampling layers. The first layer is an ordinary convolutional layer; the other four are SE blocks. All convolutional layers use batch normalization [14] to mitigate overfitting, and ReLU [15] as the activation function. Note that both ReLU and sigmoid are used inside the SE block. The encoder module ends with a global average pooling layer, whose output is connected to the following Hypercolumn module through a skip connection. At the same time, we place a fully connected layer at the top of the encoder to obtain a classification result.
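The squeeze-excitation mechanism used in these SE blocks can be illustrated with a minimal NumPy sketch. The weight shapes and the reduction ratio here are illustrative, not the actual SENet-154 parameters:

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation recalibration for a (C, H, W) feature map.

    w1/b1 reduce C channels to C//r; w2/b2 expand back to C.
    """
    # Squeeze: global average pooling over spatial dims -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid gives per-channel weights in (0, 1)
    s = np.maximum(w1 @ z + b1, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))
    # Scale: reweight each channel of the input
    return x * s[:, None, None]

# Tiny example: 4 channels, reduction ratio r = 2
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1, b1 = rng.standard_normal((2, 4)), np.zeros(2)
w2, b2 = rng.standard_normal((4, 2)), np.zeros(4)
y = se_block(x, w1, b1, w2, b2)
print(y.shape)  # (4, 8, 8): same shape, channels rescaled
```

Because the sigmoid output lies in (0, 1), every channel is attenuated rather than amplified, which is how the block suppresses irrelevant regions while keeping salient channels close to their original magnitude.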

B. Decoder

Inspired by the Feature Pyramid Network, whose pyramid structure can extract feature information at different scales, but noting that layer-by-layer aggregation undoubtedly increases computation and memory consumption, we propose an improved top-down pathway for the FPN. Specifically, each adjacent layer is upsampled by a factor of 2 and added, then passed through a 3 x 3 convolution to the subsequent stage.

This initially integrates multi-scale information, and all layers are then fused in the subsequent Hypercolumn module, so that the fusion of multi-scale information is further refined. In the figure, the red line indicates that the features of the neighboring layer are upsampled by a factor of 2, the plus sign indicates the element-wise addition of each predicted map, and the black arrow indicates the flow of the feature map, followed by a 3 x 3 convolution and ReLU activation.
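The top-down merge step can be sketched in a few lines of NumPy; nearest-neighbour repetition stands in for the learned upsampling, and the 3 x 3 convolution that follows each merge is omitted for brevity:

```python
import numpy as np

def upsample2(x):
    # Nearest-neighbour upsampling by a factor of 2 on a (C, H, W) map
    return x.repeat(2, axis=1).repeat(2, axis=2)

def top_down_merge(higher, lateral):
    """Merge a coarser pyramid level with the lateral map one level below."""
    return upsample2(higher) + lateral

c5 = np.ones((64, 4, 4))       # coarse level (e.g. C5)
c4 = np.full((64, 8, 8), 2.0)  # lateral map from the encoder (e.g. C4)
p4 = top_down_merge(c5, c4)
print(p4.shape)  # (64, 8, 8)
```

Repeating this merge level by level (C5 into C4, the result into C3, and so on) produces the intermediate maps that the decoder names L2 through L5.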

Next we present the overall operation of the decoder. First, the feature maps obtained at the encoder stage are connected to the FPN stage using a 1 x 1 convolution with 64 x 4 channels, producing feature maps of the corresponding dimension. Meanwhile, the feature maps from the higher layer are upsampled by a factor of 2 and added to the feature maps that passed through the 1 x 1 convolution; the results are named L2, L3, L4 and L5 in order. Afterwards, we perform a 3 x 3 convolution instead of a pooling operation, since it retains more detail and eliminates the aliasing effect of upsampling, with ReLU as the activation function. Similarly, we name the resulting predicted maps S2, S3, S4 and S5 in order. At the Hypercolumn module, S1 is the feature map from the end of the encoder, which carries rich category information, while S2, S3, S4 and S5 are feature maps from the improved top-down pathway of the FPN module. These feature maps undergo an initial multi-scale feature fusion. To further fuse multi-scale information, we directly upsample the feature maps S1, S2, S3, S4 and S5 to the size of the original feature map through bilinear interpolation. Finally, the different levels of features are concatenated to produce the final segmentation result.
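The Hypercolumn step above amounts to upsampling every map to a common resolution and stacking them along the channel axis. A minimal sketch, with nearest-neighbour upsampling standing in for the bilinear interpolation in the text and toy channel counts:

```python
import numpy as np

def hypercolumn(maps, out_hw):
    """Upsample each (C_i, H_i, W_i) map to out_hw and concatenate on channels.

    Assumes each spatial size divides the target size evenly.
    """
    H, W = out_hw
    up = []
    for m in maps:
        fh, fw = H // m.shape[1], W // m.shape[2]
        up.append(m.repeat(fh, axis=1).repeat(fw, axis=2))
    return np.concatenate(up, axis=0)

# Toy stand-ins for three of the decoder maps at different scales
s2 = np.zeros((8, 16, 16))
s3 = np.zeros((8, 8, 8))
s4 = np.zeros((8, 4, 4))
hc = hypercolumn([s2, s3, s4], (16, 16))
print(hc.shape)  # (24, 16, 16): channels stacked at full resolution
```

Each pixel of the result thus carries features from every scale, which is what lets the final 1 x 1 prediction draw on both coarse context and fine detail.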

C. Deep Supervised Optimization

A deep convolutional network can deliver good performance, but increasing network depth may introduce additional optimization difficulties, as observed in image classification. ResNet addresses this problem with residual learning in its blocks. PSPNet uses ResNet as a baseline and decomposes the optimization into two parts, each of which is easier to solve, by generating initial results under supervision with an auxiliary loss and then learning the residual with the final loss. We adopt even deeper supervision here: in addition to the main branch loss, we add five auxiliary supervised losses. These comprise an auxiliary loss on S1 at the end of the encoder and on S2, S3, S4 and S5 at the decoder stage. The auxiliary losses help to refine the learning process. Since the loss on the main branch bears the primary responsibility, we add weights to balance the auxiliary losses: the auxiliary loss weight at S2, S3, S4 and S5 is 0.1, and the weight at S1 is 0.01.
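The weighted total loss described above is simply a fixed linear combination; a sketch with placeholder loss values:

```python
def total_loss(main, aux_s1, aux_s2_to_s5):
    """Combine the main branch loss with the five auxiliary losses.

    Weights follow the text: 1.0 for the main branch, 0.01 for S1,
    and 0.1 for each of S2..S5.
    """
    return main + 0.01 * aux_s1 + 0.1 * sum(aux_s2_to_s5)

# Placeholder values standing in for per-branch segmentation losses
t = total_loss(1.0, 0.5, [0.4, 0.4, 0.4, 0.4])
print(t)  # 1.165
```

The small weights keep the auxiliary branches from dominating the gradient while still propagating a supervision signal into the intermediate layers.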

This paper examines the depth information and the corresponding mask area of the public dataset. All training data are divided into 50 intervals according to depth. It can be seen intuitively from the figure that the distribution of the training data approaches a normal distribution, and that the mask pixels of the training set are mostly concentrated in the interval 0-2000 and roughly uniform in the other intervals. To obtain a reliable and stable model, cross-validation is used. We propose a splitting method, suited to a small amount of geological salt-deposit data, that guarantees each fold contains data of different depths and different mask-pixel counts. The specific splitting method is as follows:

  • The training data are divided equally into 5 parts according to depth.
  • Each of those parts is divided equally into five parts according to the number of corresponding mask pixels.
  • Finally, we obtain 25 folds of data. Clearly, each fold contains data with both different depths and different mask-pixel counts.
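The two-level stratification above can be sketched with rank-based quantile binning; the function name and the synthetic depth/mask values are illustrative, not from the paper:

```python
import numpy as np

def depth_mask_folds(depth, mask_pixels, n_bins=5):
    """Assign each sample to one of n_bins**2 groups: first bin by depth
    rank, then, within each depth bin, by mask-pixel-count rank."""
    n = len(depth)
    fold = np.empty(n, dtype=int)
    # Double argsort yields ranks; integer division makes equal-sized bins
    depth_bin = np.argsort(np.argsort(depth)) * n_bins // n
    for d in range(n_bins):
        idx = np.where(depth_bin == d)[0]
        mask_rank = np.argsort(np.argsort(mask_pixels[idx]))
        fold[idx] = d * n_bins + mask_rank * n_bins // len(idx)
    return fold

# Synthetic stand-in for the TGS data: depth plus mask coverage per patch
rng = np.random.default_rng(1)
depth = rng.uniform(0, 1000, 250)
mask_pixels = rng.integers(0, 101 * 101, 250)
folds = depth_mask_folds(depth, mask_pixels)
print(len(np.unique(folds)))  # 25
```

Sampling evenly across these 25 groups when building each cross-validation fold ensures every fold sees both shallow and deep patches as well as both sparse and dense masks.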


In this paper, we proposed Se-FPN to obtain better segmentation of geological salt-deposit images. In the data pre-processing stage, we considered the unique properties of the images and proposed a splitting method suitable for a small amount of training data. The network follows the basic Encoder-Decoder structure: at the encoder, SENet is used as the backbone; at the decoder, an improved top-down pathway is proposed and combined with a Hypercolumn module to fuse multi-scale feature maps; and a deep supervised optimization module is applied to refine the learning process. The experimental results show that Se-FPN is effective and feasible, and that it offers the practical benefit of assisting manual segmentation.
