Abstract—
Capturing photos in low-light conditions, such as at night, in a forest, or in a dark room, makes it difficult for photographers to obtain optimal results. Because the objects in the scene receive too little light, they can appear blurred and hard to identify, and factors such as brightness, contrast, and noise further degrade image quality. Poor image quality can be improved through image enhancement and noise removal techniques: image enhancement improves image quality using various methods, while noise removal eliminates the noise that damages it.
Keywords— Image Enhancement, Noise Removal, Denoising, Low Light Image, Image Processing.
Introduction
Digital images play an important role in the era of information and communication technology. They make it easy for people to share information, but problems such as noise and poor lighting often degrade image quality. Noise removal and image enhancement techniques are therefore needed to improve it.
Image enhancement is one of the initial processes in image processing: it improves the appearance of an image or converts the image into a form better suited for analysis by a machine or a human. Processes in image enhancement include brightness adjustment, contrast enhancement, contrast stretching, image histogram conversion, image smoothing, sharpening, edge detection, histogram equalization, and geometric transformation.
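Histogram equalization, one of the operations listed above, can be sketched in a few lines. The following is a minimal NumPy implementation for 8-bit grayscale images (the function name and test image are illustrative):

```python
import numpy as np

def histogram_equalization(img):
    """Spread an 8-bit grayscale image's intensities over the full
    0..255 range using the cumulative distribution function (CDF)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    # Classic equalization mapping: scale the CDF to 0..255.
    lut = np.round(np.clip((cdf - cdf_min) / (img.size - cdf_min), 0, 1) * 255)
    lut = lut.astype(np.uint8)
    return lut[img]  # apply the lookup table per pixel

# A dark image whose intensities sit in a narrow low band:
dark = np.random.randint(10, 60, size=(64, 64), dtype=np.uint8)
eq = histogram_equalization(dark)
```

After equalization, the narrow 10..60 band is stretched so that the darkest present intensity maps to 0 and the brightest to 255, which is why this simple operation visibly brightens under-exposed images.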
Enhancing images captured in low-light environments is a challenging problem in many research case studies on improving image quality. Images taken in low-light environments usually suffer from low visibility, unwanted noise, poor sharpness, and various other issues. Over the most recent decade, various techniques and methods have been proposed to improve the quality of low-light images, such as histogram-based methods [1], [2], [3], Retinex-based methods [4], [5], logarithmic image processing methods [6], [7], filtering-based methods [8], [9], and neural-network-based methods [10], [11], [12], [13], [14]. In this paper, we compare several methods and techniques for improving images captured in low-light conditions.
Literature Review
Various techniques and methods used for image enhancement and noise removal in low-light images are described below.
De-hazing
The histogram of a pixel-wise inverted low-light image or HDR image is very similar to the histogram of a hazy image, so de-hazing techniques can be used to enhance low-light images. For a low-light input image I, invert it to obtain a hazy-looking image:

R^c(x) = 255 − I^c(x)   (1)

where c is the color channel (RGB), I^c(x) is the intensity of a color channel at pixel x of the low-light input I, and R^c(x) is the corresponding intensity of the inverted image R. The haze removal algorithm is then applied to the inverted image using the haze imaging model:

R(x) = J(x)t(x) + A(1 − t(x))   (2)

where A is the global atmospheric light, R(x) is the intensity of pixel x that the camera captures, J(x) is the intensity of the original objects or scene, and t(x) describes what fraction of the light emitted from the objects or scene reaches the camera. After applying the haze removal algorithm, the result is inverted again to obtain the enhanced image [15].
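The invert-dehaze-invert pipeline can be sketched as follows. This is a hedged approximation: the atmospheric light and transmission estimates below are crude per-pixel stand-ins for the patch-based dark-channel estimates used in practice, and all names are illustrative:

```python
import numpy as np

def dehaze_enhance(img, omega=0.8, t_min=0.1):
    """Enhance a low-light RGB image (floats in [0, 1]) by inverting it,
    removing 'haze' from the inverted image, and inverting back.
    A is taken as the brightest inverted pixel per channel, and the
    transmission t(x) uses a per-pixel (not patch-based) dark channel."""
    R = 1.0 - img                                # Eq. (1): invert the low-light image
    A = R.reshape(-1, 3).max(axis=0)             # crude global atmospheric light
    dark = (R / A).min(axis=2)                   # per-pixel dark channel
    t = np.clip(1.0 - omega * dark, t_min, 1.0)  # transmission estimate
    J = (R - A) / t[..., None] + A               # solve Eq. (2) for J(x)
    return np.clip(1.0 - J, 0.0, 1.0)            # invert back -> enhanced image

low = np.random.uniform(0.0, 0.3, size=(32, 32, 3))  # synthetic dark image
out = dehaze_enhance(low)
```

Since the inverted low-light image is bright (hazy-looking), its transmission estimate is small, and dividing by t effectively amplifies the dark input, which is the source of this method's speed and simplicity.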
Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images (NPE)
NPE proposes an enhancement algorithm for non-uniform illumination images. In general, NPE makes three major contributions. First, a lightness-order-error (LOE) measure is proposed to assess naturalness preservation objectively. Second, a bright-pass filter is proposed to decompose an image into reflectance and illumination, which, respectively, determine the details and the naturalness of the image. Third, NPE proposes a bi-log transformation, which maps the illumination to strike a balance between details and naturalness [16].
A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement (BIMEF)
BIMEF proposes a multi-exposure fusion framework inspired by the human visual system (HVS). The framework has two stages: exposure adjustment, which simulates the human eye adjusting the exposure to generate a multi-exposure image set, and exposure fusion, which simulates the human brain fusing the generated images into the final enhanced result. Based on this framework, BIMEF proposes a dual-exposure fusion method. It first employs illumination estimation techniques to build the weight matrix for image fusion, then derives a camera response model from observation. Next, it finds the optimal exposure for the camera response model to generate a synthetic image that is well exposed in the regions where the original image is under-exposed. Finally, the enhanced result is obtained by fusing the input image with the synthetic image using the weight matrix [17].
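A simplified sketch of the dual-exposure fusion step, assuming a plain gamma curve as a stand-in for the paper's camera response model and a max-RGB illumination estimate for the weight matrix (both choices are illustrative, not BIMEF's exact formulas):

```python
import numpy as np

def fuse_dual_exposure(img, gamma=0.5):
    """Fuse a low-light RGB image (floats in [0, 1]) with a synthetic
    brighter exposure. The weight matrix is high where the input is
    already well exposed, so those pixels keep their original values,
    while under-exposed pixels take the brightened synthetic values."""
    illum = img.max(axis=2)        # max-RGB illumination estimate
    W = illum ** 0.5               # weight: high where already bright
    synthetic = img ** gamma       # stand-in for the camera response model
    # Per-pixel convex combination of the input and synthetic exposures:
    return W[..., None] * img + (1.0 - W[..., None]) * synthetic

low = np.random.uniform(0.0, 0.2, size=(16, 16, 3))
fused = fuse_dual_exposure(low)
```

Because the gamma curve brightens values below 1 and the weight vanishes in dark regions, the fusion brightens exactly the under-exposed pixels while leaving well-exposed pixels close to the original.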
A weighted variational model for simultaneous reflectance and illumination estimation (SRIE)
SRIE proposes a weighted variational model to estimate both the reflectance and the illumination from an observed image. The goal is an objective function whose minimizer yields a usable illumination and reflectance. To this end, SRIE observes that conventional methods use an objective function along the following lines:

E(l, r) = ‖l + r − s‖² + α‖∇l‖² + β‖∇r‖₁   (3)

where s denotes the log-transformed observed image. The logarithmic illumination l carries a squared gradient penalty to enforce spatial smoothness, while the logarithmic reflectance r is encouraged to be piece-wise constant through an L1-norm on its gradient. The fidelity term is the squared error between the log-transformed image and its decomposition into illumination and reflectance [18].
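The objective in (3) can be evaluated directly. The sketch below uses forward differences for the gradients and illustrative weights alpha and beta; SRIE's actual model adds spatially varying weights, which are omitted here:

```python
import numpy as np

def grad(u):
    """Forward-difference gradients (horizontal, vertical), same shape as u."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return gx, gy

def objective(l, r, s, alpha=1.0, beta=0.1):
    """Evaluate the conventional Retinex objective of Eq. (3):
    squared fidelity ||l + r - s||^2, a squared smoothness penalty on
    the log-illumination l, and an L1 (piece-wise constant) penalty on
    the log-reflectance r."""
    lx, ly = grad(l)
    rx, ry = grad(r)
    fidelity = np.sum((l + r - s) ** 2)
    smooth_l = alpha * (np.sum(lx ** 2) + np.sum(ly ** 2))
    sparse_r = beta * (np.sum(np.abs(rx)) + np.sum(np.abs(ry)))
    return fidelity + smooth_l + sparse_r

s = np.log1p(np.random.uniform(0.0, 1.0, size=(8, 8)))  # log-domain image
l0 = s.copy()            # trivial split: everything is illumination ...
r0 = np.zeros_like(s)    # ... and the reflectance is flat
```

The trivial split has zero fidelity and zero reflectance penalty but pays the full smoothness penalty on l, which is precisely why minimizing (3) pushes detail into r and leaves a smooth illumination in l.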
Low-light Image Enhancement via Illumination Map Estimation (LIME)
LIME is built on Retinex theory and uses the following (Retinex) model [19], [20], which explains the formation of a low-light image:

L = R ◦ T   (4)

where L and R are the captured image and the desired recovery, respectively, T represents the illumination map, and the operator ◦ means element-wise multiplication. LIME assumes that, for color images, the three channels share the same illumination map, and uses T to represent one-channel and three-channel illumination maps interchangeably. LIME proposes to simultaneously preserve the overall structure and smooth the textural details, starting from the initial illumination map T̂, with the following optimization problem:

min_T ‖T̂ − T‖_F² + α‖W ◦ ∇T‖₁   (5)

where α is the coefficient balancing the two terms, ‖·‖_F and ‖·‖₁ designate the Frobenius and ℓ1 norms respectively, W is the weight matrix, and ∇T is the first-order derivative filter, which in this work contains only ∇hT (horizontal) and ∇vT (vertical). The first term takes care of the fidelity between the initial map T̂ and the refined one T, while the second term enforces (structure-aware) smoothness [4].
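The initial illumination map and the final recovery step can be sketched as follows. Solving the optimization in (5) requires a sparse linear solver, so this sketch substitutes a simple gamma adjustment of the initial map for the refinement (an assumption for illustration, not LIME's actual solver):

```python
import numpy as np

def lime_enhance(img, gamma=0.8, eps=1e-3):
    """LIME-style enhancement of an RGB image (floats in [0, 1]).
    The initial illumination map is the per-pixel max over the RGB
    channels; a gamma curve stands in for the refined map of Eq. (5)."""
    T0 = img.max(axis=2)                          # initial illumination map
    T = np.clip(T0, eps, 1.0) ** gamma            # stand-in for the refined map
    return np.clip(img / T[..., None], 0.0, 1.0)  # recover R from L = R ◦ T

low = np.random.uniform(0.0, 0.4, size=(16, 16, 3))
out = lime_enhance(low)
```

Because the max-RGB map bounds every channel from above, dividing by T never pushes a channel past 1, and dark pixels (small T) are amplified the most.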
RetinexNet
RetinexNet constructs a deep-learning image decomposition based on the Retinex model. The enhancement process is divided into three steps: decomposition, adjustment, and reconstruction. In the decomposition step, a subnetwork, Decom-Net, decomposes the input image into reflectance and illumination. In the following adjustment step, an encoder-decoder-based Enhance-Net brightens the illumination; multi-scale concatenation is introduced to adjust the illumination from multi-scale perspectives, and noise on the reflectance is also removed at this step. Finally, RetinexNet reconstructs the adjusted illumination and reflectance to obtain the enhanced result [5].
A Deep Autoencoder approach to Natural Low-light Image Enhancement (LLNet)
LLNet proposes a deep autoencoder-based approach to identify signal features from low-light images and adaptively brighten them without over-amplifying or saturating the lighter parts of images with a high dynamic range. LLNet also proposes a training-data generation method that synthetically modifies images available in Internet databases to simulate low-light environments. Two types of deep architecture are explored: (i) simultaneous learning of contrast enhancement and denoising, and (ii) sequential learning of contrast enhancement and denoising using two modules [10].
A Convolutional Neural Network for Low-light Image Enhancement (LLCNN)
LLCNN proposes a CNN-based method to enhance low-light images, learning to adaptively enhance image contrast and increase image brightness. In LLCNN, a specially designed module helps training and improves performance. The architecture is as follows: one convolutional layer performs pre-processing to produce uniform input, another convolutional layer generates the enhanced image, and several specially designed convolutional modules are placed between those two layers. The network takes low-light images as input and processes them so that the output appears to have been captured in normal lighting conditions. All input images are generated using a nonlinear method to simulate low-light conditions [11].
Low-light Image/Video Enhancement Using CNNs (MBLLEN)
This method proposes a fully convolutional neural network, namely the multi-branch low-light enhancement network (MBLLEN). MBLLEN consists of three types of modules: the feature extraction module (FEM), the enhancement module (EM), and the fusion module (FM). The idea is to 1) extract rich features up to different levels via the FEM, 2) enhance the multi-level features respectively via the EM, and 3) obtain the final output by multi-branch fusion via the FM [12].
Learning to See in the Dark (Chen et al.)
Chen et al. introduce the See-in-the-Dark (SID) dataset of raw short-exposure low-light images with corresponding long-exposure reference images. Using this dataset, they develop a pipeline for processing low-light images based on end-to-end training of a fully convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data. Chen et al. use BM3D [21] as the reference denoising algorithm [13].
Comparison Table Analysis
TABLE 1

| Reference | Method | Strength | Limitation |
|---|---|---|---|
| [15] | De-hazing | Simple and fast enhancement algorithm | Image quality and brightness are still poor |
| [16] | NPE | Preserves naturalness and increases brightness | Image sharpness is lacking and noise remains |
| [17] | BIMEF | Slight image distortion and less noise | Image brightness is still lacking |
| [18] | SRIE | More satisfying image clarity | Noise remains and computing time is slow |
| [4] | LIME | Better brightness and object clarity | Noise remains and sharpness is still lacking |
| [5] | RetinexNet | High-quality restored result with reduced halo effect and color distortion | Poor image contrast and slow computing time |
| [10] | LLNet | Less noise and better brightness | Cannot process an entire high-resolution image due to its MLP structure |
| [11] | LLCNN | Less noise and good image brightness | High image distortion and less sharpness |
| [12] | MBLLEN | Less noise and better image clarity | High image distortion and less sharpness |
| [13] | Chen et al. | Good brightness, sharpness, and clarity with less noise | Slow computing time |
Conclusions
Several methods exist for improving the quality of low-light images, each with its own advantages and disadvantages. A good method enhances images by increasing brightness, sharpness, and clarity while minimizing distortion and eliminating existing noise. Based on this review, the best technique for enhancing low-light images is that of Chen et al., which can raise the brightness and sharpness of low-light images to good quality.
References
- H. Cheng and X. Shi, “A simple and effective histogram equalization approach to image enhancement,” Digit. Signal Process. A Rev. J., vol. 14, no. 2, pp. 158–170, 2004.
- M. Abdullah-Al-Wadud, M. H. Kabir, M. A. A. Dewan, and O. Chae, “A dynamic histogram equalization for image contrast enhancement,” IEEE Trans. Consum. Electron., vol. 53, no. 2, pp. 593–600, 2007.
- N. U. Khan, K. V. Arya, and M. Pattanaik, “Histogram statistics based variance controlled adaptive threshold in anisotropic diffusion for low contrast image enhancement,” Signal Processing, vol. 93, no. 6, pp. 1684–1693, 2013.
- X. Guo, Y. Li, and H. Ling, “LIME: Low-light image enhancement via illumination map estimation,” IEEE Trans. Image Process., vol. 26, no. 2, pp. 982–993, 2017.
- C. Wei, W. Wang, W. Yang, and J. Liu, “Deep Retinex Decomposition for Low-Light Enhancement,” in British Machine Vision Conference (BMVC), 2018.
- K. A. Panetta, E. J. Wharton, and S. S. Agaian, “Human visual system-based image enhancement and logarithmic contrast measure,” IEEE Trans. Syst. Man, Cybern. Part B Cybern., vol. 38, no. 1, pp. 174–188, 2008.
- K. Panetta, S. Agaian, Y. Zhou, and E. J. Wharton, “Parameterized logarithmic framework for image enhancement,” IEEE Trans. Syst. Man, Cybern. Part B Cybern., vol. 41, no. 2, pp. 460–473, 2011.
- L. Yuan and J. Sun, “Automatic exposure correction of consumer photographs,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 7575 LNCS, no. PART 4, pp. 771–785, 2012.
- Y. Wang, S. Zhuo, D. Tao, J. Bu, and N. Li, “Automatic local exposure correction using bright channel prior for under-exposed images,” Signal Processing, vol. 93, no. 11, pp. 3227–3238, 2013.
- K. G. Lore, A. Akintayo, and S. Sarkar, “LLNet: A deep autoencoder approach to natural low-light image enhancement,” Pattern Recognit., vol. 61, pp. 650–662, 2017.
- L. Tao, C. Zhu, G. Xiang, Y. Li, H. Jia, and X. Xie, “LLCNN: A convolutional neural network for low-light image enhancement,” in 2017 IEEE Visual Communications and Image Processing, VCIP 2017, 2017.
- F. Lv, F. Lu, J. Wu, and C. Lim, “MBLLEN: Low-light Image/Video Enhancement Using CNNs,” in British Machine Vision Conference (BMVC), 2018.
- C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to See in the Dark,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 1–10, 2018.
- F. Lv and F. Lu, “Attention-guided Low-light Image Enhancement,” pp. 1–12, 2019.
- X. Dong et al., “Fast efficient algorithm for enhancement of low lighting video,” in Proceedings - IEEE International Conference on Multimedia and Expo, 2011.
- S. Wang, J. Zheng, H. M. Hu, and B. Li, “Naturalness preserved enhancement algorithm for non-uniform illumination images,” IEEE Trans. Image Process., vol. 22, no. 9, pp. 3538–3548, 2013.
- Z. Ying, G. Li, and W. Gao, “A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement,” Nov. 2017.
- X. Fu, D. Zeng, Y. Huang, X. P. Zhang, and X. Ding, “A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016.
- E. H. Land, “The Retinex Theory of Color Vision,” Sci. Am., vol. 237, no. 6, pp. 108–128, 1977.
- S. Park, S. Yu, B. Moon, S. Ko, and J. Paik, “Low-light image enhancement using variational optimization-based retinex model,” IEEE Trans. Consum. Electron., vol. 63, no. 2, pp. 178–184, 2017.
- K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering,” IEEE Trans. Image Process., vol. 16, no. 8, pp. 2080–2095, 2007.