Progressive Residual Fusion Dense Network for Efficient Image Denoising

In a paper published in Scientific Reports, researchers introduced a novel image-denoising approach to address computational burdens effectively. The lightweight network combined progressive residual and attention mechanisms to handle Gaussian and real-world noise, significantly reducing parameters while preserving essential image features. Empirical analyses demonstrated superior performance across diverse datasets, marking a notable advancement in image processing.

Graphical representation of dense block. Image Credit: https://www.nature.com/articles/s41598-024-60139-x

Related Work

Past work in image denoising has encompassed model-based and learning-based approaches. Model-based methods, such as non-local means (NLM) and block-matching and 3D filtering (BM3D), rely on prior modeling of image or noise distributions. While effective, these methods often require manual parameter tuning and computationally expensive algorithms.

In contrast, deep learning (DL) methods like denoising convolutional neural networks (DnCNN) and convolutional blind denoising (CBDNet) have shown promise due to their flexibility and powerful learning capabilities. However, challenges persist in balancing the preservation of spatial detail and edges against high-level context, and in leveraging feature information from shallow layers within deeper networks for optimal denoising outcomes.

Denoising Network Methodology

The proposed methodology draws inspiration from DenseNet's structure and prior denoising networks to enhance feature extraction while reducing computational complexity. The network adopts a progressive approach, merging shallow convolutional features with the deep features extracted by dense networks through three residual blocks. This ensures that shallow features are fully utilized for learning the noise distribution.
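The following minimal PyTorch sketch illustrates this idea of progressively re-injecting shallow convolutional features through three residual blocks while predicting a noise residual. The module names, layer counts, and channel widths are assumptions chosen for illustration, not the authors' exact architecture.

```python
import torch.nn as nn

class ResidualFusionBlock(nn.Module):
    """Hypothetical residual block: refines deep features, then re-injects shallow ones."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, deep_feat, shallow_feat):
        # Progressive fusion: shallow features return via a residual (skip) addition.
        return self.body(deep_feat) + shallow_feat


class ProgressiveFusionSketch(nn.Module):
    """Toy illustration of merging shallow conv features with deeper ones
    through three residual blocks, as described for the proposed network."""
    def __init__(self, in_channels=1, channels=64):
        super().__init__()
        self.shallow = nn.Sequential(
            nn.Conv2d(in_channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.ModuleList([ResidualFusionBlock(channels) for _ in range(3)])
        self.recon = nn.Conv2d(channels, in_channels, kernel_size=3, padding=1)

    def forward(self, noisy):
        shallow = self.shallow(noisy)
        feat = shallow
        for block in self.blocks:
            feat = block(feat, shallow)   # shallow features reused at every stage
        residual = self.recon(feat)       # network estimates the noise residual
        return noisy - residual           # denoised prediction
```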

A concatenation layer is introduced before the reconstruction output layer to consolidate features extracted by preceding dense networks and input them into an attention mechanism. This mechanism facilitates the assimilation of local and global features, enhancing denoising outcomes.

The network structure comprises three dense block (DB) modules, each consisting of convolutional layers followed by rectified linear unit (ReLU) activations. The core architecture also includes residual blocks together with transition and ReLU + convolution layers, facilitating intricate feature learning across layers 3 to 12.

Within each DB module, batch normalization (BN) is paired with convolutions of varying kernel sizes and filter counts to adaptively discern the noise distribution from the fused features. The network culminates in a reconstruction output layer that preserves the input dimensions while amalgamating the fused global features.
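A DenseNet-style dense block of this kind can be sketched as follows. The growth rate and layer count are illustrative assumptions, and the BN, ReLU, convolution ordering follows the standard DenseNet composite function rather than the paper's exact layout.

```python
import torch
import torch.nn as nn

class DenseBlockSketch(nn.Module):
    """Illustrative dense block: each layer receives the concatenation of all
    preceding feature maps (BN -> ReLU -> Conv), so shallow features stay
    visible to deeper layers. Growth rate and depth are assumptions."""
    def __init__(self, in_channels=64, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate
        self.out_channels = channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # reuse all earlier features
            features.append(out)
        return torch.cat(features, dim=1)
```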

The attention mechanism enhances feature fusion in lightweight convolutional neural networks while ensuring a small computational load. The convolutional attention feature fusion module (CAFFM) captures pairwise relationships between channel, height, and width dimensions through a three-branch structure, generating a complete three-dimensional attention map to adapt to changes in feature information of different sizes. CAFFM merges feature planes of different scales through weighted averaging, further decreasing computational load.
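CAFFM's exact formulation is not reproduced here, but a three-branch attention of this flavor, close in spirit to triplet attention over the channel, height, and width dimension pairs, can be sketched as below. The branch design, pooling choices, and the plain averaging at the end are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AttentionBranch(nn.Module):
    """One branch: pools the leading dimension and produces a sigmoid gate."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Concatenate max- and mean-pooled maps, then squash to one attention plane.
        pooled = torch.cat([x.max(dim=1, keepdim=True)[0],
                            x.mean(dim=1, keepdim=True)], dim=1)
        return torch.sigmoid(self.conv(pooled))


class ThreeBranchAttentionSketch(nn.Module):
    """Hypothetical CAFFM-like module: three branches attend to the (H, W),
    (C, W), and (C, H) dimension pairs by rotating the tensor, and their
    outputs are merged by simple averaging. Not the authors' exact design."""
    def __init__(self):
        super().__init__()
        self.branch_hw = AttentionBranch()
        self.branch_cw = AttentionBranch()
        self.branch_ch = AttentionBranch()

    def forward(self, x):                          # x: (N, C, H, W)
        # Branch 1: plain spatial attention over the (H, W) plane.
        out_hw = x * self.branch_hw(x)
        # Branch 2: swap C and H so the gate models (C, W) interactions.
        x_cw = x.permute(0, 2, 1, 3)               # (N, H, C, W)
        out_cw = (x_cw * self.branch_cw(x_cw)).permute(0, 2, 1, 3)
        # Branch 3: swap C and W so the gate models (C, H) interactions.
        x_ch = x.permute(0, 3, 2, 1)               # (N, W, H, C)
        out_ch = (x_ch * self.branch_ch(x_ch)).permute(0, 3, 2, 1)
        # Merge the three gated feature maps by averaging.
        return (out_hw + out_cw + out_ch) / 3.0
```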

The residual-fusion-based image denoising algorithm trains the network on cropped image blocks with added noise. During testing, noisy images are fed into the converged network, which outputs the predicted denoised images. Training iterates to reduce the loss function, minimizing the error between the estimated and ground-truth residuals and thereby achieving a better denoising effect.
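A single training step under this residual-learning scheme might look like the following sketch, assuming the network outputs the estimated noise residual and that synthetic Gaussian noise is added to clean patches on the fly. The noise level and the mean-squared-error loss are illustrative choices, not confirmed details of the paper.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, clean_patch, noise_sigma=25.0 / 255.0):
    """One residual-learning step: the model is assumed to predict the noise."""
    noise = torch.randn_like(clean_patch) * noise_sigma      # synthetic Gaussian noise
    noisy_patch = clean_patch + noise

    predicted_residual = model(noisy_patch)                   # estimated noise residual
    loss = F.mse_loss(predicted_residual, noise)              # error vs. ground-truth residual

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```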

Image Denoising Experiment

Images selected from the ImageNet and Berkeley segmentation dataset 400 (BSD400) datasets were employed for training, covering a diverse range of subjects. The training set included noise-affected grayscale and color images, each sized 180 x 180 pixels.

A cropping technique was applied to enhance training efficiency, resulting in 215,552 smaller images of 40 x 40 pixels. Additionally, the analysts used a validation set of 20 images to assess the network's performance. The test set comprised randomly selected images from the Set12, Set68, and Darmstadt noise (DND) datasets.
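A simple way to perform such cropping is sketched below; the stride is an assumption, since the article only reports the final count of 215,552 patches of 40 x 40 pixels.

```python
import numpy as np

def extract_patches(image, patch_size=40, stride=20):
    """Crop overlapping patch_size x patch_size blocks from a training image.
    The stride here is an assumption for illustration."""
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return np.stack(patches)

# Example: one 180 x 180 grayscale image yields 64 patches of 40 x 40 at stride 20.
demo = np.random.rand(180, 180).astype(np.float32)
print(extract_patches(demo).shape)   # (64, 40, 40)
```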

The experiment employed a batch size of 64, with 33,725 samples per epoch and 150 epochs. A fixed learning rate of 0.001 was used throughout training. The PyTorch DL framework was used for training and testing, with PyCharm as the development environment running Python version 3.9.
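In PyTorch, the reported settings might be configured roughly as in the sketch below; the optimizer choice is an assumption, since the article specifies only the batch size, learning rate, and epoch count.

```python
import torch

# Reported settings: batch size 64, fixed learning rate 1e-3, 150 epochs.
BATCH_SIZE, LEARNING_RATE, EPOCHS = 64, 1e-3, 150

model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)            # placeholder for the denoising network
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)  # optimizer choice is an assumption
```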

Evaluation metrics included both subjective and objective measures. The subjective evaluation involved visual inspection of denoised images, while the objective assessment utilized peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity index for color images (FSIMc). The team calculated PSNR and SSIM to quantitatively evaluate image quality, while FSIMc extended the assessment to measure color image similarity.
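As a concrete reference, PSNR can be computed directly from the mean squared error, as in this small sketch; SSIM and FSIMc have more involved definitions and in practice are usually taken from existing implementations rather than re-derived.

```python
import numpy as np

def psnr(reference, denoised, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a denoised image."""
    mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

# SSIM is commonly computed with scikit-image:
# from skimage.metrics import structural_similarity as ssim
```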

The experimental results encompassed qualitative and quantitative evaluations across various datasets and noise levels. Visual comparisons were made between the proposed method and prior approaches, highlighting the denoising efficacy in image quality, noise reduction, and detail preservation.

Quantitative metrics such as PSNR and SSIM further validated the superior performance of the proposed model compared to established techniques. Additionally, computational complexity analysis demonstrated the efficiency of the proposed model, emphasizing its practicality for real-world applications.

Denoising Innovation Summary

To summarize, the team addressed challenges in current DL image denoising methods by combining dense block architectures and residual learning (RL) frameworks. The progressive residual fusion dense network efficiently mitigated Gaussian and real-world noise by progressively integrating shallow and deep features.

Dense blocks mapped noise distributions, reducing network parameters while extracting local image attributes. Progressive fusion merged convolutional features, culminating in a robust denoising process, while the tripartite attention mechanism, CAFFM, enhanced feature fusion. Empirical studies demonstrated significant performance improvements across various datasets, outperforming more than 20 existing methods on PSNR, SSIM, and FSIMc metrics.

Journal reference:
Scientific Reports, https://www.nature.com/articles/s41598-024-60139-x

Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.
