A GAN-Based Technique for Realistic Image Inpainting and Restoration


Authors : Utsha Sarker; Karan Pratap Singh; Archy Biswas; Saurabh

Volume/Issue : Volume 10 - 2025, Issue 9 - September


Google Scholar : https://tinyurl.com/ww9j37uv

Scribd : https://tinyurl.com/4nkf73c6

DOI : https://doi.org/10.38124/ijisrt/25sep1444

Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.

Note : Google Scholar may take 30 to 40 days to display the article.


Abstract : Inpainting techniques restore missing image regions and have numerous applications in computer vision, such as image restoration and content replacement. This research presents an improved image-inpainting model that performs context-aware completion using a Generative Adversarial Network (GAN). The approach uses a two-stage GAN in which the generator learns to synthesize plausible pixel-level details from geometric and contextual information, while a discriminator distinguishes real from generated content to improve realism. The generator first produces a coarse completion of the image and then refines its details. Contextual loss functions are applied during training to enhance spatial and semantic consistency. To further improve the network’s ability to produce realistic results over large missing regions, the authors add perceptual, adversarial, and style losses to fuse texture and structural integrity. Evaluation on several benchmark datasets shows that the proposed GAN-based inpainting model achieves higher visual and structural coherence than competing approaches.

Keywords : Image Inpainting, Image Restoration, Image Completion, Digital Image Reconstruction, Content Manipulation, Generative Adversarial Networks (GANs).
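The two mechanical pieces the abstract describes, pasting generated content only into the missing region and combining contextual, perceptual, adversarial, and style losses into a single training objective, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the loss weights are assumptions chosen for the example.

```python
def composite(original, mask, generated):
    """Keep known pixels from the original image and fill only the
    masked (missing) region with generated content.
    mask == 1 marks a missing pixel; mask == 0 marks a known pixel."""
    return [[m * g + (1.0 - m) * o
             for o, m, g in zip(orow, mrow, grow)]
            for orow, mrow, grow in zip(original, mask, generated)]

def total_loss(l_context, l_perceptual, l_adversarial, l_style,
               w_ctx=1.0, w_perc=0.1, w_adv=0.01, w_style=250.0):
    """Weighted sum of the four loss terms listed in the abstract.
    The weights are illustrative assumptions, not values from the paper."""
    return (w_ctx * l_context + w_perc * l_perceptual
            + w_adv * l_adversarial + w_style * l_style)

# Toy 2x2 image with one missing pixel: the composite keeps the three
# known pixels and takes only the masked pixel from the generator output.
original = [[1.0, 2.0], [3.0, 4.0]]
mask = [[0.0, 1.0], [0.0, 0.0]]
generated = [[9.0, 9.0], [9.0, 9.0]]
print(composite(original, mask, generated))  # -> [[1.0, 9.0], [3.0, 4.0]]
```

In practice the same masking logic is applied after both the coarse and the refinement stage, so the discriminator and the loss terms only ever judge the filled-in region against its surrounding context.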


