Image Inpainting Using Stable Diffusion Model


Authors : Pradeep Rao K. B.; Prajwal P.; Rithik B. R.; Sandeep; Shreya B. S.

Volume/Issue : Volume 10 - 2025, Issue 11 - November


Google Scholar : https://tinyurl.com/55shdprr

Scribd : https://tinyurl.com/32jj9kfj

DOI : https://doi.org/10.38124/ijisrt/25nov1318



Abstract : Image inpainting is the task of reconstructing missing, occluded, or degraded regions of an image so that the result is visually coherent with the surrounding content. Traditional inpainting techniques relied on interpolation, texture propagation, or patch-based synthesis and largely lacked semantic awareness. Recent advances in generative diffusion models enable high-quality, context-aware inpainting guided by natural-language prompts. This research presents an AI-driven inpainting system built on the Stable Diffusion Inpainting Pipeline and exposed through a user-friendly Gradio interface. The proposed system combines image masking, CLIP-based text conditioning, and latent diffusion to produce realistic, semantically aligned reconstructions. Experimental results demonstrate strong qualitative performance and robust segmentation behavior, supported by evaluation metrics generated with SAM (Segment Anything Model). This study highlights the effectiveness of diffusion-based inpainting for restoration, object removal, and creative visual editing tasks.

Keywords : Image Inpainting, Diffusion Models, Variational Autoencoder, Segment Anything Model, Latent Diffusion Model.
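
To make the described architecture concrete, the sketch below shows how such a system can be assembled from off-the-shelf components: the Stable Diffusion inpainting pipeline from Hugging Face diffusers (which handles VAE encoding, CLIP-based text conditioning, and latent diffusion internally), wrapped in a simple Gradio interface. This is a minimal illustration of the approach, not the authors' exact implementation; the checkpoint name, the 512x512 resizing, the sampler settings, and the two-image mask input are assumptions made for the example.

import torch
import gradio as gr
from diffusers import StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Load a pretrained Stable Diffusion inpainting pipeline. The pipeline
# bundles the VAE, the U-Net denoiser, and the CLIP text encoder, so the
# text prompt is converted into conditioning embeddings internally.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed checkpoint
    torch_dtype=dtype,
).to(device)

def inpaint(image, mask, prompt):
    # SD 1.x inpainting checkpoints work best at 512x512; resize both
    # the source image and the mask (white = region to regenerate).
    image = image.convert("RGB").resize((512, 512))
    mask = mask.convert("L").resize((512, 512))
    result = pipe(
        prompt=prompt,
        image=image,
        mask_image=mask,
        num_inference_steps=50,  # assumed sampler settings
        guidance_scale=7.5,
    )
    return result.images[0]

demo = gr.Interface(
    fn=inpaint,
    inputs=[
        gr.Image(type="pil", label="Input image"),
        gr.Image(type="pil", label="Mask (white = area to inpaint)"),
        gr.Textbox(label="Text prompt"),
    ],
    outputs=gr.Image(label="Inpainted result"),
    title="Stable Diffusion Inpainting",
)

if __name__ == "__main__":
    demo.launch()

In a workflow like the one the abstract describes, the mask need not be drawn or uploaded by hand; it could instead be generated automatically, for example from Segment Anything Model (SAM) output, and fed to the same pipeline call.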


