Most of us store a sizeable collection of images on our computers, and a large collection tends to accumulate duplicates along the way, so it is prudent to manage storage space efficiently.
Detecting duplicate images in a set is a time-consuming task that can be automated, and the duplicate data can then be removed to save space. As we use our phones more, unwanted duplicate photo and picture files accumulate across the device, in practically every folder. These duplicates consume a large share of phone memory and slow down the phone's performance, and finding and removing them manually is difficult. Because the naked eye cannot reliably judge structural similarity, we propose an approach based on structural information degradation. As a practical solution to this problem, we compute a structural similarity index and demonstrate it on a set of images from our database.
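The structural similarity index mentioned above combines luminance, contrast, and structure comparisons between two images. The paper's exact implementation is not given here, but a minimal sketch of a global (whole-image) SSIM for 8-bit grayscale arrays, following the standard definition with stabilizing constants K1 = 0.01 and K2 = 0.03, could look like this:

```python
import numpy as np

def ssim(x, y, data_range=255.0):
    """Global SSIM between two same-shaped grayscale images.

    Combines luminance (means), contrast (variances) and structure
    (covariance) terms; C1 and C2 stabilize the ratios when the
    denominators are near zero.
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()          # luminance
    vx, vy = x.var(), y.var()            # contrast
    cov = ((x - mx) * (y - my)).mean()   # structure
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

Identical images score 1.0 and the score drops as structural information degrades, so duplicates can be flagged with a threshold close to 1. Production implementations typically compute SSIM over local windows and average the result rather than over the whole image at once.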
Duplicate photo finders come in handy for this otherwise time-consuming search. Finally, we compare the computation time and power required for processing on multiple cores versus a single-core thread, and provide benchmarks and graphical representations of the results.
Keywords: Single core; Multithreading; Multiprocessing; RGB; Luminance; Contrast; Structure; Similarity Index
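The multi-core vs. single-core benchmark described in the abstract can be sketched with Python's standard `multiprocessing` module. The `pairwise_score` function below is a hypothetical stand-in for one image-pair comparison (any CPU-bound function benchmarks the same way); the timing pattern is what matters:

```python
import time
from multiprocessing import Pool, cpu_count

def pairwise_score(pair):
    # Stand-in for comparing one image pair (e.g. an SSIM call);
    # deliberately CPU-bound so the core count affects wall time.
    a, b = pair
    total = 0
    for i in range(100_000):
        total += (a * i - b) % 7
    return total

if __name__ == "__main__":
    pairs = [(i, i + 1) for i in range(16)]

    # Single-core baseline: process every pair in one thread.
    t0 = time.perf_counter()
    serial = [pairwise_score(p) for p in pairs]
    t_serial = time.perf_counter() - t0

    # Multiprocessing: fan the pairs out across all cores.
    t0 = time.perf_counter()
    with Pool(cpu_count()) as pool:
        parallel = pool.map(pairwise_score, pairs)
    t_parallel = time.perf_counter() - t0

    assert serial == parallel  # same results, different wall time
    print(f"serial:   {t_serial:.3f}s")
    print(f"parallel: {t_parallel:.3f}s on {cpu_count()} cores")
```

Because of Python's global interpreter lock, `multiprocessing` (separate processes) rather than `threading` is the usual route to a real speedup for CPU-bound image comparison; the measured times can then be plotted to produce the benchmarks the abstract refers to.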