Is there any way to save storage on similar images?
from pe1uca@lemmy.pe1uca.dev to selfhosted@lemmy.world on 05 Sep 2024 15:34
https://lemmy.pe1uca.dev/post/1606306

So, I’m self-hosting Immich. The issue is that we tend to take a lot of pictures of the same scene/thing to later pick the best one, so we can easily end up with 5~10 photos which are basically duplicates, but not quite.
Some duplicate finding programs put those images at 95% or more similarity.

I’m wondering if there’s any way, probably at the filesystem level, for these similar images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?

#selfhosted


just_another_person@lemmy.world on 05 Sep 2024 15:37 next collapse

Well how would you know which ones you’d be okay with a program deleting or not? You’re the one taking the pictures.

Deduplication checking is about files that have exactly the same data payload contents. Filesystems don’t have a concept of images versus other files. They just store data objects.

cizra@lemm.ee on 05 Sep 2024 15:39 next collapse

You could store one “average” image, and deltas on it. Like Git stores your previous version + a bunch of branches on top.

WIPocket@lemmy.world on 05 Sep 2024 15:45 collapse

Note that Git doesn’t store deltas for ordinary (loose) objects. It will reuse unchanged files, but it stores a (compressed) copy of every file version that has ever existed in the history, under its SHA-1 hash. (Packfiles can delta-compress objects on repack, but that rarely helps with binary data like photos.)
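A quick Python illustration of what a loose blob actually is (the content here is just an example):

```python
import hashlib, zlib

# A loose Git blob is the complete file content with a "blob <size>\0" header,
# zlib-compressed and filed under the SHA-1 of header + content.
content = b"hello world\n"
store = b"blob %d\0" % len(content) + content
print(hashlib.sha1(store).hexdigest())  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
print(len(zlib.compress(store)))        # roughly what lands in .git/objects
```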

cizra@lemm.ee on 05 Sep 2024 18:45 collapse

Indeed! Interesting! I just ran an experiment with a non-compressible file (strings < /dev/urandom | head -n something) and it shows you’re right: the 2nd commit, where I added a tiny line to that file, increased the repo size by almost the size of the whole file.

Thanks for this bit.

pe1uca@lemmy.pe1uca.dev on 05 Sep 2024 15:42 collapse

I’m not saying to delete anything; I’m asking for the filesystem to save space with something similar to deduplication.
If I understand correctly, deduping works by sharing identical data blocks between files, so there’s no actual data loss.

WhatAmLemmy@lemmy.world on 05 Sep 2024 16:25 next collapse

I believe this is what some compression algorithms can do if you compress the similar photos into a single archive. It sounds like that’s what you want: archive each group (e.g. each day), have Immich cache the thumbnails, and only decompress the archive when you view the full resolution. Maybe test some algorithms like zstd against a group of similar photos archived together vs. compressed individually?
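For example, a quick probe with Python’s standard-library LZMA (the zstandard package would give you zstd; the folder name is made up):

```python
import lzma
from pathlib import Path

# Hypothetical folder holding one burst of near-duplicate shots.
photos = sorted(Path("burst_album").glob("*.jpg"))

# Compress each photo on its own vs. all of them as one concatenated blob.
individual = sum(len(lzma.compress(p.read_bytes())) for p in photos)
together = len(lzma.compress(b"".join(p.read_bytes() for p in photos)))

print(f"compressed individually: {individual} bytes")
print(f"compressed together:     {together} bytes")
# Because JPEGs are already compressed, the gap is usually small; a compressor
# with a long-range window (e.g. zstd --long) may find a bit more redundancy.
```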

FYI, filesystem deduplication works on content hashes, usually per block. Only exact 1:1 binary duplicates share the same hash.

Also, modern image and video codecs are already about as heavily optimized as computer scientists can currently manage on consumer hardware, which is why compressing a JPG or MP4 again offers negligible savings and sometimes even increases the file size.

dgriffith@aussie.zone on 07 Sep 2024 12:32 collapse

I don’t think there’s anything commercially available that can do it.

However, as an experiment, you could:

  • Get a group of photos from a burst shot
  • Encode them as individual frames using a modern video codec, e.g. with VLC.
  • See what kind of file size you get with the resulting video output.
  • See what artifacts are introduced when you play with encoder settings.

You could probably/eventually script this kind of operation if you have software that can automatically identify and group images.
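A rough sketch of that experiment in Python (assumes ffmpeg with libx265 is installed and the burst shots have been renamed frame_0001.jpg, frame_0002.jpg, ...):

```python
import subprocess
from pathlib import Path

burst = Path("burst_album")          # hypothetical folder holding one burst
output = burst / "burst.mkv"

# One frame per second, near-lossless x265; raise -crf to trade quality for
# size and inspect what artifacts show up.
subprocess.run([
    "ffmpeg", "-y",
    "-framerate", "1",
    "-i", str(burst / "frame_%04d.jpg"),
    "-c:v", "libx265", "-crf", "12",
    str(output),
], check=True)

originals = sum(p.stat().st_size for p in burst.glob("frame_*.jpg"))
print(f"original JPEGs: {originals} bytes, video: {output.stat().st_size} bytes")
```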

cizra@lemm.ee on 05 Sep 2024 15:37 next collapse

Cool idea. If this doesn’t exist, and it probably doesn’t, it sounds like a worthy project to get one’s MSc or perhaps even PhD.

smpl@discuss.tchncs.de on 05 Sep 2024 18:40 next collapse

The first thing I would do when writing such a paper would be to test current compression algorithms by creating a collage of the similar images and seeing how that compares to the size of the individual images.
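A rough sketch of that probe with Pillow (the folder name is made up, and it assumes all shots have the same dimensions):

```python
from pathlib import Path
from PIL import Image

# Hypothetical folder of similar shots.
paths = sorted(Path("burst_album").glob("*.jpg"))
frames = [Image.open(p).convert("RGB") for p in paths]
w, h = frames[0].size

# Paste the shots side by side into one sprite sheet and encode it once.
sheet = Image.new("RGB", (w * len(frames), h))
for i, frame in enumerate(frames):
    sheet.paste(frame, (i * w, 0))
sheet.save("sheet.webp", quality=80, method=6)

individual = sum(p.stat().st_size for p in paths)
print(f"individual files: {individual} bytes")
print(f"sprite sheet:     {Path('sheet.webp').stat().st_size} bytes")
# Caveat: still-image codecs code each region mostly independently, so the
# cross-image redundancy may go largely unused.
```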

simplymath@lemmy.world on 06 Sep 2024 09:36 collapse

Compressed length is already known to be a powerful metric for classification tasks, but requires polynomial time to do the classification. As much as I hate to admit it, you’re better off using a neural network because they work in linear time, or figuring out how to apply the kernel trick to the metric outlined in this paper.

a formal paper on using compression length as a measure of similarity: arxiv.org/pdf/cs/0111054

a blog post on this topic, applied to image classification:

jakobs.dev/solving-mnist-with-gzip/
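A minimal sketch of the NCD from that paper, with gzip as the compressor (file names are placeholders; gzip’s 32 KB window makes it a weak choice for multi-megabyte photos, but it shows the idea):

```python
import gzip

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: ~0 for near-identical data, ~1 for unrelated."""
    cx, cy = len(gzip.compress(x)), len(gzip.compress(y))
    cxy = len(gzip.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = open("burst_001.jpg", "rb").read()
b = open("burst_002.jpg", "rb").read()
print(ncd(a, b))   # lower means "compresses better together", i.e. more similar
```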

smpl@discuss.tchncs.de on 06 Sep 2024 11:50 collapse

I was not talking about classification. What I was talking about was a simple probe of how well a collage of similar images compares in compressed size to the images individually. The hypothesis is that a compression codec would compress images with a similar color distribution better in a spritesheet than by encoding each image individually. I don’t know, the savings might be negligible, but I’d assume there is something to gain, at least for some compression codecs. I doubt doing deduplication after compression has much to gain.

I think you’re overthinking the classification task. These images are very similar and I think comparing the color distribution would be adequate. It would of course be interesting to compare the different methods :)

smpl@discuss.tchncs.de on 06 Sep 2024 12:14 next collapse

Wait… this is exactly the problem a video codec solves. Scoot and give me some sample data!

simplymath@lemmy.world on 06 Sep 2024 15:52 collapse

Yeah. That’s what an MP4 does, but I was just saying that first you have to figure out which images are “close enough” to encode this way.

smpl@discuss.tchncs.de on 07 Sep 2024 04:46 collapse

It seems that we focus our interest in two different parts of the problem.

Finding the most optimal way to classify which images are best compressed in bulk is an interesting problem in itself. In this particular case, though, the person asking had already picked out the similar images by hand, and they can be identified by their timestamps to narrow down any similarity comparison. What I wanted to find out was how well the similar images can be compressed with various methods and codecs with minimal loss of quality. My goal was not to use it as a method to classify the images; it was simply to examine how well the compression stage would work with various methods.

simplymath@lemmy.world on 07 Sep 2024 09:27 collapse

And my point was that that work has likely already been done: the paper I linked is 20 years old, and it talks about the deep connection between “similarity” and “compresses well”. I bet if you read the paper, you’d see exactly why I chose to share it, particularly the equations that define NID and NCD.

The difference between “seeing how well similar images compress” and figuring out “which of these images are similar” is the quantized classification step, which is trivial compared to doing the distance comparison of every sample against every other sample. My point was that this distance measure (using compressors to measure similarity) has been published for at least 20 years, and that you should probably google “normalized compression distance” before spending any time implementing stuff, since it’s very much been done before.

simplymath@lemmy.world on 06 Sep 2024 15:54 collapse

Yeah. I understand. But first you have to cluster your images so you know which ones are similar and can then do the deduplication. This would be a powerful way to do that. It’s just expensive compared to other clustering algorithms.

My point in linking the paper is that “the probe” you suggested is a 20-year-old metric that is well understood. Using normalized compression distance as a measure of Kolmogorov complexity is what the linked paper is about. You don’t need to spend time showing that similar images compress better together than dissimilar ones; the compression length is itself a measure of similarity.

just_another_person@lemmy.world on 05 Sep 2024 19:05 next collapse

The problem is that OP is asking for something to automatically make decisions for him. Computers don’t make decisions, they follow instructions.

If you have 10 similar images and want a script to delete 9 you don’t want, then how would it know what to delete and keep?

If it doesn’t matter, or if you’ve already chosen the one out of the set you want, just go delete the rest. Easy.

As far as identifying similar images goes, this is high-school-level programming at best with a CV model. You just run a pass through something like YOLO and have it output similarity confidences for a set of images. The problem is that you need a source image to compare against. If you’re running through thousands of files comprising dozens or hundreds of sets of similar images, you need a source for each comparison.
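For what it’s worth, a perceptual hash rather than a full CV model is often enough for this kind of grouping. A sketch using the third-party ImageHash package (folder name and threshold are made up):

```python
from pathlib import Path
from PIL import Image
import imagehash  # third-party: pip install ImageHash

paths = sorted(Path("photos").glob("*.jpg"))
hashes = {p: imagehash.phash(Image.open(p)) for p in paths}

# Group shots whose perceptual hashes differ by only a few bits (Hamming distance).
THRESHOLD = 8
groups = []
for p, h in hashes.items():
    for group in groups:
        if h - hashes[group[0]] <= THRESHOLD:
            group.append(p)
            break
    else:
        groups.append([p])

for group in groups:
    if len(group) > 1:
        print("likely the same scene:", [g.name for g in group])
```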

cizra@lemm.ee on 05 Sep 2024 21:32 next collapse

OP didn’t want to delete anything, but to compress them all, exploiting the fact they’re similar to gain efficiency.

just_another_person@lemmy.world on 05 Sep 2024 21:36 collapse

Using that as an example. Same premise.

WhyJiffie@sh.itjust.works on 06 Sep 2024 12:57 collapse

No, not really.

The problem is that OP is asking for something to automatically make decisions for him. Computers don’t make decisions, they follow instructions.

The computer is not asked to make decisions like “pick the best image”. The computer is asked to optimize, like with lossless compression.

just_another_person@lemmy.world on 06 Sep 2024 13:15 collapse

That’s not what he’s asking at all

WhyJiffie@sh.itjust.works on 07 Sep 2024 12:00 collapse

Yes, they are. Reread the post; I just did, and I’m still confident.

simplymath@lemmy.world on 06 Sep 2024 09:44 collapse

Computers make decisions all the time: for example, how to route my packets from my instance to your instance. Classification functions are well understood in computer science in general and, while stochastic, can be constructed to be arbitrarily precise.

en.wikipedia.org/…/Probably_approximately_correct…

Human facial detection has been at 99% accuracy since the 90s, and OP’s task is likely a lot easier, since we can exploit time and location proximity data and know in advance that 10 pictures taken of Alice or Bob at one single party are probably a lot less variant than 10 pictures taken in different contexts over many years.

What OP is asking to do isn’t at all impossible; I’m just not sure you’ll save any money on power and GPU time compared to buying another HDD.

just_another_person@lemmy.world on 06 Sep 2024 10:18 collapse

Everything you just described is instruction. Everything from an input path and desired result can be tracked and followed to a conclusory instruction. That is not decision making.

Again. Computers do not make decisions.

simplymath@lemmy.world on 06 Sep 2024 10:54 collapse

Agree to disagree. Something makes a decision about how to classify the images and it’s certainly not the person writing 10 lines of code. I’d be interested in having a good faith discussion, but repeating a personal opinion isn’t really that. I suspect this is more of a metaphysics argument than anything and I don’t really care to spend more time on it.

I hope you have a wonderful day, even if we disagree.

just_another_person@lemmy.world on 06 Sep 2024 11:07 collapse

It’s Boolean. This isn’t an opinion, it’s a fact. Feel free to get informed though.

simplymath@lemmy.world on 06 Sep 2024 11:18 collapse

Then it should be easy to find peer reviewed sources that support that claim.

I found it incredibly easy to find countless articles suggesting that your Boolean is false. Weird hill to die on. Have a good day.

scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=c…

just_another_person@lemmy.world on 06 Sep 2024 11:43 collapse

LITERALLY from a “baby’s first 'puter” course: …mheducation.com/…/reading_selection_quiz.html#:~….

Or if you need something more ELI5: reddit.com/…/eli5_where_exactly_do_computers_make…

simplymath@lemmy.world on 06 Sep 2024 15:59 collapse

You seem very upset, so I hate to inform you that neither of those is a peer-reviewed source, and both are simplifying things.

“Learning” is definitely something a machine can do, and it can then use that experience to coordinate actions based on data that is inaccessible to the programmer. If that’s not “making a decision”, then we aren’t speaking the same language. Call it what you want and argue with the entire published field of AI, I guess. That’s certainly an option, but generally I find it useful for words to mean things without getting too pedantic.

just_another_person@lemmy.world on 06 Sep 2024 16:27 collapse

🙄

“Pedantic Asshole tries the whole ‘You seem upset’ but on the Internet and proceeds to try and explain their way out of being embarrassed about being wrong, so throws some idiotic semantics into a further argument while wrong.”

Great headline.

Computers also don’t learn, or change state. Apparently you didn’t read the CS101 link after all.

Also, another newsflash is coming in here, one sec:

“Textbooks and course plans written by educators and professors in the fields they are experts in are not ‘peer reviewed’ and worded for your amusement, dipshit.”

Whoa, that was a big one.

simplymath@lemmy.world on 06 Sep 2024 20:16 collapse

I think there’s probably a difference between an intro to computer science course and the PhD level papers that discuss the ability of machines to learn and decide, but my experience in this is limited to my PhD in the topic.

And, no, textbooks are often not peer reviewed in the same way and are generally written by graduate students. They have mistakes in them all the time. Or grand statements taken out of context. Or simplified explanations, because introducing the nuances of PAC-learnability to somebody who doesn’t understand a “for” loop is probably not very productive.

I came here to share some interesting material from my PhD research topic and you’re calling me an asshole. It sounds like you did not have a wonderful day and I’m sorry for that.

Did you try learning about how computers learn things and make decisions? It’s pretty neat

simplymath@lemmy.world on 06 Sep 2024 09:39 collapse

Definitely PhD.

It’s very much an ongoing and under-explored area of the field.

One of the biggest machine learning conferences is actually hosting a workshop on the relationship between compression and machine learning (because it’s very deep). neurips.cc/virtual/2024/workshop/84753

ptz@dubvee.org on 05 Sep 2024 15:40 next collapse

Not sure if a de-duplicating filesystem would help with that or not. It depends, I guess, on whether the similar images share anything at the block level.

Maybe try setting up a small test ZFS pool, enabling de-dup, adding some similar images, and then checking the de-dupe rate? If that works, then you can plan a more permanent ZFS (or other filesystem that supports de-duplication) setup to hold your images.
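Something along these lines, as a throwaway probe (needs root, a ZFS install, and ~2 GB free in /tmp; the pool name and paths are made up, and you’d tear it down afterwards with zpool destroy testpool):

```python
import shutil
import subprocess
from pathlib import Path

backing = Path("/tmp/zpool_test.img")
subprocess.run(["truncate", "-s", "2G", str(backing)], check=True)
subprocess.run(["zpool", "create", "testpool", str(backing)], check=True)
subprocess.run(["zfs", "set", "dedup=on", "testpool"], check=True)

# Copy a burst of similar photos onto the test pool (mounted at /testpool).
for photo in Path("burst_album").glob("*.jpg"):
    shutil.copy(photo, "/testpool/")

# A dedupratio near 1.00x means block-level dedup found nothing to share.
subprocess.run(["zpool", "list", "-o", "name,alloc,dedupratio", "testpool"], check=True)
```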

cizra@lemm.ee on 05 Sep 2024 15:45 collapse

Highly unlikely to succeed. The tiny differences are spread out all over the image.

ptz@dubvee.org on 05 Sep 2024 15:50 collapse

That’s what I was thinking, but wasn’t sure enough to say beyond “give it a shot and see”.

There might be some savings to be had by enabling compression, though it would depend on what format the images are in to start with. If they’re already in a compressed format, it would probably just be a waste of CPU to try compressing them further at the filesystem level.
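A quick way to check that on your own files (the path is hypothetical):

```python
import zlib
from pathlib import Path

data = Path("photo.jpg").read_bytes()           # already JPEG-compressed
squeezed = zlib.compress(data, level=9)
print(f"original:      {len(data)} bytes")
print(f"re-compressed: {len(squeezed)} bytes")  # typically within a percent or two
```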

NeoNachtwaechter@lemmy.world on 05 Sep 2024 16:19 next collapse

we can have 5~10 photos which are basically duplicates

Have any of you guys handled a similar situation?

I decide which one is the best and then delete the others. Sometimes I keep 2, but that’s an exception. I do that as early as possible.

I don’t mind about storage space at all (still many TB free), but keeping (near-)duplicates costs valuable time of my life. Therefore I avoid it.

bizdelnick@lemmy.ml on 05 Sep 2024 17:18 next collapse

No, it is impossible to solve this at the filesystem level. In theory, it would be possible to adapt some video codec to compress such photo series, but it would be a lot of work to integrate it into Immich.

beeng@discuss.tchncs.de on 05 Sep 2024 18:24 next collapse

When do you do the choosing? Try moving that step earlier ("to the left") in the process. That saves storage.

GravitySpoiled@lemmy.ml on 05 Sep 2024 18:59 next collapse

Storage is cheap. You suggest combining the images and storing the difference.

You can’t separate the images anymore. You have to store them in a container such that you have one common base image. You can then later on decide which image to look at.

You could also take a short video and only display one image.

AVIF uses a video compression algorithm (AV1), meaning an AVIF image is basically one frame of a video.

Btw, I wouldn’t worry about your problem. Storage is cheap. Try saving ten 4K videos and you’ll laugh about your image library.

tehnomad@lemm.ee on 05 Sep 2024 21:37 next collapse

Not sure if you’re aware, but Immich has a duplicate finder

Bakkoda@sh.itjust.works on 05 Sep 2024 21:55 next collapse

And immich-go can run one via the CLI.

lemmyvore@feddit.nl on 05 Sep 2024 23:16 collapse

From what I understand OP’s images aren’t the same image, just very similar.

tehnomad@lemm.ee on 06 Sep 2024 09:30 next collapse

Yeah, I think the duplicate finder uses a neural network to find duplicates. I went through my wedding album, which had a lot of burst shots, and it was able to detect similar images well.

ShortN0te@lemmy.ml on 06 Sep 2024 11:03 collapse

I would be surprised if there is any AI involved. Finding duplicates is a solved problem.

AI is only involved in object detection and face recognition.

tehnomad@lemm.ee on 06 Sep 2024 14:41 collapse

I wasn’t sure if it was AI or not. According to the description on GitHub:

Utilizes state-of-the-art algorithms to identify duplicates with precision based on hashing values and FAISS Vector Database using ResNet152.

Isn’t ResNet152 a neural network model? I was careful to say neural network instead of AI or machine learning.
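Roughly the kind of pipeline that description implies (not Immich’s actual code, just a sketch assuming torch, torchvision, and faiss are installed; file names are placeholders):

```python
import faiss
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms

# ResNet152 with its classifier head removed acts as a 2048-d feature extractor.
model = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def embed(path: str) -> np.ndarray:
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return model(x).squeeze(0).numpy()

paths = ["burst_001.jpg", "burst_002.jpg", "burst_003.jpg"]
vecs = np.stack([embed(p) for p in paths]).astype("float32")
faiss.normalize_L2(vecs)                 # so inner product = cosine similarity
index = faiss.IndexFlatIP(vecs.shape[1])
index.add(vecs)
scores, ids = index.search(vecs, 2)      # each image's nearest other image
print(scores, ids)
```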

ShortN0te@lemmy.ml on 06 Sep 2024 16:44 collapse

Thanks for that link.

AI is the umbrella term for ML, neural networks, etc.

ResNet152 seems to be used only to recognize objects in the image to help when comparing images. I was not aware of that, and I am not sure I would classify it as an actual tool for image deduplication, but I have not looked at the code to determine how much they are doing with it.

As of now they still state that they want to use ML technologies to help in the future, so either they forgot to edit the readme or they do not use it yet.

Bakkoda@sh.itjust.works on 06 Sep 2024 10:23 collapse

You can also adjust the threshold; however, that’s probably not a great idea unless you want to manually accept/reject the duplicates.

Lodra@programming.dev on 05 Sep 2024 21:46 next collapse

That basic idea is roughly how compression works in general. Think zip, tar.gz, etc. Identify snippets of heavily used byte sequences and create a “map” of where each sequence is used. These methods work great on simple types of data, like text files, where there’s a lot of repetition. Photos have a lot more randomness and tend not to compress as well. At least not so simply.

You could apply the same methods to multiple image files but I think you’ll run into the same challenge. They won’t compress very well. So you’d have to come up with a more nuanced strategy. It’s a fascinating idea that’s worth exploring. But you’re definitely in the realm of advanced algorithms, file formats, and storage devices.
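You can see that mechanism in action with Python’s zlib (random bytes standing in for already-compressed photo data):

```python
import os
import zlib

repetitive = b"the quick brown fox jumps over the lazy dog " * 500   # text-like
random_ish = os.urandom(len(repetitive))   # stands in for compressed image bytes

print(len(repetitive), "->", len(zlib.compress(repetitive)))  # shrinks dramatically
print(len(random_ish), "->", len(zlib.compress(random_ish)))  # barely shrinks, may even grow
```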

That’s apparently my long response for “the other responses are right”

31337@sh.itjust.works on 05 Sep 2024 23:35 collapse

Yeah, the image bytes are random because they’re already compressed (unless they’re bitmaps, which is not likely).

possiblylinux127@lemmy.zip on 05 Sep 2024 22:52 next collapse

Compression?

Showroom7561@lemmy.ca on 05 Sep 2024 22:58 next collapse

I went through the same dilemma. The old Synology photo software had a duplicate finder, but they removed that feature with the “new” version. But even with the duplicate finder, it wasn’t very powerful and offered no adjustability.

In the end, I paid for a program called “Excire Foto”, which can pull images from my NAS and not only finds duplicates in a customizable, accurate way, but also has a local AI search that bests even Google Photos.

It runs on Windows, keeps its own database, and can be used read-only if you only want to make use of the search feature.

To me, it was worth the investment.

Side note: if I only had <50,000 photos, then I’d probably find a free/cheaper way to do it. At the time, I had over 150,000 images going back to when the first digital cameras were available, plus hundreds of scanned negatives and traditional (film) photos, so I really didn’t want to spend weeks sorting it all out!

Oh, and the software can even tag your photos by subject, with the tags baked into the EXIF data (so other programs can make use of them).

Decronym@lemmy.decronym.xyz on 05 Sep 2024 23:05 next collapse

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Git: popular version control system, primarily for code
NAS: Network-Attached Storage
ZFS: Solaris/Linux filesystem focusing on data integrity

3 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.


carl_dungeon@lemmy.world on 06 Sep 2024 01:51 next collapse

No, that’s really not possible. I’d recommend tossing the similar ones after you pick the “best”.

rollerbang@lemmy.world on 06 Sep 2024 05:58 next collapse

Filesystem deduplication might be your best bet, though I don’t know what the potential savings would be.

Nibodhika@lemmy.world on 06 Sep 2024 07:18 collapse

This will be almost impossible. The short answer is that those pictures might be 95% similar but their binary data might be 100% different.

Long answer:

Images are essentially a long list of pixels, where each pixel is three numbers for red, green and blue (plus optionally alpha for transparency, but you’re talking about photos, so I’ll ignore that). That is a simple but very wasteful way to store an image, because an image very likely uses the same color in many places, so you can instead list all of the colors the image uses and represent each pixel as an index into that list; this makes images occupy a LOT less space. Some formats go further: because your eye can’t see the difference between two very close colors, they group similar colors into a single one, making the list of colors WAY smaller and the whole image a LOT more compressed (but notice that we lost information in this step).

Because of this, one image might choose color X at position Y while the other chooses color Z at position W. The binaries are now completely different, but an image comparison tool can tell you that X and Z are similar enough to count as the same, and that such matches account for a given percentage of the image. Outside of image software, though, nothing else knows that these two completely different binaries show the same thing.

If you hadn’t lost data by compressing the images in the first place, you could theoretically use data from one image to help compress another (though the results wouldn’t be great, since even uncompressed images won’t be as similar as you think), but images can be compressed a LOT more by throwing away unimportant data, so the trade-off isn’t worth it, which is why JPEG is so ubiquitous nowadays.
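You can see the “95% similar but 100% different bytes” effect for yourself with two burst shots (file names are placeholders):

```python
import numpy as np
from PIL import Image

a_path, b_path = "burst_001.jpg", "burst_002.jpg"   # two nearly identical shots

# Block-level view, roughly what a deduplicating filesystem sees.
a_raw, b_raw = open(a_path, "rb").read(), open(b_path, "rb").read()
block = 4096
blocks_a = {a_raw[i:i + block] for i in range(0, len(a_raw), block)}
shared = sum(1 for i in range(0, len(b_raw), block) if b_raw[i:i + block] in blocks_a)
print(f"identical 4 KiB blocks: {shared}")           # usually 0 for two separate JPEGs

# Pixel-level view, what an image comparison tool sees.
a_px = np.asarray(Image.open(a_path).convert("RGB"), dtype=np.int16)
b_px = np.asarray(Image.open(b_path).convert("RGB"), dtype=np.int16)
if a_px.shape == b_px.shape:
    print(f"mean per-channel difference: {np.abs(a_px - b_px).mean():.1f} out of 255")
```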

All that being said, a compression algorithm specifically designed for images could take advantage of this, but no general-purpose compressor can, and it’s unlikely anyone has gone to the trouble of building one for this specific case. When each image is already compressed, there’s little to be gained by writing something that considers colors across multiple images, decides whether an image is similar enough to be bundled into a given group, and so on. It’s an interesting question, and I wouldn’t be surprised if Google has such an algorithm to store together all the images it already knows were snapped sequentially. But for a home NAS, I think it’s unlikely you’ll find anything.

Besides all of this, storage is cheap; just buy an extra disk and move some files over to it. That’s likely to be your best way forward anyway.