Except LLMs tend to be very big compared to standard decompression programs, and often require a GPU with adequate VRAM to run at a reasonable speed. This is a very big usability issue IMO. If decompression could be done with a smaller and faster program (maybe also generated by the LLM?), it could be very useful and see pretty wide adoption (e.g. for future game devs who want to reduce their game size from 150GB to 130GB).
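For anyone wondering why the decompressor is stuck carrying the model around: model-based compression only inverts cleanly if both sides hold the exact same predictor. Here is a toy sketch of my own (not from any of the linked work), where a trivial order-0 frequency ranking stands in for an LLM's next-token probabilities; the decoder can only reverse the encoder's output because it rebuilds an identical "model".

```python
from collections import Counter

def build_model(text: str):
    # Rank symbols by frequency; with an LLM this ranking would be
    # context-dependent next-token probabilities instead.
    return [ch for ch, _ in Counter(text).most_common()]

def encode(text: str, ranking):
    # Emit each symbol's rank under the model; frequent symbols get small
    # ranks, which an entropy coder could then store cheaply.
    return [ranking.index(ch) for ch in text]

def decode(ranks, ranking):
    # Inverting the ranks requires the very same ranking the encoder used.
    return "".join(ranking[r] for r in ranks)

model = build_model("abracadabra")
ranks = encode("abracadabra", model)
assert decode(ranks, model) == "abracadabra"  # fails with any other model
```

Swap the toy ranking for a multi-gigabyte LLM and you see the problem: the "smaller and faster program" still has to reproduce the model's predictions exactly, which is why shipping the model (or something equivalent) seems hard to avoid.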
I don’t know how this would apply to decompression models in practice, but in general, deep learning is VRAM intensive mainly during training, because training runs on large batches of data at once for better generalization, and all of that data has to sit in memory.
But once the model is trained, the end user only feeds in data one item at a time, so VRAM usually isn’t an issue. There are also lightweight models designed to run on lower-end hardware.
Training tends to be more compute intensive, while inference is more likely to be able to run on a smaller hardware footprint.
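To make the training-vs-inference memory gap concrete, here is a minimal PyTorch sketch (the model and sizes are hypothetical stand-ins, not an actual decompression model): a training step has to hold a whole batch plus gradients and optimizer state, while inference can run one sample at a time under no_grad.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

# Training: a large batch, gradients for every parameter, and Adam's extra
# state tensors all have to fit in memory at the same time.
optimizer = torch.optim.Adam(model.parameters())
batch = torch.randn(256, 1024)      # 256 samples held at once
loss = model(batch).pow(2).mean()
loss.backward()                     # allocates gradients for all parameters
optimizer.step()                    # Adam keeps two extra tensors per parameter

# Inference: one sample at a time, no gradients, no optimizer state.
with torch.no_grad():
    sample = torch.randn(1, 1024)
    out = model(sample)             # only activations for a single sample
```

That said, for a multi-billion-parameter LLM the weights alone are already tens of gigabytes, so inference being lighter than training doesn’t by itself make the decompressor small.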
The neater idea would be a standard model or set of models, so that one ~30GB program could cover ~80% of target cases; games and video seem like good candidates for this.