Hey,
I'd like to suggest an improvement to MakeMKV for the moments when you run into "slow HDD" messages.
For example, my laptop has to reach the NAS over WiFi. Usually that's fine, but sometimes I run into slow-copy messages.
I noticed my RAM was not being used as a buffer; in my case, enough free RAM for nearly half a UHD disc was available for buffering, but it never got used.
MakeMKV could also flush its buffer after finishing a file, so the buffer would be emptied before the next file starts, instead of only when the whole backup is finished.
For cosmetic reasons, a small "flushing buffer" message would also be nice.
I saw in Task Manager that my WiFi was still active, so I knew it was the buffer still draining.
Hopefully this isn't too confusing. In short:
- use existing RAM for buffering
and/or
- flush buffer before next file
This would also help in situations other than WiFi, such as a slow external USB disk, and so on.
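Roughly what I mean, as a sketch in Python (this is not how MakeMKV works internally, just the idea; the chunk size, buffer size, simulated speeds, and paths are all made up for the example):

```python
import queue
import threading
import time

CHUNK = 1024 * 1024          # 1 MiB chunks (arbitrary for the example)
BUFFER_CHUNKS = 512          # ~512 MiB of RAM used as buffer (assumption)

def read_file(title_bytes, buf):
    """Producer: read from the (fast) optical drive into the RAM buffer."""
    remaining = title_bytes
    while remaining > 0:
        data = b"\0" * min(CHUNK, remaining)   # stand-in for a real disc read
        buf.put(data)                          # blocks only when the buffer is full
        remaining -= len(data)

def write_destination(buf, stop, out_path):
    """Consumer: drain the buffer to the slow destination (NAS/USB/etc.)."""
    with open(out_path, "wb") as f:
        while not (stop.is_set() and buf.empty()):
            try:
                data = buf.get(timeout=0.1)
            except queue.Empty:
                continue
            f.write(data)
            time.sleep(0.001)                  # simulate a slow destination
            buf.task_done()

def rip(titles, out_path="/tmp/fake_backup.bin"):   # throwaway example path
    buf = queue.Queue(maxsize=BUFFER_CHUNKS)
    stop = threading.Event()
    writer = threading.Thread(target=write_destination, args=(buf, stop, out_path))
    writer.start()
    for i, size in enumerate(titles):
        read_file(size, buf)
        print(f"file {i}: read done, flushing buffer before next file...")
        buf.join()                             # the suggested flush between files
    stop.set()
    writer.join()

if __name__ == "__main__":
    rip([8 * CHUNK, 4 * CHUNK])                # two small pretend titles
```

The large bounded queue is the "use existing RAM" part, and the join between files is the "flush before next file" part, so every file starts with an empty buffer and the drive can read at full speed again.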
makemkv feature request slow hdd
Re: makemkv feature request slow hdd
- How do you know the drive isn't fast enough to write to unless you try to write to it?
- If the data is already being written as fast as possible, what performance changes are possible?
Increasing buffer size won't speed up your rips, so maybe you should request an option so that the message can be ignored... but why?
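To put rough numbers on why a bigger buffer doesn't make the whole rip faster (the rates here are invented, just to show the arithmetic):

```python
# Hypothetical sustained rates, purely for illustration.
read_mb_s  = 36.0    # what the optical drive can deliver
write_mb_s = 20.0    # what the WiFi/NAS path can absorb
buffer_mb  = 4096.0  # a generous 4 GiB RAM buffer
disc_mb    = 66000.0 # a full UHD disc

# With read > write, the buffer fills at (read - write) MB/s and the
# drive then has to slow to the write rate anyway:
seconds_until_full = buffer_mb / (read_mb_s - write_mb_s)
print(f"buffer full after ~{seconds_until_full / 60:.1f} min")

# Total time is still set by the slower side, buffer or no buffer:
total_minutes = disc_mb / write_mb_s / 60
print(f"whole rip still takes ~{total_minutes:.0f} min at the write rate")
```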
ChaosEnergy
Re: makemkv feature request slow hdd
The messages would be avoided if the available RAM were used, or, once you have already hit the "limiter", if the buffer were flushed before the next .mts file.
The Blu-ray drive seems to read slower after you run into this warning message, which makes sense; otherwise it would be reading data that can't be stored, since not all of the RAM is used and the buffer can't be flushed mid-file.
So using the available RAM, or at least flushing the buffer between files, would be a way to reduce the situations where the drive has to be slowed down, and keep it reading at full speed for longer.
Re: makemkv feature request slow hdd
Personally, I use a local external SSD as my work area when I'm ripping discs and making .mkv files. Only when I'm satisfied with my files do I bother to push them to my NAS. Reasonably sized reasonably fast external SSDs are reasonably priced these days.
ChaosEnergy
Re: makemkv feature request slow hdd
Surely everyone will have some workaround of their own.
I just suggested something that could be done within MakeMKV itself.
Re: makemkv feature request slow hdd
Apparently not.
If the drive slows down and does not speed up again, then I can see wanting a larger buffer, but that doesn't change the point of the message. However, you would need to show that the drive slows down and never speeds back up to warrant any buffer changes.
I rip directly to a 104GB tmpfs on one of these boards (cheap 20x HDD storage): https://www.supermicro.com/en/products/ ... V-4C-7TP4F. Of course, if I ripped to a USB 2.0 flash drive, I'd rightfully get the message.
standforme
Re: makemkv feature request slow hdd
The problem with a large buffer is that then you have to wait a long time after the reading is done, to flush it out. A very large buffer could also get in the way of other programs working correctly, if we're talking going into the gigabytes. How do you then balance it against available RAM, when running multiple instances? How do you keep people from keeping it full, ripping disc after disc, and then sleeping or turning off their computers long before it's done flushing?
I wouldn't mind being able to suppress the messages, when I'm doing parallel rips, or remuxing while ripping, on a slow external HDD. But, the above is only the tip of the iceberg, for this problem, in general. Optimal buffering, with variable read and write speeds, is a genuinely difficult problem, and limiting reads by actual writes is a very safe and simple way to deal with it.
Your best bet is to either use some local storage for the rip, then copy it over later, or to use wired networking (USB NICs are a thing, and they work well).
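A minimal sketch of that "limiting reads by actual writes" idea, in Python (not MakeMKV's actual code, just the general pattern; the chunk size and paths are placeholders):

```python
# The "safe and simple" pattern: each disc read is gated by the previous
# destination write, so in-flight data never exceeds one small chunk.
CHUNK = 4 * 1024 * 1024  # at most 4 MiB of read-ahead (arbitrary)

def copy_limited_by_writes(src_path, dst_path):
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            data = src.read(CHUNK)      # next read only starts after...
            if not data:
                break
            dst.write(data)             # ...the previous chunk has been written
        dst.flush()

# e.g. copy_limited_by_writes("disc_image.iso", "/mnt/nas/backup.iso")  # placeholder paths
```

If the destination stalls, the next read stalls with it, which is exactly the point where a slow-writing warning makes sense, and nothing beyond one chunk ever sits dirty in RAM.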
Re: makemkv feature request slow hdd
Get a local drive that is fast enough and copy to your NAS. Mike will not accommodate your crippled hardware / software.
standforme wrote: ↑Mon Nov 17, 2025 9:35 pm
The problem with a large buffer is that then you have to wait a long time after the reading is done, to flush it out...
Re: makemkv feature request slow hdd
I'm not advocating anything about the slow-disk message, but if the optical drive keeps having to throttle against the destination volume, you're putting needless wear on your optical drive. Also, OOM race conditions, post-processing times, multi-threading, etc. are all the host's concern. Babysitting the host is not the job of a user application, since certain guarantees are expected from the host. Buffer management can easily be sorted out with an example from just about any data-structures/algorithms textbook, so I'm sure MakeMKV's buffer size is already capable of being user defined.
standforme wrote: ↑Mon Nov 17, 2025 9:35 pm
The problem with a large buffer is that then you have to wait a long time after the reading is done...
Of course a faster destination makes this thread deletable.
Again though, if you have the RAM, why not create a tmpfs like I do? It's effectively the same thing as defining the buffer size in MakeMKV (and if it's not, then MakeMKV is having a hard time reading the disc anyway). I can see an 8GB buffer helping, but that is still a race to the bottom if your drive is too slow.
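For anyone who wants to try the tmpfs route, a rough sketch (Linux only, needs root; the mount point and size are just examples, and the Python wrapper only runs the standard mount command):

```python
import subprocess

# Create an 8 GiB RAM-backed filesystem to use as MakeMKV's output folder.
# Mount point and size are placeholders; pick what your free RAM allows.
MOUNT_POINT = "/mnt/ripbuf"
SIZE = "8G"

subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
subprocess.run(
    ["mount", "-t", "tmpfs", "-o", f"size={SIZE}", "tmpfs", MOUNT_POINT],
    check=True,
)
# Point MakeMKV's output folder at MOUNT_POINT, rip, copy the finished
# files to the NAS, then umount the tmpfs when you're done.
```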
standforme
Re: makemkv feature request slow hdd
Speeding up and slowing down a lot only seems to happen with excessively DRMed, poorly balanced, or scratched-up discs, regardless of the destination storage speed, IME. When mine get slow, generally from overloading a 5400 RPM HDD, they stay at a consistent slower speed.
Buffer management can easily be sorted out with any given example from just about any data structure/algorithm text book
Only with the complete world knowledge of the textbook writer's ideal problem universe. No real large application, including any major OS, has yet managed to figure it out in a general way that maintains optimal performance and safety for everybody. It comes up on OS and DBMS mailing lists and forums all the time, and has for decades. Ever see those big file transfers that start out at GB/s, then slow down to the actual device speeds of MB/s? Same thing going on there. Not enough is absolutely known about each side of the task to make a solution that's truly better and always works correctly. But too little buffering guarantees that things will be slower than they ought to be. Every seemingly and intuitively clear general solution gets thwarted by real-world details and risk factors. In the end, some task-specific solution that won't suit everyone's general needs gets used, or you get to tune it as a user. MakeMKV is clearly designed to have as few user options as reasonable, and it takes a very safe route, either with a small fixed buffer or by comparing write and read speeds (I haven't tried looking deeply into it), to keep dirty memory time and space minimized.
I only have 64GB RAM, for instance, so tmpfs is out of the question. If you have more, only rip one disc at a time, or don't do UHD, it could work, though.
If you only have WiFi to your NAS, on a laptop, a local external drive would make the most sense, IMO, with the results pushed to the NAS afterwards, unless you are physically close enough to add wired networking. WiFi and consistency just do not go together, and the best solution to WiFi performance problems is almost always to find a way to bypass WiFi entirely.