Performance bottleneck

The place to discuss the Mac OS X version of MakeMKV
Post Reply
FayeP
Posts: 3
Joined: Sat Oct 17, 2009 3:06 pm

Performance bottleneck

Post by FayeP »

Hi,

I've only just joined the forums, but I've been using MakeMKV for a little while and I have to say it's my favourite application. I've been through my modest library and submitted a dozen and some BD+ dumps for the magic to be done. The procedure works really well and means only a few days of not being able to watch a certain disc. I noticed that in my collection it only seems to be (the majority of) Fox and MGM titles which are BD+ protected; everyone else seems to be using AACS only. That might save people some time when determining which titles to try in their drives to aid the BD+ decryption cause.

Unlike some people, I have no interest in keeping a permanent archive of my BDs on the hard drive... I'd need at least 5TB so far and it's just not practical. It's much more convenient to just pick up the disc and play it... and maybe convert it to mp4 for iPhone playback when there isn't a 'digital copy' included.

Of course because I'm only interested in watching the titles, it means a decode cycle every time I watch, which is why I have been really pleased with AnyDVDHD when booting to Windows to watch BDs. I'm not a fan of booting out of MacOS though and that's why I'm falling in love with MakeMKV. I think it could be even better though.

The profile of the app is that it reads (and the output file size is shown as increasing), reaching about 4x speed on my Mac Mini, but then all processing stops, CPU usage falls to 0, and the write buffers purge at just over 30Mb/s before the app continues to process. It repeats this pattern until the file is completely processed. If it could keep reading and processing while the write buffers were flushing, we'd be off and running and might get up to the 6x speed my reader is capable of.

The app is using blocking write calls, and the writes happen on the same thread as the processing. This is why, when the buffers fill, the app pauses (waiting for write to return) rather than keeping those buffers filled and writing out at maximum speed all the time.
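To illustrate the split I'm suggesting (not MakeMKV's actual code, which I haven't seen): move the write call onto its own thread behind a bounded queue, so the processing thread only ever stalls when the writer falls a few chunks behind. A minimal sketch in Python, with made-up chunk sizes:

```python
import os
import queue
import tempfile
import threading

CHUNK = 64 * 1024      # stand-in "decrypted block" size (hypothetical)
NCHUNKS = 32           # number of blocks to push through
DEPTH = 4              # producer only stalls when 4 chunks have backed up

def process_and_write(out_path):
    """Producer/consumer split: processing never waits on write()
    unless the writer is DEPTH chunks behind."""
    q = queue.Queue(maxsize=DEPTH)

    def writer():
        with open(out_path, "wb") as f:
            while True:
                buf = q.get()
                if buf is None:          # sentinel: no more data
                    break
                f.write(buf)             # blocking write lands on THIS thread only

    t = threading.Thread(target=writer)
    t.start()
    for i in range(NCHUNKS):
        buf = bytes([i & 0xFF]) * CHUNK  # stand-in for read+decrypt work
        q.put(buf)                       # blocks only if the queue is full
    q.put(None)                          # signal end of stream
    t.join()

out = os.path.join(tempfile.mkdtemp(), "demo.bin")
process_and_write(out)
print(os.path.getsize(out))  # prints 2097152 (32 chunks of 64 KiB)
```

With this shape, the disc can keep spinning while the filesystem drains the queue, instead of the read/decrypt loop going idle every time a write is in flight.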

Active app profile: (# calls)

Code: Select all

        semaphore_wait_trap        15351
        kevent        2291
        sem_wait$NOCANCEL$UNIX2003        2289
        mach_msg_trap        1554
        AES_decrypt        951
        AES_cbc_encrypt        112
        __memcpy        57
        memchr        40
        write        36
        0x6730        29
        semaphore_signal_trap        26
        pthread_setspecific        19
        0x5c531        15
        AES_set_decrypt_key        10
        AES_set_encrypt_key        6
        io_connect_method        6
Inactive (synchronous write) app profile:

Code: Select all

        semaphore_wait_trap        5796
        kevent        828
        sem_wait$NOCANCEL$UNIX2003        828
        write        828
Stream workflows really benefit from queued processing: one task wakes when there are encrypted blocks in the buffer from the disc read and puts them in the queue to be decrypted; another wakes when there is decryption to be done and places decrypted blocks into the queue to be demultiplexed; another wakes when there is demuxing to be done; and so on through multiplexing and writing the mkv streams.
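The staged layout above can be sketched as a chain of worker threads joined by bounded queues. This is a toy version in Python with placeholder transforms (the real stages would be AES decryption, demuxing and muxing); the stage names and the `stage()` helper are my own invention, just to show the wiring:

```python
import queue
import threading

SENTINEL = None  # marks end of stream through every stage

def stage(fn, inq, outq):
    """Generic pipeline stage: wake when work arrives, transform it,
    pass the result downstream, forward the sentinel at end of input."""
    def run():
        while True:
            item = inq.get()
            if item is SENTINEL:
                outq.put(SENTINEL)
                return
            outq.put(fn(item))
    t = threading.Thread(target=run)
    t.start()
    return t

# Four bounded queues link read -> "decrypt" -> "demux" -> "mux/write".
read_q, dec_q, demux_q, out_q = (queue.Queue(maxsize=4) for _ in range(4))
threads = [
    stage(lambda b: b.upper(), read_q, dec_q),   # stand-in for decryption
    stage(lambda b: b + b"!", dec_q, demux_q),   # stand-in for demuxing
    stage(lambda b: b, demux_q, out_q),          # stand-in for muxing/writing
]

for block in (b"abc", b"def"):                   # stand-in for disc reads
    read_q.put(block)
read_q.put(SENTINEL)

results = []
while (item := out_q.get()) is not SENTINEL:
    results.append(item)
for t in threads:
    t.join()
print(results)  # prints [b'ABC!', b'DEF!']
```

Because each stage runs concurrently, a slow stage (like the write) only ever stalls its immediate upstream neighbour, and the disc read can stay busy the whole time.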

You're already using Grand Central Dispatch (whether you know it or not) for part of the job, though I think just for the disc read and the UI; the rest of it uses old-school Unix blocking I/O. If you rearranged your workflow you could really improve this app's performance (at the moment, my CPUs are only about 40% used during a rip, and writes burst for around 10s at a time every 30s).

In my day job, I help diagnose performance issues with internet apps written in Java on giant server farm installations... I hope this has been helpful and not too much like teaching granny to suck eggs.

Thanks again for a really useful application.
Claudio
Posts: 41
Joined: Sat Sep 12, 2009 4:21 am

Re: Performance bottleneck

Post by Claudio »

By any chance are you the same Faye as the one on the 'Anyone interested in a WUXGA MacBook Pro' Macrumors forum?
FayeP wrote:If you rearrange your workflow you could really improve this app's performance (at the moment, my CPUs are only about 40% used during a rip and writes are bursting for around 10s at a time every 30s).
If the workflow was indeed rearranged, could MakeMKV work more efficiently and use both processors, like, say, HandBrake? That will max out my Core 2 Duo at 195%.
FayeP
Posts: 3
Joined: Sat Oct 17, 2009 3:06 pm

Re: Performance bottleneck

Post by FayeP »

That would be me, yes :D My, the internet is a small place ;)

The bottleneck would likely become the speed of the input device (the BD read), unless you had a ~8x reader, which might start to max out a 2GHz C2D (at a guesstimate).
Claudio
Posts: 41
Joined: Sat Sep 12, 2009 4:21 am

Re: Performance bottleneck

Post by Claudio »

Thanks for all the help on that forum! I now have a perfectly working WUXGA MacBook Pro for these 1080p .mkv's :D

Except for these Blu-ray movies, everything else on this screen looks so small that now I want to go back to my 1440x900 screen to be easier on my eyes.

Relating to MakeMKV, do you connect your laptop to an external monitor or watch on your laptop screen? I'm having a nightmare trying to get a good calibration over DVI to the TV. Movies on my PS3 always look sooo much better :(
mike admin
Posts: 4075
Joined: Wed Nov 26, 2008 2:26 am
Contact:

Re: Performance bottleneck

Post by mike admin »

Thank you for reporting this - it will be corrected in the next version. To answer the other question, MakeMKV already uses multiple processors for CPU-hungry tasks like decryption.
FayeP
Posts: 3
Joined: Sat Oct 17, 2009 3:06 pm

Re: Performance bottleneck

Post by FayeP »

I didn't mean to imply that you weren't multithreaded, but the writes block those same threads :) You're aware of it, and I can't wait for the fix!
Post Reply