For decoding, this assumes well-captured LDFs (no observed player skips while capturing; each capture should be roughly 40-50 gigabytes in size):
1. Gather a set of LDF files. You usually want anywhere from 5 to 10 captures, all from the same pressing of the disc. This allows them to be stacked, with the hope of producing a stacked capture that has zero areas automatically marked as "dropout". Let's call these captures dl1.ldf
2. Decode the LDF files into TBCs. Depending on the exact toolchain you're using - Oyvind's compile for Windows, or the native Linux version - you'll do either:
ld-decode -t X -s S <path and name of input LDF> <path and name for output files>
decode ld -t X -s S <path and name of input LDF> <path and name for output files>
In both cases, X is the number of threads to use when decoding, and S is the number of frames to skip into the file before decoding begins. You usually want the decode to start with a few frames of lead-in before the sequential picture numbers begin; having too much of the player seeking around before the actual start of the capture can throw off the disc-mapping process.
3. Pre-process the VBI data for the ld-decode toolchain's purposes:
ld-process-vbi <path and name for input TBC>
4. Spend much more time mapping each disc:
ld-discmap <path and name for input TBC> <path and name for mapped TBC>
If the initial scan aborts early, try running with the -s option to make it less picky with NTSC pull-down content. In the worst case, you can also try -r to reverse the field order, but that should only be used if you know exactly what you're doing and have been decoding discs for a while.
5. Load each mapped capture into ld-analyse and make sure they look sane. Make sure the same fields appear on the same picture numbers across the mapped captures.
6. Stack them:
ld-disc-stacker <path and name for first input TBC> ... <path and name for last input TBC> <path and name for output TBC>
7. Look at the PCM files in a wave editor of your choice. Choose the one that has the best quality - PCM stacking isn't yet a thing.
8. Create your raw AVI:
ld-chroma-decoder <path and name of stacked TBC> -qf ntsc2d --ffll 1 --ffrl 1 --pad 1 - | ffmpeg -f rawvideo -r 30000/1001 -pix_fmt rgb48 -s 760x524 -i - -f s16le -ar 44.1k -ac 2 -i <path and name of best PCM file> -vcodec huffyuv -pix_fmt yuv422p -acodec pcm_s16le output.interlaced.avi -y
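A quick sanity check for the raw pipe in step 8: ffmpeg is told nothing about the incoming stream beyond -pix_fmt rgb48 and -s 760x524, so those values have to match what ld-chroma-decoder actually emits. This is just arithmetic, not part of the toolchain: the raw output should be an exact multiple of the per-frame byte size.

```python
# Per-frame byte size for the step-8 rawvideo pipe.
# rgb48 = 3 channels x 16 bits = 6 bytes per pixel.
width, height, bytes_per_pixel = 760, 524, 6
frame_bytes = width * height * bytes_per_pixel
print(frame_bytes)
```

If the total size of the raw stream isn't a multiple of this number, the geometry or pixel format is wrong and ffmpeg will produce skewed frames.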
9. Put the active video into a separate container:
ffmpeg -i output.interlaced.avi -vcodec huffyuv -pix_fmt yuv422p -acodec copy -vf "crop=760:480:0:44" output.interlaced.active.avi -y
10. Crop the VBI area of the video into a separate container:
ffmpeg -i output.interlaced.avi -vcodec huffyuv -pix_fmt yuv422p -an -vf "crop=760:41:0:0" output.vbi.avi -y
11. Pad in an additional three lines. The VBI lines aren't really intended to be visible in the video output; I added that functionality to ld-chroma-decoder for MAME purposes, so it is a bit of an afterthought.
ffmpeg -i output.vbi.avi -vcodec huffyuv -pix_fmt yuv422p -an -vf "pad=760:44:0:3" output.vbi.padded.avi -y
12. Put the adjusted VBI and active video together again:
ffmpeg -i output.vbi.padded.avi -i output.interlaced.active.avi -vcodec huffyuv -pix_fmt yuv422p -acodec copy -filter_complex "vstack=inputs=2" output.fixedvbi.avi -y
13. Scale it to the 720x524 framing expected by MAME. This could be extended in a future CHD update.
ffmpeg -i output.fixedvbi.avi -vcodec huffyuv -pix_fmt yuv422p -acodec copy -vf "scale=720:524,setsar=1:1" output.avi -y
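The crop/pad/stack numbers in steps 9 through 13 all have to agree with each other, which is easy to get wrong when retyping the commands. These are the same values as in the ffmpeg commands above, checked for consistency:

```python
# Frame geometry used across steps 9-13 (values from the commands above).
full_height  = 524   # decoded frame height out of ld-chroma-decoder
vbi_lines    = 41    # VBI strip cropped out in step 10
pad_lines    = 3     # lines padded back in at step 11
active_lines = 480   # active picture cropped in step 9 (at y offset 44)

assert vbi_lines + pad_lines == 44                         # padded VBI height
assert vbi_lines + pad_lines + active_lines == full_height # vstack total
print("VBI", vbi_lines + pad_lines, "+ active", active_lines, "=", full_height)
```

Note the crop offset in step 9 (44) minus the VBI crop height in step 10 (41) is where the three padded lines come from.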
14. Create the CHD using chdman. This part requires a bit of finesse. When you were dealing with ffmpeg up above, take note of the total number of video fields that end up being processed, subtract 20 from it, then use that frame count here:
chdman createld -i <output AVI> -if <frame count> -o <output CHD name>
If compressing with chdman fails, take whatever you used for "frame count", subtract another 20, and try again. If it succeeds, add 10, and try again. Use this to narrow down the exact number of frames that can be encoded.
There's some slight drift due to AVI not handling an uneven number of samples per field well, which is also something that will eventually be addressed.
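The narrowing procedure in step 14 is effectively a search for the largest frame count chdman will accept. A sketch of that search, with the actual chdman invocation replaced by a hypothetical encodes_ok callable (in practice you would run chdman createld with -if n and check whether it succeeds):

```python
def max_encodable_frames(encodes_ok, estimate, step=20):
    """Largest frame count chdman accepts, assuming the outcome is
    monotone (if n frames fail, anything larger also fails).

    encodes_ok is a hypothetical stand-in for running
    `chdman createld ... -if n` and checking its exit status."""
    n = estimate
    # Walk down in coarse steps until a count succeeds.
    while n > 0 and not encodes_ok(n):
        n -= step
    # Walk back up one frame at a time to find the exact ceiling.
    while encodes_ok(n + 1):
        n += 1
    return n

# Hypothetical run: pretend the disc tops out at 54037 encodable frames.
print(max_encodable_frames(lambda n: n <= 54037, estimate=54050))
```

Each probe is a full chdman run, so the coarse-then-fine approach described above keeps the number of attempts manageable.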