{"id":274,"date":"2015-01-27T13:00:54","date_gmt":"2015-01-27T12:00:54","guid":{"rendered":"https:\/\/www.skeptic.de\/blog\/kv\/?p=274"},"modified":"2016-11-18T19:40:42","modified_gmt":"2016-11-18T18:40:42","slug":"the-dark-room","status":"publish","type":"post","link":"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/?p=274","title":{"rendered":"The Dark Room"},"content":{"rendered":"<p class=\"opener\">Pictures were stacking up, and now some solution had to be found to process them further into films, and to remove as much of the extraneous motion as possible.<\/p>\n<p>Up until now I had been using <span class=\"prog\">FFmpeg<\/span> to assemble image sequences into films. Documentation is very poor, but something can be made to work at times without pixellating the whole result. <!--more--><\/p>\n<p>Not many video processing programs provide digital image stabilisation, but I did find that <span class=\"prog\">Virtual Dub<\/span> had a filter that could be used. The only problem with this program was that it was virtually (get it!) impossible to have the program use a codec to compress the video, and, as a result, uncompressed video had to be produced, and then recoded (entirely unnecessary step).<\/p>\n<p>Oh, and before we start there was the problem of timestamps to be solved. It would be nice if the video had timestamps on it, just to give a feeling for time as the kayak tour proceeds. Now, you could try placing the timestamps (read out from the exif data) directly onto the images as they were being processed into the video, or shortly before that step. This means that transparency, colour, size, position, etc. could be controlled to a large degree. 
The only problem is that subsequent stabilisation would lead to the timestamps finishing up all over the place like the dog\u2019s breakfast.<\/p>\n<p>As it turns out, <span class=\"prog\">Virtual Dub<\/span> also possesses a filter for placing timestamps onto the finished film by means of <span class=\"prog\">SubStation Alpha <em>(SSA)<\/em><\/span>. Instead of placing the timestamps onto the images at the beginning, the data for the timestamps is collected and written to a project file for <span class=\"prog\">SSA<\/span>, the video is processed, and the timestamps are added in the final stage. <span class=\"prog\">SSA<\/span>\u2019s syntax is highly redundant, but basically you define a style (font, colour, size, shadow, etc.), then a sequence of events, each with starting and finishing times, the style to use, and the text you want in the subtitle. Save as a text file. The only drawback is that the ability to control transparency is lost, so that you end up with some pretty determined timestamps on the images, but that\u2019s as good as it gets.<\/p>\n<p>The whole sequence is not easy and is moderately error-prone, so for documentation&#8217;s sake, here goes. First, <span class=\"prog\">FFmpeg<\/span> assembles the images into a good quality (24 MB\/s) film in a two-pass process. In the course of this processing, the frames are scaled from 1600*1200 to 1360*1020 and deshaken with what little <span class=\"prog\">FFmpeg<\/span> can offer (<em>edge=3<\/em>), the timestamps are read out, and the initial and final images are extracted. The video thus produced is loaded into <span class=\"prog\">VirtualDub<\/span> and the <span class=\"prog\">Deshaker<\/span> filter (by Gunnar Thalin) is called (Video &rarr; Filters &rarr; Add &rarr; Deshaker). This is a two-pass filter, meaning it has to be run once to analyse the motion in the sequence. 
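<\/p>\n<p>For concreteness, the <span class=\"prog\">FFmpeg<\/span> assembly step just described might be invoked along these lines. This is only a sketch, not the actual commands used: the file names, frame rate, container and codec choice are assumptions; only the scaling, the <em>deshake<\/em> filter with <em>edge=3<\/em> and the 24 MB\/s target come from the description above.<\/p>

```
# Pass 1: analyse only, write rate statistics, discard the video
ffmpeg -y -framerate 25 -i img_%04d.jpg \
  -vf "scale=1360:1020,deshake=edge=3" \
  -c:v mpeg4 -b:v 24M -pass 1 -an -f null /dev/null

# Pass 2: produce the actual file using the pass-1 statistics
ffmpeg -framerate 25 -i img_%04d.jpg \
  -vf "scale=1360:1020,deshake=edge=3" \
  -c:v mpeg4 -b:v 24M -pass 2 -an assembled.avi
```

<p>Reading out the timestamps and extracting the initial and final images then happen alongside this step.<\/p>\n<p>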
In <em>Pass 1<\/em>, just two parameters are changed: <em>Deep analysis<\/em> for blocks where &lt; 8% of the vectors are OK (instead of 0%), and <em>Skip frame<\/em> if &lt; 16% of all blocks are OK (instead of 8%). Then click on \u201cOK\u201d in two windows to close them, and start a dummy run by pressing the <em>play<\/em> button (third from the left at the bottom). Unless you have a Hubble Space Telescope handy, you won\u2019t be able to read the output from <span class=\"prog\">Deshaker<\/span>, but it doesn\u2019t matter all that much.<\/p>\n<p>For <em>Pass 2<\/em>, reset the slider to Frame 0, go back to the filters, choose <span class=\"prog\">Deshaker<\/span> and \u201cConfigure\u201d, and now click on \u201cPass 2\u201d and change the following parameters:<\/p>\n<ul>\n<li>Edge compensation: Adaptive zoom average (some borders);<\/li>\n<li>Use previous and future frames to fill in borders: Activate (also activates \u201cSoft borders\u201d);<\/li>\n<li>Extrapolate colors into border: Activate;<\/li>\n<li>Motion smoothness: Zoom: set to 0.<\/li>\n<\/ul>\n<p>There is no need to remove the filter and replace it, as is claimed in some YouTube videos on the subject. Just configure the same old filter for Pass 2.<\/p>\n<p>Click on \u201cOK\u201d to return to the filters. Now add two further filters. First \u201cResize\u201d, but with the size set to 100% (i.e. no change). When this filter is configured, click on \u201cCropping\u201d and crop the video to its final destination size, 1280*720. Crop left and right by 40 pixels each (1360 - 2*40 = 1280), and top and bottom by a total of 300 pixels (say 100 top and 200 bottom; 1020 - 300 = 720). This removes most of the edges produced by Pass 2 of <span class=\"prog\">Deshaker<\/span>.<\/p>\n<p>Second, add the subtitler filter (it has to be downloaded separately from Avery Lee\u2019s site) and add the subtitle file that was created from the timestamps. 
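<\/p>\n<p>The earlier step of collecting the timestamps into an <span class=\"prog\">SSA<\/span> project file can be sketched in Python. This is a minimal sketch rather than the script actually used: the function names, the 25 fps frame rate, the style name and the one-event-per-frame layout are all assumptions; only the <em>Dialogue<\/em> line layout follows the v4 <span class=\"prog\">SSA<\/span> format.<\/p>

```python
from datetime import timedelta

def ssa_time(t: timedelta) -> str:
    """Format an offset into the film as SSA H:MM:SS.cc."""
    total_cs = int(t.total_seconds() * 100)   # centiseconds
    h, rem = divmod(total_cs, 360000)
    m, rem = divmod(rem, 6000)
    s, cs = divmod(rem, 100)
    return f"{h}:{m:02d}:{s:02d}.{cs:02d}"

def dialogue_lines(stamps, fps=25.0, style="Timestamp"):
    """One SSA Dialogue event per source image, shown for one frame.

    `stamps` is the list of timestamp strings read from the EXIF data,
    in frame order (an assumed representation).
    """
    frame = timedelta(seconds=1.0 / fps)
    lines = []
    for i, text in enumerate(stamps):
        start = timedelta(seconds=i / fps)
        lines.append(
            f"Dialogue: Marked=0,{ssa_time(start)},{ssa_time(start + frame)},"
            f"{style},,0000,0000,0000,,{text}"
        )
    return lines
```

<p>Written under an <em>[Events]<\/em> header, after a matching <em>Style:<\/em> definition, the resulting text file is what the subtitler filter expects.<\/p>\n<p>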
OK everything and then save the video as uncompressed frames.<\/p>\n<p>The final step is to hand the video back to <span class=\"prog\">FFmpeg<\/span> for suitable compression, and extraction of the first and final frames. Then it\u2019s on to AVS\u2019s disastrous Video Editor for assembly into the final product.<\/p>\n<p>Some videos that have been produced in this way can be found on my YouTube channel:<br \/>\n[youtube=https:\/\/www.youtube.com\/watch?v=aDzwWTC5ipU]<br \/>\n[youtube=https:\/\/www.youtube.com\/watch?v=vb4sgr2pO1U]<br \/>\n[youtube=https:\/\/www.youtube.com\/watch?v=UBGKTkizerk]<br \/>\n[youtube=https:\/\/www.youtube.com\/watch?v=oW1xP2fZVQ0]<\/p>\n<p>I\u2019d like to thank Gunnar Thalin for the advice he gave on configuring Deshaker, and all the brain donors for transplants.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Pictures were stacking up, and now some solution had to be found to process them further into films, and to remove as much of the extraneous motion as possible. Up until now I had been using FFmpeg to assemble image sequences into films. 
Documentation is very poor, but something can be made to work at &hellip; <a href=\"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/?p=274\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">The Dark Room<\/span> <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5,54],"tags":[110,112,113,107,111,109,114,108],"class_list":["post-274","post","type-post","status-publish","format-standard","hentry","category-photography","category-youtube-videos","tag-avery-lee","tag-deshaker","tag-digital-video-stabilisation","tag-ffmpeg","tag-gunnar-thalin","tag-substation-alpha","tag-timestamping-video","tag-virtualdub"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/index.php?rest_route=\/wp\/v2\/posts\/274","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=274"}],"version-history":[{"count":37,"href":"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/index.php?rest_route=\/wp\/v2\/posts\/274\/revisions"}],"predecessor-version":[{"id":2280,"href":"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/index.php?rest_route=\/wp\/v2\/posts\/274\/revisions\/2280"}],"wp:attachment":[{"href":"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=274"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/in
dex.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=274"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/leetraynor.com\/blog.multi\/kiwi5\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=274"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}