I have been using Darktable (http://www.darktable.org/) as a color grading tool for my videos, but these videos have been limited to very few shots (2 to fewer than 20).
here's an example
What I usually do is split the video into frames, process them via darktable-cli, and then combine them again; I use ffmpeg to split and combine the frames. This is all done manually. Each generated frame is an 8 MB TIFF file, which is large.
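That manual pipeline could be scripted roughly like this. This is only a sketch: the filenames, frame rate, and the darktable style name "my_grade" are hypothetical, and the functions just build the command lines so the overall flow is easy to see.

```python
import subprocess
from pathlib import Path

def split_cmd(video, frames_dir):
    # ffmpeg: dump every frame of the input video as a numbered TIFF
    # (each frame ends up ~8 MB, hence the disk-space problem)
    return ["ffmpeg", "-i", video, str(Path(frames_dir) / "frame_%06d.tiff")]

def grade_cmd(src, dst, style="my_grade"):
    # darktable-cli: apply a saved style to one frame;
    # "my_grade" is a made-up style name, substitute your own
    return ["darktable-cli", src, dst, "--style", style]

def combine_cmd(frames_dir, fps, out):
    # ffmpeg: reassemble the graded frames into a video
    return ["ffmpeg", "-framerate", str(fps), "-i",
            str(Path(frames_dir) / "frame_%06d.tiff"),
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out]

def run(cmd):
    # run one step and fail loudly if it errors
    subprocess.run(cmd, check=True)
```

In use it would be something like `run(split_cmd("clip.mp4", "frames"))`, a loop of `run(grade_cmd(...))` over the TIFFs, then `run(combine_cmd("frames", 25, "graded.mp4"))`.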
My latest project demands a faster, automatic, and more space-saving method. I'm doing a music video for a song I covered, and it is composed of different shots from different takes (different camera angles).
I managed to make use of OpenShot (http://www.openshot.org/) for its non-linear editing interface.
The next problem is how to apply color grading to each camera angle without wasting time/disk space on grading parts that will never even appear on the timeline.
So I was thinking of using OpenShot's MLT XML export format to determine the timecodes of each shot and each clip. That way I can know which parts to color grade, and then I can replace the clips in OpenShot with the color graded versions.
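Extracting those ranges could look something like the sketch below. It assumes a simplified MLT XML layout (producers with a "resource" property, playlist entries with "in"/"out" frame attributes); a real OpenShot export has more structure, so treat the sample document and the traversal as an illustration, not a parser for every MLT file.

```python
import xml.etree.ElementTree as ET

# A made-up, minimal MLT-style document for illustration only
MLT_SAMPLE = """<mlt>
  <producer id="clip1">
    <property name="resource">take1.mp4</property>
  </producer>
  <playlist id="playlist1">
    <entry producer="clip1" in="120" out="240"/>
  </playlist>
</mlt>"""

def clip_ranges(xml_text):
    """Map each source file to the (in, out) frame ranges that
    actually appear on the timeline."""
    root = ET.fromstring(xml_text)
    # producer id -> source file path
    resources = {}
    for prod in root.iter("producer"):
        for prop in prod.findall("property"):
            if prop.get("name") == "resource":
                resources[prod.get("id")] = prop.text
    # source file -> list of (in, out) frame ranges used
    ranges = {}
    for entry in root.iter("entry"):
        res = resources.get(entry.get("producer"))
        if res and entry.get("in") is not None:
            ranges.setdefault(res, []).append(
                (int(entry.get("in")), int(entry.get("out"))))
    return ranges
```

With those ranges known, only the frames between each in/out pair would need to be exported and graded.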
These things are just wishful thinking; I still have to figure out how to do them, lol.
I will be using bash and Python, since those are the only things readily available.
My projects are moving faster than before, so I guess I really need to R&D this one.
I use Darktable as opposed to Blender because I feel comfortable color grading in Darktable, since I also do photography.
I used to use GIMP to grade my photos, but it was VERY DIFFICULT to control the hue and saturation.
OpenShot's effects are too simple for my color grading requirements.
If anyone knows a way I can make ffmpeg stream input from a source like darktable, I would really appreciate it.