Correct me if I'm wrong, but I seem to remember a conversation some time ago suggesting that the data from today's CMOS cameras have made the LRGB process of acquiring images at the telescope somewhat obsolete. If this is true, what does the workflow look like? Do you do an RGB combine, extract a synthetic luminance image from it, and then use that image as the "L" channel in the LRGB combination tool, or just create a standard RGB combine image and process it on its own from there?
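For what it's worth, the synthetic-luminance step described above can be sketched in a few lines. This is only an illustration with made-up array data, not any particular tool's implementation; it uses equal channel weights, whereas real software typically lets you choose the weighting (e.g. via an RGB working space):

```python
import numpy as np

# Hypothetical stacked master frames for each filter,
# normalized to the [0, 1] range. In practice these would be
# loaded from your calibrated, registered, integrated stacks.
rng = np.random.default_rng(0)
r = rng.random((4, 4))
g = rng.random((4, 4))
b = rng.random((4, 4))

# Synthetic luminance as a simple unweighted mean of the three
# channels. Tools commonly expose per-channel weights instead of
# a fixed 1:1:1 mix; equal weights are just an assumption here.
synthetic_l = (r + g + b) / 3.0

# The result would then stand in for the "L" frame in an
# LRGB combination step.
print(synthetic_l.shape)
```

The synthetic L inherits its signal-to-noise from all three color stacks combined, which is the usual argument for skipping a separate luminance filter with modern low-read-noise CMOS sensors.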
If there's a real improvement in the quality of the finished image from shooting with a luminance filter, then by all means I'll continue to invest the time to do so. If not, eliminating it would of course be a major time saver at the telescope. I haven't yet run this comparison with my own data, but until I do I would very much appreciate your thoughts.
Thank you.
Gregory B. Miller