
Ditch the luminance filter?

Hi Adam,

Correct me if I'm wrong, but I seem to remember a conversation some time ago suggesting that the data from today's CMOS cameras have made the LRGB process of acquiring images at the telescope somewhat obsolete. If this is true, what does the workflow look like? Do you do an RGB combine to create a synthetic luminance image and then use it as the "L" channel in the LRGB combination tool, or do you just create a standard RGB combine and process it from there on its own?

If there's a true improvement in the quality of the finished image from shooting with a luminance filter, then by all means I'll continue to invest the time to do so. If not, eliminating it would of course be a major time saver at the telescope. I haven't yet compared processing results from my own data to answer this question, but until I do I would very much appreciate your thoughts.

Thank you.

Gregory B. Miller

Comments

  • Yes, I demonstrate using a synthetic luminance in the latest video I published (the Telescope Live Mosaic of IC 4812 in Horizons); a rough sketch of the idea follows at the end of this reply. 

    I agree that LRGB in the past took advantage of binning the color data in order to save time. The loss of resolution was not important, since the spatial information was carried by the (real) Luminance image, and there was an improvement in S/N because binning reduced the relative contribution of read noise (see the read-noise sketch after this reply). 

    But nowadays you cannot hardware-bin CMOS detectors (and even if you bin in software, there is no benefit in read-noise reduction). 

    So a real acquired Luminance isn't critical. You *can* technically save some time with a Luminance by purposefully taking less color data. You would only do this for faint objects. For bright objects it has always been true that straight RGB is best. 

    If you search the PI forum, you'll find a well-thought-out version of this answer that I posted there.

    -the Blockhead
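
Here is a back-of-the-envelope sketch of the read-noise point above. It is only an illustration, not anyone's measured data: the signal level, read noise, and 2x2 bin size are made-up numbers chosen to show why on-chip (CCD) binning pays a single read-noise penalty while software binning of CMOS data pays it once per pixel.

```python
import math

signal_per_pixel = 100.0   # photoelectrons per native pixel (made-up number)
read_noise = 5.0           # electrons RMS added by each read (made-up number)
n = 4                      # pixels combined in a 2x2 bin

# CCD hardware binning: charge from all 4 pixels is summed on-chip and
# read out once, so read noise enters the sum only once.
snr_hardware = (n * signal_per_pixel) / math.sqrt(n * signal_per_pixel + read_noise**2)

# CMOS "binning": each pixel is read individually and the values are
# summed in software, so four read-noise terms add in quadrature.
snr_software = (n * signal_per_pixel) / math.sqrt(n * signal_per_pixel + n * read_noise**2)

print(f"2x2 hardware bin SNR: {snr_hardware:.1f}")  # ~19.4, read noise counted once
print(f"2x2 software bin SNR: {snr_software:.1f}")  # ~17.9, read noise counted four times
```

The gap grows as the signal gets fainter relative to the read noise, which is exactly the regime where binned-color LRGB used to pay off on CCDs.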
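And here is a minimal sketch of building a synthetic luminance from the color data. It assumes calibrated, registered linear R, G, and B stacks held as NumPy arrays and uses a simple weighted average; the function name, the equal weights, and the stand-in random data are illustrative assumptions, not the specific PixInsight recipe shown in the video.

```python
import numpy as np

def synthetic_luminance(r, g, b, weights=(1.0, 1.0, 1.0)):
    """Return a weighted average of the R, G and B stacks.

    With equal weights this is a straight mean; the weights could instead
    be set from exposure time or measured S/N per channel.
    """
    stack = np.stack([r, g, b]).astype(np.float64)
    return np.average(stack, axis=0, weights=np.asarray(weights, dtype=np.float64))

# Stand-in random data in place of real calibrated, registered stacks:
rng = np.random.default_rng(0)
r, g, b = (rng.normal(100.0, 5.0, (512, 512)) for _ in range(3))

lum = synthetic_luminance(r, g, b)
# 'lum' would then be stretched and used as the L channel in an
# LRGB-style combination instead of a filter-acquired luminance.
```

Averaging the three channels already improves S/N over any single channel, which is what lets the synthetic luminance stand in for a filter-acquired one.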