Hi Rafael,
Could you make your example available to me for demonstration purposes? I think there are two parts to the answer. One part is the mechanics of adding new data to already acquired information. The second part is optimizing the result of pr…
Yes, of course, that would do it. In FastTrack Training, the first thing I tell people to do when they have an error is look at all of the initial data and output... BLINK. Look at the values. It seems this advice would have solved the issue very quickl…
The problem is the caching. I would not use this as a method of WBPP pipeline execution in general. It is really meant for making a single adjustment somewhere along the way... not for rerunning the script incrementally. The files cannot be found …
Maybe you do need a solution for each created panel... I can't remember right now. I didn't think so, because you will need to figure out the center of the frame.
-the Blockhead
The astrometric solution and other metadata are now found under the View Explorer. This is a change with the newer versions of PI.
It sounds like the third panel is the issue? It needs the astrometric solution. The other two must be OK since they went…
Yes. Also, if the state(s) happen to be at the end, you can create a clone of the view. The history states will be contained in the "Initial State," which you can double-click on, and it will not include the obviated ones.
OR... and this is a weird one. Any …
You can "mix" in... but my suggestion is based on a general principle.Integrated everything AGAIN after adding more frames is best. You are changing the statistics of the set and the resulting weighting and rejection. Integrating across all data is …
No... you should open them and confirm that the images are the same number of pixels. I suspect the difference is that the other masters have the rejection maps included (you will see them when you open them). This certainly bloats the memory stor…
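If it helps, here is a quick way to check both things outside of PixInsight, assuming you save or export FITS copies of the masters (the file names below are placeholders). Any embedded rejection maps show up as extra image extensions, and the dimensions are listed alongside them.

from astropy.io import fits   # assumes FITS copies of the masters

# Placeholder file names; substitute your own masters.
for path in ("master_A.fits", "master_B.fits"):
    with fits.open(path) as hdul:
        hdul.info()   # lists every image extension and its dimensions
        print(path, "primary image shape:", hdul[0].data.shape)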
When you use the full WBPP pipeline, it will re-compute a plate solution at the end automatically. However, for the purposes of FastTrack, where understanding how things work is important, you need to apply it yourself. An integrated image will never ha…
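A quick way to confirm whether an image already carries a solution (outside of PixInsight) is to look for celestial WCS keywords in its header; this sketch assumes a FITS copy, and the file name is a placeholder.

from astropy.io import fits
from astropy.wcs import WCS

# A plate-solved image carries celestial WCS keywords (CTYPE1/2, CRVAL1/2, ...).
with fits.open("integration.fits") as hdul:
    wcs = WCS(hdul[0].header)
    print("Has celestial (plate) solution:", wcs.has_celestial)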
Did you see my video on the NB Normalization module? This and other information is found in Horizons:
https://www.adamblockstudios.com/articles/narrow-band-normalization
and more generally: https://www.adamblockstudios.com/categories/narrow-band
From…
Well... the process console, by default, is not "sticky" and only pops out when a process is running. However, if you are using multiple monitors or have some other graphics issue, this might be a problem. I would force it to pop out by clicking on the Pro…
Hi Mladen,
The last note on the members page (when you log in) is a link to my e-mail list:
https://mailchi.mp/150fd19303b4/adamblockstudios-e-mail-notifications
Please do join!
Thanks for the timestamps. I will need to take care of this. Would you…
In this particular case, there isn't a tension between methods or scripts. They are literally different tools/methods of doing things. Is there tension between pliers and a wrench? :)
Color Mask is a selective method that allows you to modify a rang…
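Conceptually, a color mask is just a selection weighted by hue (and usually saturation). Here is a rough NumPy sketch of the idea, not the ColorMask script's actual implementation; the function name and the saturation weighting are illustrative choices.

import numpy as np
from matplotlib.colors import rgb_to_hsv   # hue in [0, 1): 0 = red, 1/3 = green, 2/3 = blue

def hue_range_mask(rgb, hue_lo, hue_hi):
    # rgb: float image of shape (H, W, 3), values in [0, 1].
    hsv = rgb_to_hsv(np.clip(rgb, 0.0, 1.0))
    hue, sat = hsv[..., 0], hsv[..., 1]
    # Weight by saturation so near-gray pixels (whose hue is ill-defined) are barely selected.
    return ((hue >= hue_lo) & (hue <= hue_hi)).astype(float) * sat

# Example: select roughly green-through-cyan hues for a targeted adjustment.
rgb = np.random.default_rng(0).random((64, 64, 3))
mask = hue_range_mask(rgb, 1/3, 1/2)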
I *just* sent an e-mail about this! Are you getting my e-mails? Please let me know. You are not doing anything wrong. Yes, this is a consequence of the new version. I have to make the decision to re-do all of FastTrack or "patch" it to keep up with ch…
Mike summarizes it best in this forum post: https://pixinsight.com/forum/index.php?threads/benefit-of-starless-stars-compared-to-starless-stars.21794/#post-135814
I have nothing more to add.
-the Blockhead
Your questions come across as a challenge.
The benefit of working with linear images is that you can use tools such as SXT and BXT on the data. These work best on linear data. There are also a number of other operations which are best done in the linear state…
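In case "linear" is unclear: an unstretched image's pixel values are still proportional to the captured flux, and deconvolution-style tools are built around that proportionality. A nonlinear stretch, such as the standard midtones transfer function, destroys it. A small Python sketch of the MTF, used here purely to illustrate the nonlinearity:

import numpy as np

def mtf(m, x):
    # Midtones transfer function: mtf(m, 0) = 0, mtf(m, m) = 0.5, mtf(m, 1) = 1.
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

x = np.array([0.01, 0.02, 0.04])      # faint linear signal, flux-proportional
y = mtf(0.05, x)                      # after the stretch...
print(x[1] / x[0], y[1] / y[0])       # ...ratios are no longer preserved (nonlinear)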
True, you cannot use the subtraction method in the case of screening. Applying a mask is probably the most direct and safest way to manage which of the two images the pixel data goes to.
-the Blockhead
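For a concrete picture, here is a NumPy sketch (in PixInsight the screen combination is commonly written in PixelMath as ~(~starless*~stars), with starless and stars standing in for your two images): an additive combine can be undone by subtraction, a screened combine cannot, and a mask simply dictates which image each pixel comes from.

import numpy as np

rng = np.random.default_rng(2)
starless = rng.uniform(0.0, 0.2, size=(8, 8))
stars = rng.uniform(0.0, 0.6, size=(8, 8))

added = starless + stars                               # additive combine
screened = 1.0 - (1.0 - starless) * (1.0 - stars)      # screen combine: 1 - (1-a)(1-b)

print(np.allclose(added - starless, stars))            # True: addition is exactly invertible
print(np.allclose(screened - starless, stars))         # False: screening is not

# With a mask, each pixel is explicitly taken from one image or the other.
star_mask = (stars > 0.3).astype(float)                # toy stand-in for a real star mask
combined = star_mask * stars + (1.0 - star_mask) * starless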
No, this is not correct. It is still giving you distinct integrations based on either exposure time or filter name. It is troubling that your OSC data has "Luminance" in the name (why?). You have also run things more than once, which is confusing. You …