Distinguishing two nearby point sources of light is a classic problem in optics. If a camera's pixels are so large that both points are read by the same pixel, you won't be able to tell them apart, regardless of how good the optics are. The camera's spatial sampling must be at least twice as fine as the optical resolution to avoid being the limiting factor; essentially, a dark pixel must lie between the two bright pixels. This is a manifestation of the Nyquist sampling theorem, which describes a fundamental mathematical relationship between continuous (or analog) and discrete (or digital) signals.
The standard formula for the optical lateral resolution is the Rayleigh criterion, a distance given by 0.61*lambda/NA (where lambda is the wavelength of light and NA is the numerical aperture). For a 40x 0.8 NA objective with 500 nm light, the lateral resolution is ~381 nm. With a camera sensor that has a 6.5 um pixel pitch, that objective spatially samples at (6.5 um / 40x) ~162.5 nm per pixel, so we meet the Nyquist criterion (because 381 nm / 162.5 nm = 2.34), but we wouldn't if we were using 400 nm light.

Normally, for axial resolution limited by optics, the Z-axis step must be smaller than half (per Nyquist) the optical axial resolution. Optical axial resolution, or depth of field, is usually taken to be 2*lambda*RI/(NA^2), where RI is the refractive index of the immersion medium; for the 40x 0.8 NA water objective at 500 nm, it is ~2.1 um. Therefore, the Z-step should be < 1 um.
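
These numbers are easy to recompute for other objectives, cameras, or wavelengths. Below is a minimal Python sketch of the same arithmetic; the variable names and the 1.33 refractive index for water immersion are assumptions for this example, not plugin settings.

<code python>
# Back-of-the-envelope resolution/sampling check for the example above:
# 40x 0.8 NA water-immersion objective, 6.5 um camera pixels, 500 nm light.
wavelength_um = 0.5      # emission wavelength
NA = 0.8                 # numerical aperture of the objective
mag = 40.0               # objective magnification
camera_pixel_um = 6.5    # physical pixel pitch of the camera sensor
RI = 1.33                # refractive index of water (assumed immersion medium)

# Rayleigh criterion for lateral resolution
lateral_res_um = 0.61 * wavelength_um / NA            # ~0.381 um

# Effective pixel size in sample space and the Nyquist ratio (needs to be >= 2)
sample_pixel_um = camera_pixel_um / mag               # ~0.1625 um
nyquist_ratio = lateral_res_um / sample_pixel_um      # ~2.34; drops to ~1.88 at 400 nm

# Depth of field (optical axial resolution) and the Nyquist-limited Z-step
depth_of_field_um = 2 * wavelength_um * RI / NA**2    # ~2.1 um
max_z_step_um = depth_of_field_um / 2                 # ~1 um

print(f"lateral resolution : {lateral_res_um:.3f} um")
print(f"sample pixel size  : {sample_pixel_um:.4f} um (Nyquist ratio {nyquist_ratio:.2f})")
print(f"depth of field     : {depth_of_field_um:.2f} um")
print(f"max Nyquist Z-step : {max_z_step_um:.2f} um")
</code>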
For diSPIM, we have two views that can be merged computationally. The axial perspective from each objective is a lateral perspective (with higher resolution) from the other, so we can undersample in Z to a certain extent, which is advantageous from a speed perspective. However, if we give up too much axial resolution, the registration of the two views will suffer; in an extreme case your Z-step could be large enough to completely skip over a point source. For this reason, we recommend the Z-step be at least as small as the objective's depth of field. (With Fiji MVR and bead datasets, it's easy to register datasets with 0.5 um Z-step spacing but not with 1 um Z-step spacing.)
</WRAP>