MIPAV is a framework not unlike Fiji, and the generatefusion plugin was developed in collaboration with the Shroff lab. It performs intensity-based registration and joint deconvolution. You can download MIPAV at http://mipav.cit.nih.gov/ and then install the plugin. As of early 2019 this is still the canonical workflow for some groups, though the Shroff lab has been developing successor software.
In general, the MIPAV software is much slower and more computationally intensive than Fiji MVR. However, it uses image-based registration, so it can register datasets without distinct interest points. (Image-based registration is in the future plans for MVR per this thread, but it is not implemented as of Aug 2018.)
MIPAV generatefusion is described in the 2014 Nature Protocols paper, though some details have changed since publication; this page is the place to look for updated information.
Credit for this tutorial goes to Ryan Christensen, the original author; the tutorial can be found in its original form in this PDF.
diSPIM image processing involves several steps: background subtraction, cropping, transformation, fusion, and deconvolution. The first two steps, background subtraction and cropping, are carried out in ImageJ or Fiji, while the remaining steps are carried out in MIPAV.
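Background subtraction is conceptually simple: subtract the background volume from each data volume and clip negative results to zero. In practice this is done in ImageJ/Fiji, but a minimal NumPy sketch illustrates the idea (the function name and the clip-at-zero choice are assumptions of this sketch, not part of the MIPAV workflow):

```python
import numpy as np

def subtract_background(volume, background):
    """Subtract a 2D background image from every plane of a 3D volume,
    clipping negative results to zero (hypothetical helper; the actual
    step is normally done in ImageJ/Fiji)."""
    # Work in a signed type so the subtraction cannot wrap around in
    # unsigned integer arithmetic, then clip back to valid intensities.
    diff = volume.astype(np.int32) - background.astype(np.int32)
    return np.clip(diff, 0, None).astype(volume.dtype)

# Tiny example: a 2-plane, 2x2 "volume" with a constant background of 100.
vol = np.array([[[120, 90], [100, 250]],
                [[300, 100], [101, 99]]], dtype=np.uint16)
bg = np.full((2, 2), 100, dtype=np.uint16)
print(subtract_background(vol, bg))
```

Pixels darker than the background (e.g. 90 vs. 100) come out as 0 rather than wrapping to a huge unsigned value, which is the usual pitfall when subtracting uint16 images directly.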
Sample data for this tutorial, already background-subtracted (-90° Y-axis rotation), is available. There is also a bead dataset of unknown origin that can be used to test your MIPAV workflow.
Before describing the image processing, a couple of notes on file naming and organization are in order. These apply to MIPAV only. Instead of using image metadata, MIPAV uses the file names and user-input parameters to figure out where each image belongs in the overall experiment.
diSPIM images are originally written as image sequences (with each volume being equivalent to one image sequence). Image sequences are saved in individual folders, and need to be converted into a single .tiff stack before MIPAV can read them. Typically we use bgX (where X is a number) to name the folder containing the background image sequence, and bvX to name the folders containing the actual data.
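When collecting an image sequence into a single stack, the per-plane files must be taken in numeric order; a plain alphabetical sort puts `img_10.tif` before `img_2.tif`. The stacking itself is usually done in Fiji (Image ▶ Stacks ▶ Images to Stack), but a small sketch of the ordering step is below (the `img_*.tif` filename pattern is an assumption; adapt it to what your acquisition software writes):

```python
import re
from pathlib import Path

def sequence_files(folder):
    """Return the image files in a bvX folder in numeric plane order.
    Hypothetical helper: sorts on the number embedded in the filename
    instead of relying on lexicographic order."""
    def plane_number(p):
        m = re.search(r"(\d+)", p.stem)
        return int(m.group(1)) if m else -1
    return sorted(Path(folder).glob("*.tif"), key=plane_number)
```

The ordered file list would then be read plane by plane and written out as one multi-page .tiff stack per volume.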
The image sequences for each SPIM arm get their own separate folder. The naming of the SPIM arm folders depends on whether you are using Micro-Manager or LabVIEW. For LabVIEW-based systems, when facing the diSPIM the right-side camera is called SPIMB and writes to a folder we name SPIMB_unprocessed, while the left-side camera is called SPIMA and writes to a folder we name SPIMA_unprocessed. Micro-Manager uses the opposite camera naming: the right-side camera is named SPIMA and writes to a SPIMA_unprocessed folder, while the left-side camera is named SPIMB and writes to a SPIMB_unprocessed folder. Keep track of which naming system you are using, as this can affect the rotations done in MIPAV. (This convention can be changed in Micro-Manager without too much difficulty; ask the developers.)
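The two conventions can be summarized in a small lookup table; this is just an illustrative encoding of the naming rules above, not part of any MIPAV or Micro-Manager API:

```python
# Hypothetical lookup table for the camera-naming conventions above.
# Key: (acquisition software, camera side when facing the diSPIM).
ARM_FOLDER = {
    ("labview", "right"):      "SPIMB_unprocessed",
    ("labview", "left"):       "SPIMA_unprocessed",
    ("micromanager", "right"): "SPIMA_unprocessed",
    ("micromanager", "left"):  "SPIMB_unprocessed",
}

print(ARM_FOLDER[("micromanager", "right")])  # SPIMA_unprocessed
```

Note that the two software packages map the same physical camera to opposite folder names, which is exactly why the rotations in MIPAV depend on which system acquired the data.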
The SPIMA_unprocessed and SPIMB_unprocessed folders contain the individual image sequence folders, so the file path for an image sequence before processing is usually SPIMA_unprocessed/SPIMB_unprocessed → bvX → individual image files.
Processed images are saved in SPIMA and SPIMB folders. MIPAV assumes the input images are stored in folders named SPIMA and SPIMB, so it's easiest to follow this naming convention when saving the processed images.
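Putting the folder conventions together, the overall layout MIPAV expects can be sketched as a small helper that creates the skeleton for one experiment (the helper name and numbering volumes from 1 are assumptions of this sketch):

```python
from pathlib import Path

def expected_layout(root, n_volumes):
    """Create the skeleton folder layout described above: raw image
    sequences under SPIMA_unprocessed/SPIMB_unprocessed in bvX
    subfolders, and processed single-stack output under SPIMA/SPIMB.
    (Hypothetical helper for illustration only.)"""
    root = Path(root)
    for arm in ("SPIMA", "SPIMB"):
        # Folder for processed .tiff stacks, as MIPAV expects.
        (root / arm).mkdir(parents=True, exist_ok=True)
        # One bvX folder per raw image sequence (volume).
        for i in range(1, n_volumes + 1):
            (root / f"{arm}_unprocessed" / f"bv{i}").mkdir(parents=True, exist_ok=True)
    return root
```

A bgX folder for the background sequence would sit alongside the bvX folders in the same way.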
If you use the Micro-Manager plugin to acquire your datasets, you can use the plugin to export the dataset in the layout MIPAV expects. However, it is strongly recommended that you preserve the original Micro-Manager datasets, because they contain metadata that can help reconstruct your experiment. You should always keep a copy of the as-acquired data in case you need to redo any post-processing step, and exporting from the Micro-Manager dataset to MIPAV input format is the first such step.
Once the images have been cropped and background subtracted, they can be registered and deconvolved in MIPAV.