Experimenters' guide to colocalization studies: finding a way through indicators and quantifiers, in practice.

Multicolor fluorescence microscopy helps to define the local interplay of subcellular components in cell biological experiments. The analysis of spatial coincidence of two or more markers is a first step in investigating the potential interactions of molecular actors. Colocalization studies rely on image preprocessing and further analysis; however, they are limited by optical resolution. Once those limitations are taken into account, characterization might be performed. In this review, we discuss two types of parameters aimed at evaluating colocalization: indicators and quantifiers. Indicators evaluate signal coincidence over a predefined scale, while quantifiers provide an absolute measurement. As the image is both a collection of intensities and a collection of objects, both approaches are applicable. Most available image-processing software packages include various colocalization options; however, guidance on the choice of the appropriate method is rarely provided. In this review, we provide the reader with a basic description of the available colocalization approaches, proposing a guideline for their use, either alone or in combination.


INTRODUCTION
In cell biology, colocalization studies are performed to infer the coincidence of two or more signals within the volume of a sample. This event may occur in two ways: proteins of interest are locally present on the same structure, or their concentrations are locally linked. The diagnosis of coincidence is based on a representation of the biological sample by a set of images. The latter is formed through a microscope and its attached digitization device (detector). Since the resolution of an optical system is diffraction-limited, colocalization studies may also be impaired by this limited optical resolution. Thus, optical resolution is a reference that should always be clearly stated. In cell biology, the conclusion of a colocalization analysis is usually formulated as "two markers are at the same location." But the real conclusion of a colocalization analysis should rather be stated as "given the current resolution, it cannot be excluded that the two markers are at the same location." The term "colocalization" is thus often misused, as only a diagnosis of close vicinity can be stated with certainty.
The image formation process being the key point for setting the colocalization reference, care should be taken with both sample preparation and image acquisition. The former implies preserving the spatial distribution of the components to analyze and using appropriate markers, usually fluorescent, that are devoid of cross talk and bleedthrough (for review, refer to Bolte & Cordelières, 2006). Sample mounting is also a critical parameter. When dealing with 3D samples, the use of hardening mounting media may alter the thickness of cells and impair further analysis. As resolution depends on both the composite refractive index (RI) of the crossed media and the angular properties of the objective (numerical aperture, NA), immersion media and objectives should be chosen with care. For instance, when working on subcellular structures, the use of high-NA objectives is recommended, with immersion media that match the mounting medium RI. Finally, the imaging process is also crucial. As stated by the Shannon-Nyquist-Whittaker sampling theorem (Shannon, 1998), an element of resolution should always be sampled at least twice, which means in practice that a subdiffraction object should be represented by 3 × 3 (× 3) pixels (voxels).
Several strategies might be used to unravel colocalization occurrence in a set of images. One may want a global diagnosis, taking the image as a whole, or increase the level of granularity and rather work on objects. In the former case, two types of colocalization evaluation might be performed, extracting either indicators or quantifiers. When working on objects, only quantifiers have yet been proposed as colocalization evaluators. Two choices appear when working on a set of images: the experimenter might analyze intensities, expecting some link between their distributions, or investigate signal coincidence. There is a whole set of parameters one has to choose from. In this review, we present the most commonly used colocalization approaches, giving insight into their domains of application, while guiding the user in the choices and combinations that are offered.

TWO TYPES OF NUMERICAL VALUES TO EXTRACT: COLOCALIZATION INDICATORS AND COLOCALIZATION QUANTIFIERS
Colocalization analysis should always start with a close visual inspection of the image couple, and a simple channel overlay might be the starting point. However, the experimenter should stay critical, as the typical yellowish colocalization pattern might be achieved through exaggerated image processing. This first step might even be a final step when evident colocalization is present in the sample. However, care should be taken about cross talk and bleedthrough.
In the following lines, colocalization evaluation will be used as a generic term encompassing the use of both colocalization indicators and colocalization quantifiers. Colocalization indicators are numerical values that evaluate a degree of signal coincidence over a predefined scale, but are not suitable to calculate a precise amount of overlap. They are suitable for relative comparisons, without being usable for direct quantitative studies. Colocalization quantifiers provide the experimenter with an absolute value, quantifying the overlap of signals by using physical descriptors such as area or volume or by measuring distances between defined coordinates within the structures.

Legacy colocalization indicators and visualization methods
How should one evaluate the dependency between signals acquired for colocalization evaluation? Most basically, a linear relationship between the intensities of both channels can be assumed. Manders, Stap, Brakenhoff, van Driel, and Aten (1992) transposed to confocal images a classical visualization of flow cytometry data, the scatter plot: the intensity of a given pixel in the green image is used as the x-coordinate and the intensity of the corresponding pixel in the red image as the y-coordinate. The composed figure takes the shape of a dot cloud, its form revealing the colocalization state. In case of unambiguous colocalization, the cloud might be approximated by a line centered over it. The scatter plot might also display two separated populations of dots, close to each axis. In this case, two conclusions might be drawn: if these two clouds have a line shape, they might result from either cross excitation or cross detection of signals; alternatively, less structured clouds hint at unlinked signals. Finally, the scatter plot might also display additional dot arrangements: combinations of the former, multiple linear dependencies, and/or single or multiple nonlinear dependencies. Those arrangements increase the difficulty of the interpretation process. Under such circumstances, one should remember that the scatter plot representation omits spatial information. Therefore, it is crucial to go back to the overlay image, trying to identify regions where each single dependency (linear/nonlinear) occurs, and redo the analysis on identified regions of interest (ROIs).
To characterize the linear dependency of two signals, Manders et al. introduced the use of the Pearson correlation coefficient (PC) (Pearson, 1901) as a colocalization indicator in 1992 (Manders et al., 1992) (see Table 21.1). The PC characterizes the linear relationship between two variables, providing a value of 1 in case of complete positive correlation (colocalization), −1 for negative correlation (exclusion), and zero when no correlation is found. Although this scale seems comfortable, the extent of each extreme value remains to be determined: where should the limit be put between total positive correlation (PC = 1) and no correlation (PC = 0)? Midrange values will remain hard to interpret when taken alone. Therefore, a scatter plot should always accompany the PC. As previously shown (Bolte & Cordelières, 2006), a midrange value might correspond to either no correlation or correlation corrupted by noise. Depending on the shape of the dot clouds, the experimenter will be able to infer the former or the latter. In cases where several experimental conditions are to be compared, the PC might be used to show a difference of colocalization. However, the PC remains an indicator and may not be used to express the amount of colocalization within a sample. Attempts have been made to ease the interpretation of the PC. The overlap coefficient (see Table 21.1) has been designed by taking the mean intensity values out of the equation (Manders, Verbeek, & Aten, 1993). This results in a numerical value in the 0-1 range, where 0 corresponds to negative correlation, 0.5 to no correlation, and 1 to full correlation. This shift brings confusion into the interpretation: the experimenter may abusively refer to this [0, 1] value as a percentage of colocalization, which, of course, is not true.
Looking more closely at the overlap coefficient, one might distinguish two parts within the expression: one accounting for channel A, the other for channel B. Subsequently, two colocalization indicators were built, k1 and k2 (Manders et al., 1993; see Table 21.1). When a perfect correlation is present in the image couple, k1 tends to a value close to the normalized stoichiometry (a value in the 0-1 range) depending on the slope of the average line in the scatter plot, and k2 to 1/k1. A noise- and/or background-corrupted channel will result in an increase in the denominator, and the corresponding k coefficient will therefore decrease (Cordelières & Bolte, 2008). Evaluating the distortion between a k coefficient and its expected value under plain correlation might be an avenue worth pursuing, helping to infer a colocalization diagnosis in the case of noisy signals.
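To make these definitions concrete, the PC, the overlap coefficient, and k1/k2 can be sketched in a few lines of Python. This is a minimal numpy sketch (function names are ours); real analyses should run on background-corrected images:

```python
import numpy as np

def pearson(a, b):
    # Pearson correlation coefficient (PC): covariance normalized by
    # the standard deviations of both channels
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    da, db = a - a.mean(), b - b.mean()
    return (da * db).sum() / np.sqrt((da ** 2).sum() * (db ** 2).sum())

def overlap_and_k(a, b):
    # Overlap coefficient and its two components k1 and k2
    # (mean intensities are taken out of the equation)
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    ab = (a * b).sum()
    overlap = ab / np.sqrt((a ** 2).sum() * (b ** 2).sum())
    k1, k2 = ab / (a ** 2).sum(), ab / (b ** 2).sum()
    return overlap, k1, k2

# Toy image couple: channel B is exactly twice channel A
A = np.array([[1, 2], [3, 4]])
B = 2 * A
```

With B = 2A (slope of 2), both the PC and the overlap coefficient return 1, while k1 evaluates to the slope (2) and k2 to its inverse (0.5), illustrating that the k coefficients carry the stoichiometry while the PC does not.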
Correlation-based methods require a certain degree of dependency between signal intensities. Colocalization might be a more subtle phenomenon: two proteins might appear in a spatially correlated manner without implying specific stoichiometries. To assess the superimposition of distribution patterns, Manders et al. (1993) introduced the M1 and M2 coefficients, nowadays named after the author (see Table 21.1). Calculating M1 implies computing a ratio of intensities: the summed intensities of channel A pixels that find a non-null counterpart in channel B, divided by the total intensity of pixels in A. M2 is obtained the other way round. M1 and M2 express the percentage of fluorescence having a counterpart in the other channel; Manders coefficients are therefore to be considered colocalization quantifiers. Those coefficients were originally defined for use with confocal images, based on photomultiplier detectors, for which a detection offset has to be set. Technical innovations in detectors and imaging modalities have opened a wide range of applications where the minimum intensity in an image might not be null. As a consequence, the zero value is not always the appropriate value to distinguish nonpertinent from pertinent signal. Two new parameters have been derived from the Manders coefficients, the thresholded Manders coefficients (tM1 and tM2; see Table 21.1), belonging to the class of colocalization quantifiers. Thresholded Manders coefficients only consider intensity values above a user-defined threshold.
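The Manders coefficients and their thresholded variants follow directly from this definition. Below is a minimal numpy sketch (naming is ours, and we adopt one common thresholding convention; published implementations differ in details):

```python
import numpy as np

def manders(a, b, thr_a=0, thr_b=0):
    # M1: fraction of channel-A intensity found where channel B carries
    # signal; M2 is the converse. With the default zero thresholds these
    # are the original M1/M2; with user-defined thresholds, tM1/tM2.
    a, b = a.astype(float), b.astype(float)
    both = (a > thr_a) & (b > thr_b)
    m1 = a[both].sum() / a[a > thr_a].sum()
    m2 = b[both].sum() / b[b > thr_b].sum()
    return m1, m2

A = np.array([[0, 10], [20, 30]])
B = np.array([[5, 0], [7, 9]])
m1, m2 = manders(A, B)
```

In this toy couple, 50 of channel A's 60 intensity units find a counterpart in B (M1 ≈ 0.83), while 16 of B's 21 units find one in A (M2 ≈ 0.76).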

Legacy colocalization indicators and visualization methods, revisited
Interpreting the scatter plot might seem easy when dealing with a well-defined colocalization phenomenon. The experimenter might easily identify the contribution of background/noise as a shapeless cloud surrounding the scatter plot origin, and cross talk/bleedthrough as rather linearly shaped dot clouds located near the axes. It is, however, difficult to eliminate those unwanted contributions.
Simple thresholding of the images is one option when using Manders coefficients; however, it is user-dependent. Costes et al. (2004) proposed an automated thresholding method. It consists in iteratively lowering the threshold, starting at the highest intensity values for both channels, and calculating the PC while only taking into account values below these thresholds. When a null or negative PC value is found, the uncorrelated population (background/noise) has been delineated, and the process is stopped. Pertinent coefficients are then calculated from the intensities lying above both thresholds. As background, cross talk, and bleedthrough are hardly ever present as a well-defined line parallel to an axis, this procedure will only remove them partially. Gavrilovic and Wählby (2009) proposed an alternative to the scatter plot, in the form of a spectral angle representation. This histogram carries angles taken within the 0° to 90° range on its x-axis. Each couple of intensities (Ai, Bi) contributes to the histogram through the angle atan(Bi/Ai), assuming that channel A's intensities are plotted on the x-axis of the scatter plot, with a magnitude depending on the distance of the corresponding point to the origin. Corrections to the spectral angle histogram are made to account for the discrete nature of image intensities (please refer to the original paper for more details). From this representation, three populations of pixels might be extracted: two corresponding to cross talk/bleedthrough and the third to colocalization, from which legacy indicators and quantifiers might be calculated. This method has the advantage of defining populations in the scatter plot based on angles rather than intensity thresholds. It is also a method of choice when several stoichiometries link channel intensities, as multiple threshold angles might be extracted from the spectral angle histogram.
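As an illustration, the idea behind Costes' threshold search can be sketched as follows. This is a deliberately simplified sketch, not a faithful reimplementation (numpy assumed, function names ours): channel B is regressed on channel A, and the channel-A threshold is lowered from the maximum until the sub-threshold population becomes uncorrelated.

```python
import numpy as np

def pearson(a, b):
    da, db = a - a.mean(), b - b.mean()
    denom = np.sqrt((da ** 2).sum() * (db ** 2).sum())
    return (da * db).sum() / denom if denom > 0 else 0.0

def costes_thresholds(a, b):
    # Regress B on A, then lower the channel-A threshold from the
    # maximum until the PC of the sub-threshold (presumably
    # uncorrelated) population drops to zero or below
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    da = a - a.mean()
    slope = (da * (b - b.mean())).sum() / (da ** 2).sum()
    intercept = b.mean() - slope * a.mean()
    for t in np.sort(np.unique(a))[::-1]:
        below = (a < t) & (b < slope * t + intercept)
        if below.sum() > 2 and pearson(a[below], b[below]) <= 0:
            return t, slope * t + intercept
    return a.min(), b.min()

# Synthetic data: an anticorrelated dim background plus a correlated
# bright population starting at intensity 50
bg_a, bg_b = np.arange(10.0), np.arange(10.0)[::-1]
sig = np.arange(50.0, 60.0)
ta, tb = costes_thresholds(np.concatenate([bg_a, sig]),
                           np.concatenate([bg_b, sig]))
```

On this synthetic couple, the search stops at ta = 50, the lowest intensity of the correlated population; pixels above the returned thresholds would then feed the Manders coefficients.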
An additional issue lies in the interpretation of indicator/quantifier values, especially when dealing with the PC. Although this process is rather simple when comparing several experimental conditions, care should be taken if only one situation is to be evaluated. Two descriptive methods might be used to help interpret the data: Van Steensel's method and Costes' randomization.
Van Steensel et al. (1996) proposed the use of a cross correlation function in colocalization studies. It consists in calculating the PC while applying a pixel shift to one of the two channels. As a consequence, the colocalization contribution should disappear as the pixel shift is applied: the higher the pixel shift, the lower the PC. Plotting the PC as a function of the pixel shift results in a bell-like curve when colocalization occurs, a rather flat line when no colocalization is present, and a pit in case of signal exclusion. Van Steensel's method is a graphical way to assess colocalization for low PC values (close to zero). It also provides a means to quantify, and further correct for, chromatic aberration, since in that case the maximum of the bell-shaped curve will not fall at zero pixel shift.
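The cross correlation function is straightforward to sketch (numpy assumed; here the shift is horizontal only, and edge columns are cropped so that the wrapped-around pixels introduced by np.roll never enter the comparison):

```python
import numpy as np

def pearson(a, b):
    da, db = a - a.mean(), b - b.mean()
    return (da * db).sum() / np.sqrt((da ** 2).sum() * (db ** 2).sum())

def cross_correlation(a, b, max_shift=4):
    # Van Steensel's CCF: PC as a function of the pixel shift applied
    # to channel B; a bell centered off zero betrays a chromatic shift
    crop = slice(max_shift, a.shape[1] - max_shift)
    return {s: pearson(a[:, crop].astype(float),
                       np.roll(b, s, axis=1)[:, crop].astype(float))
            for s in range(-max_shift, max_shift + 1)}

# Channel B is channel A displaced by 2 pixels (mimicking chromatic shift)
A = np.zeros((6, 20))
A[:, 8:10] = 10.0
B = np.roll(A, 2, axis=1)
ccf = cross_correlation(A, B)
best_shift = max(ccf, key=ccf.get)
```

Here the curve peaks at best_shift = -2, quantifying the 2-pixel displacement that should be corrected before any further analysis.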
Costes' randomization (Costes et al., 2004) is a process based on comparing the PC of the original image couple with the PC between a randomized image of channel A and the original image of channel B. Randomization of the channel A image is done by shuffling pixel positions. Repeating the process allows collecting a distribution of PCs that accounts for colocalization events occurring by chance. Comparing the original PC value, obtained from the nonrandomized channels, to this distribution helps to position the current situation relative to a random colocalization event.
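The randomization test itself reduces to a short loop. In this minimal numpy sketch (naming ours) single pixels are shuffled; real implementations often shuffle PSF-sized blocks instead, a refinement omitted here:

```python
import numpy as np

def pearson(a, b):
    da, db = a - a.mean(), b - b.mean()
    return (da * db).sum() / np.sqrt((da ** 2).sum() * (db ** 2).sum())

def costes_randomization(a, b, rounds=200, seed=0):
    # Shuffle channel-A pixel positions and recompute the PC each round;
    # the P-value is the fraction of random PCs below the observed one
    rng = np.random.default_rng(seed)
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    observed = pearson(a, b)
    random_pcs = [pearson(rng.permutation(a), b) for _ in range(rounds)]
    return observed, float(np.mean(np.array(random_pcs) < observed))

# Perfectly correlated toy channels: the observed PC should beat
# virtually every randomized round
intensities = np.arange(100.0)
observed_pc, p_value = costes_randomization(intensities, intensities)
```

A P-value above 0.95 supports genuine colocalization; here the observed PC of 1 exceeds the shuffled values.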
Finally, in spite of those methods, some experimental situations are simply not compatible with the use of the PC. The PC requires the two signals to be linearly linked, which may not always be the case. When a monotonic relation links the intensities, either increasing or decreasing, an additional colocalization indicator might be used: Spearman's coefficient (SC) (French, Mills, Swarup, Bennett, & Pridmore, 2008; Spearman, 1904). Rather than working on raw intensities, a ranking is first made. To exemplify, let's take a group of intensities: {0, 1, 2, 4, 8, 16}. Ranking these intensities converts the former array of values into the following: {0, 1, 2, 3, 4, 5}. While the original data were unevenly distributed, the transformed data are well ordered, all ranks being evenly spaced. This process results in a linearization of the nonlinear dataset. The PC calculated on the ranks is called the SC; its value lies in the [−1, 1] range and its interpretation is similar to that of the PC.
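Rank transformation followed by the PC gives the SC; a minimal numpy sketch (naming ours; ties are not averaged here, which a full implementation should do):

```python
import numpy as np

def pearson(a, b):
    da, db = a - a.mean(), b - b.mean()
    return (da * db).sum() / np.sqrt((da ** 2).sum() * (db ** 2).sum())

def ranks(values):
    # Replace each intensity by its rank; {0, 1, 2, 4, 8, 16} becomes
    # {0, 1, 2, 3, 4, 5} (no averaging of tied values)
    order = np.argsort(values)
    r = np.empty(values.size, dtype=float)
    r[order] = np.arange(values.size)
    return r

def spearman(a, b):
    # Spearman's coefficient: the PC computed on ranked intensities
    return pearson(ranks(a.ravel()), ranks(b.ravel()))

a = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
b = a ** 2  # monotonic but nonlinear link between channels
```

spearman(a, b) returns 1 for this monotonic relation, whereas pearson(a, b) stays below 1 because the link is not linear.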

Which strategy to adopt?
As we have seen, a plethora of tools exists for pixel intensity-based colocalization methods. While most experimenters would usually pick only one tool, several of them can be combined to describe the signal interplay.
First, as a preliminary step, a colocalization study should always start by overlaying channels. The resulting image gives a first insight into signal coincidence. It also helps define ROIs, to which the analysis might later be restricted. Chromatic aberration-free images being a prerequisite to proper colocalization evaluation, the overlay is also a tool to detect such a phenomenon. Van Steensel's cross correlation approach helps quantify it, as the position of its extremum (maximum in case of colocalization, minimum in case of exclusion) generally corresponds to the chromatic shift.
Once images have been corrected for chromatic aberration, a scatter plot should be built. Its analysis will be useful in defining the contribution of cross talk/bleedthrough and background/noise to pertinent signal. Two strategies are to be applied: The first consists in removing both contributions by applying thresholds to channels A and B. This step might be achieved either manually or using automated Costes' threshold. Another approach is to perform spectral angle representation and use the histogram segmentation method proposed by Gavrilovic and Wählby (2009).
Signals now being free of parasitic contributions, the colocalization evaluation process can be performed. Two kinds of tests might be carried out: retrieval of colocalization indicators and of quantifiers. The former are to be used to prove a monotonic relationship between the intensities of both channels, the latter to quantify the amount of overlapping signals.
Being the oldest indicator in use, the PC is usually the first to be calculated. This is only appropriate when a linear relationship links the intensities of both channels. Alternatively, in case the connection is not linear but still monotonic, the use of the SC should be preferred. In both cases, care should be taken with their interpretation. While low, null, and high values permit straightforward conclusions (exclusion, no correlation, and colocalization, respectively), midrange values are ambiguous. Going back to the scatter plot helps, as the spread of the cloud distribution is an indication of noise-corrupted signal. This can easily be assessed by calculating the two components of the overlap coefficient: k1 and k2. Both values will drift from the ideal a and 1/a values, a being the slope of the regression line best describing the dot cloud. Alternatively, one may identify more than one dot cloud on the scatter plot, which would lead to similar issues with the PC/SC, k1, and k2. This situation might be circumvented by isolating individual dot populations and performing the former tests on individual groups of intensities.
Once those observations and this partitioning have been performed, the experimenter may want to investigate the relevance of colocalization indicators in the current experimental situation. When comparing several conditions, the PC/SC of each might be compared using regular statistical tests (see the recent review by McDonald & Dunn, 2013). When only one experimental situation is to be diagnosed for colocalization, the task might seem harder to pursue. Using Costes' randomization is then the best available choice, as a reference dataset is generated from the actual content of both input images, one of them being shuffled. When performing this approach, care should be taken with two parameters: the size of the ROI, relative to the extent of the pertinent signal, and the number of randomizations to be performed. Performing the analysis on an ROI where the number of pixels belonging to objects is low, as compared to the number of nonobject pixels, will lead to a distribution of PCs expected to be centered around 0 with a rather low full width at half maximum (FWHM). In this situation, the PC calculated on the original couple of images is likely to fall well outside this distribution. Conversely, an ROI devoid of nonobject pixels will give a larger FWHM, intensities within objects likely being correlated. Under those circumstances, the original PC will fall into the distribution, and an erroneous conclusion of "random" colocalization might be drawn. Therefore, when performing such an approach, it is advisable to consider an ROI where approximately 50% of the pixels belong to structures. The number of randomization rounds is also of main concern. A low number of randomizations will end up in a sharp bell-shaped curve, centered on the most probable PC encountered in random colocalization situations, omitting the rarer events located at both extremes of the distribution. In their original paper, Costes et al. (2004) used 200 rounds of randomization in their tests. The final output of Costes' method is a P-value (to be differentiated from the statistical test output, p-value). It corresponds to the area under the distribution of PCs from randomized images, from its minimum to the intercept with the original PC value. Colocalization is considered true when the P-value is above 95%, meaning that for 200 rounds of randomization, the original PC value or higher is found in fewer than 10 cases.
Among legacy colocalization quantifiers lie the Manders coefficients. M1 and M2 should be used to estimate the amount of colocalization between two channels, that is, when proper quantitative values have to be extracted. The first step here is to correct for potential chromatic aberration using Van Steensel's method. To get rid of cross talk/bleedthrough and noise, either manual thresholding or Costes' method might be applied, or alternatively the spectral angle histogram representation, as previously stated. Image processing might also be applied to isolate the pertinent objects from the image: denoising, wavelet transformation, texture-based analysis, etc., might be performed to segment the image. Once information-containing regions have been extracted, the analysis is to be performed on the original image: binary masks may be generated from the processed images and applied to the original images. Manders et al. had not proposed these kinds of processing steps in their original paper but rather applied a simple threshold before calculating tM1 and tM2 (M1 and M2 in their thresholded form). Once cleanup has been performed on both images, both coefficients are calculated on the remaining pixel intensities. While Manders coefficients give a direct readout of the amount of signal engaged in the colocalization process, this method requires an early determination of the thresholds partitioning the intensities belonging to objects from those lying in the background. When comparing several sets of conditions, this process should first be applied to the images where colocalization is supposed to be most present, assuming all sets of images have been acquired under the same conditions. Once determined, those parameters should be left untouched and applied to all remaining images within the dataset.

Working on objects
Manders coefficients are colocalization quantifiers calculated using only pixels belonging to objects. While this approach is object-based, the calculations are made at a different level of granularity, pixel-wise. Those quantifiers are therefore global, and more advanced diagnostics might be performed using a lower level of granularity, object-wise.
Two strategies might be used when dealing with problematic colocalization: considering the image couple as a container for intensity couples, which may or may not be spatially related, or, alternatively, as a container for a population of object couples. The latter opens the possibility of evaluating the interplay between signals locally. The methods to be applied are not restricted to colocalization evaluation but can also be used to investigate parameters such as proximity, apposition, and overlap in an object-to-object way.

Grouping pixels into objects: Image segmentation
Images are composed of discrete elements resulting from the sampling of the original scene, namely, the biological sample, seen through a detector and by the use of fluorescent molecules. Each pixel carries an intensity that corresponds to the local concentration of the probes. Fluorescence emission and its digitization are noise-creating processes, and the experimenter has to deal with those parasitic contributions to the signal. Generating an image through a microscope is also not a faithful process. The view of a biological sample presented by an image depends on the instrument response function, which takes the form of a point-spread function (PSF). The image of a subresolution object will therefore be a 3D hourglass shape, whose minimum width is the actual resolution of the optical system. Properly isolating the objects from the image will imply taking into account all those contributions to the image formation process.
Two types of noise impair the image, Poisson and Gaussian noise, resulting from both the stochastic nature of the fluorescence process and the electronics used to collect photons and to digitize the signal. A good object delineation might be hard to achieve when dealing with low signal-to-noise ratio (SNR) images, for instance, in video-microscopy experiments. Several denoising tools have been released that use both local and temporal estimations of noise to enhance the original signal (Boulanger et al., 2010; Luisier, Vonesch, Blu, & Unser, 2010). Additionally, image restoration algorithms may be applied to account for and revert the instrument response function. Deconvolution is a process aimed at bringing the spread-out signal back to its origin, using the PSF. It will enhance the SNR and ease the object delineation process (for review, see Sibarita, 2005). Recent work by Paul, Cardinale, and Sbalzarini (2013) proposed to do both deconvolution and image segmentation concomitantly, a strategy that may also be applied to images before object-based colocalization.
Once image enhancement has been performed, easy object delineation steps might be achieved. Wavelet transformation, among other methods (for a comparative review, see Ruusuvuori et al., 2010), can be used to isolate objects of specific shape and size.
The final steps of object isolation are achieved through thresholding and connectivity analysis of the resulting image. Pixels are first separated into two populations based on their intensities, namely, nonobject and object pixels. Then, spatially juxtaposed object pixels are grouped to delineate the objects.
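These two steps, thresholding then connectivity analysis, can be sketched with a simple flood fill (pure Python/numpy; real pipelines would rather call a library routine such as scipy.ndimage.label):

```python
import numpy as np
from collections import deque

def label_objects(image, threshold):
    # Threshold, then group 4-connected above-threshold pixels into
    # numbered objects via breadth-first flood fill
    mask = image > threshold
    labels = np.zeros(image.shape, dtype=int)
    n = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # pixel already assigned to an object
        n += 1
        labels[seed] = n
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = n
                    queue.append((ny, nx))
    return labels, n

img = np.array([[9, 9, 0, 0],
                [9, 0, 0, 8],
                [0, 0, 8, 8]])
labels, n_objects = label_objects(img, threshold=5)
```

The toy image yields n_objects = 2: the three 9s form one object and the three 8s another, each carrying its own label in the output image.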
Those preprocessing steps alter the original signal and thus have to be performed with care. However, they are crucial for faithful object delineation. Depending on whether the analysis bears on object geometry or on intensity, the resulting images will have to be used raw or as masks through which the original intensities are read, respectively.

Colocalization quantifiers based on object overlaps
Connectivity analysis is a means to combine pixels carrying relevant intensities (i.e., those lying above the threshold) into objects and to subsequently retrieve pertinent information from them. This information may include the measurement of the area in two dimensions, the volume in three dimensions, the perimeter, and centers. Centers can be considered either geometric, only taking into account the positions of pixels belonging to objects, or intensity-based (center of mass, or barycenter). Using those descriptors, new colocalization quantifiers might be built.
Objects larger than the optical resolution will always appear as covering a surface larger than the expected 3 × 3 pixel area, when optimal sampling has been performed. A straightforward analysis is to measure the overlap surface existing between the two channels for each object. In a way, this approach is similar to the Manders coefficient method but differs from it as it relies on physical overlap rather than on intensity overlap. A Manders coefficient can also be calculated object per object, determining the proportion of signal involved in colocalization.
The proposed approaches rely on local analysis as well as global analysis. They allow investigating the region-per-region coincidence of signals or, more generally, the physical overlap.
However, when dealing with size heterogeneity of structures between the two channels, physical or intensity-wise overlaps are hard to interpret. For instance, taking images presenting large objects in one channel and resolution-limited objects in the other, the measures will be low in the former and close to 100% in the latter. There are two ways to proceed: The first consists in setting a threshold of overlap above which structures are considered colocalized. Two ratios are then calculated, expressing the percentage of objects in channel A that overlap, physically or intensity-wise, with an object from channel B, and vice versa. Here, the thresholding step relies only on user input, which might be biased by a priori knowledge of the expected results. A second option consists in plotting a histogram of the degrees of overlap, to help tune the limit above which two objects are considered colocalized.
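Working from labeled channel-A objects and a mask of channel B, both the per-object overlap degrees and the thresholded ratio can be computed in a few lines (numpy assumed; naming is ours):

```python
import numpy as np

def overlap_fractions(labels_a, mask_b):
    # For each channel-A object, the fraction of its area that overlaps
    # the channel-B mask (physical, i.e., area-based, overlap)
    return {obj: (np.logical_and(labels_a == obj, mask_b).sum()
                  / (labels_a == obj).sum())
            for obj in range(1, labels_a.max() + 1)}

def colocalized_ratio(labels_a, mask_b, min_overlap=0.5):
    # Percentage of channel-A objects whose overlap degree reaches the
    # user-chosen limit above which objects are declared colocalized
    fracs = overlap_fractions(labels_a, mask_b)
    return 100.0 * sum(f >= min_overlap for f in fracs.values()) / len(fracs)

labels_a = np.array([[1, 1, 0],
                     [0, 0, 2],
                     [0, 0, 2]])
mask_b = np.array([[1, 0, 0],
                   [0, 0, 1],
                   [0, 0, 1]], dtype=bool)
fracs = overlap_fractions(labels_a, mask_b)
```

Object 1 overlaps mask_b over half of its area and object 2 entirely; raising min_overlap from 0.5 to 0.75 drops the colocalized ratio from 100% to 50%, which is exactly the sensitivity the overlap histogram helps to control.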
Alternatively, resolution-limited objects from channel A might be reduced to their centers. In homogeneously labeled structures, both centers of mass and geometric centers should fall on the same pixel. Colocalization quantifiers are then calculated by counting the number of centers from channel A that fall onto the surface/volume of objects from channel B. This approach was proposed by Lachmanovich et al. (2003). The reduction of an object to its center is only allowed when its dimensions are close to the optical resolution. This is also true for the barycenter, when intensities are evenly distributed within the object. Therefore, using one or the other parameter will depend on both the area of the object and the intensity distribution within it.
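This Lachmanovich-style quantifier amounts to computing each channel-A center and testing whether it lands on a channel-B object. A minimal numpy sketch (names are ours; centers are rounded to the nearest pixel):

```python
import numpy as np

def barycenters(labels, intensities):
    # Intensity-weighted center (barycenter) of each labeled object;
    # with uniform intensities this equals the geometric center
    centers = []
    for obj in range(1, labels.max() + 1):
        ys, xs = np.nonzero(labels == obj)
        w = intensities[ys, xs].astype(float)
        centers.append((np.average(ys, weights=w), np.average(xs, weights=w)))
    return centers

def centers_on_mask(centers, mask):
    # Count channel-A centers falling onto the channel-B objects
    return int(sum(mask[int(round(y)), int(round(x))] for y, x in centers))

labels_a = np.array([[1, 1, 0],
                     [0, 0, 0],
                     [0, 2, 2]])
intensity_a = np.ones((3, 3))
mask_b = np.zeros((3, 3), dtype=bool)
mask_b[0, :] = True  # channel-B structure along the top row
hits = centers_on_mask(barycenters(labels_a, intensity_a), mask_b)
```

Only object 1's center falls on the channel-B mask, so hits equals 1; dividing by the number of channel-A objects gives the quantifier.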

Colocalization quantifiers based on object distances
The previously described methods were based on the search for an overlap between relevant object descriptors. However, due to the combination of the limit of resolution and the sampling performed by the imaging process, two point-shaped structures will each appear as blocks of 3 × 3 pixels. In this situation, colocalizing objects may fall onto pixels as distant as 3 pixels. This imprecision of localization may result in an underestimation of colocalization when only looking for a true, precise overlap. The alternative is to work on distances rather than on coincidences (Cordelières & Bolte, 2008; Lachmanovich et al., 2003).
Knowing all the parameters that can be extracted from objects, several choices are offered to quantify colocalization. Distances might be calculated between the envelopes of objects from both channels, considering that colocalization occurs when two surfaces/volumes are less distant than the optical resolution. The same rule may be applied to distances between the centers of objects from both channels. Direct quantification is achieved by counting the number of colocalization events and dividing it by the overall number of objects in one channel.
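A center-to-center distance quantifier can be sketched as follows (numpy assumed; here "colocalized" means the nearest channel-B center lies within the optical resolution, expressed in pixels):

```python
import numpy as np

def coloc_by_distance(centers_a, centers_b, resolution):
    # Nearest-neighbor distances from each channel-A center to the
    # channel-B centers; events closer than the resolution count as
    # colocalization
    ca = np.asarray(centers_a, dtype=float)
    cb = np.asarray(centers_b, dtype=float)
    # pairwise Euclidean distances, shape (len(ca), len(cb))
    d = np.sqrt(((ca[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2))
    nearest = d.min(axis=1)
    return int((nearest <= resolution).sum()), nearest

centers_a = [(10.0, 10.0), (40.0, 40.0)]
centers_b = [(11.0, 10.5), (80.0, 80.0)]
n_coloc, nearest = coloc_by_distance(centers_a, centers_b, resolution=2.0)
```

One of the two channel-A centers has a neighbor within 2 pixels, so the quantifier is 1/2; the nearest array is what would feed a distance histogram.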
More subtle characterization may be performed: a histogram of all distances can be plotted. The analysis of this representation may help in distinguishing several populations. Beyond a binary analysis, distance histograms may help to reveal colocalization (distances below the optical resolution), apposition (distances higher than but close to the optical resolution), or non-colocalization events. When considering an experimental situation where colocalization varies upon the addition of a drug and/or as a function of time, comparing the histograms directly visualizes the evolution of the three aforementioned populations.
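The three populations can be counted directly from the pooled distances. In this sketch, the cut-off separating apposition from non-colocalization (twice the resolution) is a hypothetical choice that should be adapted to the structures under study:

```python
import numpy as np

def classify_distances(distances, resolution, apposition_factor=2.0):
    """Split distances into three populations: colocalized
    (d <= resolution), apposed (resolution < d <= apposition_factor *
    resolution; the factor is an assumed cut-off), and distinct (the rest).
    """
    d = np.asarray(distances, float)
    coloc = np.count_nonzero(d <= resolution)
    apposed = np.count_nonzero(
        (d > resolution) & (d <= apposition_factor * resolution)
    )
    distinct = d.size - coloc - apposed
    return {"colocalized": coloc, "apposed": apposed, "distinct": distinct}
```

Applied to distances recorded before and after drug addition, or over a time course, the three counts trace how the populations evolve.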

Which strategy to adopt?
A plethora of parameters can be extracted from objects after the segmentation process, and this might impede the choice of the appropriate colocalization method. The choice has to be made depending on three factors: the size of the structures, their shapes, and their relative distribution between channels A and B.
When the objects have a size close to the optical resolution, the easiest approach is to use their centers. The small number of pixels makes the two types of centers (of mass and geometric) equivalent. Objects larger than the optical resolution have to be used as a whole: their surface, if working in 2D, or their volume, if working in 3D, has to be used. Colocalization methods will then depend on the parameters extracted from both channels. In the center-center case, evaluation has to be done on distances. In the center-area/volume case, the experimenter should rather investigate whether the former falls within the latter. Finally, in the area/volume-area/volume case, a measure of the overlap may be performed. This measurement may be either geometry-based or intensity-based; the choice should be made in view of the phenomenon to study and the signal distribution within objects. When the intensity is homogeneous within the structure, both methods will end up with similar results. Combining measures of the overlap of surfaces/volumes and the coincidence of signal might be used to infer a non-homogeneous distribution of the markers within the structures.
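Both flavors of overlap measurement can be sketched as follows; `overlap_fractions` is an illustrative name, and the intensity-based value is a Manders-like fraction. A divergence between the two values hints at a non-homogeneous marker distribution, as discussed above:

```python
import numpy as np

def overlap_fractions(mask_a, mask_b, img_a=None):
    """Overlap of channel-A objects with channel B, measured two ways.

    Geometry-based : fraction of A pixels also covered by B.
    Intensity-based: fraction of A intensity found inside B
                     (a Manders-like fraction), computed when an
                     intensity image `img_a` is supplied.
    """
    inter = mask_a & mask_b
    geometric = inter.sum() / mask_a.sum()
    if img_a is None:
        # Without intensities, only the geometric measure is available;
        # for a homogeneous signal both measures coincide anyway.
        return geometric, geometric
    intensity = img_a[inter].sum() / img_a[mask_a].sum()
    return geometric, intensity
```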
Additional analyses are accessible from those parameters. For instance, when two vesicular markers are to be analyzed, one being the content and the other an outer labeling, apposition of the two signals may be evaluated in a two-step process. First, using the center-center approach, the experimenter will measure distances close to the optical resolution. As a second step, the overlap measurements between areas/volumes will display a value close to zero. This is the typically expected result when working on round structures surrounded by a donut-shaped signal. When the structures are elongated rather than round-shaped, for instance when working on pre-/postsynaptic markers, apposition might be revealed by considering the minimum distances between the envelopes of objects from both channels.
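For round structures, the two-step diagnostic described above can be condensed into a small helper; the 10% overlap cut-off is an assumption, not a published threshold:

```python
import numpy as np
from scipy import ndimage

def apposition_test(mask_a, mask_b, resolution, max_overlap=0.1):
    """Two-step apposition diagnostic for a single object pair.

    Step 1: centers of mass closer than the optical resolution.
    Step 2: negligible overlap of the areas/volumes (below `max_overlap`,
            an assumed cut-off), as expected for a round structure
            surrounded by a donut-shaped outer labeling.
    """
    ca = np.array(ndimage.center_of_mass(mask_a))
    cb = np.array(ndimage.center_of_mass(mask_b))
    close = np.linalg.norm(ca - cb) <= resolution
    overlap = (mask_a & mask_b).sum() / min(mask_a.sum(), mask_b.sum())
    return close and overlap < max_overlap
```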

CONCLUSION
Colocalization studies are usually performed using the most widely available methods, namely, by calculating the Pearson (PC) or Manders coefficients. However, these are not generic methods, and both have domains of application that should be respected. Alternative methods should be investigated, as they either are more appropriate or might give insight into the interpretation of the legacy indicators and quantifiers. All the tools that have been made available were designed to answer a specific biological problem. A more general approach should therefore consist in first verifying that published tools are applicable to the experimenter's problem and, alternatively, trying to design appropriate indicators/quantifiers relevant to the field of investigation.
The development of superresolution microscopy techniques is now breaking the resolution limit. Previously published colocalization studies might soon have to be revisited in light of the improved performance of optical systems. The output of structured illumination and STED microscopies is still images, to which legacy methods might be applied. However, pointillist techniques generate not only images but also localization data characterized by a measurement uncertainty, making the resolution uneven across the data. The most straightforward strategy is to apply generic methods to images reconstructed from the localization data. Raw data, however, have to be analyzed as distance maps, taking the localization imprecision into account. In this matter, little effort has been made so far to find good indicators/quantifiers, which definitely opens new fields of investigation for colocalization specialists.