IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO.

12, DECEMBER 2008

2289

A Multiresolution Stochastic Level Set Method for Mumford–Shah Image Segmentation
Yan Nei Law, Hwee Kuan Lee, and Andy M. Yip
Abstract—The Mumford–Shah model is one of the most successful image segmentation models. However, existing algorithms for the model are often very sensitive to the choice of the initial guess. To make use of the model effectively, it is essential to develop an algorithm which can compute a global or near global optimal solution efficiently. While gradient descent based methods are well-known to find a local minimum only, even many stochastic methods do not provide a practical solution to this problem either. In this paper, we consider the computation of a global minimum of the multiphase piecewise constant Mumford–Shah model. We propose a hybrid approach which combines gradient based and stochastic optimization methods to resolve the problem of sensitivity to the initial guess. At the heart of our algorithm is a well-designed basin hopping scheme which uses global updates to escape from local traps in a way that is much more effective than standard stochastic methods. In our experiments, a very high-quality solution is obtained within a few stochastic hops whereas the solutions obtained with simulated annealing are incomparable even after thousands of steps. We also propose a multiresolution approach to reduce the computational cost and enhance the search for a global minimum. Furthermore, we derived a simple but useful theoretical result relating solutions at different spatial resolutions. Index Terms—Basin hopping, global optimization, image segmentation, Mumford–Shah model, region splitting and merging, stochastic level set method.

I. INTRODUCTION MAGE segmentation is indispensable in many applications since it facilitates the extraction of information and interpretation of image contents. For instance, in high throughput imaging, storage and organization of a large number of images according to the extracted information are required. Therefore, it is crucial to segment each image into meaningful partitions in an accurate, fast, automated and robust way. The segmentation problem is fundamental to image processing. However, it is a difficult one since it is highly ill-posed. While standard segmentation methods may work reasonably well when the images have simple contents and are of high quality, they perform poorly on practical images which are often

I

nonideal. For example, the nuclei of the breast cancer cells in Fig. 4 have diverse intensity levels. Thus, simple thresholding or even adaptive thresholding [1] may miss many nuclei. In the brain MRI image shown in Fig. 11, the various matters and tumor have no sharp defining boundary. Most edge-based segmentation methods do not work. In the zebrafish image shown in Fig. 12, the proliferating-cell nuclear antigen (PCNA) clusters (black dots around the intervillus pockets) consist of many clusters of cells. Most algorithms will treat spatially separated clusters as individuals. However, sometimes it may be more useful to perceive them collectively as a region. Similarly, the villi also appear to be formed by many small white patches. This can make the recognition of the PCNA and villi much more difficult. The high noise level also poses additional difficulty which would fail methods like thresholding and watershed [1]. When these complexities can not be reduced by repeating or redesigning the experimental setup, sophisticated computational methods provide valuable alternatives. There are many advanced approaches for image segmentation: clustering, histogram-based, region-growing, graph partitioning and optimization model-based. A recent survey in segmentation can be found in [2]. Among the various advanced segmentation approaches, optimization model-based methods often give very promising results, provided that the underlying optimization problem is solved accurately numerically. An advantage of this approach is that one can incorporate domain knowledge into the model explicitly through rigorous mathematical modeling. A. Mumford–Shah Segmentation Model In this paper, we consider the piecewise constant Mumford–Shah segmentation model [3] which is one of the best models in segmentation. It can handle gracefully the aforementioned complex situations and is very robust to noise. A distinctive feature is that it is region-based which allows it to segment objects without edges. For the same reason, it can group a cluster of smaller objects into a larger object. For a given image , the piecewise constant Mumford–Shah model seeks for a set of curves and a set of constants which minimize an energy functional given by

Manuscript received October 01, 2007; manuscript revised June 09, 2008. Current version published November 12, 2008. A. M. Yip was supported by the National University of Singapore under Grants R-146-050-079-101/133 and R-146-000-116-112. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Gaudenz Danuser. A. M. Yip is with the Department of Mathematics, National University of Singapore, Singapore (e-mail: andyyip@nus.edu.sg). Y. N. Law and H. K. Lee are with the Bioinformatics Institute, Singapore (e-mail: lawyn@bii.a-star.edu.sg; leehk @bii.a-star.edu.sg). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIP.2008.2005823

(1) The curves in partition the image into mutually exclusive for . The idea is to partition the segments in each segment is well-apimage so that the intensity of proximated by a constant . The goodness-of-fit is measured by the fitting term . On the other hand,

1057-7149/$25.00 © 2008 IEEE

they have not been utilized to solving the Mumford–Shah image segmentation problem. a new solution is obtained by moving the current solution along a descent direction in which the energy is decreasing [8]. The paramcontrols the trade-off between the goodness-of-fit eter and the length of the curves . it is essentially the greedy pixel flipping algorithm in [20]. To remedy the weaknesses of simulated annealing with local updates. Simply put. However. we use a multiresolution method. A similar hybrid approach has been proposed for solving the shape-from-shading problem [18]. it still does not provide a practical solution to our problem. VOL. see the section Related Work. the solution obtained can be far from any globally optimal solution. many advanced MC methods such as generalized ensembles [12]. For nonconvex energy functionals. A breakthrough in solving the Mumford–Shah optimization problem is achieved by Chan and Vese [5]. reweighting [14]. For this. which is usually unknown in many imaging applications. These methods are very efficient to compute a local optimal solution of the Mumford–Shah energy. a major class of stochastic methods. The split-and-merge step is the crucial element in our algorithm. As . DECEMBER 2008 a minimum description length principle is employed which requires the curves to be as short as possible. they are often suitable only for problems of very small size due to their high complexity. However. the membership of a small group of pixels is updated at a time. provide viable alternatives. It is a kind of basin hopping method [17] which uses the gradient information of the energy function to make the best local move and uses a stochastic split-and-merge method to make a global move towards a global solution. see a result. Thus. They show that for in (1) and for fixed constants . we go beyond it by using the more sophisticated basin hopping method. 10. tremendously different solutions may be obtained when different initial solutions are used. [15]. cluster algorithms [16]. Other greedy schemes are considered in [20]. To the best of the authors’ knowledge. one can easily construct examples where the solution is not unique. C. It can be easily shown that for each fixed . a gradient descent method based on topological derivatives is proposed. A different approach is proposed in [23] which reduces the optimization problem to a series of simple thresholding and filtering steps. Monte Carlo (MC) methods. They propose to use the level set method introduced by Osher and Sethian [7] to represent the curves by the zeroth level of a set of functions defined on the image domain. The problem then becomes a standard variational problem which is much easier to handle. A key issue is to ensure that the solution at a lower spatial resolution indeed gives a good approximation to the original problem. Moreover. The annealing theorem says that such a method can converge to a global solution in the long run [11]. and basin hopping [17] have been developed over the past couple of decades to handle a variety of problems in condensed matter physics that cannot be solved efficiently using traditional simulated annealing. [13]. When it is nonconvex. In [4]. which is the case of the Mumford–Shah model. However. we propose a hybrid approach. 17. However. an elliptic approximation by -convergence to the weak formulation of the Mumford–Shah model has been proposed. we derived an important theoretical result relating solutions at different spatial resolutions. Existence of global minimizers has been estabof lished in [3]. simulated annealing implemented with the Metropolis algorithm [10] and a local updating scheme has been popular in solving many optimization problems. These deterministic methods work well when the energy functional is convex. its convergence to a global solution is logarithmically slow. In these methods. the energy functional in (1) becomes Section II-C for details. see Fig. is far from trivial. D. In [22]. To further reduce the computational cost. We show in this paper that basin hopping with global updates provides a very effective means to solving the problem. and a more general version in [6]. However. [21]. 12. It is tailor-made for the Mumford–Shah model and can overcome the slowness of many standard stochastic methods by allowing a large change in the partition of the image. including the Mumford–Shah model. Related Work Our method combines the best of deterministic and stochastic methods for the Mumford–Shah energy. it is often difficult to specify a good annealing schedule to make the method work. Optimization Methods Solving the optimization problem associated with most sophisticated models. After discretization. membership in pixel or region level is updated in a greedy way in order to optimize the energy locally.2290 IEEE TRANSACTIONS ON IMAGE PROCESSING. A smooth function is used to approximate the characteristic function of the curves so that the optimization with respect to the curves becomes an elliptic problem which can be solved easily. the optimal constant is given by the average over . In greedy based deterministic methods. to combine the strengths of advanced deterministic and stochastic optimization approaches. which we called Stochastic Level Set Method. they make no attempt to compute a global solution. Moreover. Our Contribution In this paper. the nonconvex Mumford–Shah energy can be reformulated as a convex . Global optimization techniques such as interval arithmetics and subdivision schemes [9] are available. we assume that the user has specified a choice of the parameter such that any of the global minimizers is an acceptable solution. Updating of the curves in (1) is complicated in situations in which the curves need to be split or merged. This increases the robustness to noise and avoids spurious segments. it is a change of variable from to . A greedy region merging method is proposed in [19] which starts with a very fine segmentation and progressively merges two regions at a time according to a predefined schedule. B. In particular. They can overcome the problem of being trapped in a local minimum. In this paper. Some of these methods adopt global updating strategies. NO. Our work has alleviated a practical problem existing in current Mumford–Shah algorithms: The need to specify a “good” initial guess. A very interesting result is obtained in [24]. Methods for solving such a variational problem can be largely divided into two categories: deterministic and stochastic. In gradient descent based deterministic methods.

the probability of escaping lower than the energy barrier . The method essentially flattens the energy landscape to reduce energy barriers. A gradient descent is performed again from point C to obtain a lower energy minimum at point D. In general. The success of the basin hopping method highly depends on the design of the hopping strategy. each point is attracted to a local minimum through a gradient descent process. the Mumford–Shah model can be cast in a probabilistic framework in which the pointwise maximum a posteriori estimate is equivalent to the solution of the original model [25]. a soft Mumford–Shah segmentation model in the probabilistic framework is also proposed. our framework may be applied to improve their solution quality. also our main contribution. Basin Hopping Method The basin hopping method [17] combines deterministic and stochastic methods to remove energy barriers and escape from local minima. the aim in these work is fundamentally different from us—we apply stochastic optimization methods to optimize a deterministic model whereas the above methods apply deterministic optimization methods to optimize stochastic models. A. However. their method relies on simulated annealing that is constrained by the annealing theorem. A problem is that if the hopping leads to a bad solution. say point C.LAW et al. On the other hand. this is verified in Fig. we present the outline of the proposed algorithm. Our method does not require any annealing schedule but can still overcome local traps and stabilize in a finite number of steps. 7). Fig. In contrast. is that our updates are global whereas theirs are local. Several hopping steps can be performed to obtain a better iterate. We alleviate this problem using a multiresolution approach. However. II. In contrast. once the temperature is . being based on local updates. basin hopping explores many possibilities. Thus. 10. the result does not hold for . The energy landscape is also not simplified in simulated annealing. . a coin may be flipped to determine if the new iterate is accepted. then the new solution will be taken without reservation. it is necessary to allow temporary increases in energy. Greedy algorithms of [19]–[21] can be viewed as a series of deterministic hops.: MULTIRESOLUTION STOCHASTIC LEVEL SET METHOD FOR MUMFORD–SHAH IMAGE SEGMENTATION 2291 energy. With basin hopping. a gradient descent is performed to find the local minimum (point B). Starting from an initial guess (point A). which is from this local minimum goes as exponentially small. in Section II-B. for example. The major difference. A detailed description of each major component of our method is presented in Sections II-C–II-F. However. Therefore. In [26]. We may view the method as a series of hopping between local minima (basins). then there is no way out. our hopping step is region based and is. each point on the active contour (or level set function) moves in its normal direction at a speed which is a random perturbation of the one suggested by the gradient descent. In Section II-A. it is clear that stochastic methods have an edge over deterministic ones for nonconvex problems. We illustrate the main ideas of the method through a schematic depicted in Fig. which is a global update move selected randomly from a small set of candidate moves. a local minimum is quickly found using the gradient descent while in simulated annealing. repeated hopping and local optimization increase the computational cost. they can still escape from local minima even if we accept only a new solution leading to a decreased energy. ALGORITHM In this section. In effect. in [27]. 1. Indeed. we present the basic ideas behind basin hopping and explain why it is superior to sole deterministic or sole stochastic methods. 1. the algorithms used to compute the optimal distributions are deterministic. Thus. Finally. 1. Thus. thus. an additional hop brings the iterate from point D to E and then to F. This effective energy is depicted as the staircases in Fig. a global solution (for a fixed ) can be easily com. puted. Notice that the simplified energy has much less energy barriers. which can escape from local minima effectively (see Fig. basin hopping simplifies the energy landscape. There the authors present a general framework for solving stochastic differential equations arising from active contour models. we present the proposed algorithm. These algorithms make no significant attempt to compute a global solution and may get trapped in a local minimum. Next. Thus. However. Basin hopping is clearly much better than simulated annealing with local updates. for methods using global updates. The exploration of the search space using local updates is much slower than that using the global updates we proposed. global. a minimum is not found until the final temperature. In their method. Our hopping strategy is a stochastic region splitting and merging method which is specially designed for the Mumford–Shah energy and is effective for the diverse images we tested. For methods using local updates. the speed depends only on local quantities. As with many variational models. Otherwise. provided that a good starting point is available. Another important difference is that. If the new local minimum gives a lower energy. the gradient descent method of [5] computes a local minimum only. especially in high-throughput imaging. Good results can be obtained efficiently. A stochastic hopping step is invoked to jump from point B to a new iterate. we are dealing with a nonconvex energy which when makes existing algorithms impractical due to the need of a good initial solution—a luxury which is seldom available. Schematic plot of the basin hopping method with x-axis representing the multidimensional solution space C . The use of stochastic methods to advance active contours is also considered by Juan et al.

Stochastic Level Set Method The proposed algorithm is outlined as follows. To simplify the discussion. original image by a factor of 2) Local optimization: Apply the level set based gradient deto obtain a new iterate scent method with initial curves and energy . we will use the terms “Chan-Vese model” and “Mumford–Shah model” interchangeably since the former stems from the latter. We also remark that the Chan-Vese version (3) is slightly different from the original piecewise constant Mumford–Shah model in (1). 3) Repeat the followings up to a predetermined number of steps or until all possible hops have been rejected. we use the alternating approach in [5]. First. the Mumford–Shah energy (1) can be expressed as (3) C. Each phase may consist of more than one connected component. a) Hopping: Use the stochastic split-and-merge to hop to a configuration . However. b) Local optimization: Apply the level set based gradient to obtain a descent method with initial curves new iterate .2292 IEEE TRANSACTIONS ON IMAGE PROCESSING. This method can be formulated with simulated annealing [11] by setting the acceptance probability to and decreasing the value of gradually. NO. compute by applying one step of . reduce the spatial resolution of the in each dimension. compute by solving the problem 1) For a given fixed . We remark that it is possible to obtain many segments using only a small number of phases. c) Accept: Set the current iterate. Here. the total length of the curves is computed by the total length of the zero level set of the level set functions. 2) For a given fixed . (2) . D. see [6] for details. Each connected component is a segment. Then. 17. In the sequel. Starting from a given initial guess . 4) Repeat the followings until the original spatial resolution is reached. In fact. . Second. . 1) For a given integer . VOL. To optimize with respect to as well. Genset functions eralization to an arbitrary number of phases is straightforward. Vese and Chan [6] propose to use a set of functions defined on the image domain to encode the segments into the intersections of the positive and negative regions of and to encode the dividing curves into the locations at which one of component functions is zero. Level Set Methods for the Mumford–Shah Model Splitting and merging of evolving curves are nontrivial. DECEMBER 2008 B. The crucial advantage of such a change of variable from to is that the minimization with respective to is much easier to carry out. and 0 othHeaviside function which is 1 if erwise. gradient descent starting at For the ease of presentation. This method is used to optimize with respect to . posed new iterate c) Accept/reject: If . a) Increase the image resolution by a factor of 2 in each dimension. as follows. 1/2 if . For each location . 12. This is valid as long as the intersection of the zero level sets has a zero length. the phase that it belongs to is determined by the following encoding: This version of the Mumford–Shah model is commonly known is the as the Chan-Vese model in the literature. However. Moreover. splitting and merging of curves and subregions are done by simply “moving the level set functions up and down”. In practice. . we update and alternatively. such an approximation makes no significant difference in the final results. we present the case where two level are used to represent up to 4 phases. We remark that the dependence of the Mumford–Shah energy on the constant is ignored since it can be determined from the curves [see (4)]. b) Local optimization: Apply the level set based gradient to obtain a prodescent method with initial curves and energy . A consequence is that the optimal constants of the over the corresponding former model are the average of phase whereas that of the latter model are the average of over the individual connected components. Level Set Based Gradient Descent Method The gradient descent method brings an initial solution to its respective local minimum. then set and . phases can be represented using level set functions. our hopping scheme is global so that simulated annealing is not needed. the segments within each phase are fitted with the same constant by the model. the former model puts an a priori upper bound of the to be used whereas the latter model number of constants allows an arbitrary number of constants (in the continuum setting). we shall call this alternating method as level set based gradient descent. in the former model.

Using variance. using the sum-of-differences-squared can balance these two factors.g. We found that Thus.. We use the Matlab function bwdist for reinitialization. Thus. we use the deterministic scheme for simplicity. In contrast. which is to allow a temporary increase in energy so that large global changes can be achieved. This approach is quite natural on the energy change since it leads to a greedy way to minimize the energy. we apply the ISODATA thresholding method [28] which computes an optimal threshold so as to optimize the 2-means objective (7) where (6) Here. (7). Thus. e. Since a sum-of-differencessquared of a connected component can be written as the product of region size and intensity variance we tried two different weighting schemes. is different from the in (4) which Note that the constant for a constant is computed over phase . we randomly (a connected component in a phase) choose a segment according to the probability distribution region size. The phase into two sets of pixels or to the one of the merging step merges either remaining phases and keeps the other set in the th phase. The merger which Thus. of E. and . we first compute the connected components in each phase using the Matlab function bwlabel with 8-neighbor connectivity. and re-normalize the probability of choosing each segment. very small regions (e. background) are often picked. there is results in the smallest fitting term is used. the move due to splitting these small regions is too small to escape from local minima. we denote by with respect to . this is done by changing the sign of according to (2) followed by the level set functions at reinitializing the level set functions to signed distance functions.. (respectively ) is the set of pixels in whose intensity levels are less (respectively greater) than and (respectively ) is the average intensity over (respectively ). remove the last chosen segment from the list of candidates for splitting. In our examples. Then.: MULTIRESOLUTION STOCHASTIC LEVEL SET METHOD FOR MUMFORD–SHAH IMAGE SEGMENTATION 2293 It can be easily shown that the problem in the first step has the following explicit solution: (4) Thus. Otherwise. 2-pixel regions) are often picked. often depends heavily on the length term. cf. However. 8 (right). potential mergers. However.LAW et al. However. which is the local minimum we obtain a configuration starting from . it is sufficient to get a much improved segmentation within a few split-and-merge steps whereas the results of standard simulated annealing with local updates are incomparable even after thousands of steps. the total number of phases is conserved. the gradient descent method of computes the next iterate by the following update: (5) where is the step length parameter. We have also considered other possible weighting schemes. When the alternating minimization converges at the th step. This weighting scheme is designed for the Mumford–Shah model since it prefers to split a segment having a high sum-of-differences-squared. The detailed numerical implementation of (5) can be found in [6]. Using Here. We found experimentally that stochastic merging based on the reduction in the fitting term does not give a better performance than the deterministic scheme. This scheme is inconsistent with the essence of basin hopping. This scheme often picks small local updates to avoid a large increase of energy. we apply gradient descent to carry out local optimization. 2) Merging: For the -phase Mumford–Shah model.g. This is the reason why we exclude the length term in our splitting scheme. which resembles the fitting term of the Mumford–Shah energy. the gradient For the second step. This can reduce the fitting term of the Mumford–Shah energy optimally. If the resulting energy is decreased (compared to the energy before the splitting step).g. To split the selected segment into two subsegments. is the average of over the segment which can be easily computed once the connected components are identified. we will revert back to the configuration before the splitting. we found that these two schemes are unable to escape from local minima (results not shown in the paper). We design a very effective stochastic split-and-merge in which a large area of the image can be splitted and merged in a single step. intensity segment. After merging. Another possible weighting scheme we considered is based . the constants can easily be determined from . supbelonging to the th pose the splitting step splits a segment and . and portional to a decreasing function of . 1) Splitting: In the splitting step. Our merging step is deterministic. the largest but homogeneous regions (e. Numerically.. the inclusion of the length term may hinder the splitting of large regions. Stochastic Region Split-and-Merge The success of the proposed method depends largely on the type of hop performed. they are often not good candidates to split. then the new configuration is accepted and another round of hopping is performed. In particular. In this scheme. Faster . Then. This greatly reduces the number of hops to search for a global minimum. A split together with a merge has a net effect of changing of a connected component the membership of a subset from a phase to another. see Fig. we randomly choose a segment with probability pro.

This result is stated precisely in the following theorem. and more accurate alternatives such as fast sweeping [29] and fast marching [30] methods can also be used. 2(c). NO. (g) The two splitted subsegments (depicted in light and dark gray). Multiresolution Hybrid Approach Spatial down-sampling of the image reduces the search space and provides a less noisy image. (d) The probability (6) of choosing each segment to split. 3. if the segment having the probability 0. In (f). 4) for a similar example of local minimum. the results of two potential split-and-merge operations are shown. 4 and 8 (left). The new black phase contains mostly pixels of the bridge whereas the pixels from the background have been removed. In (c). Our results show that this approach allows us to obtain a solution of better quality in much less time compared to the pure deterministic method in [6] and its multiresolution version. (e) The two splitted subsegments (depicted in light and dark gray). 3(d). For these reasons. it is necessary to refine the solution at the original resolution. the optimization problems at different resolutions must have similar solutions. F. Computational cost for low resolution images is also relatively cheap.422. This restricts the amount of resolution that can be reduced.339 is chosen to split. The result is depicted in Fig. they are reluctant to move. The light gray subsegment in (e) is merged with the black phase in (c). Heterogeneous segments (e. the probability of selecting the black phase is increased from 0. Indeed.. A problem here is that the boundary of the black phase is the zero and .573 and 0. The light gray subsegment in (g) is merged with the white phase in (c). 17. there are two main issues. Each time the resolution is increased. VOL. (h) The new four phases after merging.2294 IEEE TRANSACTIONS ON IMAGE PROCESSING. This allows us to find a close-to-optimal solution quickly. Split-and-merge step. In Fig. We refer the readers to [6] (Fig. The result of splitting and merging the black phase is shown in Fig. the missing legs of the bridge are recovered.e.g. but we iterate more to ensure convergence. 2(h) and apply 1000 gradient descent. DECEMBER 2008 Fig. then the ideal segmentation is obtained without needing any additional hopping. see Figs. It is because moving one of them while keeping the other fixed (which is what the gradient descent does) will create a new segment in-between. 2(f). The contours are trapped in a local minimum. (c) Result of gradient descent after 3000 iterations (actually the segmentation stabilizes after 200 iterations). we use a synthetic image to illustrate how the splitting and merging steps work. It is a typical local minimum. the legs of the bridge are recovered. without any split-and-merge. if the segment having the probability 0. Actually. (b) Stabilized result of gradient descent after 1000 iterations. 2(f) and apply 1000 gradient descent iterations. If we start with the split-and-merge result in Fig. 2. In (e)–(f) and (g)–(h). containing both background and object) have a higher probability of being chosen (0. The proof of the theorem can be found in the Appendix. We remark that the splitted subsegments in (f) and (h) may be very noisy because the splitting is done solely based on the intensity level. We found that if the image at one lower resolution level is constructed by a simple averaging of four adjacent pixels into 1.573 is chosen to split. (d) The ideal segmentation is obtained after 1000 gradient descent iterations. When the zero levels of two funclevel set of both tions coincide. The result of another 1000 gradient descent is shown in Fig. 3(b). This may lead to an increase in the fitting term with respect to the light or dark gray phase. In order for the multiresolution method to work. The resolution of the images used in the experiments is reduced by a factor of or without any problem in preserving the object outlines. then the segmentation obtained from the lower resolution image is also optimal for the original image among all segmentations at the lower resolution. 12. We found that only a small number of iterations are needed because a good solution is already obtained after applying the stochastic level set method at the lowest resolution. 2. We start with the split-and-merge result in Fig. (a) Original image with noise. bridge and part of the background. in the second round of hopping. 3 shows the effectiveness of the split-and-merge step. Fig. the result of gradient descent after 3000 iterations is shown.339 to 0. (b) Initial condition. (f) The new four phases after merging. the mixture of pixels from the background and the bridge in the black phase in (c) has been corrected. (a) The merging result in Fig. Escaping from local minimum. A subsequent local gradient descent will smooth out the segmentation quickly. The missing legs of the bridge are recovered. Here the ideal segmentation is obtained. an outline of the target objects should be preserved at the lowest spatial resolution. Note that these illustrations do not follow exactly the algorithm presented in Section II-B since the multiresolution scheme is not used. we apply the stochastic level set method at a low resolution image to search for a global minimum at the reduced resolution. First. i. This can further reduce the number of local minima. Second.339).. we apply the (deterministic) level set based gradient descent method to refine the solution. Comparing this result with that in Fig. the segmentation stabilizes after 200 iterations. Fig. We do this by progressively increasing the spatial resolution by a factor of 2 in each dimension one step at a time until the original resolution is reached. Since detailed features are lost in the low resolution image. (c) Result after another round of split-and-merge. In (h). This will lower the probability of selecting the background and raise the probability of selecting the black phase which contains the . 3(c).

. the image resolution is increased by a factor of 2 three times. To validate our method. . respectively. .e. [6]. Third row: Gradient and 300 iterations. The CPU time is 135 s. and 500. In our results. To test the robustness of the methods to the initial guess. Similar initial guesses have been used in [5]. and perform 100. the solutions obtained are qualitative similar. we compare it with the pure gradient descent method proposed in [5] and [6]. our method takes 64 equivalent iterations at the original level. . First row: Original image with size 480 640. 9 2 10 Fig. Except for the test in Fig. The Rand Index between each pair of phase assignments is greater than 0. we choose the initial condition so that the pure gradient descent method gives the most visually appealing results. Thus. As a result. The brain MRI and zebrafish intestine images are obtained in [31] and [32]. . (Bottom left) Average energy for  . The parameter is chosen manually. Segmentation of the breast cancer cell image. For this reason. Rand Index and energy of various . 500 gradient descent steps are taken both initially and after each hop. Then we increase the resolution by a factor of 2 three times until the original image resolution is attained. The stochastic level set method gives higher average Rand Index and lower average energy than the gradient descent method consistently across a wide range of . While other existing methods such as the multiscale scheme in [21] and the thresholding method in [23] can be computational more efficient. To demonstrate the efficiency of the proposed hopping scheme. 4. . . . (Bottom right) Average energy for  . the image spatial resolution is reduced by for the synthetic. we also compare our method with the multiscale version of the gradient descent method and the simulated annealing method. To standardize the number of iterations at different levels for comparison. Then the solution of the piecewise constant Mumford–Shah problem with data is identical to that with data if the segmentation of is restricted to one resolution level must belong lower. . = = 3 2 10 = 3210 Theorem 1: Let be a given image. For the other images. 2 Fig. we use various initial guesses when testing on the breast cancer cell image. 5. that is for . Thus. 9 2 10 = 10 10 10 10 10 1 2 10 2 2 10 .LAW et al. it is sufficient to compare our method with the standard gradient descent method. 100 gradient descent steps are taken both initially and after each hop. . . a factor of 10 hops are applied. and 20 gradient descent iterations respectively each time the resolution is increased. Let be the lower resolution image constructed by a simple averaging. For the test in Fig. more local gradient descent steps are carried out. i. 5. For the zebrafish and natural scene images. As before. 80. . Second row: Three initial conditions. When applying the hopping steps. (Top right) Average pairwise Rand Index for  Index for  . The CPU time is 690 s. . . a total of 1100 gradient descent steps are taken at the lowest resolution. to the same segment for each III.: MULTIRESOLUTION STOCHASTIC LEVEL SET METHOD FOR MUMFORD–SHAH IMAGE SEGMENTATION 2295 = 10 10 10 10 10 = 1 2 10 2 2 10 . . the image resolution . . (Top left) Average pairwise Rand . the breast cancer cell image is obtained within the first and second authors’ institute. This is because resolution as and the cost the number of pixels is reduced by a factor of per iteration is proportional to the number of pixels.99. descent method with  . . and 50 gradient descent iterations are taken respectively. 200. the cell and the MRI images. we use four phases which are sufficient to obtain a good segmentation for each of the test images. The natural scene is downloaded from the World Wide Web through the Google Image search engine. they are deterministic and not designed for global optimization. METHODS For the images used in our experiments. . to ensure convergence to a higher accuracy. Number of equivalent Fourth row: Stochastic level set method with  iterations at the original level is 64. The computational cost spent on the stochastic split-and-merge steps is negligible. we treat one iteration at resolution levels lower than the original iterations at the original level. Here. We count the number of gradient descent steps and use it as the number of iterations. . 5..

See [6] for another example of trapping in a local solution experienced by the gradient descent method. is the number of pairs of pixels which are assigned to the same phase in both segmentations. third column. We also show the corresponding average energy in Fig. the refinements are very little. B. Intermediate results of the stochastic level set method. and is the number of pairs of pixels which are assigned to different phases in both segmentations. our method leads to essentially the same solution (average pairwise Rand Index greater than 0. 12. However. the quality of the segmentation is much better and the energy is lower. the iterate is trapped in a local minimum. To assess the robustness to the initial condition. Hence. It is observed that our method always gives a higher Rand Index and its value is close to 1. To show that the results are robust to . 5. a much improved result is obtained. RESULTS A. we repeat the above experiment using a wide range of and a focused range around the manually chosen used in Fig. Here. we use the Rand Index [33] (8) where is the total number of pixels. Each update is done by randomly choosing a pixel. (IC STANDS FOR INITIAL CONDITION) is reduced by two levels and the remaining settings are the same. In our comparison to the simulated annealing. The computation cost of one iteration is comparable to that of one iteration of our stochastic level set method. For all the three distinct initial conditions. NO. The is when gradient descent method also exhibits a larger fluctuation in the Rand Index.2296 IEEE TRANSACTIONS ON IMAGE PROCESSING. All algorithms are implemented in MATLAB 7 with a Pentium D 3-GHz machine. 4. The result there is qualitatively similar to the one by the pure gradient descent at the original level. The gradient descent method is very sensitive to the initial condition. third column). This annealing scheme is optimized to obtain the largest decreased of energy with the least number of iterations. 4. Thus. is the change of energy due to the proposed change and is the annealing temperature which is initially set to and reduced by a factor of 0. the refinements are very little. The total CPU times are recorded. This is because of its instability to the initial condition. cf. The energy of the resulting segmentations are reported in Table I as a measure of the goodness of the results. showing that the . As the resolution increases (after ten hops). third row. proposing a new randomly chosen phase membership. In contrast. IV. followed by taking the average. VOL. Fig. DECEMBER 2008 TABLE I ENERGY OF THE SEGMENTATION RESULTS OBTAINED BY THE TWO METHODS. The results are shown in Fig. For the pure gradient descent method. It can be seen that our method always leads to a lower energy on average. We tried to use the smallest possible number of iterations while maintaining the quality of the resulting segmentations. These results use the Initial Condition 3 in Fig. As the resolution increases (after 10 hops). Further increasing the number of iterations does not improve the solution (results not shown here) since the iterations are trapped in a local minimum. we start the simulated annealing scheme with assigning each pixel to a random phase membership. we use an image consisting of breast cancer cells. Fig. 4. A value of 1 indicates a perfect match whereas a value of 0 indicates a total disagreement between the two segmentations. Hopping Process In Fig. showing that the results in the lowest level are indeed very good. 6. After five rounds of hopping. A reasonable result is obtained using the Initial Condition 1. The only exception where the Rand Index is about 0. 17. the stochastic level set method is very robust to the initial condition. To assess the agreement between two segmentations.98 in every (number of pixels) updates. 4. our method takes 109 equivalent iterations at the original level. cf. Fig. After the first 100 iterations at the lowest level. our method is more effective than the gradient descent method in computing global minimum. the number of iterations varies and is reported in each figure individually. The results of the pure gradient descent and our hybrid methods utilizing different initial conditions are shown in Fig. 6. the results using the other initial conditions are poor. Five rounds of hopping give a much improved result (cf. we compute the average pairwise Rand Index (cf. we compute all pairwise Rand Index. We call times of such local updates as one iteration. (8)) between the three segmentations obtained from each method. 4 . cf. Table I. The solutions computed by the two methods using the three initial conditions are compared.99. 5. we show some intermediate results during the hopping process. and then accepting the proposed change with the probability .8. To compare several segmentations. (8). The Rand Index is between 0 and 1. third row. This shows that the segmentations using the three initial conditions are quite close to each other. Although we are not able to verify that the solution is globally optimal. Robustness to the Initial Condition In this test.

a large area in the image is splitted and merged. Thus. No significant improvement over the deterministic scheme is observed. and 8000 iterations are shown in Fig. the result after 3000 iterations is similar to ours.LAW et al. After the gradient descent is applied. It takes about 20 h for every 1000 iterations. The convergence profile is shown in Fig. 8. Compared to our result. see the Method section. The results of simulated annealing after 1000. The result is very similar to the result obtained by deterministic merging scheme. the energy of simulated annealing after 8000 iterations is 2% lower than ours and the phase assignments differ only by 3%. Therefore. results in the lowest level are indeed very good. The Initial Condition 3 is used. cf. even after 8000 iterations. cf. The result is only slightly (2%) different from the result after 3000 iterations. The disagreement between the two phase assignments is about 2%. we compare the convergence profile of four different methods. the energy is the dark gray phase is chosen. Fig. D. instead of choosing the merger which causes the greatest reduction in the fitting term. and 8000 iterations. the gradient descent method (dashed line). we choose the simpler deterministic one. third row. the result is still incomparable to our result although it has already taken 20 h. Our splitting step is able to pick up this segment (with a high probability of 0. simulated annealing does not perform significantly better than our method. The plots of the energy versus number of iterations (up to 100 iterations) for the stochastic level set method (solid line). This scenario is in analogy with the illustration in Fig. 10 (left). 8 (right). They roughly correspond to the background and cytoplasm respectively. we start with the Initial Condition 3. The solution is trapped by a local minimum. Comparison with competitive approaches. Beand a large segment in fore the hop. We can see from the figure that the result is trapped in a local minimum and looks similar to the result of the pure gradient descent method. After 1000 iterations. the energy drops significantly to which is much lower than the energy before the hop. the hop is accepted. third column. 7. the original image in Fig. we further run the simulated annealing until 8000 iterations. 7. Energy increases after the split-and-merge but decreases significantly after the gradient descent iterations.: MULTIRESOLUTION STOCHASTIC LEVEL SET METHOD FOR MUMFORD–SHAH IMAGE SEGMENTATION 2297 Fig. fourth row. 9. we randomly choose a combination (a subsegment and a phase) with probability proportional to the reciprocal of the fitting term to merge. Next. Then it is splitted into two subsegments depicted in vertical and horizontal strips in the second picture. 3000.31) and try to correct it. 8 (left). 1. In terms of quality. This indicates that a multiresolution approach alone without hopping is insufficient to escape from a local minimum. To gain our confidence that our method can obtain solutions of good quality. Our energy is about 0. where a hop brings the iterate from D to E and then F. The CPU time for every 1000 iterations is about 20 h. The subsegment in vertical strips is then merged with the black background where the energy increases to . Multiresolution With Hopping Versus Multiresolution Without Hopping The result of multiresolution gradient descent method without hopping is shown in Fig. third column. we show the result before and after the fifth hop. Result before and after the fifth hop. 4. The result is shown in Fig. Fig. 4. Stochastic Level Set Method With Global Updates Versus Simulated Annealing With Local Updates We performed a systematic investigation of different annealing schemes on the breast cancer cell image. 9. Fig. cf. C. we demonstrate the effect of our global hopping. This indicates the potential to further lower the resolution when applying the hopping steps. Fig. Forty-one different annealing schemes are performed and we pick the best scheme for comparison with the stochastic level-set method. (Left) Multiresolution gradient descent method without hopping. Note that one iteration consists of 480 640 local updates. (Right) Stochastic level set method but with a stochastic merging scheme. Simulated annealing after 1000. 3000. For the simulated annealing method. 4. Deterministic Merging Versus Stochastic Merging In the merging step. The Initial Condition 3 is used and the number of iterations for each level is the same as those for stochastic level set method. Next. This scenario is in analogy with the illustration in Fig. For the stochastic level set and gradient descent methods. the multiresolution gradient descent method (dashed-dot line) and the simulated annealing method (dotted line) are shown in Fig. we start with a random configuration. We observe that the stochastic . E. In a single hop.2% lower. 1 where a hop brings the iterate from point D to E and then F. 10 (left). In Fig. Since such a stochastic merging scheme does not significantly improve the result.

the segments have a complex topology and shape. VOL. The Mumford–Shah model can exhibit these complex segments as a solution. The CPU time is 21 s. 12. We also show in Fig. 2 = 1000 = 1000 2) Proliferating-Cell Nuclear Antigen: The results are shown in Fig. 2 = 1000 = 1000 Fig. 64 iterations for the stochastic level set and multiresolution gradient descent (GD) methods. The image is also noisy. Stochastic level set method obtains a much lower energy in smaller number of iterations. 10. DECEMBER 2008 Fig. (Bottom left) Gradient descent and 1000 iterations. our method can obtain the desirable segmentation with all the different initial conditions that we tried (only one is shown in the figure). The objects to segment consist of clusters of small molecules—proliferating-cell nuclear antigen (PCNA) in this case. Fig. 10 (right) the result of simulated annealing up to 8000 iterations. 11. The stochastic level set method has no difficulty in segmenting them out. The CPU time is 17 s. Segmentation of the brain MRI image. (Bottom) 8000 iterations for the simulated annealing. The spikes in the solid curve are due to a temporary increase in energy right after the merging step (but before the gradient descent iterations followed). The CPU time is 217 s. 11. However. Both hops that are accepted and rejected are shown. (Top right) Initial condition. (Top) 100 iterations for the gradient descent and simulated annealing methods. (Bottom left) Gradient descent method with  and 1000 iterations. Segmentation of the zebrafish image. (Top left) Original image with size 190 200. (Bottom . 17. F. The CPU time is 170 s.2298 IEEE TRANSACTIONS ON IMAGE PROCESSING. Convergence profile of energy for various methods. it is nontrivial to realize. 12. (Bottom method with  right) Stochastic level set method with  . The number of equivalent iterations at the original level is 64. More Examples 1) Brain MRI Image: The results on a brain MRI image are shown in Fig. The black phase obtained by the gradient descent method contains many features which are not PCNA and are actually of lighter colors in the original image. Our energy level is reached by simulated annealing only after about 3000 iterations after which the energy stabilizes. (Top left) Original image with size 156 188. The objective is to segment out the tumor near the middle and to separate the grey matters from the white matters. They are depicted in black color in Fig. 12. The pure gradient descent method groups the tumor and the white matters together for many different initial conditions. NO. However. In this image. 12. (Top right) Initial condition. The number of equivalent right) Stochastic level set method with  iterations at the original level is 109. . level set method achieves a much lower energy level in less iterations than the other two methods. They are depicted in the original image as black dots around the intervillus pockets.

this situation is like partitioning a rectangle with uniform intensity into two parts by a vertical cut. The segments correspond to the sky. DISCUSSION In this paper. splitting and merging based on the sum-of-differences-squared appear to be a good candidate.g. it can be replaced by any other fast local optimization procedures. Then we have 3) Natural Scenery: The results are shown in Fig. (Bottom left) Gradient descent method and 10000 iterations. However. For the stochastic level set method. removing the cut can greatly reduce the length term. three significant phases with a total of four large segments are obtained. removing the cut) in a single hop. APPENDIX PROOF OF THEOREM 1 be the image domain of the original image and Let be the image domain of the downsampled image obtained by averaging four adjacent pixels. This is because of the regularization terms. 13. defined on . In this case. However. e. hence. This can be achieved by the stochastic split-and-merge which allows reassigning the phase membership of a large region (and. 13. The number of equivalent iterations at the original level is 109.e. 2 = 3000 = 3000 be the objective function defined on . (Top right) Initial condition.LAW et al. Our intuition is that although the objective function is nonconvex. Thus. the curve between the black and dark gray phases is stuck in a local minimum. trees. For the gradient descent method. Our results suggest that coupling global and local updates is a promising approach to solving nonconvex variational problems in image segmentation. Segmentation of a natural scenery. The energy attained by the proposed method is also lower. Intuitively. Let for be the th phase in for defined by a segmentation . beach and sea which are consistent with the natural segmentation of the image. [21]. V.. However. a local movement of the cut in the normal direction does not induce any reduction in the fitting term and the length term. It also obtains solutions of lower energy in much less computational resources.. Here and are the area of a pixel in and respectively. Then. designing an effective global updating scheme for a given problem can be nontrivial since the global landscape of the objective function is difficult to characterize. Let a constant extension of from to . see Table I. The CPU time is 56 s. we propose a hybrid optimization method to solve the Mumford–Shah segmentation problem. we have for any . it is an image such that . we show that our method is very robust to the initial guess and outperforms standard deterministic and stochastic methods. (Bottom right) Stochastic level set method with  with  .: MULTIRESOLUTION STOCHASTIC LEVEL SET METHOD FOR MUMFORD–SHAH IMAGE SEGMENTATION 2299 we use gradient descent to carry out local optimization. Let in defined by . Let be an arbitrary segmentation of and let be the be the th phase optimal constants determined from . Let be the objective function defined on and let Fig. we remark that while Let be the optimal solution of for . Finally. for the Mumford–Shah model. (Top left) Original image with size 133 150. is also a minimizer of on . its landscape is usually “not that complicated” either (at least for our examples). i. Using numerous images with distinct features.

Andy M. Comput. National University of Singapore. no.” J. and M.” Int. degree in computer science from the University of California. 2006. vol. 2050–2053.. J. Lecture Notes Comput. no. Univ. N. Shah. no. 74. no. pp. Cambridge. [17] D. A. D. 66. and M. 846–850. 282–299.” Phys. van der Born. degree in mathematics from the Chinese University of Hong Kong in 1998. vol. 1. J. [31] [Online].” J. M. J. Nov.” Int. vol. Pittsburgh. Convex Optimization. X. Esedoglu and Y. [6] L. R. 2352. The Netherlands: Kluwer. Ambrosio and V. Wang and D. Biomedical Imaging. pp. Comput.” Comput. Univ. [19] G. Chan and L. 2001. K. Tsai. 13. Assoc. 630–632. “A multiphase level set framework for image segmentation using the Mumford and Shah model. [24] T. [25] D. no. 25. S. [10] N. “A fast algorithm for level set based optimization. Singapore. Comput. Y. Dordrecht. P. with an award from the Japan Society for Promotion of Science. . vol. In 2005. “Optimal approximation by piecewise smooth functions and associated variational problems. 12–49. Juan. Amer. Lecture Notes Comput. 5111–5116. Landau. F. Vis. Sci. vol. “A multiresolution approach for shape from shading coupling deterministic and stochastic optimization. Math. Available: http://www. B. pp. 1953. no. Digital Image Processing.. motion and shape. 2006. pp. [3] D. no. 11.-S. F. Appl. Raba.-P. Teller. 2002. [18] A. Phys.D. Phys. E. “Optimization by simulated annealing. 195–215. Los Angeles. Muñoz. Yan Nei Law received the Ph. Rand. [26] J. 577–685.K. 02-68. Cufí. “Reweighting for nonequilibrium Markov processes using sequential importance sampling methods. Phys. 79. Okabe.” IEEE Tran. U. vol. 2003.Sc. Swendsen. Vese. X. Monte Carlo Strategies in Scientific Computing. Kearfott. C. Statist. 1996.” CAM Rep. M. His research interests include variational and PDE methods in image processing and data mining algorithms. 1990. D. Gonzalez and R. Vandenberghe. 43. “Equations of state calculations by fast computing machines. 03-57.” Phys.. P. vol.cfm. A. “Approximation of functionals depending on jumps by elliptic functions via -convergence.. [15] H.. Numer. W. and H. New York: Springer. 42. A. where he worked on developing advanced Monte Carlo methods and nano-magnetism. Rev. Chan and S. 671–680. Mach. Comput. His current research work involves developing and deployment of computer vision aglorithms for high throughput bioimaging experiments. G.: Cambridge Univ. Anal. 81.” SIAM J. Teller. Tortorelli. Postelnicu. he worked on Monte Carlo simulations of liquid-liquid phase transitions and quasicrystals. Press. Nonconvex Optimization and Its Applications. 7. 17. in 2001. pp. 2006. Hurlstone.Phil. Pattern Anal. Koepfler. Hwee Kuan Lee received the Ph. Esedoglu. 1087–1092. E. pp. 2007.. pp. In 2006. [12] F. “Picture thresholding using an iterative selection method. 999–1036. Durou. 2. Pure Appl. in 2006. 1996. Osher and J. “A fast marching level set method for monotonically advancing fronts.. Syst. Okabe. Nat. She joined the Bioinformatics Institute. 2006. Sci. in 2006. texture. vol. no. Mumford and J. pp. S. [8] S. A.” EMBO Rep. Esedoglu. [23] S. Aug. Rigorous Global Search: Continuous Problems. Chan. Math.” Int. “Stochastic motion and the level set method in computer vision: Stochastic active contours. H. “Global optimization by basin-hopping and the lowest energy structure of Lennard-Jones clusters containing up to 110 atoms. M. “Efficient. 2nd ed. 015102(R). no. Keriven.” Math. Yip received the B. in July 2005 as an Assistant Professor. 1978. n.. REFERENCES [1] R. A. and X. Doye. Scale Space and Variational Methods in Computer Vision. vol.. 72..D. 220.” J. “Cluster Monte Carlo algorithms.umassmed. [4] L. Los Angeles. 1997. vol.. 21. Image Process. 444–449. Shen. A.” Proc. pp.” Science. Boyd and L. and J. vol. Phys. 408–422. [14] J. investigating novel recording methods such as hard disk recording via magnetic resonance. where he developed solutions to extremely long time scaled problems and a reweighting method for nonequilibrium systems. 4. 2001.. Rosenbluth. 175. vol. He and S. [30] J. “A multiscale algorithm for image segmentation by variational methods. vol. P. 2002. vol. 71. 3. 6. Lopez. [5] T. [32] A. vol. G. Feb. Deriche. pp. 2006.2300 IEEE TRANSACTIONS ON IMAGE PROCESSING. and the Ph. “A stochastic-variational model for soft Mumford–Shah segmentation. 8. and J. 2002. F. Previously. Lett. “Active contours without edges. 167. vol. Vis. 1994. “Adenomatous polyposis coli-deficient zebrafish are suspectible to digestive tract neoplasia .-D. Sethian. Offerhaus. 50. Lee and Y. Kirkpatrick. 1. degree in mathematics from the University of Hong Kong in 2000. ser. Lee.” CAM Rep. no. Comput. A. Phys. Los Angeles. [20] B. pp. “Objective criteria for the evaluation of clustering methods. Crouzil. J. [29] H. vol. Nikolova. F. Cremers. pp. California. and D. pp. Wang and R. van der Velden. Chem. Descombes. M. and E. Martí. Intell. “Threshold dynamics for the piecewise constant Mumford–Shah functional. 1983. [2] J. 4. 2. He is a Principal Investigator of the Imaging Informatics division in Bioinformatics Institute. Rev. 777–788. 2004. 101. [21] T. he joined the Tokyo Metropolitan University. and G. J. Acad. in 2005. Sethian. vol. 4598. the M. vol. 36–40. and R. vol. Haramis. vol. 266–277. “A multiscale algorithm for Mumford–Shah image segmentation. no. as a Postdoctoral Research Fellow. H. Chan. “A review of statistical approaches to level set segmentation: Integrating color. 10.” J. Article ID 92329. Calvard. A. pp. 2002. 2007. A. 2003. [27] O. 69. pp. pp. pp.D. Osher. 7–25. 1591–1595. K.. Vis.. A. he returned home to join the Data Storage Institute.. multiple-range random walk algorithm to calculate the density of states. H. 1416–1421. degree in theoretical physics from Carnegie Mellon University.” Int. Commun.” Commun. Woods. Chem. Song and T. [16] J. 271–293. C. PA. C. [9] R. 1971. Zhao. Freixenet. Man Cybern. DECEMBER 2008 ACKNOWLEDGMENT The authors would like to thank the reviewers for providing very helpful suggestions to improve the presentation of the manuscript. Vecchi. He joined the Department of Mathematics. “A fast sweeping method for Eikonal equation. A. ECCV. “Convergence and refinement of the Wang-Landau algorithm. 93. 1990. F. VOL.. M. vol. Offerhaus. 2006. pp. Comput. Englewood Cliffs. K.. vol. Los Angeles. 1988. Sci. 2005. Metropolis. In 2003.” IEEE Tran. 565–579.. J. S. Landau. [22] L. 1.” J. “Fronts propogating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulation. 603–627. “Yet another survey on image segmentation: Region and boundary informationintegration. NO. vol. Rousson. Ridler and E. He then held a joint postdoctoral position with Oak Ridge National Laboratory and the University of Georgia. Gelatt.” SIAM J. Math. M. Vese and T. pp. Morel. degree in mathematics from University of California. “Solving the Chan-Vese model by a multiphase level set algorithm based on the topological derivative. C. [13] H. 367–384. 2005. he joined the Bioinformatics Institute as a Principle Investigator in the Imaging Informatics Division. California. 66. “Algorithms for finding global minimizers of denoising and segmentation models. Wales and J. pp. Begthel. Her research interests include issues related to bioimaging with emphasis on developing image processing and data mining techniques for high throughput images. vol. 4485.” IEEE Tran.” Phys. 31. Liu..” in Proc. [7] S. 1989. P. W. pp.” in Proc. Rosenbluth. [11] S. pp. H. no.edu/bmp/faculty/ross. [33] W. Chan. 211. p. vol. Y. pp. Pure Appl. Clevers. 1632–1648. 2001. SMC-8. J. NJ: Prentice-Hall. pp. 12. [28] T.” Commun.