Spectral CT Material Decomposition in the Presence of Poisson Noise: A Kullback-Leibler Approach

While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decomposition in the projection domain yields a projected mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction then produces a 3D material density volume. The decomposition is obtained by minimizing a cost function; a variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role in the decomposition, which is why a new data fidelity term is introduced in this paper to take the photonic noise into account. Two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse: soft tissue and bone are decomposed in the projection domain, and a tomographic reconstruction then produces a 3D density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.


INTRODUCTION
Standard computed tomography (CT) is currently one of the most useful imaging modalities. However, X-ray detectors integrate over the whole energy spectrum, and since material attenuation depends on the energy of the incident X-ray photons, information is lost. By performing energy-resolved acquisitions, spectral CT promises to further extend the capabilities of CT; in particular, 3D mono-energetic images or 3D material density maps can be retrieved. SPCT has many potential clinical applications, in particular imaging high-Z contrast agents [1,2]. Dual-energy CT has been proposed to acquire energy-resolved data [3], but at the cost of a large irradiation dose. Photon-counting detectors based on a new technology can count the photons hitting the detector and retrieve their energy [4].
Three different approaches are available for decomposing SPCT data into material volumes. The first consists in tomographic reconstruction of the energy-resolved measurements followed by a material decomposition step in the object domain [5]. The second performs the tomographic reconstruction and the decomposition simultaneously, which is usually referred to as the one-step approach [6]. The third performs material decomposition in the projection domain followed by a tomographic reconstruction step [7]. The last approach is highly parallelizable (each projection angle can be processed individually) and it embeds all the non-linearities of the forward problem, i.e., the tomographic reconstructions can then be performed linearly without any simplifying assumptions. In this work, material decomposition in the projection domain is considered.
Material decomposition is a non-linear ill-posed inverse problem that can be solved by minimizing a cost function composed of a data fidelity term and a regularization term, balanced by a regularization parameter. SPCT data are corrupted by photonic noise, which is Poisson distributed. Roessl et al. and Schlomka et al. have proposed decomposition methods that take the noise statistics into account in the projection domain [8,7], e.g., using the negative log-likelihood. However, these methods rely on zeroth- or first-order minimization algorithms [9,10] that have slow convergence rates. Recently, a fast method based on a Gauss-Newton algorithm was proposed [11]. Unfortunately, it uses a weighted least squares fidelity term that is not perfectly adapted to Poisson noise. In this work, the Kullback-Leibler distance is proposed to better handle the noise statistics, and a Gauss-Newton algorithm is implemented to obtain fast decomposition.
This paper is organized as follows. In Section II, the forward model that maps the projected mass of the material onto the number of photons detected is presented. In Section III, we detail the regularization functional considered and the minimization algorithms used. Numerical experiments are presented in Section IV before the conclusion.

Object model
Assuming that the object of interest can be decomposed on a basis of M materials [12], the linear attenuation coefficient is written as

\mu(E, x) = \sum_{m=1}^{M} \rho_m(x) \, \tau_m(E)    (1)

where µ(E, x) is the linear attenuation coefficient (in cm⁻¹) at point x and energy E, ρ_m(x) is the mass density (in g·cm⁻³) of the m-th material at point x, and τ_m(E) is the mass attenuation (in cm²·g⁻¹) of the m-th material.
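As an illustration, a minimal NumPy sketch of the basis decomposition (1); the array shapes, names, and sampling scheme are our assumptions, not the paper's:

```python
import numpy as np

def linear_attenuation(rho, tau):
    """Eq. (1): mu(E, x) = sum_m rho_m(x) * tau_m(E).

    rho : (M, ...) mass density maps in g/cm^3, one per material
    tau : (M, K) mass attenuation in cm^2/g at K sampled energies
    Returns mu with shape (K, ...), in cm^-1.
    """
    # contract over the material index m
    return np.tensordot(tau.T, rho, axes=1)
```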

Forward model
The standard SPCT forward model [11,13] is written as

s_i(u) = \int n_0(E, u) \, d_i(E) \, \exp\left( -\sum_{m=1}^{M} a_m(u) \, \tau_m(E) \right) \mathrm{d}E    (2)

where

a_m(u) = \int_{L(u)} \rho_m(x) \, \mathrm{d}x    (3)

is the projection of all the mass of the m-th material along the X-ray path L(u), s_i(u) is the number of photons hitting the detector in the i-th energy bin at pixel u, n_0(E, u) is the emitted X-ray spectrum, and d_i(E) is the response function of the i-th energy bin of the detector [7].
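A discretized version of the forward model (2)-(3) can be sketched as follows, replacing the energy integral by a sum over K sampled energies; all shapes and variable names are illustrative assumptions:

```python
import numpy as np

def forward_counts(a, tau, n0, d):
    """Expected photon counts per Eq. (2), energy integral discretized.

    a   : (M, P) projected mass densities a_m(u_p) in g/cm^2
    tau : (M, K) mass attenuation in cm^2/g at K sampled energies
    n0  : (K, P) emitted spectrum (photons per energy sample and pixel)
    d   : (I, K) response of each of the I detector energy bins
    Returns s with shape (I, P).
    """
    transmission = np.exp(-tau.T @ a)   # Beer-Lambert factor, (K, P)
    return d @ (n0 * transmission)      # bin-wise energy integration, (I, P)
```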

Discretization and noise
The detector is assumed to have P pixels, with u_p the location of the p-th pixel, and I energy bins. We define the measurement vector s ∈ R^{IP} as

s = (s_{1,1}, \dots, s_{1,P}, \dots, s_{I,1}, \dots, s_{I,P})^\top    (4)

where s_{i,p} is the photon count measured in the i-th energy bin at pixel u_p, i.e., s_{i,p} = s_i(u_p). Similarly, the projected mass density (PMD) vector a ∈ R^{MP} is defined by

a = (a_{1,1}, \dots, a_{1,P}, \dots, a_{M,1}, \dots, a_{M,P})^\top    (5)

where a_{m,p} is the PMD of the m-th material at pixel u_p, i.e., a_{m,p} = a_m(u_p). The data are assumed to be corrupted by Poisson noise. Let s* be the noiseless data vector, s the noisy data, and P(γ) the Poisson distribution with mean γ. We have

s \sim \mathcal{P}(s^*).    (6)
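The noise model (6) is straightforward to simulate. A minimal sketch reusing the hypothetical forward_counts above, where a_true, tau, n0 and d are placeholders for the phantom PMDs, attenuation curves, spectrum and detector response:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
s_star = forward_counts(a_true, tau, n0, d)  # noiseless data s*, Eq. (2)
s = rng.poisson(s_star)                      # noisy data s ~ P(s*), Eq. (6)
```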

Inverse problem
Material decomposition is an ill-posed problem that consists in recovering the projected mass density a from the data s. Let s = F(a) be the forward model defined in (2). The PMD is recovered by minimizing the following cost function

C(a) = D(s, F(a)) + \alpha \, R(a)    (7)

where D(s, F(a)) is the fidelity term, α the regularization parameter, and R(a) the regularization term used to stabilize the solution. The Tikhonov regularization term used in [11] is chosen:

R(a) = \| \nabla a \|_2^2 + \| \Delta a \|_2^2    (8)

where ∇ and ∆ are first- and second-order differential operators.
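For concreteness, a toy version of the cost (7) with the Tikhonov term (8) evaluated on 2D material maps; discretizing ∇ and ∆ by finite differences is an assumption on our part:

```python
import numpy as np
from scipy.ndimage import laplace

def tikhonov(a_maps):
    """Eq. (8): squared 2-norms of first- and second-order differences."""
    r = 0.0
    for m in a_maps:                 # one (rows, cols) map per material
        gx, gy = np.gradient(m)      # first-order differences (gradient)
        lap = laplace(m)             # second-order differences (Laplacian)
        r += (gx**2 + gy**2).sum() + (lap**2).sum()
    return r

def cost(a_maps, s, forward, fidelity, alpha):
    """Eq. (7): C(a) = D(s, F(a)) + alpha * R(a)."""
    return fidelity(s, forward(a_maps)) + alpha * tikhonov(a_maps)
```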

Fidelity term
The usual fidelity term is the WLS, defined by

D_{\mathrm{WLS}}(s, F(a)) = \| W (F(a) - s) \|_2^2    (9)

where ||·||_2 is the ℓ2-norm and W is the weighting matrix defined as W = \mathrm{diag}(1 / \sqrt{s + 1}), where diag(x) denotes the diagonal matrix with the elements of x on its diagonal. However, the noise associated with photon detection is Poisson noise, and the Kullback-Leibler divergence is the distance associated with Poisson noise [14]. The KL distance between two Poisson-distributed data s and F(a) is written [15]

D_{\mathrm{KL}}(s, F(a)) = \sum_{j=1}^{IP} \left[ F_j(a) - s_j + (s_j + \zeta) \ln\!\left( \frac{s_j + \zeta}{F_j(a) + \zeta} \right) \right]    (10)

where ζ is a small integer that avoids dividing by zero and taking the logarithm of zero.
The corresponding data fidelity terms will be denoted D_KL and D_WLS.
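Both fidelity terms are simple to express in code. A sketch with ζ = 1 (the exact value of ζ is our assumption):

```python
import numpy as np

ZETA = 1.0  # small offset avoiding division by zero and log(0)

def d_wls(s, f):
    """Eq. (9): ||W (F(a) - s)||_2^2 with W = diag(1 / sqrt(s + 1))."""
    w = 1.0 / np.sqrt(s + 1.0)
    return np.sum((w * (f - s)) ** 2)

def d_kl(s, f):
    """Eq. (10): sum of F - s + (s + zeta) * log((s + zeta) / (F + zeta))."""
    sz, fz = s + ZETA, f + ZETA
    return np.sum(f - s + sz * np.log(sz / fz))
```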

Minimization of the cost function
The cost function (7) is minimized using a Gauss-Newton algorithm. This second-order algorithm provides faster convergence than first-order methods. Starting from an initial guess a⁰, the solution is computed iteratively with the update formula

a^{k+1} = a^k + \lambda^k \, \delta a^k    (11)

where a^k is the solution at iteration k, δa^k is the Newton step, and λ^k a scalar optimizing the minimization in the direction δa^k. The Newton step is computed by solving

\left( J^\top Z_h J + \alpha H_{\mathrm{reg}} \right) \delta a^k = -\left( J^\top Z_g \left( F(a^k) - s \right) + \alpha g_{\mathrm{reg}} \right)    (12)

where J is the Jacobian of the forward problem, and H_reg and g_reg are the Hessian and the gradient of the regularization term, respectively. Z_h and Z_g are diagonal matrices that depend on the fidelity term. They can be written as follows [16]:

\mathrm{WLS:}\; Z_h = Z_g = W^\top W, \qquad \mathrm{KL:}\; Z_h = S \, Y^{-2}, \; Z_g = Y^{-1}    (13)

where W is the weighting matrix of the WLS term, Y = diag(F(a) + ζ) and S = diag(s + ζ).
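A dense-algebra sketch of one Newton step (12) with the Z matrices of (13); in practice J is large and sparse and the system would be solved iteratively, so this is only illustrative, with J, H_reg and g_reg supplied by the caller:

```python
import numpy as np

def z_diagonals(s, f, fidelity="kl", zeta=1.0):
    """Diagonals of Z_h and Z_g from Eq. (13)."""
    if fidelity == "wls":
        w2 = 1.0 / (s + 1.0)       # W^T W with W = diag(1/sqrt(s+1))
        return w2, w2
    y, sz = f + zeta, s + zeta     # Y = diag(F(a)+zeta), S = diag(s+zeta)
    return sz / y**2, 1.0 / y      # Z_h = S Y^-2, Z_g = Y^-1

def newton_step(J, f, s, H_reg, g_reg, alpha, fidelity="kl"):
    """Solve Eq. (12) for delta_a (dense, for illustration only)."""
    zh, zg = z_diagonals(s, f, fidelity)
    A = J.T @ (zh[:, None] * J) + alpha * H_reg
    b = -(J.T @ (zg * (f - s)) + alpha * g_reg)
    return np.linalg.solve(A, b)
```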
The step length λ^k is computed as the minimizer of the cost function along the Newton direction:

\lambda^k = \operatorname{arg\,min}_{\lambda > 0} \; C(a^k + \lambda \, \delta a^k).
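The 1D line search can be delegated to a bounded scalar minimizer; bounding λ to (0, 1] is our choice, not the paper's:

```python
from scipy.optimize import minimize_scalar

def step_length(cost_fn, a_k, delta_a):
    """lambda_k = argmin over lambda of C(a_k + lambda * delta_a)."""
    res = minimize_scalar(lambda lam: cost_fn(a_k + lam * delta_a),
                          bounds=(0.0, 1.0), method="bounded")
    return res.x
```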

NUMERICAL EXPERIMENTS AND IMAGE ANALYSIS
Material decomposition is performed on the numerical DigiMouse phantom [17]. Projections are generated with the MATLAB radon function. The goal is to decompose M = 2 materials (soft tissue and bone) from projections of P = 437×992 pixels.
We define the number of photons N_0 sent to each detector pixel as

N_0 = \int n_0(E, u) \, \mathrm{d}E.    (14)

Since N_0 controls the signal-to-noise ratio, a wide range is tested: N_0 = [10², 10^{2.2}, ..., 10⁴]. The X-ray spectrum used is generated with a 90 kV source. A detector with I = 3 energy bins is considered (14–39 keV, 40–64 keV and 65–90 keV). The regularization parameter values tested are α = [10⁻², 10^{-1.8}, ..., 10⁴]. Our initial guess a⁰ was set to uniform material maps with a⁰_soft = 2 g·cm⁻² and a⁰_bone = 0 g·cm⁻². The algorithm is stopped when the relative decrease of the cost function falls below 0.1%, when the computed step λ^k is below 10⁻², or when the maximum number of 50 iterations is reached. The relative error is computed as

\xi = \frac{\sum_{m=1}^{M} \| a_m^{\mathrm{dec}} - a_m^{\mathrm{true}} \|_2}{\sum_{m=1}^{M} \| a_m^{\mathrm{true}} \|_2}    (15)

where a_m^dec is the decomposed map of the m-th material and a_m^true is the ground truth for material m. Moreover, α̂ denotes the regularization parameter that minimizes the relative error ξ.
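The relative error (15) and the selection of α̂ over the tested grid can be sketched as follows; decompose is a placeholder for the full regularized Gauss-Newton decomposition:

```python
import numpy as np

def relative_error(a_dec, a_true):
    """Eq. (15): relative error over the M decomposed material maps."""
    num = sum(np.linalg.norm(d - t) for d, t in zip(a_dec, a_true))
    den = sum(np.linalg.norm(t) for t in a_true)
    return num / den

# alpha-hat: the tested regularization parameter minimizing xi
alphas = 10.0 ** np.arange(-2.0, 4.2, 0.2)
alpha_hat = min(alphas,
                key=lambda al: relative_error(decompose(s, al), a_true))
```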

RESULTS AND DISCUSSION
In this section, we present results from experiments on DigiMouse.
In Figure 1 (left), the evolution of the fidelity terms D_KL and D_WLS is plotted as a function of the number of iterations. The regularized functionals decrease markedly, and local minima are reached after four iterations. The algorithm stops on a low decrease of the cost function or a small step length λ^k, which indicates a local minimum since δa^k is, by construction, a descent direction. Figure 1 (right) shows the evolution of the decomposition error for α̂ with increasing numbers of photons N_0 for the fidelity terms D_KL and D_WLS. For real projections the ground truth is not known; this simulation therefore highlights the improvement brought by the KL distance for low numbers of photons. For these low photon counts, the weighted least squares term is not a good approximation of the KL distance, and the statistical divergence is more effective at taking the noise statistics into account. Similar curves are obtained when the relative errors for a given material are considered.
Figure 2 shows the ground-truth PMD images and the decompositions obtained with WLS and KL for N_0 = 10^{2.2}. A low number of photons is chosen to highlight the performance of KL: the KL distance clearly retrieves soft tissue and bone more faithfully than WLS at a small number of photons.
In Table 1, the average time required for one iteration (3.5 GHz Xeon E5-1650 v3 and 64 GiB of RAM), as well as its standard deviation over α = [10⁻², 10^{-1.8}, ..., 10⁴], is shown for N_0 = 10^{2.2}. The minimum and maximum numbers of iterations required for convergence are also displayed. Table 2 gives the value of the optimal α̂, the computation time, and the number of iterations needed to converge for N_0 = 10^{2.2}. Both KL and WLS converge rapidly for a large range of α values and take almost the same CPU time. KL is found to outperform WLS for low numbers of photons.

CONCLUSION
In this work, material decomposition methods operating in the projection domain of SPCT were investigated. The non-linearity of the forward problem is taken into account. Two data fidelity terms, one based on weighted least squares and the other on the Kullback-Leibler divergence, were compared. The regularized cost functions are minimized with a Gauss-Newton scheme. The methods were compared on 2D projections of a realistic 3D phantom for different numbers of photons. The Kullback-Leibler distance gives the best results for low numbers of photons. In the future, we plan to assess the method with other phantoms and a larger number of materials, and to develop methods for automatic selection of the regularization parameter.