[Castor-users] Reconstructing with normalization

Philip Kalaitzidis kalaitzidis.philip at gmail.com
Fri Sep 25 19:07:58 CEST 2020


Dear CASToR developers and users,

I have been facing an issue when trying to reconstruct PET data (an 18F
measurement with the NEMA IQ Body Phantom) acquired on a G.E. Discovery MI
with CASToR, in particular when reconstructing with normalization
correction factors embedded in the file. The correction terms (attenuation,
scatter, random, normalization, dead-time, and pile-up) that I am using in
the CASToR reconstruction have been extracted with the G.E. PET Toolbox. I
am converting the G.E. PET list-file to a CASToR list-file, embedding all
of the aforementioned correction terms in that file. Additionally, I am
constructing the CASToR normalization file by looping over all (unique)
crystal pairs and embedding the attenuation and normalization correction
factors, as suggested in the general documentation.
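
In rough Python terms, my writer loop looks like the sketch below. The
per-event field order (attenuation correction factor, then normalization
factor, then the two CASToR crystal IDs) and the little-endian
float32/uint32 types are my reading of the general documentation for a
normalization datafile with those two optional fields enabled - they are
assumptions in this sketch, not a reference implementation, and the exact
layout depends on the flags set in the header.

```python
import struct

# Assumed per-event layout (see above): ACF and normalization factor as
# little-endian float32, followed by the two CASToR crystal IDs as uint32.
EVENT_FMT = "<ffII"

def write_norm_event(fh, acf, norm, id1, id2):
    """Append one normalization event to an open binary file."""
    fh.write(struct.pack(EVENT_FMT, acf, norm, id1, id2))

# Toy factors for two (unique) crystal pairs; the real loop covers all pairs.
factors = {(0, 1): (1.3, 0.95), (0, 2): (1.1, 1.02)}

with open("norm_example.cdf", "wb") as fh:
    for (id1, id2), (acf, norm) in sorted(factors.items()):
        write_norm_event(fh, acf, norm, id1, id2)
```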

When performing the reconstruction (in my case OSEM with 4 mm axial and
transaxial FWHM post-filtering) with normalization factors embedded in both
the list-file and the normalization file, every other slice has reduced
intensity. This artifact does not appear if I do not use the CASToR
normalization file (and thus do not embed attenuation and/or normalization
correction factors in the list-file). This can be seen in the attached
image: (1) shows the reconstructed images (summed) from CASToR with
attenuation and normalization correction embedded in the file, (2) shows
the reconstructed images (summed) when using the pifa obtained from the PET
Toolbox to account for attenuation correction, without attenuation and
normalization correction factors in the CASToR file, and (3) shows the
reconstruction performed with the PET Toolbox itself.
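
To quantify the every-other-slice effect, I compare the mean intensity of
even and odd slices. A small numpy sketch with a toy volume (the array
shape and the 0.7 dimming factor are illustrative only, not values from my
reconstruction):

```python
import numpy as np

# Toy volume mimicking the artifact: every other slice at reduced intensity.
img = np.ones((47, 128, 128))
img[::2] *= 0.7  # even slices dimmed, as in reconstruction (1)

slice_means = img.mean(axis=(1, 2))
even_to_odd = slice_means[::2].mean() / slice_means[1::2].mean()
# An artifact-free reconstruction should give a ratio close to 1.
print(even_to_odd)
```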

Now, if I look at the sinograms extracted from the PET Toolbox, I can see a
notable difference in the number of counts for the first axial indices
(0-70), something that I expect to be corrected for by the normalization
correction. So, I computed a corrected prompts sinogram by subtracting the
randoms and scattered coincidences and then multiplying each sinogram bin
by the attenuation and normalization correction factors. From this I get a
smooth sinogram (summed over all views), which can be seen as (4) in the
attached image. If I do the same thing again, but this time do not multiply
by the normalization correction factors, the differing intensity becomes
apparent in the first axial slices of the sinogram (ring difference 0 for
the DMI with four axial units), seen to the left in (5) in the attached
image.
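
For clarity, the per-bin correction I apply is simply
(prompts - randoms - scatter) * ACF * norm, summed over views afterwards. A
numpy sketch with toy arrays (the sinogram shape and the constant factor
values are placeholders, not the actual DMI geometry or factors):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy sinogram axes: (axial index, view, radial bin).
prompts = rng.poisson(50.0, size=(89, 16, 32)).astype(float)
randoms = np.full(prompts.shape, 5.0)
scatter = np.full(prompts.shape, 3.0)
acf = np.full(prompts.shape, 1.2)   # attenuation correction factors
norm = np.full(prompts.shape, 0.9)  # normalization correction factors

# Corrected trues, then summed over views for visual inspection.
corrected = (prompts - randoms - scatter) * acf * norm
summed_over_views = corrected.sum(axis=1)
```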

This made me think that the error lies in the CASToR normalization file.
So, to be sure that I constructed the normalization file correctly, I
compared my file event by event against the normalization file from the
G.E. SIGNA benchmark and could not find any discrepancy between the
organization of my file and that of the benchmark.

I have tried changing the projector, but the issue remains (although it is
less apparent with the distance-driven projector).

Is there something obvious, or less obvious, that I might have missed? Is
there something that you could point out that might be wrong?

I hope to hear from you!

Best regards,
*Philip*
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 2494922 bytes
Desc: not available
URL: <http://lists.castor-project.org/pipermail/castor-users/attachments/20200925/eed9a2dd/attachment-0001.png>

