Reconstruction of a Real Dataset, Part 1: Introduction

This is part one of a tutorial that describes how to create a reconstruction of a cone-beam computed tomography (CT) dataset (also read part two, part three, and part four). This tutorial includes the preprocessing step that is needed to go from raw X-ray images to projections (in the mathematical sense of the word) that can be used for tomography.

If you need a general overview or a refresher on tomography, read my basic introductory series of articles first. More recently, I wrote a tutorial on the ASTRA Toolbox, a well-known open-source toolbox for tomographic reconstruction. I got quite a few reactions to that tutorial, and one thing that kept coming back was that people wanted to see the reconstruction demonstrated on a real dataset instead of a simulated one. This article does just that, and it also uses the ASTRA Toolbox for the reconstruction. The preprocessing step does not use the ASTRA Toolbox.

For this tutorial, I’ve used a real dataset of an apple. Figure 1 shows one of the raw (i.e., unprocessed) X-ray images, downscaled to 500×500 pixels. I would like to thank my friends at UGCT for the permission to use this dataset on my blog.

Figure 1. Raw X-ray image of an apple.

Apart from the actual images, you’ll need certain information on the dataset and on the geometry of the scanner. This information is typically contained in logfiles that should accompany the X-ray images in your dataset. The following information is relevant for this tutorial.

  • The geometry is cone-beam, with 2001 projections in the range of 0 to 360 degrees.
  • The images are 2000×2000 pixels.
  • The source-detector distance is 1256.0 mm.
  • The object-detector distance is 361.0 mm.
  • The size of a detector pixel is 0.2×0.2 mm.
  • There are 5 dark-frame images that were taken before the scan.
  • There are 20 flat-field images that were taken before the scan and 20 flat-field images that were taken after it.

This is clearly a large dataset (it weighs in at 16 GB of images on my drive). For the examples in this article, I’ve made it a lot smaller: I’ve downscaled the projections to 500×500 pixels and reduced the number of projections to 400 (simply by using only every fifth projection). I’ll continue the rest of this series of articles as if this were the original dataset.
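If you want to produce a similar reduced dataset yourself, a minimal sketch could look like the one below. The file names, the raw_full directory, and the use of imageio and scikit-image are assumptions for illustration; adapt them to however your scanner software stores the images.

# filename: reduce_dataset.py  (illustrative sketch, not part of the original dataset)
import imageio
import numpy as np
from skimage.transform import downscale_local_mean

step = 5  # keep only every 5th projection: 2000 / 5 = 400 images

for i, idx in enumerate(range(0, 2000, step)):
    # Assumed file naming; only the first 2000 of the 2001 projections are
    # used, so that every 5th one gives exactly 400.
    im = imageio.imread(f'raw_full/proj_{idx:04d}.tif').astype(np.float32)
    # 2000x2000 -> 500x500 by averaging 4x4 blocks of pixels.
    im_small = downscale_local_mean(im, (4, 4))
    imageio.imwrite(f'raw/proj_{i:04d}.tif', im_small.astype(np.float32))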

If you are trying to make a reconstruction from a new type of dataset, e.g., from a scanner that you haven’t used before, I would recommend that you always start from a reduced dataset, since it might save you a lot of time in experimenting with different settings.

Converting this info into the format of the first Python fragment of the article ASTRA Toolbox Tutorial: Reconstruction from Projection Images, Part 1 results in the following Python file.

# filename: config.py
import numpy as np
 
distance_source_origin = 895.0  # [mm]
distance_origin_detector = 361.0  # [mm]
detector_pixel_size = 0.8  # [mm]
num_projections = 400
angles = np.linspace(0, 2 * np.pi, num_projections, endpoint=False)
 
margin = 10
horizontal_shift = -2
 
raw_dir = 'raw'
preproc_dir = 'preprocessed'
proj_dir = 'projections'
reco_dir = 'reconstruction'

Note that the detector pixel size is now 0.8 mm instead of 0.2 mm, because each pixel of the reduced images groups 4×4 original pixels. Likewise, distance_source_origin is the source-detector distance minus the object-detector distance (1256.0 − 361.0 = 895.0 mm), since the geometry is expressed as distances from the source and from the detector to the center of the object. The variables after the first group of five will be used in part three, which is still under construction…
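As a preview of how these values end up in the ASTRA Toolbox (the actual reconstruction follows in part four), they roughly translate into the cone-beam projection geometry sketched below. The names detector_rows and detector_cols are assumptions for the 500×500 reduced images; setting the detector spacing to 1×1 expresses the distances in units of detector pixels.

# Sketch of the corresponding ASTRA cone-beam geometry (details in part four).
import astra
from config import (distance_source_origin, distance_origin_detector,
                    detector_pixel_size, angles)

detector_rows = 500  # assumed height of the reduced projection images
detector_cols = 500  # assumed width of the reduced projection images

# Detector spacing of 1x1 means that all distances are given in detector pixels.
proj_geom = astra.create_proj_geom(
    'cone', 1, 1, detector_rows, detector_cols, angles,
    distance_source_origin / detector_pixel_size,
    distance_origin_detector / detector_pixel_size)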

Part two of this tutorial shows how to do flat-field correction of the raw X-ray images.
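As a quick preview of that correction (the function below is just a placeholder sketch of the standard formula; part two works through the details), each raw image is combined with the averaged dark frame and the averaged flat field:

import numpy as np

def flat_field_correct(raw, dark, flat):
    # Standard flat-field correction: raw X-ray image, averaged dark frame,
    # and averaged flat field, all arrays of the same shape.
    return (raw.astype(np.float32) - dark) / (flat - dark)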

Muhammad123 (not verified)

Fri, 08/13/2021 - 16:00

Hi,

Thanks for this great tutorial. It may be a silly question, but can you please tell me about margin and horizontal_shift? I mean, what do they mean?
I am also trying to do cone-beam reconstruction on a dataset where I have the parameters offsetH and offsetW (positive values, in pixels), and I think that's why the projections are biased to the left and tilted downward. I don't know where I need to pass these in the ASTRA Toolbox geometry.

Thanks in Advance.

horizontal_shift is used in part four to correct for the axis of rotation not being entirely in the middle of the detector. margin is used in part three to estimate the initial intensity of the X-ray beam. Your offsetW could be a value to use for horizontal_shift (or the negative of that value). For offsetH you could add something like the roll statement in the reconstruction script, but then for the vertical axis.
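A minimal sketch of that idea, assuming your projection data is stored in the ASTRA 3D layout (detector row, projection angle, detector column); the value of the shift is just a placeholder:

import numpy as np

# Dummy projection stack in the ASTRA 3D layout (row, angle, column).
projections = np.zeros((500, 400, 500), dtype=np.float32)

vertical_shift = -3  # placeholder; derive this from your offsetH value
# Roll along the vertical detector axis (axis 0), analogous to the
# horizontal np.roll used in part four.
projections = np.roll(projections, vertical_shift, axis=0)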

Hi Tom,
Thanks for your reply. I found out later in the tutorial what they (horizontal_shift and margin) are used for.
You mentioned that we can use ASTRA for the adjustments of the geometry. Can I use the function astra.functions.move_vol_geom(geom, pos, is_relative=False) instead of np.roll(), because in my dataset the background is not as clean as in yours?

Amirreza (not verified)

Wed, 06/15/2022 - 09:33

Hi Tom,
Thank you very much for this excellent tutorial.
Could you please tell me how you calculated the amount of horizontal_shift?
I saw its effect on my image reconstruction, but I don't know how I should calculate its value for each dataset.
Thanks a lot

Thanks for your kind words! There are techniques to estimate this from the data, of course, but the ASTRA Toolbox does not contain algorithms for that. Commercial scanners normally include this functionality in their bundled software. To keep things simple for this particular example, I decided to stick with an integer value for horizontal_shift, and I actually determined the best integer value by making several reconstructions and picking the best one... ☺
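A rough sketch of that trial-and-error procedure; reconstruct_center_slice is a placeholder for whatever reconstruction routine you use (for example, the script from part four):

import numpy as np

def reconstruct_center_slice(projections):
    # Placeholder: run your reconstruction (e.g., the part-four script) and
    # return the central slice; here it just returns a dummy slice.
    return projections[projections.shape[0] // 2]

projections = np.zeros((500, 400, 500), dtype=np.float32)  # dummy projection stack

# Try a range of integer shifts, reconstruct each one, and visually pick
# the sharpest central slice.
slices = {}
for shift in range(-5, 6):
    shifted = np.roll(projections, shift, axis=2)
    slices[shift] = reconstruct_center_slice(shifted)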
