TrAXer

Traces Real Axion X-ray Emission Rays? Or something like that. Who knows, naming things is always fun, huh.

This is a fork of my Ray Tracing in One Weekend implementation (see Ray Tracing in One Weekend), which extends it significantly.

It is a version of the raytracer part of https://github.com/jovoy/AxionElectronLimit that is easier to use, extend and interact with.

It adds some features from the second book and others from the amazing https://pbr-book.org/, but most importantly it turns the code into a useful tool for my work:

It is now a raytracer for X-rays (in the context of Axion helioscopes like CAST or (Baby)IAXO).

To make it actually useful, it has an interactive raytracing mode using SDL2, so that you can investigate the scene geometry in real time. The standard approach of sampling rays from the camera is extended by secondary ray sampling, which samples rays directly from light sources towards LightTargets. In addition, ImageSensors let us visualize the accumulation of rays on a sensor in the scene (and save them to binary buffers with F5).
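
In code the idea of the secondary sampling boils down to something like the following. This is just a sketch to illustrate the concept; the type and proc names are made up and are not the actual TrAXer API:

import std / [math, random]

type
  Vec3 = object
    x, y, z: float
  Ray = object
    origin, dir: Vec3

proc `-`(a, b: Vec3): Vec3 = Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)

proc normalized(v: Vec3): Vec3 =
  let len = sqrt(v.x * v.x + v.y * v.y + v.z * v.z)
  Vec3(x: v.x / len, y: v.y / len, z: v.z / len)

proc sampleDisk(center: Vec3, radius: float, rnd: var Rand): Vec3 =
  # uniform point on a disk parallel to the XY plane, e.g. a circular
  # light source or a circular LightTarget
  let r = radius * sqrt(rnd.rand(1.0))
  let phi = rnd.rand(2.0 * PI)
  Vec3(x: center.x + r * cos(phi), y: center.y + r * sin(phi), z: center.z)

proc sampleLightRay(srcC, tgtC: Vec3, srcR, tgtR: float, rnd: var Rand): Ray =
  # one secondary ray: from a random point on the source towards a random
  # point on the target, independent of the camera
  let origin = sampleDisk(srcC, srcR, rnd)
  let target = sampleDisk(tgtC, tgtR, rnd)
  Ray(origin: origin, dir: normalized(target - origin))

when isMainModule:
  var rnd = initRand(42)
  let ray = sampleLightRay(Vec3(z: 1000.0), Vec3(z: 0.0), 100.0, 10.0, rnd)
  echo ray.dir.x, " ", ray.dir.y, " ", ray.dir.z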

Every time you move the camera the image buffer is reset. As long as the camera stays put, it accumulates more and more rays.

Installation

The only real dependency is SDL2, which should be easy to install on all Linux distributions.

Beyond that it uses malebolgia for multithreading. In addition, parsing a --solarModelFile depends on https://github.com/SciNim/datamancer and its dependencies.

Install both via:

nimble install malebolgia datamancer
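
For reference, building and running might then look something like the following. The module name is just a placeholder here; use whatever the main Nim file of the repository is called:

nim c -d:release --threads:on raytracer.nim   # "raytracer.nim" is a placeholder name
./raytracer --llnl --width=1000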

Controls

Once you click on the window, it activates mouse control. To exit mouse control, press Escape.

  • W, A, S, D (or arrow keys): movement
  • Ctrl, Space: up, down (a bit broken)
  • T: switches to light tracing mode when single threaded (irrelevant nowadays)
  • PageUp, PageDown: Increase / decrease the movement speed
  • TAB: invert the view front to back
  • F5: Save the current buffers (image, counts and all ImageSensors) to the out directory, which can be plotted using the plotBinary helper script.

Options

The CLI is auto-generated by cligen. Help texts still need to be added, but the basics should be pretty obvious.

Usage
  main [optional-params]
Looking at mirrors!
Options:
  -h, --help                              print this cligen-erated help

  --help-syntax                           advanced: prepend,plurals,..

  -w=, --width=          int        600   set width

  -m=, --maxDepth=       int        5     set maxDepth

  -s=, --speed=          float      1.0   set speed

  --speedMul=            float      1.1   set speedMul

  --spheres              bool       false set spheres

  -l, --llnl             bool       false set llnl

  --llnlTwice            bool       false set llnlTwice

  -f, --fullTelescope    bool       false set fullTelescope

  -a, --axisAligned      bool       false set axisAligned

  --focalPoint           bool       false set focalPoint

  -v=, --vfov=           float      90.0  set vfov

  -n=, --numRays=        int        100   set numRays

  --visibleTarget        bool       false set visibleTarget

  -g, --gridLines        bool       false set gridLines

  -u, --usePerfectMirror bool       true  set usePerfectMirror

  --sourceKind=          SourceKind skSun select 1 SourceKind

  --solarModelFile=      string     ""    set solarModelFile

  --nJobs=               int        16    set nJobs

  -r=, --rayAt=          float      1.0   set rayAt
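
For example, assuming the compiled binary is called main as the usage line suggests, the LLNL telescope scene with a larger window and more rays per update could be started via:

./main --llnl --width=1000 --numRays=200

The other flags toggle scene variations such as --fullTelescope or --llnlTwice, which are shown below.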

Show me!

The classical

The LLNL CAST telescope setup

Pictures of the first implementation of the LLNL CAST telescope setup (without any external pipes of course!)

You get this scene if you use the --llnl option.

./media/raytracing_llnl_telescope_bore.png
./media/raytracing_llnl_telescope_sensor.png
./media/raytracing_llnl_magnet_bore_sun_disk.png
./media/raytracing_focal_point_interactive_first_attempt.png
./media/raytracing_focal_point_paralle_xray_finger_interactive_after_fixes.png
./media/raytracing_llnl_focal_point_image_sensor_parallel_light_looking_at.png
./media/raytracing_llnl_focal_point_image_sensor_xray_finger_14.2m_3mm_looking_at.png

Hypothetical full LLNL telescope

You get this scene if you use the --llnl --fullTelescope options.

./media/full_telescope_side.png
./media/full_telescope_front_1500mm.png
./media/full_telescope_at_1530mm.png
./media/full_telescope_side_above.png
./media/full_telescope_towards_sensor.png

Hypothetical “double” LLNL telescope

You get this scene if you use the --llnl --llnlTwice options.

./media/llnl_twice_from_above.png
./media/llnl_twice_from_front.png
./media/llnl_twice_towards_image_sensor.png

Things to do

  • [ ] Use colormap for the ttLights view -> Will this even remain? We could also implement it by just having the key bring the camera directly to the image sensor and/or reading from the sensorBuf instead of having a separate sampling proc
  • [X] Refactor ttLights code to also work in the multithreaded case -> Maybe just rely on the actual Sensor buffer? And then when in ttLights mode we could only sample rays based on the sources and simply display the image sensor buffer? That way we wouldn’t get into the trouble of having to read and write from the temp buffer in places where we don’t know where they might end up.
    • [X] the underlying buffer is now protected by a Lock, both for read and write access
    • At this point it should be pretty much really GC safe. Each thread has its own RNG, Camera and HittablesList. Relevant information between them is synced after each tracing run is done.
      • [X] ORC is still not happy though. -> I marked all ref types as acyclic (hittables and the textures). This seems to have fixed it. They are indeed acyclic. We have no intention of constructing ref cycles after all.
  • [X] Disable sensitivity of ImageSensor to rays emitted by the Camera directly! (I guess they kind of act like noise haha) -> I gave the Ray type a RayType field that indicates if the initial ray was created from the Camera or directly from a light source. NOTE: This currently implies that if a ray is shot from the camera, hits a light source and then hits an image sensor, no counts are recorded on the image sensor! Only the rays we sample directly from light sources count. BUT: Currently our light sources act as perfect sinks anyway! They do not scatter light, so this scenario does not exist anyway.
  • [ ] A BVH node of the telescope is currently broken! It causes all sorts of weirdness. The shape is correct, but the lighting is very wrong!
  • [X] make the multithreaded code safer. It’s still a bit crazy. -> taken care of by using acyclic & a lock on the sensor
  • [ ] Implement X-ray reflectivities
  • [X] Check LLNL telescope setup
  • [X] Implement Target material that is used when sampling rays for DiffuseLight.
  • [X] For parallel light use Laser instead!
  • [X] Implement SolarEmission material to sample correctly based on flux per radius CDF
  • [ ] COULD WE add an option to “draw” a sampled ray? I think it should be pretty easy!
    1. have a button, say F8, when pressed it samples a ray
    2. trace the entire ray and mark each hit, store each intermediate ray
    3. construct cylinders for each ray and overlay them on the sampled ray (see the cylinder sketch below this list) -> How?
      1. Use ray origin as target position for the cylinder origin
      2. Use ray direction as guide for required rotation
      3. Use (hit position - start) as length, making it end at the hit
    4. add these cylinders to the world HittablesList
    5. update the world in each thread's data.
    6. upon the next sampling pass the ray should appear.

    Potential problem: The added ray interferes with the image we see on the ImageSensor! -> Add only to the world that is used for the Camera!

  • [ ] Add ImageSensorRGB which stores the colors in each pixel. That way we can give each layer a color, e.g. using ggplot2 colors, and then we can tell where rays from each layer end up on the sensor!
  • [ ] Implement an ImageSensorVolume thing. Then sample a conversion point at that distance and take the projected point as the point hit on the actual readout! This allows us to do simplified gas detectors (the conversion sampling is sketched below this list). How? The material of the ‘box’ would be the volume. If hit, sample a conversion point given the gas (set up initially) and the ray energy. Based on that, propagate the corresponding distance and activate the pixel at the ‘bottom’! I like this idea. WE COULD EVEN couple this idea with our fake event generator! For each conversion point generate an event and process it! -> If we did that we could have a real time event simulator, lol, by resetting the ImageSensorVolume and lowering the number of rays shot to the detector! -> Well: the downside is of course that the camera still has to sample rays to see what is shown on the ImageSensorVolume in the live event display. For that we need multiple-second exposures to see stuff after all. Combining that makes it not very live anymore. We would need some way to directly map image sensor data to the camera. That might not be too hard if we had something more like a raycasting approach to the ImageSensor. I.e. before tracing a ray in the classical approach, do a test ray cast and see if we hit a pixel, and in that case skip the actual raytracing? But aren’t we already doing that in some sense? If we hit an ImageSensor we don’t actually continue. Ah! But we still check everything in the scene to be sure nothing is closer than the sensor! I guess that makes it slow? Need the bounding boxes working! -> Fixing those should make sampling towards the image sensor very fast, if my theory is correct.
  • [ ] Add a new shape that allows subtracting one shape from another. I.e. have a rectangle and subtract a disk from the center. Should be easy, just combine their individual hits using boolean logic. Subtraction is: hitsA and not hitsB for example (sketched below this list).
  • [ ] ADD A SANITY CHECK that in our current model the bottom part of layer i+1 is indeed perfectly flush with the top part of layer i!
  • [ ] Implement an alternative to sSum where we keep the entire flux data! This would allow us to directly look at the axion image at arbitrary energies. It would need a 3D Sensor3D to model the flux as the 3rd dimension. Note: when implementing that, it seems a good idea to order the data such that we can copy an entire Spectrum row into the sensor. So the last dimension should be the flux, i.e. [x, y, E]. That way copying the Spectrum should be efficient, which should be the most expensive operation on it.
  • [ ] implement toggle between XrayMatter behavior for X-rays and light for Camera!
  • [ ] FIX Camera usage of attenuation for XrayMatter when calling eval! Need to convert to RGBSpectrum
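
A minimal sketch of the "draw a sampled ray" cylinder construction from the list above. All names here are illustrative and not the existing shape types of the code:

import std / math

type
  Vec3 = object
    x, y, z: float
  RayCylinder = object  # hypothetical visualization shape
    origin: Vec3        # cylinder origin = ray origin (step 1)
    dir: Vec3           # unit ray direction, guides the required rotation (step 2)
    length: float       # |hit - origin|, so the cylinder ends at the hit (step 3)
    radius: float

proc `-`(a, b: Vec3): Vec3 = Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)
proc norm(v: Vec3): float = sqrt(v.x * v.x + v.y * v.y + v.z * v.z)
proc `/`(v: Vec3, s: float): Vec3 = Vec3(x: v.x / s, y: v.y / s, z: v.z / s)

proc toCylinder(origin, hit: Vec3, radius = 0.05): RayCylinder =
  # turn one ray segment (origin -> hit point) into a thin cylinder, which
  # would then be appended only to the camera's HittablesList
  let d = hit - origin
  let l = d.norm
  RayCylinder(origin: origin, dir: d / l, length: l, radius: radius)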
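
And a sketch of the central piece of the ImageSensorVolume idea, namely sampling the conversion point inside the gas: under a simple exponential absorption model the conversion depth can be drawn by inverse transform sampling from the attenuation length (again only an illustration, not existing code):

import std / [math, random]

proc sampleConversionDepth(attLength, thickness: float, rnd: var Rand): float =
  # depth at which the X-ray converts, assuming exponential absorption with
  # attenuation length `attLength`; resampling truncates it to the volume
  # thickness, i.e. we force a conversion somewhere inside the volume
  while true:
    let depth = -attLength * ln(rnd.rand(1.0))
    if depth <= thickness:
      return depth

when isMainModule:
  var rnd = initRand(123)
  echo sampleConversionDepth(2.0, 3.0, rnd)  # made-up numbers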
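
Finally, the boolean shape subtraction mentioned above really is just "hitsA and not hitsB". For the rectangle-minus-disk example, with purely illustrative helper procs:

proc insideRect(x, y, halfW, halfH: float): bool =
  abs(x) <= halfW and abs(y) <= halfH

proc insideDisk(x, y, radius: float): bool =
  x * x + y * y <= radius * radius

proc hitsRectMinusDisk(x, y, halfW, halfH, holeRadius: float): bool =
  # subtraction via boolean logic: hitsA and not hitsB
  insideRect(x, y, halfW, halfH) and not insideDisk(x, y, holeRadius)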

NOTE: The LLNL simulation is slightly wrong in the sense that the mirrors actually become smaller towards the end where the radius is smaller, because we use real cone cutouts. In reality each mirror starts from a flat piece of glass of 225 mm length and fixed width, which is then bent to a cone shape. That means the subtended ‘angle’ φ_max actually increases along the axis. We don’t model this, but the effect should be very marginal. See fig. 1.4 of the DTU thesis about the LLNL telescope.
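
To put a rough number on that: a flat piece of glass of fixed width w bent onto a cone of local radius r(z) subtends an azimuthal angle of about φ(z) ≈ w / r(z). The simulation keeps φ_max fixed along the axis, which shrinks the arc width to φ_max · r(z) as the radius decreases, whereas in reality w is fixed and φ(z) grows slightly towards the small end of each mirror.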
