This collection of scripts is used to extract and analyze data gathered by TRINAT's nuclear acquisition to measure the nuclear polarization of trapped potassium atoms.
- `_constants.py`: Stores any relevant physical constants (e.g. $\mu_B$) in a single place.
- `_load.py`: Contains three functions, `load_data()`, `compute_observables()` and `generate_histograms()` (a sketch of the data flow follows this list):
  - `load_data()` extracts the relevant data from the given `.root` file. It also converts units where needed (`TDC_*` $\rightarrow$ $ns$, `TTTL_*` $\rightarrow$ $\mu s$). The function returns a dictionary of NumPy arrays.
  - `compute_observables()` takes the dictionary generated by `load_data()` and computes relevant observables (e.g. `Y = TDC_ION_MCP_LE[0] - TDC_PHOTO_DIODE_LE[0]`, ...).
  - `generate_histograms()` plots the data extracted using `load_data()` after running `compute_observables()`. It plots the data both before and after applying cuts.
- `_models.py`: Stores the models used to fit the data (currently only $F=2 \rightarrow F'$ is implemented).
- `_physics.py`: Stores functions to compute physical quantities (e.g. the nuclear polarization).
- `polarization.py`: Performs the analysis using the functions defined in the files above. Loads the data, makes cuts and fits the model to the generated spectra. Generates a printout of the fit parameters and saves them to a `.txt` file.
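As a rough illustration of the data flow between `load_data()` and `compute_observables()`: the branch names below follow the example above, but the array shapes, values and unit conversions are placeholders, not the real ones from `_load.py`.

```python
import numpy as np

# Sketch of compute_observables(): derive observables from the dictionary
# of NumPy arrays returned by load_data(). Shapes are assumed to be
# (events, hits); the [:, 0] index picks the first hit of each event.
def compute_observables(data):
    data["Y"] = data["TDC_ION_MCP_LE"][:, 0] - data["TDC_PHOTO_DIODE_LE"][:, 0]
    return data

# Placeholder data standing in for the output of load_data() (units: ns)
data = {
    "TDC_ION_MCP_LE": np.array([[105.0, 250.0], [98.0, 240.0]]),
    "TDC_PHOTO_DIODE_LE": np.array([[10.0, 20.0], [12.0, 22.0]]),
}
print(compute_observables(data)["Y"])  # [95. 86.]
```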
- Make a new folder and copy the `.py` files from one of the examples.
- Move the relevant `.root` files into the folder (one for FLIP and one for NORM polarization).
- Set `OUTPUT_PATH`, and set `path_flip` and `path_norm` to point to the correct `.root` files.
- Set the measurement parameters `V_MIN`, `V_MAX`, `STEPS` and `LOCKPOINT` (see the example after this list). These should be recorded for every measurement.
- Run the script. The script must be run from the Polarization directory (running `pwd` should print `/{some path}/Polarization`). On Linux the script can be run by simply typing `python /{some path}/Polarization/{folder name}/polarization.py` (this assumes that the Python installation is properly set up on the system).
  - When using conda, ensure that the correct environment is used by running `conda activate {environment name}` before executing the Python script.
- Check the histograms generated in the `.root` files called `histograms_*.root` in the output folder to make sure that the cuts are applied properly. Adjust `CUTS` if necessary and run again.
- Once the histograms look good, adjust the fit parameters until the fit looks good.
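For reference, a sketch of the measurement parameters as they might be set in `polarization.py`; every value and path below is a placeholder, so use the values recorded for your measurement:

```python
OUTPUT_PATH = "./output"        # where fit.pdf, histograms_*.root, parameters.txt go
path_norm = "./norm_run.root"   # data taken with NORM polarization
path_flip = "./flip_run.root"   # data taken with FLIP polarization

V_MIN = -1.0     # start of the scan (placeholder)
V_MAX = 1.0      # end of the scan (placeholder)
STEPS = 50       # number of scan steps (placeholder)
LOCKPOINT = 0.0  # laser lockpoint for this measurement (placeholder)
```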
Python version: 3.10.5 or higher (Python maintains excellent backwards compatibility, excluding the jump from Python 2 to Python 3).
On Linux systems I recommend using a conda environment. This avoids messing with the Python version installed natively, which might affect how the system works, and it makes it easy to install ROOT with its Python interface (which is needed for this code to run).
I recommend Miniconda (https://docs.conda.io/en/latest/miniconda.html). Prior to installing any conda environments I also recommend installing mamba (https://anaconda.org/conda-forge/mamba). Mamba speeds up how fast conda can install environments (on my machine installing ROOT without mamba took about 15 minutes, with mamba less than a minute).
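For example, a fresh environment containing ROOT can then be created with `mamba create -n {environment name} -c conda-forge root` and activated with `conda activate {environment name}` (the environment name is up to you).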
- A ROOT install that provides access to ROOT's Python interface (I recommend installing it using conda).
  - I have tested this with ROOT 6. ROOT 5 should also work since only very basic functionality is used, but I have not tested this.
- Matplotlib: required for plotting data.
- NumPy: used for making cuts on data and for computations.
- iminuit: used for fitting.
- uncertainties: used to propagate statistical uncertainties through the nuclear polarization calculations.
- SciPy: used to obtain physical constants from the NIST CODATA database and for its implementation of the Voigt profile.
- tabulate: used for making nice tables.
These can be installed using pip:
pip install matplotlib numpy iminuit uncertainties scipy tabulate
- `fit.pdf`/`.png`: Image of the two spectra along with fits and residuals. In the residuals plot, the $\pm 1\sigma$ band of the residuals is indicated by a shaded bar.
- `histograms_*.root`: Saves the data relevant to the analysis in `.root` files. Included is the data before and after applying cuts.
- `parameters.txt`: Copy of the script output. Includes the fit parameters and statistics, the nuclear polarization and the cuts applied to the data.
- Top row shows the data (black dots with error bars) and the fit (red line).
- Bottom row shows the residuals (black dots), the mean of the residuals (dark grey line) and the $\pm 1\sigma$ region centered around the mean (grey region).
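A minimal matplotlib sketch of the residual panel described above, using random placeholder data (the exact styling of `fit.pdf` may differ):

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder residuals; in the real plot these come from data - fit
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)
residuals = rng.normal(0.0, 1.0, x.size)
mean, sigma = residuals.mean(), residuals.std()

fig, ax = plt.subplots()
ax.axhspan(mean - sigma, mean + sigma, color="0.85")  # +/- 1-sigma region
ax.axhline(mean, color="0.4")                         # mean of the residuals
ax.plot(x, residuals, "k.")                           # residuals
ax.set_xlabel("frequency")
ax.set_ylabel("residual")
plt.show()
```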
The main problem with the polarization data is that the sublevels are identified using their frequency shifts.
This is hard since the sublevel shifts depend on three parameters.
Accounting for the "frequency" shift and the Zeeman shift at the same time leaves us with two unrelated parameters that both affect the frequency locations of the sublevel transitions.
Essentially we have one equation with two unknowns, leaving us with infinitely many solutions.
One approach to circumvent this issue would be to fix one of the parameters, but the frequency shifts are usually not known to high enough precision, and the magnetic field strength inside the trap is hard to measure in situ.
The second approach is to fit two spectra taken at opposite pumping polarizations (norm and flip) simultaneously.
This takes advantage of the symmetry of the Zeeman effect when probing it with light of opposite circular polarization.
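As a sketch of the symmetry argument (this uses the standard weak-field Zeeman expression; the notation is mine, not taken from the code): the shift of a sublevel $|F, m_F\rangle$ is

$$\Delta E_{F,m_F} = g_F \mu_B B \, m_F,$$

and reversing the pumping polarization mirrors the populated sublevels, $m_F \rightarrow -m_F$, so the corresponding transitions shift by the same magnitude in opposite directions. A shared magnetic-field parameter is therefore strongly constrained when both spectra are fit at once.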
To fit this global model, the sum of two log-likelihoods is minimized. A single dataset is fit by minimizing the Poisson likelihood chi-square (the Baker-Cousins form)

$$\chi^2 = 2 \sum_i \left[ \mu_i(\vec{\theta}) - y_i + y_i \ln\frac{y_i}{\mu_i(\vec{\theta})} \right],$$

where $y_i$ are the measured counts in bin $i$ and $\mu_i(\vec{\theta})$ is the model prediction for the parameter vector $\vec{\theta}$ (bins with $y_i = 0$ contribute only their $\mu_i$ term). To perform a simultaneous fit we add the likelihood functions computed using the norm and flip data:

$$\chi^2_{\text{global}} = \chi^2_{\text{norm}}(\vec{\theta}, \vec{p}_{\text{norm}}) + \chi^2_{\text{flip}}(\vec{\theta}, \vec{p}_{\text{flip}})$$

Here we have 3 parameter vectors: $\vec{\theta}$ contains the shared parameters, while $\vec{p}_{\text{norm}}$ and $\vec{p}_{\text{flip}}$ contain the sublevel populations for norm and flip polarization, respectively.
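A minimal, self-contained sketch of such a global likelihood. The function and parameter names mirror those used in `polarization.py`, but the Gaussian stand-in model and the exact signatures are assumptions; the real model lives in `_models.py`:

```python
import numpy as np

def sublevel_model(x, amplitude, centre, width):
    # Stand-in line shape; the real sublevel model lives in _models.py
    return amplitude * np.exp(-0.5 * ((x - centre) / width) ** 2) + 1e-9

def poisson_chi2(y, mu):
    # Baker-Cousins form: 2 * sum[ mu - y + y*ln(y/mu) ];
    # empty bins (y == 0) contribute only their mu term
    chi2 = 2.0 * np.sum(mu - y)
    mask = y > 0
    return chi2 + 2.0 * np.sum(y[mask] * np.log(y[mask] / mu[mask]))

def global_poisson_likelihood(x, y_norm, y_flip, shared, p_norm, p_flip):
    # Shared parameters enter both spectra; the populations differ
    return (poisson_chi2(y_norm, sublevel_model(x, *p_norm, *shared))
            + poisson_chi2(y_flip, sublevel_model(x, *p_flip, *shared)))
```

In `polarization.py` the analogous cost function is what gets minimized with `Minuit`.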
Note that in its current form the calculation of the $\chi^2$ statistic can fail for some datasets, producing warnings and errors like the following:
```
/mnt/Secondary/Repositories/Polarization/Example 2/polarization.py:132: RuntimeWarning: divide by zero encountered in log
  chi2_flip[y_flip != 0] += y_flip * np.log(y_flip / sublevel_model(frequencies, *p_flip))
/mnt/Secondary/Repositories/Polarization/Example 2/polarization.py:132: RuntimeWarning: invalid value encountered in multiply
  chi2_flip[y_flip != 0] += y_flip * np.log(y_flip / sublevel_model(frequencies, *p_flip))
Traceback (most recent call last):
  File "/mnt/Secondary/Repositories/Polarization/Example 2/polarization.py", line 132, in <module>
    chi2_flip[y_flip != 0] += y_flip * np.log(y_flip / sublevel_model(frequencies, *p_flip))
ValueError: operands could not be broadcast together with shapes (45,) (53,) (45,)
```
In this case just comment out the section of code that computes the variables `chi2_flip` and `chi2_norm`, and manually set `chi2` equal to zero.
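A sketch of that workaround (the `chi2_flip` line is taken from the traceback above; the `chi2_norm` line is assumed to be symmetric):

```python
# Goodness-of-fit computation disabled; see the broadcasting error above.
# chi2_norm[y_norm != 0] += y_norm * np.log(y_norm / sublevel_model(frequencies, *p_norm))
# chi2_flip[y_flip != 0] += y_flip * np.log(y_flip / sublevel_model(frequencies, *p_flip))
chi2 = 0.0
```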
- Fitting $F \rightarrow 1'$. Currently only $F \rightarrow 2'$ is modelled. To implement this functionality consider the following steps:
  - Implement a function `sublevel_model_F1` in `_models.py` (can be done similarly to how `sublevel_model_F2` is implemented).
  - Implement a function `nuclear_polarization_41K_F1` in `_physics.py`. Use `nuclear_polarization_41K_F2` as a guide. Using `ufloat()` allows for easily propagating statistical uncertainties (see the sketch after this list).
  - In `polarization.py` adjust `global_poisson_likelihood` to call `sublevel_model_F1` instead of `sublevel_model_F2`. Adjust the parameter names so that they make sense.
  - Adjust the initial guess in the call to `Minuit()`. This also involves changing the parameter names: `Minuit` knows the parameter names of the cost function, so the names used when setting the initial guess need to be correct.
  - Adjust parameter limits and fixed parameters if necessary.
  - Adjust what parameters are assigned to `p_norm` and `p_flip`. In practice this only requires making sure that the correct parameter names are given and that they are in the same order as required by the `sublevel_model_F1` function.
  - Call `nuclear_polarization_41K_F1` instead of `nuclear_polarization_41K_F2` and make sure that the arguments are correctly provided.
- Instead of the $\chi^2$, it might be better to just provide the value of the likelihood function at the minimum.
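A hypothetical sketch of `nuclear_polarization_41K_F1`, showing how `ufloat()` propagates statistical uncertainties through a population-weighted average. The final conversion from $\langle m_F \rangle$ to the nuclear polarization is physics that should be taken from `nuclear_polarization_41K_F2`; the return value here is only a placeholder for that step:

```python
from uncertainties import ufloat

def nuclear_polarization_41K_F1(populations, errors):
    # populations/errors: fitted sublevel populations for m_F = -1, 0, +1
    p = [ufloat(value, error) for value, error in zip(populations, errors)]
    mean_mF = sum(m * pm for m, pm in zip((-1, 0, +1), p)) / sum(p)
    # Placeholder: convert <m_F> to the nuclear polarization P as done
    # in nuclear_polarization_41K_F2
    return mean_mF

# Uncertainties propagate automatically through the arithmetic above
print(nuclear_polarization_41K_F1([0.05, 0.10, 0.85], [0.01, 0.02, 0.02]))
```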