This repository contains the code accompanying the paper: Deep DIH: Statistically Inferred Reconstruction of Digital In-Line Holography by Deep Learning
In this paper, we propose a novel DL method that takes advantage of the main characteristics of auto-encoders for blind single-shot hologram reconstruction, based solely on the captured sample and without the need for a large dataset of samples with available ground truth to train the model. The simulation results demonstrate the superior performance of the proposed method compared to state-of-the-art methods for single-shot hologram reconstruction.
If you have any questions, please contact the author: hl459@nau.edu
You can also review the existing results in the .html or .ipynb files:
- Complex_conv.html
- DeepDIH.html / DeepDIH.ipynb
- main.py
Note: the HTML and Notebook files can also be found at https://drive.google.com/drive/folders/13o86AYWUPvxQxanq4cHxIiDjW22vOd75?usp=sharing
- GPU memory > 8 GB
- Python 3
- PyTorch (1.6.0) install:
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
(Anaconda), or with pip:
pip install torch===1.6.0 torchvision===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
- OpenCV for Python install:
pip install opencv-contrib-python
- torchsummary install:
pip install torchsummary
For more information, see the torchsummary documentation.
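After installation, a quick sanity check such as the following (optional, not part of the repository) confirms that the main dependencies are available and that the GPU is visible:

```python
# Optional environment check: verify PyTorch, CUDA, OpenCV, and torchsummary.
import torch
import cv2
import torchsummary  # noqa: F401  (only checking that the import succeeds)

print("PyTorch version:", torch.__version__)          # expected: 1.6.0
print("CUDA available:", torch.cuda.is_available())   # should be True for GPU training
print("OpenCV version:", cv2.__version__)
```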
- Clone this repository.
git lfs clone https://github.com/XiwenChen-NAU/DeepDIH.git
cd DeepDIH
- Run:
python main.py
- The outputs (amplitude and phase) are saved in the subfolder
./results
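The exact output file names are determined by main.py; as a hypothetical illustration, the saved amplitude and phase images can be inspected with OpenCV like this:

```python
# Hypothetical example: the actual file names under ./results are set in main.py.
import cv2

amplitude = cv2.imread("./results/amplitude.png", cv2.IMREAD_GRAYSCALE)
phase = cv2.imread("./results/phase.png", cv2.IMREAD_GRAYSCALE)

cv2.imshow("Amplitude reconstruction", amplitude)
cv2.imshow("Phase reconstruction", phase)
cv2.waitKey(0)
cv2.destroyAllWindows()
```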
- Spherical light function parameters:
- hologram size Nx, Ny:
Nx = 1000 Ny = 1000
- object-sensor distance z:
z = 857
- wavelength of light wavelength:
wavelength = 0.635
- pixel pitch deltaX, deltaY:
deltaX = 1.67 deltaY = 1.67
- If you want to set your own parameters, open
main.py
and modify them in main(Nx = *, Ny = *, z = *, wavelength = *, deltaX = *, deltaY = *),
then run it. A call with the default values is sketched below.
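Assuming main() in main.py exposes the signature shown above (an assumption about how main.py is organized), such a call might look like:

```python
# Sketch (not part of the repository): calling main() with the default
# parameters listed above. Adjust the values to match your own optical setup.
from main import main   # assumes main.py exposes a callable main()

main(Nx=1000, Ny=1000,          # hologram size in pixels
     z=857,                     # object-sensor distance
     wavelength=0.635,          # wavelength of light
     deltaX=1.67, deltaY=1.67)  # pixel pitch
```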
The objective function can be formulated as

$\min_{\theta} \left\| T\big(f_{\theta}(g)\big) - h \right\|_2^2$

where $h$ is the captured hologram, $g$ is its ASP back-propagation used as the network input, $f_{\theta}$ is the auto-encoder, and $T(\cdot)$ is the transmission operator that propagates the reconstructed object wave forward to the hologram plane.
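A minimal PyTorch-style sketch of this objective follows; it is not the repository's exact implementation, and asp_propagate, the variable names, and the use of the propagated intensity are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def objective(net, hologram, back_prop_field, asp_propagate,
              z, wavelength, deltaX, deltaY):
    """Sketch of the self-supervised objective (illustrative, not the repo's code).

    net             -- the auto-encoder f_theta
    hologram        -- the captured hologram h (real-valued tensor)
    back_prop_field -- ASP back-propagation of h, used as the network input g
    asp_propagate   -- placeholder name for an angular-spectrum propagation routine
    """
    # Reconstructed complex object wave predicted by the auto-encoder
    object_wave = net(back_prop_field)
    # Transmission: forward-propagate the reconstruction to the hologram plane
    field_at_sensor = asp_propagate(object_wave, z, wavelength, deltaX, deltaY)
    # Compare the propagated intensity with the captured hologram (assumption)
    return F.mse_loss(field_at_sensor.abs() ** 2, hologram)
```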
We implement our model using the PyTorch framework on a GPU workstation with an NVIDIA Quadro RTX 5000 graphics card. The Adam optimizer is adopted with a fixed learning rate of 0.0005 for simulation-based experiments and 0.01 for optical experiments. We train the network, with an angular spectrum propagation (ASP) back-propagation reconstruction as its input, for 1500 to 3500 iterations for simulated holograms and 2500 to 5000 iterations for real-world holograms.
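As a rough illustration of these settings (again a sketch, reusing the assumed objective and asp_propagate from the previous example, not the repository's actual training code):

```python
import torch

def train(net, hologram, back_prop_field, asp_propagate,
          z, wavelength, deltaX, deltaY,
          lr=5e-4, num_iterations=3000):
    """Illustrative training loop (not the repository's exact code).

    lr: 0.0005 for simulation-based experiments, 0.01 for optical experiments.
    num_iterations: roughly 1500-3500 (simulated) or 2500-5000 (optical holograms).
    """
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)  # Adam with a fixed learning rate
    for _ in range(num_iterations):
        optimizer.zero_grad()
        loss = objective(net, hologram, back_prop_field, asp_propagate,
                         z, wavelength, deltaX, deltaY)
        loss.backward()
        optimizer.step()
    return net
```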
Optical Experimental hologram of USAF Resolution Chart and reconstructions. (A) The captured hologram. (B) Amplitude reconstruction with our method. (C) The reconstructed quantitative phase with our method.