This project lets you map gaze data from the Pupil Labs Neon eye tracker onto an alternative egocentric camera view.
You need Python 3.10 or higher installed on your system, as well as an available GPU. This repository assumes basic familiarity with the command line and Python; if you are not comfortable with them, or if you do not have access to a local GPU, we recommend using our Google Colab notebook instead:
Open the terminal and go to the directory where you would like to clone the repository:
cd /path/to/your/directory
Clone the repository by running the following command from the terminal:
git clone git@github.com:pupil-labs/action_cam_mapper.git
Create a virtual environment for the project:
python3.10 -m venv egocentric_mapper
source egocentric_mapper/bin/activate
Or if you are using conda:
conda create -n egocentric_mapper python=3.10
conda activate egocentric_mapper
Go to the project directory and install the package:
cd /path/to/your/directory/action_cam_mapper
pip install -e .
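After installing, you can sanity-check your setup with a short Python snippet. This is only a convenience check, not part of the package; the `nvidia-smi` lookup is a rough proxy for GPU availability and assumes an NVIDIA driver setup:

```python
import shutil
import sys

def check_environment() -> bool:
    """Return True if Python is 3.10+ and an NVIDIA GPU driver appears available."""
    ok = True
    if sys.version_info < (3, 10):
        print(f"Python 3.10+ required, found {sys.version.split()[0]}")
        ok = False
    # Finding nvidia-smi on PATH is a rough proxy for a usable NVIDIA GPU.
    if shutil.which("nvidia-smi") is None:
        print("nvidia-smi not found; no NVIDIA GPU driver detected")
        ok = False
    return ok

if __name__ == "__main__":
    check_environment()
```

If either check fails, consider switching to the Google Colab notebook mentioned above.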
Download the directory with the model weights for EfficientLOFTR from the following download link and place it in the src/pupil_labs/action_cam_mapper/efficient_loftr directory.
To run the project, you can open 'PL-mapper.ipynb' in the IDE of your choice and run the cells. Alternatively, you can run the following command from the terminal, which will open the notebook in your browser ready to run:
jupyter notebook --port=9000 src/pupil_labs/action_cam_mapper/PL-mapper.ipynb
For any questions or bug reports, reach out on our Discord server or email us at info@pupil-labs.com.