Python functions and CLI to mirror public git repositories available over HTTP(S) to S3. Essentially, it converts repositories served via git's smart protocol into the so-called dumb protocol. It uses streaming under the hood and no temporary disk space, so mirroring should be able to run on systems that don't have much disk or available memory, even on repositories with large objects.
This project mirrors objects stored in Large File Storage (LFS). Note, however, that LFS objects are not accessible via the dumb protocol. To work around this, you can use git-lfs-http-mirror, which fires up a temporary local LFS server during git commands.
At the time of writing, repositories with many objects can be slow to mirror.
(Demo video: mirror-git-to-s3.mov)
pip install mirror-git-to-s3
To mirror one or more repositories from Python, use the mirror_repos function, passing it an iterable of (source, target) mappings.
from mirror_git_to_s3 import mirror_repos
mirror_repos((
    ('https://example.test/my-first-repo', 's3://my-bucket/my-first-repo'),
    ('https://example.test/my-second-repo', 's3://my-bucket/my-second-repo'),
))
Once a repository is mirrored to a bucket that allows unauthenticated reads, it can be cloned with standard git commands via the virtual-host or path-style URL of the S3 bucket.
git clone https://my-bucket.s3.eu-west-2.amazonaws.com/my-first-repo
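For reference, the mirror is served as plain files per git's dumb protocol: an info/refs file listing branches and tags, a HEAD file, and one key per loose object under objects/. Below is a minimal sketch that lists these keys, assuming the bucket and repository names from the examples above.

import boto3

# List the keys written for one mirrored repository
s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='my-bucket', Prefix='my-first-repo/'):
    for obj in page.get('Contents', []):
        print(obj['Key'])

# Typical keys, assuming the standard dumb-protocol layout (abridged):
#   my-first-repo/HEAD
#   my-first-repo/info/refs
#   my-first-repo/objects/2f/...   (one key per loose object)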
Under the hood, boto3 is used to communicate with S3. The boto3 client is constructed automatically, but you can override the default by using the get_s3_client argument.
import boto3
from mirror_git_to_s3 import mirror_repos
mirror_repos(mappings(), get_s3_client=lambda: boto3.client('s3'))
This can be used to mirror to S3-compatible storage.
import boto3
from mirror_git_to_s3 import mirror_repos
mirror_repos(mappings(), get_s3_client=lambda: boto3.client('s3', endpoint_url='http://my-host.com/'))
To mirror repositories from the command line, pass pairs of --source and --target options to mirror-git-to-s3.
mirror-git-to-s3 \
--source 'https://example.test/my-first-repo' --target 's3://my-bucket/my-first-repo' \
--source 'https://example.test/my-second-repo' --target 's3://my-bucket/my-second-repo'
At the time of writing, there is no known standard way of discovering a set of associated git repositories, so to remain general, this project must be told the source and target address of each repository explicitly.
The project requests a git packfile containing all objects from each source, separates it into its component git objects, and stores each as a separate S3 object.
- The packfile is requested via a single POST request. Its response is stream-processed to avoid loading it all into memory at once (a sketch of such a request follows this list).
- An attempt is made to split processing of the response into separate threads where possible. Stream processing is somewhat at odds with parallel processing, but there are still parts that can be moved to separate threads (see the thread-pool sketch after this list).
- Where parallelism is not possible, for example when a delta object in the packfile has to wait for its base object, a threading.Event is used for the thread to wait (see the threading.Event sketch after this list).
- Delta object processing is quite slow. A delta object is an object whose contents aren't given directly in the packfile, but rather as instructions based on the contents of another object. Each instruction can result in a request to S3, which has high latency. Efforts are made to reduce the effects of this, such as fetching more data than needed each time and caching the results, but this can probably still be improved (see the ranged-read sketch after this list).
- So far a deadlock has not been observed, but avoiding one depends on the packfile being returned in an order where base objects always come before the deltas that depend on them.
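To make the single POST request mentioned above concrete, here is a minimal sketch of a streaming smart-protocol fetch using the requests library. It is illustrative rather than this project's actual code: the commit SHA is a placeholder, and a real client would first discover it via GET /info/refs?service=git-upload-pack.

import requests

def pkt_line(payload):
    # git pkt-line framing: four hex digits of total length, then the payload
    return f'{len(payload) + 4:04x}'.encode() + payload

# Placeholder SHA: a real client takes this from the ref advertisement
body = pkt_line(b'want ' + b'a' * 40 + b'\n') + b'0000' + pkt_line(b'done\n')

with requests.post(
    'https://example.test/my-first-repo/git-upload-pack',
    data=body,
    headers={'Content-Type': 'application/x-git-upload-pack-request'},
    stream=True,  # process the packfile as it arrives, never all at once
) as response:
    response.raise_for_status()
    # The response is a short NAK pkt-line followed by the raw packfile
    for chunk in response.iter_content(chunk_size=65536):
        ...  # feed each chunk to the pack parser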
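The thread-pool point above can be illustrated with a sketch: parsing the stream stays sequential, but each reconstructed object can be uploaded to S3 from a worker thread. The parsed_objects generator is hypothetical, standing in for the pack parser.

import boto3
from concurrent.futures import ThreadPoolExecutor

s3 = boto3.client('s3')

def upload(key, data):
    # Uploads are independent of the stream, so they can overlap
    s3.put_object(Bucket='my-bucket', Key=key, Body=data)

with ThreadPoolExecutor(max_workers=8) as executor:
    # parsed_objects is a hypothetical generator yielding (key, bytes)
    # as each object is inflated from the streamed packfile
    for key, data in parsed_objects():
        executor.submit(upload, key, data)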
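The threading.Event pattern mentioned above can be sketched as follows: one event per object ID, set when that object is available, and waited on by any thread holding a delta against it. This illustrates the pattern rather than the project's actual code.

import threading

events = {}  # object SHA -> Event, set once that object is stored in S3

def event_for(sha):
    # dict.setdefault is atomic in CPython, so this is thread-safe
    return events.setdefault(sha, threading.Event())

def on_object_stored(sha):
    # Called by whichever thread finished writing an object
    event_for(sha).set()

def process_delta(base_sha, instructions):
    # A delta can't be resolved until its base object exists
    event_for(base_sha).wait()
    ...  # fetch the base from S3 and apply the delta instructions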
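Finally, the over-fetching and caching described for delta processing might look like the sketch below: each read from a base object is served from a cached block, and a ranged GET to S3 happens only on a cache miss. The block size, cache, and read helper are all assumptions for illustration, and reads spanning a block boundary are not handled.

import boto3

s3 = boto3.client('s3')
BLOCK = 1024 * 1024  # fetch a whole megabyte per miss to amortise S3 latency
cache = {}  # (key, block_start) -> bytes

def read(key, start, length):
    block_start = (start // BLOCK) * BLOCK
    block = cache.get((key, block_start))
    if block is None:
        # Ranged GET: only the needed region of the base object is fetched
        response = s3.get_object(
            Bucket='my-bucket',
            Key=key,
            Range=f'bytes={block_start}-{block_start + BLOCK - 1}',
        )
        block = cache[(key, block_start)] = response['Body'].read()
    offset = start - block_start
    return block[offset:offset + length]  # assumes the read fits in one block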