Inconsistent Gitea behavior and definition of the SERVE_DIRECT config #16711
Comments
I have checked that avatars/attachments/archives already support that, but we may never support it for LFS.
Thanks for your reply. By the way, why hasn't Gitea implemented it? Are there some security concerns behind that? Thanks again.
S3 or MinIO will create a signed secret link for downloading the objects, and by default the link expires after some time.
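For illustration, this is roughly how such an expiring link can be generated with the minio-go client; the endpoint, credentials, bucket, and object name below are made-up placeholders, not Gitea's actual wiring:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/url"
	"time"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Connect to the S3/MinIO endpoint (placeholder credentials).
	client, err := minio.New("minio:9000", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: false,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Ask MinIO to sign a GET URL that is only valid for 15 minutes.
	// Anyone holding this URL can download the object until it expires.
	signedURL, err := client.PresignedGetObject(context.Background(),
		"gitea-lfs", "objects/aa/bb/aabbcc-placeholder", 15*time.Minute, url.Values{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(signedURL.String())
}
```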
I got it. I want to implement this task and contribute to Gitea. I think I need to add some code for that. By the way, should I also add this feature to LFS file downloading on the web page?
Before you do that, do you know whether the git client or git-lfs will follow the redirection URL?
I found a mistake in my previous response.
When I implemented the redirection logic in the BatchHandler, the client downloaded directly from my S3 URL.
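Roughly, the idea looks like the sketch below. The function and parameter names are placeholders of my own, not Gitea's actual internals; the point is only that the batch response's download href becomes a presigned URL when direct serving is enabled:

```go
import (
	"context"
	"net/url"
	"time"

	"github.com/minio/minio-go/v7"
)

// buildDownloadHref decides which href the LFS batch response advertises for a
// download action. With direct serving enabled it returns a presigned S3/MinIO
// URL, otherwise the usual Gitea-proxied URL. (Placeholder sketch, not Gitea code.)
func buildDownloadHref(ctx context.Context, client *minio.Client, serveDirect bool,
	bucket, objectPath, proxiedURL string) string {
	if !serveDirect {
		return proxiedURL
	}
	signed, err := client.PresignedGetObject(ctx, bucket, objectPath,
		15*time.Minute, url.Values{})
	if err != nil {
		// Fall back to serving the content through Gitea if signing fails.
		return proxiedURL
	}
	return signed.String()
}
```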
And in the upload phase, I would prefer to follow the download phase -- just return the PresignedPutObject URL in the upload action of the batch response.
OK. I will send a PR for the download part. I have also tried to change the upload phase.
The upload may be tricky because currently we verify the size and hash of the uploaded data against the LFS metadata.
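To make the concern concrete, here is a rough sketch (placeholder names, not Gitea code) of what the server could still verify after a direct upload: the size is cheap to confirm with a HEAD-style request, but the LFS OID (a SHA-256 of the content) cannot be confirmed without reading the whole object back.

```go
import (
	"context"
	"fmt"

	"github.com/minio/minio-go/v7"
)

// verifyDirectUpload checks that an object uploaded straight to S3/MinIO at
// least matches the size recorded in the LFS pointer. The content hash is not
// re-checked here, which is exactly what makes direct upload trickier than
// direct download. (Placeholder sketch, not Gitea code.)
func verifyDirectUpload(ctx context.Context, client *minio.Client,
	bucket, objectPath string, expectedSize int64) error {
	info, err := client.StatObject(ctx, bucket, objectPath, minio.StatObjectOptions{})
	if err != nil {
		return fmt.Errorf("object missing after direct upload: %w", err)
	}
	if info.Size != expectedSize {
		return fmt.Errorf("size mismatch: got %d, want %d", info.Size, expectedSize)
	}
	return nil
}
```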
I have some questions about the current upload workflow. For reference, here is an upload action returned by GitHub's own LFS server:
"upload": {
"href": "https://github-cloud.s3.amazonaws.com/alambic/media/325942445/fb/8f/fb8f7d8435968c4f82a726a92395be4d16f2f63116caf36c8ad35c60831ab042?actor_id=1666336&key_id=0&repo_id=330759100",
"header": {
"Authorization": "AWS4-HMAC-SHA256 Credential=xxxxxxxxx/20210820/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=ad139477146c04b7b512ee9118093a63729cd2a6377a010279a8ecb03b55deab",
"x-amz-content-sha256": "fb8f7d8435968c4f82a726a92395be4d16f2f63116caf36c8ad35c60831ab042",
"x-amz-date": "20210820T164935Z"
},
"expires_at": "2021-08-20T17:04:35Z",
"expires_in": 900
} The client must provide the
|
Thanks for your great answer. As you say, we can't put the necessary extra checks in the verify handler because it may not be called. I will read more AWS S3 docs for details. I can't be sure my idea is correct, and I am not good at English.
I think the question in the title has been resolved. For the upload part, we can open another issue.
Gitea version (or commit ref): gitea docker latest
Git version: git version 2.30.1
Operating system: ubuntu18.04
Database (use [x]):
Can you reproduce the bug at https://try.gitea.io:
Log gist:
Description
I tried the config field 'SERVE_DIRECT', which is described in the Gitea docs as:
Allows the storage driver to redirect to authenticated URLs to serve files directly. Currently, only Minio/S3 is supported via signed URLs, local does nothing.
My related config is below:
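(The snippet below only illustrates the kind of setup I mean; the endpoint, bucket, and credentials are placeholders, and key names may differ slightly between Gitea versions.)

```ini
[lfs]
STORAGE_TYPE = minio
SERVE_DIRECT = true
MINIO_ENDPOINT = minio:9000
MINIO_ACCESS_KEY_ID = <access key>
MINIO_SECRET_ACCESS_KEY = <secret key>
MINIO_BUCKET = gitea
MINIO_USE_SSL = false
```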
However, I found that nothing changes when I enable this config. All LFS actions, including add/download/clone, still add to the network I/O of the Gitea container.
My docker stats I/O summary is below:
Is my understanding of the 'SERVE_DIRECT' config wrong? Or is it impossible for my LFS file upload/download requests to be served by MinIO directly with the current version of Gitea?
Thanks for any related answers or suggestions.
Screenshots