
Can't deploy updated deployment package for Lambda with ACK #1550

Closed
timam opened this issue Nov 15, 2022 · 12 comments
Assignees
@Vandita2020

Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
service/lambda: Indicates issues or PRs that are related to lambda-controller.

Comments

timam commented Nov 15, 2022

Describe the bug
I have created a Lambda function with ACK, using a deployment package stored in S3. Now I have a new deployment package and want ACK to deploy it. Unfortunately, ACK gives me the following error:

This resource already exists but is not managed by ACK. To bring the resource under ACK management, you should explicitly adopt the resource by creating a services.k8s.aws/AdoptedResource

Steps to reproduce
1. Create a deployment package and upload it to S3
2. Create a Lambda function with ACK that points at the uploaded deployment package in S3
3. Make any code change, package it, and upload it to S3 under a separate name (see the sketch below)
4. Update the YAML with the new deployment package name
5. Deploy the changes
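
For concreteness, a minimal sketch of the packaging and upload steps; the file, bucket, and key names here are illustrative, not from the original report:

# step 1: package the code and upload it to S3
zip function-v1.zip lambda_function.py
aws s3 cp function-v1.zip s3://my-artifact-bucket/my-function/function-v1.zip

# step 3: after a code change, package and upload under a separate name
zip function-v2.zip lambda_function.py
aws s3 cp function-v2.zip s3://my-artifact-bucket/my-function/function-v2.zip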

Expected outcome
ACK should update the Lambda function with the new deployment package

Environment

  • Kubernetes version
    1.23
  • Using EKS (yes/no), if so version?
    EKS 1.23
  • AWS service targeted (S3, RDS, etc.)
    Lambda
@timam timam added the kind/bug Categorizes issue or PR as related to a bug. label Nov 15, 2022
@a-hilaly
Member

Hi @timam, can you provide YAML files similar to what you wanted to deploy? I would like to reproduce the bug using the same steps.


timam commented Nov 17, 2022

Hey @a-hilaly

I have created a Helm chart with the following YAML.

apiVersion: lambda.services.k8s.aws/v1alpha1
kind: Function
metadata:
  name: {{ .Values.function_name }}
  namespace: {{ .Values.namespace }}
  annotations:
    services.k8s.aws/region: {{ .Values.aws_region }}
spec:
  name: {{ .Values.function_name }}
  code:
    s3Bucket:  {{ .Values.artifact_bucket }}
    s3Key: {{ .Values.function_name }}/{{ .Values.artifact_tag }}
  role: {{ .Values.iam_role }}
  runtime: {{ .Values.runtime }}
  memorySize: {{ .Values.memory }}
  timeout: {{ .Values.timeout }}
  handler: {{ .Values.handler }}
  description: {{ .Values.function_name }} created by ACK lambda-controller
  environment:
    variables:
      {{- range $key, $val := .Values.env_vars }}
        {{ $key }} : {{ $val | quote }}
      {{- end }}
  vpcConfig:
    securityGroupIDs:
      - {{ .Values.security_group_id }}
    subnetIDs:
      {{- range .Values.subnet_ids }}
        - {{ . | quote }}
      {{- end }}

Next, I created an Argo CD pipeline to deploy this Helm chart. I supply all the Helm values from a separate (deployment) Git repo.

I was able to successfully create the Lambda function with this approach. The problem is that when I update the value of artifact_tag in the deployment Git repo, Argo CD syncs the change, but the updated package doesn't get deployed:

This resource already exists but is not managed by ACK.

I deleted and recreated the Lambda function and the issue was resolved; however, I am still unable to deploy an updated package with ack-lambda using the approach described above.
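
For illustration, a hypothetical values file from the deployment repo might look like the following; every name here is made up to match the chart above, and only artifact_tag changes between releases:

function_name: my-function
namespace: default
aws_region: us-east-1
artifact_bucket: my-artifact-bucket
artifact_tag: function-v2.zip
iam_role: arn:aws:iam::123456789012:role/my-lambda-role
runtime: python3.9
memory: 128
timeout: 30
handler: lambda_function.lambda_handler
env_vars:
  LOG_LEVEL: debug
security_group_id: sg-0123456789abcdef0
subnet_ids:
  - subnet-0123456789abcdef0
  - subnet-0fedcba9876543210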

@a-hilaly a-hilaly added service/lambda Indicates issues or PRs that are related to lambda-controller. and removed Lambda labels Dec 13, 2022
@boris-ait

Same here. It seems the Lambda's code is only updated once, when the function is created. By contrast, the Lambda's environment variables are updated when you update the Function manifest.
Of course, you can delete and then recreate the Function resource so that the Lambda's code is updated according to the manifest, but that does not suit a GitOps approach.


a-hilaly commented Feb 1, 2023

/assign @Vandita2020


ack-prow bot commented Feb 1, 2023

@a-hilaly: GitHub didn't allow me to assign the following users: Vandita2020.

Note that only aws-controllers-k8s members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @Vandita2020

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Vandita2020
Member

/assign


ack-bot commented Jul 3, 2023

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale

@ack-prow ack-prow bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 3, 2023

ack-bot commented Sep 1, 2023

Stale issues rot after 60d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 60d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle rotten

@ack-prow ack-prow bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 1, 2023
@a-hilaly a-hilaly removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 25, 2023

ack-bot commented Mar 23, 2024

Issues go stale after 180d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale

@ack-prow ack-prow bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 23, 2024
ack-prow bot pushed a commit to aws-controllers-k8s/lambda-controller that referenced this issue May 7, 2024
**Issue #**
[#1550](aws-controllers-k8s/community#1550)

**Description**
This PR adds support for updating the `Spec.Code.S3` variables. The issue arises because the Lambda API does not keep a record of the `Spec.Code` variables, so ACK cannot compare changes in the state of `Spec.Code` (e.g., a change in `Spec.Code.S3Key`) and thus fails to recognize the changes.

We have therefore added a new field, `Spec.Code.SHA256`. The user has to calculate the SHA256 of their code manually, which they can do by running `sha256sum filename.zip | cut -f1 -d\ | xxd -r -p | base64`. This field is then compared with the current SHA256 taken from `Status.CodeSHA256` to determine whether the deployment package has been updated.
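
To sketch the new workflow (the file, bucket, and key names are illustrative, and the manifest field name sha256 under code is assumed here from the PR's `Spec.Code.SHA256`):

# compute the base64-encoded SHA-256 of the new package, per the command above
sha256sum function-v2.zip | cut -f1 -d' ' | xxd -r -p | base64

# then set the result in the Function manifest so ACK can detect the change:
spec:
  code:
    s3Bucket: my-artifact-bucket
    s3Key: my-function/function-v2.zip
    sha256: <base64 output of the command above>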

**Acknowledgment**
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
@Vandita2020
Member

Closing this issue, as the PR to support updating the deployment package for Lambda functions in ACK has been merged.

@Vandita2020
Member

/close

@ack-prow ack-prow bot closed this as completed May 7, 2024

ack-prow bot commented May 7, 2024

@Vandita2020: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
