I'm writing a Terraform module that creates AWS Lambda functions. It should work with both containerized and non-containerized functions, so I've added a boolean variable, containerization. I'm currently working on the non-containerized path and trying to implement a few important behaviors in the module, but I can't get everything to work as intended.

This is the directory structure of the module:

.
├── ./lambda_code
│   ├── ./lambda_code/order_verification.py
│   └── ./lambda_code/requirements.txt
├── ./locals.tf
├── ./main.tf
├── ./order_verification.zip
├── ./outputs.tf
├── ./providers.tf

This is the logic I'm trying to achieve:

  1. If this is the first apply and the order_verification.zip file does not exist yet, use the dummy logic (so the plan passes without errors), create the zip file in the root of the module, upload it to S3, and continue to create the Lambda function.

  2. If this is not the first run and only the code inside ./lambda_code has changed, the module should re-zip the files in ./lambda_code, upload the newly created zip file to S3, and update the Lambda function, all in one apply. This is where I'm stuck: if I get it to recreate the zip file, it doesn't update the function with the new code, and if I remove the zip file, Terraform doesn't detect the removal and just tries to update the function without actually creating the zip file and uploading it to S3.

I've tried many things but whenever I get one thing to work, another thing breaks.

This is the current version of the Lambda module's main.tf:

# IAM Role for the Lambda Function
resource "aws_iam_role" "lambda_execution_role" {
  name               = var.role_name
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect    = "Allow",
        Principal = { Service = "lambda.amazonaws.com" },
        Action    = "sts:AssumeRole"
      }
    ]
  })

  tags = var.tags
}

resource "aws_iam_role_policy" "lambda_policy" {
  name   = "${replace(var.function_name, "_", "-")}-policy"
  role   = aws_iam_role.lambda_execution_role.id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect   = "Allow",
        Action   = ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
        Resource = "arn:aws:logs:*:*:*"
      },
      {
        Effect   = "Allow",
        Action   = ["s3:GetObject"],
        Resource = "arn:aws:s3:::ordering-system/*"  # Grant access to all objects in the bucket
      },
      {
        Effect   = "Allow",
        Action   = ["sqs:SendMessage"],
        Resource = var.sqs_queue_arn
      },
      {
        Effect   = "Allow",
        Action   = ["dynamodb:PutItem", "dynamodb:GetItem", "dynamodb:UpdateItem"],
        Resource = var.dynamodb_table_arn
      }
    ]
  })
}

resource "aws_lambda_permission" "allow_s3_trigger" {
  count = var.monitored_bucket_name != "" ? 1 : 0

  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda.function_name
  principal     = "s3.amazonaws"

  source_arn = "arn:aws:s3:::${var.monitored_bucket_name}"
}

resource "aws_s3_bucket_notification" "monitored_bucket_notification" {
  bucket = var.monitored_bucket_name

  lambda_function {
    lambda_function_arn = aws_lambda_function.lambda.arn
    events              = ["s3:ObjectCreated:*"]
    filter_suffix       = ".json"
  }

  depends_on = [
    aws_lambda_permission.allow_s3_trigger
  ]
}

# Null resource to zip and upload code if containerization is false
resource "null_resource" "zip_and_upload" {
  count = !var.containerization && var.function_directory != "" ? 1 : 0

  provisioner "local-exec" {
    command = <<EOT
      cd ${var.function_directory} && \
      if [ -f "../${var.function_name}.zip" ]; then rm ../${var.function_name}.zip; fi && \
      zip -rX ${var.function_name}.zip ${var.function_name}.py requirements.txt && \
      mv ${var.function_name}.zip ../
    EOT
  }

  triggers = {
    zip_hash = sha256(join("", [
      filebase64sha256("${var.function_directory}/${var.function_name}.py"),
      fileexists("${var.function_directory}/requirements.txt") ? filebase64sha256("${var.function_directory}/requirements.txt") : "no-reqs"
    ]))
  }
}

# Upload the zip file to S3
resource "aws_s3_object" "lambda_zip" {
  bucket = var.s3_bucket_name
  key    = "${var.function_name}.zip"
  source = "./${var.function_name}.zip"
  etag   = filebase64sha256("./${var.function_name}.zip")

  lifecycle {
    ignore_changes = [etag]
  }

  depends_on = [null_resource.zip_and_upload]
}

# Lambda function resource
resource "aws_lambda_function" "lambda" {
  function_name = var.function_name
  role          = aws_iam_role.lambda_execution_role.arn

  # Use S3 deployment if containerization is false
  runtime          = !var.containerization ? var.runtime : null
  handler          = !var.containerization ? var.handler : null
  s3_bucket        = !var.containerization ? var.s3_bucket_name : null
  s3_key           = !var.containerization ? "${var.function_name}.zip" : null
  source_code_hash = !var.containerization && fileexists("./${var.function_name}.zip") ? filebase64sha256("./${var.function_name}.zip") : base64sha256("dummy")

  # Use container image if containerization is true
  image_uri = var.containerization ? var.image_uri : null

  environment {
    variables = var.environment
  }

  tags = var.tags

  # Ensure Lambda function doesn't get recreated for no reason
  lifecycle {
    ignore_changes = [source_code_hash]
  }

  depends_on = [
    aws_s3_object.lambda_zip
  ]
}

Edit #1:

If I try this:

# Null resource to ensure the zip file is created if missing
resource "null_resource" "ensure_zip_exists" {
  count = !var.containerization && var.function_directory != "" && !fileexists("./${var.function_name}.zip") ? 1 : 0

  provisioner "local-exec" {
    command = <<EOT
      cd ${var.function_directory} && \
      zip -rX ${var.function_name}.zip ${var.function_name}.py requirements.txt && \
      mv ${var.function_name}.zip ../
    EOT
  }

  triggers = {
    missing_zip = !fileexists("./${var.function_name}.zip") ? "missing" : "exists"
  }
}

# Null resource to re-zip and upload if code changes
resource "null_resource" "zip_and_upload" {
  count = !var.containerization && var.function_directory != "" ? 1 : 0

  provisioner "local-exec" {
    command = <<EOT
      cd ${var.function_directory} && \
      zip -rX ${var.function_name}.zip ${var.function_name}.py requirements.txt && \
      mv ${var.function_name}.zip ../
    EOT
  }

  triggers = {
    zip_hash = sha256(join("", [
      filebase64sha256("${var.function_directory}/${var.function_name}.py"),
      fileexists("${var.function_directory}/requirements.txt") ? filebase64sha256("${var.function_directory}/requirements.txt") : "no-reqs"
    ]))
  }

  depends_on = [null_resource.ensure_zip_exists]
}

# Upload the zip file to S3
resource "aws_s3_object" "lambda_zip" {
  bucket = var.s3_bucket_name
  key    = "${var.function_name}.zip"
  source = fileexists("./${var.function_name}.zip") ? "./${var.function_name}.zip" : null
  etag   = fileexists("./${var.function_name}.zip") ? filebase64sha256("./${var.function_name}.zip") : base64sha256("dummy")

  lifecycle {
    ignore_changes = [etag]
  }

  depends_on = [null_resource.zip_and_upload]
}

# Lambda function resource
resource "aws_lambda_function" "lambda" {
  function_name = var.function_name
  role          = aws_iam_role.lambda_execution_role.arn

  # Use S3 deployment if containerization is false
  runtime          = !var.containerization ? var.runtime : null
  handler          = !var.containerization ? var.handler : null
  s3_bucket        = !var.containerization ? var.s3_bucket_name : null
  s3_key           = !var.containerization ? "${var.function_name}.zip" : null
  source_code_hash = !var.containerization && fileexists("./${var.function_name}.zip") ? filebase64sha256("./${var.function_name}.zip") : base64sha256("dummy")

  # Use container image if containerization is true
  image_uri = var.containerization ? var.image_uri : null

  environment {
    variables = var.environment
  }

  tags = var.tags

  depends_on = [
    aws_s3_object.lambda_zip
  ]
}

When I run apply I bump into this issue:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.order_verification_lambda.aws_s3_object.lambda_zip will be created
  + resource "aws_s3_object" "lambda_zip" {
      + acl                    = "private"
      + bucket                 = "order-verification-code"
      + bucket_key_enabled     = (known after apply)
      + content_type           = (known after apply)
      + etag                   = "taLJYlBhI2bqJy/6xtl0Sq9LRarNlqp8/Lkx7jtVglk="
      + force_destroy          = false
      + id                     = (known after apply)
      + key                    = "order_verification.zip"
      + kms_key_id             = (known after apply)
      + server_side_encryption = (known after apply)
      + storage_class          = (known after apply)
      + tags_all               = (known after apply)
      + version_id             = (known after apply)
    }

  # module.order_verification_lambda.null_resource.ensure_zip_exists[0] will be created
  + resource "null_resource" "ensure_zip_exists" {
      + id       = (known after apply)
      + triggers = {
          + "missing_zip" = "missing"
        }
    }

  # module.order_verification_lambda.null_resource.zip_and_upload[0] must be replaced
-/+ resource "null_resource" "zip_and_upload" {
      ~ id       = "8133341959139985635" -> (known after apply)
      ~ triggers = { # forces replacement
          - "combined_status" = "933f13bed7d8850565965991daabb9a3715951e43eaea47b26edb2154aa3ce6e" -> null
          + "zip_hash"        = "07aa0c2506298283c26eb5a825943dd2ae031cb6b8383345c81d1fc6dd4794d2"
        }
    }

Plan: 3 to add, 0 to change, 1 to destroy.
module.order_verification_lambda.null_resource.zip_and_upload[0]: Destroying... [id=8133341959139985635]
module.order_verification_lambda.null_resource.zip_and_upload[0]: Destruction complete after 0s
module.order_verification_lambda.null_resource.ensure_zip_exists[0]: Creating...
module.order_verification_lambda.null_resource.ensure_zip_exists[0]: Provisioning with 'local-exec'...
module.order_verification_lambda.null_resource.ensure_zip_exists[0] (local-exec): Executing: ["/bin/sh" "-c" "      cd lambda_code && \\\n      zip -rX order_verification.zip order_verification.py requirements.txt && \\\n      mv order_verification.zip ../\n"]
module.order_verification_lambda.null_resource.ensure_zip_exists[0] (local-exec):   adding: order_verification.py (deflated 67%)
module.order_verification_lambda.null_resource.ensure_zip_exists[0] (local-exec):   adding: requirements.txt (deflated 16%)
module.order_verification_lambda.null_resource.ensure_zip_exists[0]: Creation complete after 0s [id=1949417230852622518]
module.order_verification_lambda.null_resource.zip_and_upload[0]: Creating...
module.order_verification_lambda.null_resource.zip_and_upload[0]: Provisioning with 'local-exec'...
module.order_verification_lambda.null_resource.zip_and_upload[0] (local-exec): Executing: ["/bin/sh" "-c" "      cd lambda_code && \\\n      zip -rX order_verification.zip order_verification.py requirements.txt && \\\n      mv order_verification.zip ../\n"]
module.order_verification_lambda.null_resource.zip_and_upload[0] (local-exec):   adding: order_verification.py (deflated 67%)
module.order_verification_lambda.null_resource.zip_and_upload[0] (local-exec):   adding: requirements.txt (deflated 16%)
module.order_verification_lambda.null_resource.zip_and_upload[0]: Creation complete after 0s [id=5217531696015301099]
╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for module.order_verification_lambda.aws_s3_object.lambda_zip to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/aws" produced an invalid new value for .source: was null, but now cty.StringVal("./order_verification.zip").
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for module.order_verification_lambda.aws_s3_object.lambda_zip to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/aws" produced an invalid new value for .etag: was cty.StringVal("taLJYlBhI2bqJy/6xtl0Sq9LRarNlqp8/Lkx7jtVglk="), but now
│ cty.StringVal("gDlsnAlsiBpZK9s2tACPmCufe5Lk+ubCoUbcLSNBIcA=").
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

I need this process to run without errors as it's going to be part of the CI process.

  • The docs say you should use the following syntax for updating etag: etag = filemd5("path/to/file"). – Marko E
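A minimal sketch of what the comment suggests, reusing the question's variable names (assumed here); note that S3 only reports a plain MD5 ETag for non-multipart, non-KMS uploads:

resource "aws_s3_object" "lambda_zip" {
  bucket = var.s3_bucket_name
  key    = "${var.function_name}.zip"
  source = "./${var.function_name}.zip"
  # filemd5() yields the hex-encoded MD5 that matches S3's ETag for simple uploads
  etag   = filemd5("./${var.function_name}.zip")
}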

1 Answer

You can do the zipping using normal Terraform constructs; see: https://registry.terraform.io/providers/hashicorp/archive/latest/docs/data-sources/file

Probably something like this:

data "archive_file" "lambda_files" {
  type        = "zip"
  output_path = "${path.module}/lambda_code.zip"

  source {
    # file() reads the file's contents; a bare path string would be zipped as literal text
    content  = file("${path.module}/lambda_code/order_verification.py")
    filename = "order_verification.py"
  }

  source {
    content  = file("${path.module}/lambda_code/requirements.txt")
    filename = "requirements.txt"
  }
}

Which you can then use elsewhere like:

data.archive_file.lambda_files.output_path

And don't forget to use the data source's hashed outputs (such as output_base64sha256) for source_code_hash, so that changes are detected exactly when the code changes, rather than on every apply or not at all.
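For example, a minimal sketch of that wiring (variable and resource names are carried over from the question, so treat them as assumptions):

resource "aws_s3_object" "lambda_zip" {
  bucket      = var.s3_bucket_name
  key         = "${var.function_name}.zip"
  source      = data.archive_file.lambda_files.output_path
  # source_hash changes whenever the archive contents change, forcing a re-upload
  source_hash = data.archive_file.lambda_files.output_base64sha256
}

resource "aws_lambda_function" "lambda" {
  function_name = var.function_name
  role          = aws_iam_role.lambda_execution_role.arn
  runtime       = var.runtime
  handler       = var.handler
  s3_bucket     = aws_s3_object.lambda_zip.bucket
  s3_key        = aws_s3_object.lambda_zip.key
  # Lambda compares this base64-encoded SHA-256 against the deployed package,
  # so a code change re-zips, re-uploads, and updates the function in one apply
  source_code_hash = data.archive_file.lambda_files.output_base64sha256
}

Because the archive is produced by a data source at plan time, the zip exists before the plan is evaluated, which avoids the first-apply chicken-and-egg problem and removes the need for the null_resource ordering entirely.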
