
Runtime error "runtime error: invalid memory address or nil pointer dereference" on provisioning instances #321

Closed
idrissneumann opened this issue Jul 29, 2024 · 10 comments · Fixed by #322

@idrissneumann

Hi.

For the past few days, we have been getting this error from a service that had been running fine for two years:

ERROR:grpc._server:Exception calling application: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.UNAVAILABLE
	details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:41035: Failed to connect to remote host: Connection refused"
	debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2024-07-29T08:19:23.184324972+00:00", grpc_status:14, grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:41035: Failed to connect to remote host: Connection refused"}"
>
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/pulumi/automation/_server.py", line 80, in run
    loop.run_until_complete(run_in_stack(self.program))
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.9/site-packages/pulumi/runtime/stack.py", line 138, in run_in_stack
    await run_pulumi_func(run)
  File "/usr/local/lib/python3.9/site-packages/pulumi/runtime/stack.py", line 52, in run_pulumi_func
    await wait_for_rpcs()
  File "/usr/local/lib/python3.9/site-packages/pulumi/runtime/stack.py", line 100, in wait_for_rpcs
    log.debug(
  File "/usr/local/lib/python3.9/site-packages/pulumi/log.py", line 45, in debug
    _log(engine, engine_pb2.DEBUG, msg, resource, stream_id, ephemeral)
  File "/usr/local/lib/python3.9/site-packages/pulumi/log.py", line 144, in _log
    engine.Log(req)
  File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 1160, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 1003, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.UNAVAILABLE
	details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:41035: Failed to connect to remote host: Connection refused"
	debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2024-07-29T08:19:23.183446819+00:00", grpc_status:14, grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:41035: Failed to connect to remote host: Connection refused"}"
>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/pulumi/automation/_server.py", line 91, in run
    log.debug("Resource monitor has terminated, shutting down.")
  File "/usr/local/lib/python3.9/site-packages/pulumi/log.py", line 45, in debug
    _log(engine, engine_pb2.DEBUG, msg, resource, stream_id, ephemeral)
  File "/usr/local/lib/python3.9/site-packages/pulumi/log.py", line 144, in _log
    engine.Log(req)
  File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 1160, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 1003, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.UNAVAILABLE
	details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:41035: Failed to connect to remote host: Connection refused"
	debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:41035: Failed to connect to remote host: Connection refused", grpc_status:14, created_time:"2024-07-29T08:19:23.183896131+00:00"}"
>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/grpc/_server.py", line 555, in _call_behavior
    response_or_iterator = behavior(argument, context)
  File "/usr/local/lib/python3.9/site-packages/pulumi/automation/_server.py", line 125, in Run
    return ctx.run(run)
  File "/usr/local/lib/python3.9/site-packages/pulumi/automation/_server.py", line 113, in run
    log.debug(f"Cancelling {len(pending)} tasks.")
  File "/usr/local/lib/python3.9/site-packages/pulumi/log.py", line 45, in debug
    _log(engine, engine_pb2.DEBUG, msg, resource, stream_id, ephemeral)
  File "/usr/local/lib/python3.9/site-packages/pulumi/log.py", line 144, in _log
    engine.Log(req)
  File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 1160, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 1003, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.UNAVAILABLE
	details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:41035: Failed to connect to remote host: Connection refused"
	debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2024-07-29T08:19:23.184324972+00:00", grpc_status:14, grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:41035: Failed to connect to remote host: Connection refused"}"
>
2024-07-29 08:19:24,217 INFO sqlalchemy.engine.Engine ROLLBACK
INFO:sqlalchemy.engine.Engine:ROLLBACK
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/usr/local/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 276, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.9/site-packages/prometheus_fastapi_instrumentator/middleware.py", line 169, in __call__
    raise exc
  File "/usr/local/lib/python3.9/site-packages/prometheus_fastapi_instrumentator/middleware.py", line 167, in __call__
    await self.app(scope, receive, send_wrapper)
  File "/usr/local/lib/python3.9/site-packages/asgi_correlation_id/middleware.py", line 90, in __call__
    await self.app(scope, receive, handle_outgoing_request)
  File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 69, in app
    await response(scope, receive, send)
  File "/usr/local/lib/python3.9/site-packages/starlette/responses.py", line 174, in __call__
    await self.background()
  File "/usr/local/lib/python3.9/site-packages/starlette/background.py", line 43, in __call__
    await task()
  File "/usr/local/lib/python3.9/site-packages/starlette/background.py", line 28, in __call__
    await run_in_threadpool(self.func, *self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/usr/local/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/app/src/utils/instance.py", line 103, in create_instance
    result = ProviderDriver().create_instance(instance_id, ami_image, hashed_instance_name, environment, instance_region, instance_zone, instance_type, generate_dns, root_dns_zone)
  File "/app/src/drivers/ScalewayDriver.py", line 129, in create_instance
    up_res = stack.up()
  File "/usr/local/lib/python3.9/site-packages/pulumi/automation/_stack.py", line 314, in up
    up_result = self._run_pulumi_cmd_sync(args, on_output)
  File "/usr/local/lib/python3.9/site-packages/pulumi/automation/_stack.py", line 839, in _run_pulumi_cmd_sync
    result = self.workspace.pulumi_command.run(
  File "/usr/local/lib/python3.9/site-packages/pulumi/automation/_cmd.py", line 238, in run
    raise create_command_error(result)
pulumi.automation.errors.RuntimeError:
 code: 255
 stdout: Updating (dddd-zvngvz):

 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (0s)
@ updating......
 +  scaleway:index:InstanceIp publicIp creating (0s)
@ updating....
 +  scaleway:index:InstanceIp publicIp created (1s)
@ updating....
 +  scaleway:index:InstanceServer dddd-zvngvz creating (0s)
@ updating.........................................
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) panic: runtime error: invalid memory address or nil pointer dereference
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x12c4e37]
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) goroutine 81 [running]:
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) github.com/scaleway/terraform-provider-scaleway/v2/scaleway.resourceScalewayInstanceServerRead({0x1ac09f0, 0xc000adf980}, 0xc000c7e580, {0x13f1420, 0xc000c637e0})
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/github.com/scaleway/terraform-provider-scaleway/v2@v2.10.0/scaleway/resource_instance_server.go:532 +0x4f7
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) github.com/scaleway/terraform-provider-scaleway/v2/scaleway.resourceScalewayInstanceServerCreate({0x1ac09f0, 0xc000adf980}, 0x7faedbc39108?, {0x13f1420?, 0xc000c637e0?})
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/github.com/scaleway/terraform-provider-scaleway/v2@v2.10.0/scaleway/resource_instance_server.go:500 +0x2225
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc000754a80, {0x1ac09b8, 0xc000146008}, 0xd?, {0x13f1420, 0xc000c637e0})
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/github.com/pulumi/terraform-plugin-sdk/v2@v2.0.0-20220824175045-450992f2f5b9/helper/schema/resource.go:712 +0x12e
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc000754a80, {0x1ac09b8, 0xc000146008}, 0x0, 0xc000c7e400, {0x13f1420, 0xc000c637e0})
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/github.com/pulumi/terraform-plugin-sdk/v2@v2.0.0-20220824175045-450992f2f5b9/helper/schema/resource.go:842 +0xa85
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) github.com/pulumi/pulumi-terraform-bridge/v3/pkg/tfshim/sdk-v2.v2Provider.Apply({0x167df3b?}, {0x169af5c, 0x18}, {0x0?, 0x0}, {0x1ac7558?, 0xc000c7e400})
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/v3@v3.40.0/pkg/tfshim/sdk-v2/provider.go:122 +0x19b
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) github.com/pulumi/pulumi-terraform-bridge/v3/pkg/tfbridge.(*Provider).Create(0xc0002e2000, {0x1ac0a28?, 0xc000d4c1b0?}, 0xc000adcb40)
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/v3@v3.40.0/pkg/tfbridge/provider.go:895 +0x643
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Create_Handler.func1({0x1ac0a28, 0xc000d4c1b0}, {0x15b9f20?, 0xc000adcb40})
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.55.0/proto/go/provider_grpc.pb.go:573 +0x78
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1({0x1ac0a28, 0xc000d4c090}, {0x15b9f20, 0xc000adcb40}, 0xc0003d07e0, 0xc000ab7d88)
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20180507213350-8e809c8a8645/go/otgrpc/server.go:57 +0x3f9
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Create_Handler({0x1645480?, 0xc0002e2000}, {0x1ac0a28, 0xc000d4c090}, 0xc0003eb3b0, 0xc0002bc040)
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.55.0/proto/go/provider_grpc.pb.go:575 +0x138
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002de000, {0x1ac79e0, 0xc000540680}, 0xc000d405a0, 0xc0002c83f0, 0x2612000, 0x0)
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.51.0/server.go:1340 +0xd23
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) google.golang.org/grpc.(*Server).handleStream(0xc0002de000, {0x1ac79e0, 0xc000540680}, 0xc000d405a0, 0x0)
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.51.0/server.go:1713 +0xa2f
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) google.golang.org/grpc.(*Server).serveStreams.func1.2()
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.51.0/server.go:965 +0x98
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) created by google.golang.org/grpc.(*Server).serveStreams.func1
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (42s) 	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.51.0/server.go:963 +0x28a
 +  scaleway:index:InstanceServer dddd-zvngvz creating (37s) error: error reading from server: EOF
 +  scaleway:index:InstanceServer dddd-zvngvz **creating failed** error: error reading from server: EOF
@ updating..........
 +  pulumi:pulumi:Stack vps-dddd-zvngvz creating (49s) error: update failed
 +  pulumi:pulumi:Stack vps-dddd-zvngvz **creating failed** 1 error; 29 messages
Diagnostics:
  pulumi:pulumi:Stack (vps-dddd-zvngvz):
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x12c4e37]
    goroutine 81 [running]:
    github.com/scaleway/terraform-provider-scaleway/v2/scaleway.resourceScalewayInstanceServerRead({0x1ac09f0, 0xc000adf980}, 0xc000c7e580, {0x13f1420, 0xc000c637e0})
    	/home/runner/go/pkg/mod/github.com/scaleway/terraform-provider-scaleway/v2@v2.10.0/scaleway/resource_instance_server.go:532 +0x4f7
    github.com/scaleway/terraform-provider-scaleway/v2/scaleway.resourceScalewayInstanceServerCreate({0x1ac09f0, 0xc000adf980}, 0x7faedbc39108?, {0x13f1420?, 0xc000c637e0?})
    	/home/runner/go/pkg/mod/github.com/scaleway/terraform-provider-scaleway/v2@v2.10.0/scaleway/resource_instance_server.go:500 +0x2225
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc000754a80, {0x1ac09b8, 0xc000146008}, 0xd?, {0x13f1420, 0xc000c637e0})
    	/home/runner/go/pkg/mod/github.com/pulumi/terraform-plugin-sdk/v2@v2.0.0-20220824175045-450992f2f5b9/helper/schema/resource.go:712 +0x12e
    github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc000754a80, {0x1ac09b8, 0xc000146008}, 0x0, 0xc000c7e400, {0x13f1420, 0xc000c637e0})
    	/home/runner/go/pkg/mod/github.com/pulumi/terraform-plugin-sdk/v2@v2.0.0-20220824175045-450992f2f5b9/helper/schema/resource.go:842 +0xa85
    github.com/pulumi/pulumi-terraform-bridge/v3/pkg/tfshim/sdk-v2.v2Provider.Apply({0x167df3b?}, {0x169af5c, 0x18}, {0x0?, 0x0}, {0x1ac7558?, 0xc000c7e400})
    	/home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/v3@v3.40.0/pkg/tfshim/sdk-v2/provider.go:122 +0x19b
    github.com/pulumi/pulumi-terraform-bridge/v3/pkg/tfbridge.(*Provider).Create(0xc0002e2000, {0x1ac0a28?, 0xc000d4c1b0?}, 0xc000adcb40)
    	/home/runner/go/pkg/mod/github.com/pulumi/pulumi-terraform-bridge/v3@v3.40.0/pkg/tfbridge/provider.go:895 +0x643
    github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Create_Handler.func1({0x1ac0a28, 0xc000d4c1b0}, {0x15b9f20?, 0xc000adcb40})
    	/home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.55.0/proto/go/provider_grpc.pb.go:573 +0x78
    github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1({0x1ac0a28, 0xc000d4c090}, {0x15b9f20, 0xc000adcb40}, 0xc0003d07e0, 0xc000ab7d88)
    	/home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20180507213350-8e809c8a8645/go/otgrpc/server.go:57 +0x3f9
    github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Create_Handler({0x1645480?, 0xc0002e2000}, {0x1ac0a28, 0xc000d4c090}, 0xc0003eb3b0, 0xc0002bc040)
    	/home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.55.0/proto/go/provider_grpc.pb.go:575 +0x138
    google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002de000, {0x1ac79e0, 0xc000540680}, 0xc000d405a0, 0xc0002c83f0, 0x2612000, 0x0)
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.51.0/server.go:1340 +0xd23
    google.golang.org/grpc.(*Server).handleStream(0xc0002de000, {0x1ac79e0, 0xc000540680}, 0xc000d405a0, 0x0)
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.51.0/server.go:1713 +0xa2f
    google.golang.org/grpc.(*Server).serveStreams.func1.2()
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.51.0/server.go:965 +0x98
    created by google.golang.org/grpc.(*Server).serveStreams.func1
    	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.51.0/server.go:963 +0x28a

    error: update failed

  scaleway:index:InstanceServer (dddd-zvngvz):
    error: error reading from server: EOF

Resources:
    + 2 created

Duration: 51s

 stderr:

We were using version 1.7.0 of the package from lbrlabs, but we upgraded to version 1.14.0 of this package and get exactly the same issue with this piece of code:

# Imports added for completeness; is_true, is_not_empty, get_dns_zone_driver
# and sanitize_project_name are helpers from our own codebase.
import importlib
import os

import pulumi
import pulumiverse_scaleway as scaleway
from pulumi import automation as auto

def create_instance(self, instance_id, ami_image, hashed_instance_name, environment, instance_region, instance_zone, instance_type, generate_dns, root_dns_zone):
        def create_pulumi_program():
            region_zone = "{}-{}".format(instance_region, instance_zone)
            instance_ip = scaleway.InstanceIp("publicIp", zone = region_zone)
            new_instance = scaleway.InstanceServer(hashed_instance_name,
                                    type = instance_type,
                                    image = ami_image,
                                    name = hashed_instance_name,
                                    ip_id = instance_ip.id,
                                    zone = region_zone,
                                    user_data = {
                                        # read the cloud-init script from disk and pass it verbatim
                                        "cloud-init": (lambda path: open(path).read())(self.cloud_init_script())
                                    })

            if is_true(generate_dns):
                # dynamically load the DNS provider driver matching the root zone
                dns_driver = get_dns_zone_driver(root_dns_zone)
                ProviderDriverModule = importlib.import_module('drivers.{}'.format(dns_driver))
                ProviderDriver = getattr(ProviderDriverModule, dns_driver)
                ProviderDriver().create_dns_records(hashed_instance_name, environment, new_instance.public_ip, root_dns_zone)
            pulumi.export("public_ip", new_instance.public_ip)

        scw_access_key = os.getenv('SCW_ACCESS_KEY')
        scw_secret_key = os.getenv('SCW_SECRET_KEY')
        scw_project_id = os.getenv('SCW_PROJECT_ID')
        cloudflare_api_token = os.getenv('CLOUDFLARE_API_TOKEN')

        # create (or select) the per-instance stack running the inline program above
        stack = auto.create_or_select_stack(stack_name = hashed_instance_name,
                                            project_name = sanitize_project_name(environment['path']),
                                            program = create_pulumi_program)
        stack.set_config("scaleway:access_key", auto.ConfigValue(scw_access_key))
        stack.set_config("scaleway:secret_key", auto.ConfigValue(scw_secret_key))
        stack.set_config("scaleway:project_id", auto.ConfigValue(scw_project_id))
        if is_not_empty(cloudflare_api_token):
            stack.set_config("cloudflare:api_token", auto.ConfigValue(cloudflare_api_token))
        up_res = stack.up()

        return {
            "ip": up_res.outputs.get("public_ip").value
        }

And in the requirements.txt:

pulumiverse-scaleway==1.14.0
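
For context, the panic trace above shows the resource plugin bundles terraform-provider-scaleway v2.10.0, so the Python package pin alone doesn't tell you which upstream provider is actually in play. The installed plugin versions can be checked with the standard Pulumi CLI:

$ pulumi plugin ls   # lists installed resource plugins and their versions
$ pulumi about       # prints CLI, SDK and plugin versions for the current project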

Because we also got this error with an old version of the package, I suspect Scaleway changed something in their IaaS API HTTP responses that is handled badly in the Go code, and that fixing it requires an upgrade of this package.
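
To make the suspected failure mode concrete, here is a rough, self-contained Python analogue (illustrative only: Bootscript and the field names are hypothetical stand-ins, not the real API schema, and the provider itself is Go, not Python). If the API stops returning a field that the client still dereferences unconditionally, the read path crashes:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Bootscript:
    id: str

@dataclass
class Server:
    # If the API stops populating this field, a client that used to
    # receive an object now receives nothing.
    bootscript: Optional[Bootscript] = None

def read_server(server: Server) -> str:
    # Dereferencing the optional field without a guard: the Python
    # analogue of the nil pointer dereference the panic trace points
    # at (resource_instance_server.go:532).
    return server.bootscript.id  # AttributeError here, SIGSEGV in Go

def read_server_safely(server: Server) -> Optional[str]:
    # The kind of guard a fixed provider adds for a now-optional field.
    return server.bootscript.id if server.bootscript else None

print(read_server_safely(Server()))  # prints: None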

Thanks in advance.

@idrissneumann
Author

Another detail: an issue has also been opened here: pulumi/pulumi-terraform-bridge#2257 (comment)

But as I said, I have the feeling it's more an issue of Scaleway changing their IaaS interface contract, which is why I opened one here.

Also, the resources are still created on Scaleway:

[Screenshot 2024-07-29 at 09 25 22]

But only the floating IP is attached to the state (when we delete the stack, only the floating IP is deleted):

[Screenshot 2024-07-29 at 09 25 58]
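
In other words, the server exists at Scaleway but was never recorded in the stack's state (the provider crashed mid-create), which is why deleting the stack only removes the floating IP. One way to reconcile, sketched here with placeholders (the type token follows the usual bridged-provider naming, the zone-prefixed ID format is an assumption based on the Terraform provider's import syntax, and <server-id> stands for the real server UUID), is to adopt the orphaned server into the state before destroying:

$ pulumi import scaleway:index/instanceServer:InstanceServer dddd-zvngvz fr-par-1/<server-id>

Otherwise the orphaned server has to be deleted manually from the console.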

@idrissneumann
Author

This issue might be the cause: scaleway/terraform-provider-scaleway#2664

"Bootscript" seems to be deprecated and has probably been removed from the response body, which might cause the segfault.

The fix was released in version 2.42.1 of the Terraform provider; I think an upgrade might resolve the issue.

@idrissneumann
Author

Feedback from Scaleway:

The origin of your problem is partly linked to the change in bootscript support:
https://www.scaleway.com/en/docs/compute/instances/troubleshooting/bootscript-eol/#migration-options-for-instances-using-bootscripts

I still think upgrading to version 2.42.1 of the Terraform provider should solve this issue.

@idrissneumann
Author

By the way, I also tried to do it myself using:

$ upgrade-provider pulumiverse/pulumi-scaleway --kind provider

But I got this error:

error: failed to gather package metadata: problem gathering data sources: 2 errors occurred:
        * TF data source "scaleway_iam_api_key" not mapped to the Pulumi provider
        * TF data source "scaleway_vpc_routes" not mapped to the Pulumi provider


make: *** [tfgen] Error 255

I'm not skilled enough yet to know how to fix this; otherwise I'd do it with pleasure 😅

@idrissneumann
Author

Hi, thanks @dirien for the PR <3

Any idea when it might be merged? Because right now all deployments on Scaleway are broken for everybody ^^

I'm also looking for a workaround we can apply in the meantime.

Thanks!

@dirien
Collaborator

dirien commented Jul 30, 2024

@idrissneumann I asked @ringods to have a look as soon as he can.

@idrissneumann
Author

Thanks a lot @ringods and @dirien <3

Once it's released, I'll post the results of our tests here.

@dirien
Collaborator

dirien commented Jul 30, 2024

@idrissneumann done!

@idrissneumann
Author

Feedback: it works like a charm, thanks a lot @ringods and @dirien <3

[Screenshot 2024-07-30 at 17 21 23]

@dirien
Collaborator

dirien commented Jul 30, 2024

@idrissneumann great!
