Version Information
Server Version: v2.45.0
CLI Version (for CLI related issue): v2.45.0
Environment
We have the Hasura server deployed on AWS (AWS Fargate), and we run a GitHub Action that installs the Hasura CLI and runs the hasura metadata apply --skip-update-check command.
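For reference, a minimal sketch of what the CI step effectively runs (the exact workflow file isn't included here; the install command is the documented get.sh installer from the Hasura CLI docs):

# install the Hasura CLI via the documented get.sh installer, then apply metadata
curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | bash
cd hasura
hasura metadata apply --skip-update-check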
What is the current behaviour?
Sometimes (mostly when applying changes to the metadata) we get an Internal Server Error response from Hasura while executing hasura metadata apply --skip-update-check.
Run cd hasura && hasura metadata apply --skip-update-check
{"level":"info","msg":"Applying metadata...","time":"2025-01-20T08:23:32Z"}
time="2025-01-20T08:23:43Z" level=fatal msg="error applying metadata \n{\"error\":\"Internal Server Error\",\"path\":\"$\",\"code\":\"unexpected\"}"
Error: Process completed with exit code 1.
What is the expected behaviour?
The hasura metadata apply --skip-update-check command should complete successfully and return a proper response.
How to reproduce the issue?
Screenshots or Screencast
Please provide any traces or logs that could help here.
It appears that the command actually executes correctly and the metadata is updated as expected. However, after applying the metadata, all websocket connections are closed, which results in the 500 error. I suspect a race condition, since the issue doesn't occur consistently: the connection is closed before the CLI receives a response.
{"detail":{"info":"Received metadata resource version: 13472, different from the current engine resource version: 13471. Trying to update the schema cache.","thread_type":"processor"},"level":"info","timestamp":"2025-01-24T11:26:21.891+0000","type":"schema-sync"}
{"detail":{"info":"Attempting to insert new metadata in storage","thread_type":"metadata-api"},"level":"info","span_id":"tmp","timestamp":"2025-01-24T11:26:20.886+0000","trace_id":"tmp","type":"schema-sync"}
{"detail":{"info":"Successfully inserted new metadata in storage with resource version: 13472","thread_type":"metadata-api"},"level":"info","span_id":"tmp","timestamp":"2025-01-24T11:26:20.886+0000","trace_id":"tmp","type":"schema-sync"}
{"detail":{"info":"Inserted schema cache sync notification at resource version: 13472","thread_type":"metadata-api"},"level":"info","span_id":"tmp","timestamp":"2025-01-24T11:26:20.886+0000","trace_id":"tmp","type":"schema-sync"}
{"detail":"Closing all websocket connections as the metadata has changed","level":"info","timestamp":"2025-01-24T11:26:20.886+0000","type":"ws-server"}
{"detail":{"error":"Spock Error while handling [\"v1\",\"metadata\"]: ConnectionClosed","location":""},"level":"error","timestamp":"2025-01-24T11:26:20.886+0000","type":"unhandled-internal-error"}
Additionally, it seems this behavior started after upgrading from version v2.33.4 to v2.45.0.
Any suggestions on how to resolve or further debug this issue would be greatly appreciated.
Any possible solutions/workarounds you're aware of?
For now we have added retries to this job, because when no metadata changes are applied it seems to work perfectly fine.
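A rough sketch of that retry workaround (assumed shape only; the actual job may differ):

# retry hasura metadata apply a few times before failing the job
attempts=3
n=0
until hasura metadata apply --skip-update-check; do
  n=$((n+1))
  if [ "$n" -ge "$attempts" ]; then
    echo "metadata apply failed after $attempts attempts" >&2
    exit 1
  fi
  echo "metadata apply failed (attempt $n/$attempts), retrying..." >&2
  sleep 5
done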
Keywords
Spock, metadata, ConnectionClosed