Is your feature request related to a problem? Please describe.
With the logging library, application execution logs can be gathered at various logging levels per configuration. However, as the App SDK matures, especially when venturing into clinical deployments (IRB-approved), there is a stronger need to provide additional insight into application execution, e.g. activity audits and performance metrics, both in terms of latency and, ideally, result accuracy.
This is important when AI applications are deployed and used in imaging workflows, as IT/PACS admins need insight into the latency, resource usage, and clinical performance of the applications in order to better manage this critical workflow stage for operational, clinical, and regulatory requirements.
Describe the solution you'd like
A logging framework that supports pre-defined as well as custom categories of metrics collection, while making the aggregation and delivery of the metrics dynamic and configurable, e.g. gathered records could be delivered to a stats-gathering agent on the host system that runs the MAP. A rough sketch of what such an API could look like follows.
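A minimal sketch, assuming a hypothetical API; none of these names (MetricsCollector, MetricRecord, the console/agent backends) exist in the App SDK today and are only meant to illustrate the category-plus-configurable-delivery idea:

```python
# Hypothetical sketch only; not part of the current App SDK.
import json
import time
from dataclasses import dataclass, field
from typing import Any, Callable, List


@dataclass
class MetricRecord:
    category: str            # e.g. "PERF", "AUDIT", or a custom category
    name: str                # metric name, e.g. "inference_latency_sec"
    value: Any               # measured value
    timestamp: float = field(default_factory=time.time)


class MetricsCollector:
    """Collects records for pre-defined or custom categories and delivers
    them via configurable backends (e.g. a stats-gathering agent on the host)."""

    def __init__(self) -> None:
        self._records: List[MetricRecord] = []
        self._backends: List[Callable[[List[MetricRecord]], None]] = []

    def add_backend(self, backend: Callable[[List[MetricRecord]], None]) -> None:
        self._backends.append(backend)

    def record(self, category: str, name: str, value: Any) -> None:
        self._records.append(MetricRecord(category, name, value))

    def flush(self) -> None:
        for backend in self._backends:
            backend(self._records)
        self._records.clear()


def console_backend(records: List[MetricRecord]) -> None:
    # Stand-in for delivery to a host-side stats agent (e.g. over a socket or file).
    for r in records:
        print(json.dumps({"category": r.category, "name": r.name,
                          "value": r.value, "timestamp": r.timestamp}))


# Illustrative usage inside an operator's compute method:
collector = MetricsCollector()
collector.add_backend(console_backend)
start = time.perf_counter()
# ... run inference ...
collector.record("PERF", "inference_latency_sec", time.perf_counter() - start)
collector.flush()
```

The key design point is that the delivery backends are pluggable, so the same application code could emit records to stdout during development and to a host-side agent in a clinical deployment without changes to the operators themselves.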
Describe alternatives you've considered
The native logging library, using logging levels combined with defined string literals to indicate the category of information, e.g. the default being the execution log, "PERF" for elapsed time of blocks of execution, etc.; see the sketch below.
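A sketch of this alternative, assuming conventions the application would define itself (the "PERF" literal and the timed helper are not SDK features):

```python
# Sketch of the alternative: plain stdlib logging plus an agreed category literal.
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("my_app")


@contextmanager
def timed(block_name: str):
    """Log elapsed time for a block of execution under the 'PERF' category."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        logger.info("PERF %s elapsed_sec=%.4f", block_name, elapsed)


# Default messages are plain execution log entries; "PERF" marks timing records.
logger.info("Starting inference stage")
with timed("inference"):
    time.sleep(0.1)  # placeholder for the actual work
```

This works with no new dependencies, but downstream consumers have to parse the category literal out of free-text log lines, which is what the proposed framework would avoid.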
Additional context