
feat: batch audit logs and ensure they are not lost #1080

Open · wants to merge 19 commits into main

Conversation

@de-sh (Contributor) commented Jan 7, 2025

Fixes #XXXX.

Description

Collects audit logs into batches, pushes them as configured based on time/size thresholds, and ensures logs are persisted to a file when the network is down. NOTE: this works in a near-synchronous manner; logs will not be accepted if the write to disk is blocked or if the direct push to the network is blocked. This may lead to a situation where requests do not respond promptly while they try to send audit logs, but this is a known cost.
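
For illustration, a minimal sketch of the batch-and-flush loop this describes, assuming tokio; the AuditLog type, thresholds, and flush function are stand-ins, not the PR's actual code:

use std::time::Duration;
use tokio::{sync::mpsc, time::interval};

// Stand-in for the PR's audit log type.
struct AuditLog {
    message: String,
}

// Stand-in for "push to network, fall back to file on failure".
async fn flush(batch: &mut Vec<AuditLog>) {
    println!("flushing {} logs", batch.len());
    batch.clear();
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<AuditLog>(1);
    let batch_size = 100; // cf. P_AUDIT_BATCH_SIZE
    let mut ticker = interval(Duration::from_secs(60)); // cf. P_AUDIT_FLUSH_INTERVAL
    let mut batch = Vec::with_capacity(batch_size);

    tokio::spawn(async move {
        let _ = tx.send(AuditLog { message: "demo".into() }).await;
    });

    loop {
        tokio::select! {
            // Time threshold: flush whatever has accumulated.
            _ = ticker.tick() => {
                if !batch.is_empty() {
                    flush(&mut batch).await;
                }
            }
            maybe_log = rx.recv() => match maybe_log {
                Some(log) => {
                    batch.push(log);
                    // Size threshold: flush once the batch is full.
                    if batch.len() >= batch_size {
                        flush(&mut batch).await;
                    }
                }
                // All senders dropped: final flush, then exit.
                None => {
                    flush(&mut batch).await;
                    break;
                }
            },
        }
    }
}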


This PR has:

  • been tested to ensure log ingestion and log query works.
  • added comments explaining the "why" and the intent of the code wherever it would not be obvious to an unfamiliar reader.
  • added documentation for new or modified features or behaviors.

Summary by CodeRabbit

  • New Features

    • Enhanced audit logging with a new AuditLogger for managing and sending logs, including batch processing and improved error handling.
    • Introduced configurable options for audit log batch size, flush intervals, and log storage directory via environment variables.
  • Refactor

    • Streamlined logging components and shutdown procedures for reliable initialization and graceful termination of the audit logging system.

@coveralls commented Jan 7, 2025

Pull Request Test Coverage Report for Build 13016443859

Details

  • 0 of 173 (0.0%) changed or added relevant lines in 6 files are covered.
  • 1 unchanged line in 1 file lost coverage.
  • Overall coverage decreased (-0.1%) to 12.855%

Changes Missing Coverage:

File                  Covered Lines  Changed/Added Lines       %
src/main.rs                       0                    2    0.0%
src/cli.rs                        0                    3    0.0%
src/option.rs                     0                    5    0.0%
src/audit/builder.rs              0                    7    0.0%
src/audit/types.rs                0                    7    0.0%
src/audit/logger.rs               0                  149    0.0%

Files with Coverage Reduction:

File            New Missed Lines       %
src/main.rs                    1    0.0%

Totals:

Change from base Build 13006822034: -0.1%
Covered Lines: 2477
Relevant Lines: 19269

💛 - Coveralls

@hippalus (Contributor) left a comment

LGTM! The implementation looks solid, but I have some questions about the architectural approach:

Audit Log Architecture:

  1. What drove the decision to use another Parseable instance for audit logs rather than local storage?
  2. Do we have plans for a dedicated audit logging solution in the future?

Deployment Considerations:
For multi-node setups (e.g., 3 ingest + 1 query nodes):

  1. What's the recommended log_endpoint configuration?
  2. Which node should own audit log collection?
  3. How do we prevent circular logging dependencies?

Performance & Scalability:

  1. In high-traffic scenarios, how do we handle the additional network load from audit logging?
  2. Have we considered the impact of inter-service dependencies on system reliability?

Would appreciate your insights on these points to better understand the architectural vision.

.open(log_file_path)
.await
.expect("Failed to open audit log file");
let buf = serde_json::to_vec(&logs_to_send).expect("Failed to serialize audit logs");
Contributor:

Even if it looks like it can never happen, unwrap() can cause a panic in production. It would be nice to have proper error handling with context.

@de-sh (Author):

This should not be an issue in most places, as serialization is not a problem for these types. But we still use expects to document why it is safe to assume serialization won't fail.
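
For reference, a sketch of what contextual error handling could look like here, assuming the anyhow crate; the helper name and signature are hypothetical, not the PR's code:

use std::path::Path;

use anyhow::Context;
use serde::Serialize;
use tokio::fs::OpenOptions;

// Hypothetical helper: open the audit log file and serialize the batch,
// attaching context to errors instead of panicking via expect().
async fn persist_logs<T: Serialize>(log_file_path: &Path, logs: &[T]) -> anyhow::Result<()> {
    let file = OpenOptions::new()
        .create(true)
        .append(true)
        .open(log_file_path)
        .await
        .with_context(|| format!("failed to open audit log file {}", log_file_path.display()))?;
    let buf = serde_json::to_vec(logs).context("failed to serialize audit logs")?;
    // ... write `buf` to `file` ...
    let _ = (file, buf);
    Ok(())
}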

}

// setup the audit log channel
let (audit_log_tx, mut audit_log_rx) = channel(0);
Contributor:

Why is the initial size 0?

@de-sh (Author) commented Jan 9, 2025:

The idea was to create a "Glienicke Bridge" so that we block at the request level every time the underlying audit logger is unable to keep up.
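
To illustrate the rendezvous semantics, here is a sketch using std's sync_channel(0); the PR uses an async channel, but the blocking hand-off is the same idea:

use std::{sync::mpsc::sync_channel, thread, time::Duration};

fn main() {
    // Capacity 0 makes this a rendezvous channel: send() blocks until the
    // receiver is actively waiting, so a slow consumer throttles the producer.
    let (tx, rx) = sync_channel::<u32>(0);

    let producer = thread::spawn(move || {
        println!("sending (blocks until the consumer is ready)...");
        tx.send(42).unwrap();
        println!("handed off");
    });

    thread::sleep(Duration::from_millis(200)); // simulate a slow consumer
    println!("received {}", rx.recv().unwrap());
    producer.join().unwrap();
}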

@de-sh (Author) commented Jan 9, 2025

Thank you for the comments @hippalus. Note that audit logging should not yet be considered beta; it is still in development and will need a lot of work before we can consider it production-ready.

With regards to your questions:

LGTM! The implementation looks solid, but I have some questions about the architectural approach:

Audit Log Architecture:

1. What drove the decision to use another Parseable instance for audit logs rather than local storage?

It is industry practice to send audit logs to an external system; we felt dogfooding would be the lowest-friction option for doing this.

2. Do we have plans for a dedicated audit logging solution in the future?

Parseable already fulfills most of this, and we don't see any reason to build with anything different. Please let us know of any alternatives; we can document them and plan out how to build support. To be clear, logs should be sent to an instance outside of the cluster.

Deployment Considerations: For multi-node setups (e.g., 3 ingest + 1 query nodes):

1. What's the recommended log_endpoint configuration?

The log_endpoint should be a separate cluster/standalone instance of Parseable.

2. Which node should own audit log collection?

In my opinion this should not matter; in a multi-node setup the expectation is that a load balancer makes the decision, and logs end up in the same stream no matter which node receives them.

3. How do we prevent circular logging dependencies?

Circular logs are not expected, as pointed out above.

Performance & Scalability:

1. In high-traffic scenarios, how do we handle the additional network load from audit logging?

We are still testing the feature; based on our learnings so far, batching, writing to file on network slow-down, and pushing to file when the network is down seemed to be the right decisions for handling high-frequency use cases.

2. Have we considered the impact of inter-service dependencies on system reliability?

What do you mean by "inter-service dependencies"?

Would appreciate your insights on these points to better understand the architectural vision.

@hippalus (Contributor) commented Jan 9, 2025

(Quoting @de-sh's full reply above.)

Thank you for the clarification @de-sh. I highlighted these questions because I couldn't find explicit documentation or configuration descriptions in the code addressing these architectural decisions. The implementation looks quite solid.

I'm particularly looking forward to seeing the Kafka connector integration. Regarding inter-service dependencies, as you've well explained, we can scale Parseable instances behind a load balancer for audit logging purposes. It's great to see this pathway being established.

IMHO: it would be nice to add more documentation around these architectural decisions directly in the code, making it easier for future contributors to understand the design choices. It would also be nice to add a docker-compose example.

@de-sh mentioned this pull request on Jan 20, 2025
@coderabbitai bot commented Mar 31, 2025

Walkthrough

This pull request refactors the audit logging subsystem. It removes the legacy AuditLogger from the builder module and its related types, replacing it with a new, modular AuditLogger implementation across the audit modules. The changes introduce new asynchronous methods, data structures, and a static channel (AUDIT_LOG_TX) for sending logs. Additionally, CLI options and main initialization are updated for audit log configuration and graceful shutdown, and a new duration validation function is provided.

Changes

  • src/audit/builder.rs: Removed the old AuditLogger struct, its methods (including send_log), and several associated audit log types. Updated AuditLogBuilder to check for AUDIT_LOG_TX instead of the removed logger.
  • src/audit/logger.rs, src/audit/types.rs, src/audit/mod.rs: Introduced a new AuditLogger struct with async methods (flush, insert, send_logs, send_logs_to_remote, spawn_batcher), new audit log data structures, and a modular organization that includes a static AUDIT_LOG_TX for batching logs.
  • src/cli.rs, src/main.rs, src/lib.rs: Added new CLI options (audit_batch_size, audit_flush_interval, audit_log_dir), updated logging initialization in main.rs to integrate the new AuditLogger and manage its graceful shutdown, and re-exported AuditLogger for broader use.
  • src/option.rs: Added a new public function duration in the validation module to parse string inputs into Duration values.

Sequence Diagram(s)

sequenceDiagram
    participant Main
    participant AuditLogger
    participant RemoteLogSystem

    Main->>AuditLogger: spawn_batcher(shutdown signal)
    AuditLogger->>AuditLogger: batch incoming audit logs
    AuditLogger->>RemoteLogSystem: send_logs_to_remote(batch)
    RemoteLogSystem-->>AuditLogger: response (success/failure)
    AuditLogger->>AuditLogger: flush logs and handle errors
    Main->>AuditLogger: trigger shutdown for graceful termination

Poem

I'm a little rabbit with a code hop stride,
New logs in place, with a fresh, modern glide.
Old paths removed, a new trail to track,
Batch and send logs, no detail will slack.
With every line, my burrow feels bright—
Debugging by day and dreaming by night!
🥕🐇 Happy logging on this starry code flight!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3c6d764 and c0a7b70.

📒 Files selected for processing (1)
  • src/cli.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/cli.rs
⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: Build Default aarch64-apple-darwin
  • GitHub Check: Build Default x86_64-pc-windows-msvc
  • GitHub Check: Build Default aarch64-unknown-linux-gnu
  • GitHub Check: Build Kafka aarch64-apple-darwin
  • GitHub Check: Build Kafka x86_64-unknown-linux-gnu
  • GitHub Check: coverage


@coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (10)
src/option.rs (1)

138-144: Fix typo in error message and improve parameter validation

The error message has a typo ("pass" instead of "parse") and the function lacks additional validation for extremely large values.

 pub fn duration(secs: &str) -> Result<Duration, String> {
     let Ok(secs) = secs.parse() else {
-        return Err("Couldn't pass as a number".to_string());
+        return Err("Couldn't parse as a number".to_string());
     };

+    // Optionally validate the value is reasonable
+    if secs > 3600 * 24 * 7 {
+        return Err("Duration too large (exceeds one week in seconds)".to_string());
+    }
+
     Ok(Duration::from_secs(secs))
 }
src/audit/mod.rs (1)

19-32: Good structure for the audit module with shared communication channel

The module is well-organized with clear separation of concerns between builder, logger, and types. Using OnceCell for the global sender channel is a good practice for thread-safe initialization.

However, consider providing a helper function to safely initialize and access the sender:

/// Initialize the global audit log sender
pub fn init_audit_log_tx(sender: Sender<AuditLog>) -> Result<(), Sender<AuditLog>> {
    AUDIT_LOG_TX.set(sender)
}

/// Get a reference to the global audit log sender if initialized
pub fn audit_log_tx() -> Option<&'static Sender<AuditLog>> {
    AUDIT_LOG_TX.get()
}
src/main.rs (2)

36-39: Consider removing or consolidating the old logger initialization code.
The new inline logger setup (lines 36–39) replaces the previously defined init_logger() (lines 95–117), which appears unused now. This can introduce confusion or lead to dead code over time.


66-73: Handle possible oneshot::Sender::send errors gracefully.
Using .unwrap() when sending shutdown signals to server_shutdown_trigger and logger_shutdown_trigger can panic if either receiver has been dropped or closed. For resilience, replace .unwrap() with logic that logs and safely continues if sending fails.
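
A sketch of one way to do that, assuming tokio's oneshot and the tracing crate; the wrapper function is hypothetical:

use tokio::sync::oneshot;
use tracing::warn;

// Hypothetical wrapper for server_shutdown_trigger / logger_shutdown_trigger:
// oneshot::Sender::send returns Err(value) when the receiver was dropped,
// so log a warning and continue instead of panicking via unwrap().
fn signal_shutdown(trigger: oneshot::Sender<()>, name: &str) {
    if trigger.send(()).is_err() {
        warn!("{name} shutdown receiver already dropped; continuing");
    }
}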

src/audit/types.rs (1)

1-84: Assess potential exposure of sensitive data.
The ActorDetails struct includes user and host information, potentially personally identifiable. Ensure that serializing and persisting these fields (e.g., remote_host, username) complies with privacy and data retention policies.

src/audit/logger.rs (4)

68-70: Avoid panics for directory creation failures.
Using .expect("Failed to create audit log directory") can crash the application in production. If you want to handle IO errors more gracefully, consider returning an error, logging it, or using a fallback path instead.


73-90: Improve error handling for directory entries.
Repetitive .unwrap() calls when reading directory entries can lead to panics if any read fails. Consider handling or logging these errors to avoid a crash due to a single malformed file.
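
For instance, a tolerant scan could log and skip unreadable entries; a sketch with a hypothetical helper name, assuming the tracing crate:

use std::{
    fs,
    path::{Path, PathBuf},
};
use tracing::warn;

// Hypothetical helper: collect backlog files, logging and skipping any
// entry that can't be read instead of panicking via unwrap().
fn backlog_files(dir: &Path) -> Vec<PathBuf> {
    let entries = match fs::read_dir(dir) {
        Ok(entries) => entries,
        Err(e) => {
            warn!("couldn't read audit log dir {}: {e}", dir.display());
            return Vec::new();
        }
    };

    entries
        .filter_map(|entry| match entry {
            Ok(entry) => Some(entry.path()),
            Err(e) => {
                warn!("skipping unreadable directory entry: {e}");
                None
            }
        })
        .collect()
}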


104-117: Validate the retry logic for sending logs to remote.
The flush logic only writes logs to disk if there is a backlog or if sending fails. Confirm the approach covers scenarios where intermittent failures occur, and clarify how quickly logs on disk are retried.


200-203: Confirm buffer size for the audit log channel.
Using channel(0) enforces synchronous sending. This can block the sender if the consumer is slow. Verify that this unbuffered approach is intended to throttle requests or if a small buffer might improve performance.

src/audit/builder.rs (1)

42-42: Consider re-checking channel availability at build time vs. usage time.
Currently, enabled is determined once based on AUDIT_LOG_TX.get().is_some() at builder creation. If the channel’s initialization state changes later, the builder may become outdated. If re-checking is desired for dynamic toggling of logging, consider checking the channel upon each method call, or re-initializing the builder.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 382f480 and 3c6d764.

📒 Files selected for processing (8)
  • src/audit/builder.rs (3 hunks)
  • src/audit/logger.rs (1 hunks)
  • src/audit/mod.rs (1 hunks)
  • src/audit/types.rs (1 hunks)
  • src/cli.rs (2 hunks)
  • src/lib.rs (1 hunks)
  • src/main.rs (3 hunks)
  • src/option.rs (2 hunks)
🧰 Additional context used
🧬 Code Definitions (2)
src/audit/logger.rs (2)
src/main.rs (2)
  • oneshot (42-42)
  • oneshot (66-66)
src/audit/builder.rs (1)
  • default (40-61)
src/audit/types.rs (2)
src/audit/logger.rs (1)
  • default (48-98)
src/audit/builder.rs (1)
  • default (40-61)
⏰ Context from checks skipped due to timeout of 90000ms (10)
  • GitHub Check: coverage
  • GitHub Check: Quest Smoke and Load Tests for Standalone deployments
  • GitHub Check: Build Default aarch64-apple-darwin
  • GitHub Check: Quest Smoke and Load Tests for Distributed deployments
  • GitHub Check: Build Default x86_64-apple-darwin
  • GitHub Check: Build Default aarch64-unknown-linux-gnu
  • GitHub Check: Build Kafka x86_64-unknown-linux-gnu
  • GitHub Check: Build Default x86_64-unknown-linux-gnu
  • GitHub Check: Build Default x86_64-pc-windows-msvc
  • GitHub Check: Build Kafka aarch64-apple-darwin
🔇 Additional comments (4)
src/lib.rs (1)

55-55: LGTM - New public export for AuditLogger

The addition of pub use audit::AuditLogger; makes the audit logger implementation available to external modules, which is necessary for the batch audit logging feature.

src/main.rs (1)

41-43: Ensure the logger object’s lifecycle meets your needs.
You are spawning the batcher and then discarding the AuditLogger instance without storing it in a variable. Although you set up a static channel, confirm you will not need direct references to the logger afterward.

src/audit/logger.rs (1)

226-230: Kudos on ensuring a graceful shutdown.
Flushing logs before exiting and logging the “shutting down” message helps ensure minimal log loss and provides clarity about shutdown behavior.

src/audit/builder.rs (1)

19-23: Imports appear consistent with usage and project conventions.
No issues found: each import (Display, current, StorageMetadata, Utc) is used appropriately within the file.

Comment on lines +364 to +370
#[arg(
long,
env = "P_AUDIT_LOG_DIR",
default_value = "./auditlogs",
help = "Path for audit log persistence"
)]
pub audit_log_dir: PathBuf,

🛠️ Refactor suggestion

Ensure the audit log directory exists

The code specifies a default path for audit logs but doesn't ensure this directory exists when used, unlike the staging directory which has a dedicated method that ensures its creation.

Add a method similar to staging_dir() for the audit log directory:

/// Path to audit log directory, ensures that it exists or panics
pub fn audit_log_dir(&self) -> &PathBuf {
    fs::create_dir_all(&self.audit_log_dir)
        .expect("Should be able to create audit log directory if it doesn't exist");
    
    &self.audit_log_dir
}

Comment on lines +348 to +370
#[arg(
long,
env = "P_AUDIT_BATCH_SIZE",
default_value = "100",
help = "Audit log batch size"
)]
pub audit_batch_size: usize,

#[arg(
long,
env = "P_AUDIT_FLUSH_INTERVAL",
value_parser = validation::duration,
help = "Interval to flush into persistence"
)]
pub audit_flush_interval: Duration,

#[arg(
long,
env = "P_AUDIT_LOG_DIR",
default_value = "./auditlogs",
help = "Path for audit log persistence"
)]
pub audit_log_dir: PathBuf,

⚠️ Potential issue

Add default value for audit_flush_interval

Two of the three new audit configuration options have default values, but audit_flush_interval does not. This could lead to runtime errors if not explicitly provided by the user.

 #[arg(
     long,
     env = "P_AUDIT_FLUSH_INTERVAL",
+    default_value = "60",
     value_parser = validation::duration,
     help = "Interval to flush into persistence"
 )]
 pub audit_flush_interval: Duration,

Comment on lines +173 to +181
// NOTE: we are fine with blocking here as user expects audit logs to be sent at all costs
if let Err(e) = AUDIT_LOG_TX
.get()
.expect("Audit logger not initialized")
.send(audit_log)
.await
{
error!("Couldn't send to logger: {e}")
}

🛠️ Refactor suggestion

Assess the blocking behavior under heavy load.
By design, this code blocks while awaiting the send operation. Under significant load or if the channel is full, requests may stall. This aligns with a near-synchronous approach but may impact throughput. Consider:

  1. Using a bounded channel with a fallback strategy to avoid request stalls.
  2. Logging to disk or an internal buffer if the channel blocks, then retrying asynchronously.

Below is a suggested fallback approach, if you want to avoid fully blocking:

 if let Err(e) = AUDIT_LOG_TX
     .get()
     .expect("Audit logger not initialized")
-    .send(audit_log)
-    .await
+    .try_send(audit_log)  // Non-blocking attempt; errors immediately if the channel is full
 {
     error!("Couldn't send to logger: {e}")
 }

Would you like assistance in implementing a buffered or retry-based logging mechanism?

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +27 to +29
use super::{
ActorDetails, AuditDetails, AuditLog, AuditLogVersion, RequestDetails, ResponseDetails,
ServerDetails, AUDIT_LOG_TX,

💡 Verification agent

🧩 Analysis chain

Verify that all imported entities are still needed after refactoring.
The references (ActorDetails, AuditDetails, AuditLog, etc.) are presumably used within this file, but do ensure none of them are left unused after the structural changes and removals in related modules (e.g., old AuditLogger removal).


🏁 Script executed:

#!/bin/bash
# Description: Search for references of removed or unused structs in the codebase.
fd -e rs --exec rg 'ActorDetails|AuditDetails|AuditLogVersion|...'

Length of output: 65605


Audit Import Cleanup: Remove Unused Entities

The grep search did not reveal any further usage of the imported items (ActorDetails, AuditDetails, AuditLog, AuditLogVersion, RequestDetails, ResponseDetails, ServerDetails, AUDIT_LOG_TX) beyond their inclusion in this file. Please double‑check the implementation in src/audit/builder.rs—if none of these entities are actually referenced (including potential macro or conditional usages), remove the unused imports to keep the file clean and prevent confusion.

  • Verify each imported entity in src/audit/builder.rs is actually referenced.
  • If an import is no longer needed due to the recent refactoring (e.g., after removing the old AuditLogger), please delete it.

Labels: none yet · Projects: none yet · 4 participants