
MINOR: migrate BrokerCompressionTest to storage module #19277

Open · wants to merge 8 commits into trunk
Conversation

@TaiJuWu (Collaborator) commented Mar 24, 2025

There are two changes in this PR:

  1. Move BrokerCompressionTest from core to storage
  2. Rewrite BrokerCompressionTest from Scala to Java

@github-actions bot added labels triage (PRs from the community), core (Kafka Broker), storage (Pull requests that target the storage module), and build (Gradle build or GitHub Actions) on Mar 24, 2025
@TaiJuWu TaiJuWu changed the title MINOR: migrate BrokerCompressionTest to storage MINOR: migrate BrokerCompressionTest to storage module Mar 24, 2025
@FrankYang0529 (Member) left a comment

Overall LGTM. I left one minor comment.

/* Configure broker-side compression */
UnifiedLog log = UnifiedLog.create(
logDir,
new LogConfig(logProps),
@FrankYang0529 (Member):

Can we change this to the following, so we don't need logProps?

            new LogConfig(Map.of(TopicConfig.COMPRESSION_TYPE_CONFIG, brokerCompressionType.name)),
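A minimal runnable sketch of the reviewer's point: when the config only needs one setting, an inline immutable `Map.of` literal replaces a separately built mutable `logProps` map. The class and variable names here are illustrative stand-ins, not the actual test code; `compression.type` is the value of Kafka's `TopicConfig.COMPRESSION_TYPE_CONFIG`.

```java
import java.util.HashMap;
import java.util.Map;

public class InlineConfigDemo {
    // Stand-in for TopicConfig.COMPRESSION_TYPE_CONFIG.
    static final String COMPRESSION_TYPE_CONFIG = "compression.type";

    public static void main(String[] args) {
        // Before: a mutable map built up separately, then passed in.
        Map<String, Object> logProps = new HashMap<>();
        logProps.put(COMPRESSION_TYPE_CONFIG, "gzip");

        // After: a single immutable literal at the call site.
        Map<String, Object> inline = Map.of(COMPRESSION_TYPE_CONFIG, "gzip");

        // Map equality is by entries, so both forms carry the same config.
        System.out.println(logProps.equals(inline));
    }
}
```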

@TaiJuWu (Author):

Fixed. Thank you.

@frankvicky (Collaborator) left a comment

@TaiJuWu : Thanks for the patch.
I have a few comments.

Comment on lines 109 to 115
List<Arguments> args = new ArrayList<>();
for (BrokerCompressionType brokerCompression : BrokerCompressionType.values()) {
for (CompressionType messageCompression : CompressionType.values()) {
args.add(Arguments.of(messageCompression, brokerCompression));
}
}
return args.stream();
@frankvicky (Collaborator):

        return Arrays.stream(BrokerCompressionType.values())
            .flatMap(brokerCompression -> Arrays.stream(CompressionType.values())
                .map(messageCompression -> Arguments.of(messageCompression, brokerCompression)));
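A runnable sketch showing that the suggested `flatMap` rewrite yields the same cartesian product of the two enums as the nested for-loops, without the intermediate `ArrayList`. Plain local enums stand in for Kafka's `BrokerCompressionType` and `CompressionType`, and strings stand in for JUnit `Arguments`; the value sets mirror Kafka's but are assumptions here.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CartesianDemo {
    // Stand-ins for the Kafka enums used in the test.
    enum BrokerCompressionType { UNCOMPRESSED, ZSTD, LZ4, SNAPPY, GZIP, PRODUCER }
    enum CompressionType { NONE, GZIP, SNAPPY, LZ4, ZSTD }

    public static void main(String[] args) {
        // Same shape as the suggestion: outer stream over broker types,
        // inner stream over message types, flattened into one stream of pairs.
        List<String> pairs = Arrays.stream(BrokerCompressionType.values())
            .flatMap(broker -> Arrays.stream(CompressionType.values())
                .map(message -> message + "/" + broker))
            .collect(Collectors.toList());

        // 6 broker values x 5 message values = 30 combinations.
        System.out.println(pairs.size());
        System.out.println(pairs.get(0));
    }
}
```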

return fetchInfo.records.batches().iterator().next();
}

private static Stream<Arguments> parameters() {
@frankvicky (Collaborator):

Please give it a meaningful name.

@TaiJuWu (Author) commented Mar 25, 2025

I changed the name to allCompressionParameters.
Thanks for suggesting.

@m1a2st (Collaborator) left a comment

Thanks for this patch. I left some nit comments.

}
}

private static RecordBatch readBatch(UnifiedLog log, int offset) throws IOException {
@m1a2st (Collaborator):

The parameter offset is always 0, so I think we can remove it.

Comment on lines +91 to +97
if (brokerCompressionType != BrokerCompressionType.PRODUCER) {
RecordBatch batch = readBatch(log, 0);
Compression targetCompression = BrokerCompressionType.targetCompression(log.config().compression, null);
assertEquals(targetCompression.type(), batch.compressionType(), "Compression at offset 0 should produce " + brokerCompressionType);
} else {
assertEquals(messageCompressionType, readBatch(log, 0).compressionType(), "Compression at offset 0 should produce " + messageCompressionType);
}
@m1a2st (Collaborator):

RecordBatch batch = readBatch(log, 0); can be moved outside the if-else condition, since both branches read the batch at offset 0.
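A runnable sketch of the suggested hoist: the `readBatch` call moves above the if-else, and each branch only asserts against its expected compression. The Kafka types are replaced with minimal hypothetical stand-ins; the control flow mirrors the snippet above but the enum values and helper are assumptions.

```java
public class HoistDemo {
    // Minimal stand-ins for the Kafka types in the test under review.
    enum CompressionType { NONE, GZIP }
    enum BrokerCompressionType { GZIP, PRODUCER }
    record RecordBatch(CompressionType compressionType) {}

    // Stand-in for readBatch(log, 0); always returns a GZIP batch here.
    static RecordBatch readBatch() {
        return new RecordBatch(CompressionType.GZIP);
    }

    static void checkCompression(BrokerCompressionType brokerType, CompressionType messageType) {
        // Hoisted: one read shared by both branches instead of one per branch.
        RecordBatch batch = readBatch();
        if (brokerType != BrokerCompressionType.PRODUCER) {
            // Broker-side compression wins; our stub broker target is GZIP.
            if (batch.compressionType() != CompressionType.GZIP)
                throw new AssertionError("expected broker compression");
        } else {
            // PRODUCER keeps the producer's (message) compression.
            if (batch.compressionType() != messageType)
                throw new AssertionError("expected producer compression");
        }
    }

    public static void main(String[] args) {
        checkCompression(BrokerCompressionType.GZIP, CompressionType.NONE);
        checkCompression(BrokerCompressionType.PRODUCER, CompressionType.GZIP);
        System.out.println("both branches OK");
    }
}
```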

Labels: build (Gradle build or GitHub Actions), ci-approved, core (Kafka Broker), storage (Pull requests that target the storage module), triage (PRs from the community)
4 participants