[Req/Res] add tostring on request.lua and response.lua for simplying … #42
base: master
Conversation
…debug; overflow means reaching the batch size or the batch num;
@BruceZhangGit Thanks for your hard work :)
-- according to the current implementation, each "running timer" will take one (fake) connection record
-- from the global connection record list configured by the standard worker_connections directive in nginx.conf.
-- so limit the timers. global max timer count for all workers
Actually there can only be two running timers at most.
One is flushing the data; the other is acquiring the flush lock, and it will fail soon.
So the number of running timers is already limited :)
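For illustration, here is a minimal sketch of the "one timer flushes, the other fails fast on the lock" idea, using a shared-dict add() as the lock. The dict name, key, and handler name are assumptions made for this sketch, not this library's actual implementation:

```lua
-- assumes nginx.conf declares:  lua_shared_dict kafka_locks 1m;
local function try_flush(premature)
    if premature then
        return
    end

    local dict = ngx.shared.kafka_locks
    -- add() succeeds for exactly one caller until the key expires or is deleted,
    -- so at most one timer actually flushes at a time
    local locked = dict:add("flush_lock", true, 10)
    if not locked then
        -- another timer already holds the lock: fail fast and return,
        -- which is why at most two timers are ever running concurrently
        return
    end

    -- ... flush the accumulated messages here ...

    dict:delete("flush_lock")
end
```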
And we should not limit the pending timers, because pending timers can fail to become running :(
And it will be better when we have ngx.timer.every
openresty/lua-nginx-module#856
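For context, a rough sketch of the two scheduling styles being compared; the handler name and interval are illustrative, not this library's actual code:

```lua
local flush_interval = 1  -- seconds, illustrative value

-- re-arming style with ngx.timer.at: every run schedules the next one,
-- so each round creates a new pending timer
local function flush_handler(premature)
    if premature then
        return
    end

    -- ... flush the buffered messages here ...

    local ok, err = ngx.timer.at(flush_interval, flush_handler)
    if not ok then
        ngx.log(ngx.ERR, "failed to create timer: ", err)
    end
end

-- ngx.timer.every style (openresty/lua-nginx-module#856): registered once,
-- fires repeatedly, so the pending-timer count stays bounded
local ok, err = ngx.timer.every(flush_interval, flush_handler)
if not ok then
    ngx.log(ngx.ERR, "failed to create recurring timer: ", err)
end
```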
We can limit the pending timers once we have ngx.timer.every.
Failing to turn a pending timer into a running one causes an error log, which is annoying :). What is limited here is actually the total number of timers, including both pending and running timers.
If ngx.timer.every is better, let's look forward to it.
> What is limited here is actually the total number of timers, including both pending and running timers.

I mean we should not limit the pending timers, which may cause data loss.

> Failing to turn a pending timer into a running one causes an error log, which is annoying

This usually happens when the number of running timers reaches lua_max_running_timers, but this lib has at most two running timers.
function mt.__tostring( self )
    local str = ""
    for _,v in ipairs(self._req) do
        str = str .. tostring(v) .. " "
First of all, I'm not very happy about adding this code just for debugging; I think we'd better have this code at a higher level.
But I'm fine with it if you optimize this code.
We should push the segments into a table first and then use table.concat.
I know this is just for debugging, but it's really not a good example.
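A minimal sketch of the table.concat version being suggested, reusing the mt and self._req names from the diff above; the single-space separator is an assumption:

```lua
function mt.__tostring(self)
    local parts = {}
    for i, v in ipairs(self._req) do
        parts[i] = tostring(v)
    end
    -- join once with table.concat instead of repeated "..",
    -- which avoids allocating a new intermediate string on every iteration
    return table.concat(parts, " ")
end
```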
What do you mean by a higher level? I wrote this code not only to debug Kafka but also to debug this Kafka client. I want to make sure this client sends requests correctly and receives responses correctly.
Thanks for your advice: ".." is not the best practice; table.concat is.
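A hedged example of how these __tostring metamethods might be used while debugging; req and resp are placeholder names for a request object and the broker response, not identifiers from this library:

```lua
-- debug-only logging; tostring() invokes the __tostring metamethods added in this PR
ngx.log(ngx.DEBUG, "kafka request: ", tostring(req))
ngx.log(ngx.DEBUG, "kafka response: ", tostring(resp))
```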
I mean you can do this in your own code, on top of this lib.