
[BUG] High memory consumption #15934

Closed · Mavtti opened this issue Sep 13, 2024 · 9 comments
Labels: bug (Something isn't working), Other, untriaged

Comments

Mavtti commented Sep 13, 2024

Describe the bug

We are using AWS OpenSearch, and since upgrading from 2.11 to 2.15 the memory usage has increased and it slows down the whole cluster.
The biggest impact is on write operations, which started to time out after the upgrade.

I've seen this issue, but I thought it was supposed to be fixed in 2.15.

The upgrade happened on the 10th.

Max memory utilization has reached 100% more often than not since then.
(screenshot attached)

There is also an increase in Java heap memory usage.
(screenshot attached)

Any ideas? Thanks!

Related component

Other

To Reproduce

  1. Upgrade to 2.15

Expected behavior

No issues when inserting data, and lower memory consumption

Additional Details

No modifications were made to memory settings on our side.
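
If it helps confirm that, the effective heap limits can be read back from the domain itself; a minimal sketch (the endpoint below is a placeholder, and auth should be adjusted to your setup):

# List configured and current heap per node (placeholder endpoint)
curl -s "https://<domain-endpoint>/_cat/nodes?v&h=name,heap.max,heap.current,heap.percent"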

Mavtti added the bug (Something isn't working) and untriaged labels on Sep 13, 2024
The github-actions bot added the Other label on Sep 13, 2024
reta (Collaborator) commented Sep 13, 2024

@Mavtti by any chance, could you capture a class histogram to understand where the heap consumption is coming from? Thank you.

Mavtti (Author) commented Sep 16, 2024

@reta Sorry, I've never had to do that before; would you mind explaining how to do it?

By the way, to mitigate the issue we changed our instance type from t3 to m7g, which gave us more leeway.
But at the same time, we also saw ever-increasing memory usage, just like this case.

reta (Collaborator) commented Sep 16, 2024

> @reta Sorry, I've never had to do that before; would you mind explaining how to do it?

Thanks @Mavtti, there are multiple ways to do that (we just need to do it on any node that consumes more heap than expected):

jcmd <pid> GC.class_histogram
jmap -histo <pid>

The Jackson issue should be fixed in 2.15.0 (and above).
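
For reference, capturing and sharing the histogram on a self-managed node could look like this (a sketch; the grep pattern and output file name are just illustrative assumptions):

# Find the OpenSearch JVM process id (jps ships with the JDK)
jps -l | grep -i opensearch

# Write the histogram to a file so it can be attached here
jcmd <pid> GC.class_histogram > class_histogram.txt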

Mavtti (Author) commented Sep 18, 2024

Hey, since I'm on managed AWS OpenSearch, there is no way for me to run these commands.

And I saw that the Jackson issue was supposed to be fixed, but we are actually on 2.15 (OpenSearch_2_15_R20240904 to be exact) and still see this issue.
(screenshot attached)

reta (Collaborator) commented Sep 18, 2024

> Hey, since I'm on managed AWS OpenSearch, there is no way for me to run these commands.

Got it :(

> And I saw that the Jackson issue was supposed to be fixed, but we are actually on 2.15 (OpenSearch_2_15_R20240904 to be exact) and still see this issue.

Correct, the heap consumption might be caused by another issue.
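
Even without shell access, some heap-related signals are still exposed through the REST APIs on a managed domain; a minimal sketch (the endpoint is a placeholder, adjust auth to your setup):

# Per-node JVM heap usage and GC counters
curl -s "https://<domain-endpoint>/_nodes/stats/jvm?pretty"

# Fielddata memory per node, a common source of steady heap growth
curl -s "https://<domain-endpoint>/_cat/fielddata?v"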

Mavtti (Author) commented Sep 18, 2024

So in the end, I can't give you more insight, and there's not much you can do without it, correct?

reta (Collaborator) commented Sep 18, 2024

> So in the end, I can't give you more insight, and there's not much you can do without it, correct?

I think the best option we have is to engage with AWS support to ask for these details.

Mavtti (Author) commented Sep 19, 2024

OK, thanks @reta for the answers!

dblock (Member) commented Oct 7, 2024

Let's close it here. Please do post what you find through working with AWS support, for the next person.

[Catch All Triage - 1, 2, 3, 4]

dblock closed this as completed Oct 7, 2024