
Fix #4626 (new check: potential NULL pointer dereference, when memory allocation fails) #7068

Merged
merged 6 commits into from
Dec 8, 2024

Conversation

danmar
Owner

@danmar danmar commented Dec 4, 2024

No description provided.

@chrchr-github
Collaborator

Maybe the message could be shortened to (if memory allocation failed) or (in case of memory allocation failure)?

@danmar
Owner Author

danmar commented Dec 5, 2024

Maybe the message could be shortened to (if memory allocation failed) or (in case of memory allocation failure)?

Thanks. I like the first suggestion better and will change to that.

@danmar
Owner Author

danmar commented Dec 5, 2024

@firewave @chrchr-github I have tested this with the test-my-pr.py script. It analysed 256 random packages and generated 5576 nullPointerOutOf.. warnings. That is quite a lot. I assume they are technically true positives... but still.

how do you feel about this?

I want to have this because our customer wants to have it. My feeling is that in embedded systems you want to handle allocation failures; however, in normal desktop applications people maybe don't care.

What is your opinion about adding this? Since there are unique IDs, users can suppress these.

@firewave
Collaborator

firewave commented Dec 5, 2024

how do you feel about this?

At least we need proper handling for the case where it throws an exception on failure, i.e. no NULL pointers are ever encountered. There are most likely several tickets related to this.

A bigger issue which might make this unfeasible is the potential existence of custom allocators or global overrides of the alloc/dealloc functions. As this might differ per TU or even per object, it would be hard to make this dependent on some configuration flag. I do not have much experience with this, so somebody else needs to chime in.

@danmar
Owner Author

danmar commented Dec 5, 2024

At least we need proper handling for the case where it throws an exception on failure, i.e. no NULL pointers are ever encountered. There are most likely several tickets related to this.

So that sounds like the behavior of new, not the behavior of malloc().

you don't get such warnings for a pointer that is allocated with new.

A bigger issue which might make this unfeasible is the potential existence of custom allocators or global overrides of the alloc/dealloc functions. As this might differ per TU or even per object, it would be hard to make this dependent on some configuration flag. I do not have much experience with this, so somebody else needs to chime in.

imho it wouldn't be that common to override malloc(), but yes, if you do that, these warnings could be noise.

@firewave
Collaborator

firewave commented Dec 5, 2024

So that sounds like the behavior of new, not the behavior of malloc().

you don't get such warnings for a pointer that is allocated with new.

Yeah - I did not check the changes in full so I missed that it is only about malloc.

Just for completeness of my previous comment - new will throw by default unless there's a different overload or new(std::nothrow) is used.

imho it wouldn't be that common to override malloc(), but yes, if you do that, these warnings could be noise.

I would not assume that. Anything which needs to be performant or allocates a lot of memory is very likely to have it overloaded to implement memory pooling or custom OOM handling. I would not be surprised if it is more common to overload it than not.

@chrchr-github
Collaborator

imho it wouldn't be that common to override malloc(), but yes, if you do that, these warnings could be noise.

I would not assume that. Anything which needs to be performant or allocates a lot of memory is very likely to have it overloaded to implement memory pooling or custom OOM handling. I would not be surprised if it is more common to overload it than not.

Wouldn't people rather use a custom allocator (e.g. mymalloc()/myfree())? Redefining malloc() to something else seems like a bad code smell to me.

@danmar
Owner Author

danmar commented Dec 5, 2024

I would not assume that. Anything which needs to be performant or allocates a lot of memory is very likely to have it overloaded to implement memory pooling or custom OOM handling. I would not be surprised if it is more common to overload it than not.

I don't know how malloc() would be overridden; there is no standard-compliant way to achieve that as far as I know.

I am not saying that C developers never write custom memory allocators. If a C developer wants to do it for performance reasons, it is entirely possible to create a custom allocation function that is not called malloc().

@firewave
Collaborator

firewave commented Dec 5, 2024

Wouldn't people rather use a custom allocator (e.g. mymalloc()/myfree())? Redefining malloc() to something else seems like a bad code smell to me.

Based on the LLVM tickets/discussions I have seen overloaded allocators are not that uncommon.

Also, based on my limited knowledge of the matter, these allocators are weak symbols which can be overridden via injection (i.e. an additional library, not an override in the actual code), and that's how people do it. Otherwise you would have to write custom code and also adjust all the libraries involved. Injection ensures that everything in your application goes through e.g. the memory pooling or whatever you use.

Maybe we should start by making this inconclusive and only reporting it for C code. The former would introduce it in a soft way, and the latter would make sure that no exception can be involved at all (I have no idea if an overload of malloc in a C++ application might affect code which only uses new).

@danmar
Owner Author

danmar commented Dec 5, 2024

My question from the start was:

It analysed 256 random packages and generated 5576 nullPointerOutOf.. warnings. That is quite a lot. I assume they are true positives technically.. but still.
how do you feel about this?

Either those 5576 warnings are because:

  • malloc() was overridden in lots of packages
  • or developers assume that there is no out-of-memory condition

If it is reason 2, there is a risk that users who assume there will never be an out-of-memory condition will feel that this warning is noisy. Should I make it inconclusive then, because we cannot determine whether out-of-memory can occur?

I feel we should allow this, but I realize I could be biased and wanted your feedback. It would mentally be very different for me if there wasn't a customer that wanted this.

If you are strongly against these warnings, then I can consider adding them in an alternative way. I do want to make the customer happy one way or another.

@danmar
Owner Author

danmar commented Dec 5, 2024

Maybe we should start by making this inconclusive and only reporting it for C code. The former would introduce it in a soft way, and the latter would make sure that no exception can be involved at all (I have no idea if an overload of malloc in a C++ application might affect code which only uses new).

sounds reasonable thank you

@firewave
Collaborator

firewave commented Dec 5, 2024

Either those 5576 warnings are because:

* malloc() was overridden in lots of packages

* or developers assume that there is no out-of-memory condition

If it is reason 2, there is a risk that users who assume there will never be an out-of-memory condition will feel that this warning is noisy. Should I make it inconclusive then, because we cannot determine whether out-of-memory can occur?

I never cared about OOM because I was almost exclusively involved in application development. And it was never a critical application, so by the time you encounter an OOM condition, lots of other things would already have failed and it wouldn't have mattered. If it had been a library, that would have been a different thing. We also did not have to deal with something like application-specific quotas, which might have triggered it on their own.

Handling OOM failures is actually quite tricky. Even if you handle the NULL and prevent the crash, that is far from enough, and the program will probably crash somewhere else if it continues to run. Even a graceful exit might be impossible because you are out of memory.

So maybe we should not include that term in the message because it is misleading. malloc() might also return NULL if you pass a size of 0, which is valid although implementation-dependent. Well, any other error could lead to that too. So handling it would be good in any case.

I guess I got a bit carried away and ended up way off track because of the ID. It is actually an unchecked pointer dereference and has nothing to do with OOM. Any function which returns a pointer (and is not annotated with returns_nonnull) should be assumed to possibly return NULL.

@firewave
Collaborator

firewave commented Dec 5, 2024

This should essentially be the "unchecked return value dereference" version of the suggested https://trac.cppcheck.net/ticket/11566 which is the "unchecked parameter dereference" version.

@danmar
Owner Author

danmar commented Dec 7, 2024

So maybe we should not include that term in the message because it is misleading

I think it should be part of the ID so it can be properly suppressed. I can understand if desktop application developers don't care about OOM and suppress that. But Out-Of-Resources is far more likely to happen.

Any function which returns a pointer (and is not annotated with returns_nonnull) should be assumed to return NULL.

I don't necessarily fully agree with that. A missing returns_nonnull attribute does not mean that the function can arbitrarily return NULL. Let's say that this is out of scope for this PR.

@danmar danmar merged commit fe98775 into danmar:main Dec 8, 2024
60 checks passed
@danmar
Owner Author

danmar commented Dec 8, 2024

My understanding is that it was OK to add this, so I merged it.

Now, immediately after the merge, I realize that we didn't settle whether it should be "inconclusive" though. :-(
I don't know; I don't have a strong opinion. It is not inconclusive that the return value is unchecked; what is inconclusive is whether the return value can actually be NULL.

@danmar danmar deleted the fix-4626 branch December 8, 2024 14:15
danmar added a commit to cppchecksolutions/cppcheck that referenced this pull request Dec 8, 2024
@chrchr-github
Collaborator

We should mention this in the release notes, and probably also indicate how to suppress the new warning.

@danmar
Owner Author

danmar commented Jan 2, 2025

We should mention this in the release notes, and probably also indicate how to suppress the new warning.

Yes, I agree it should be in the release notes. I will look at that.

I don't agree that we should indicate how to suppress it, though. It's the same old procedure that has always been used. We don't usually indicate how users can suppress some warning they don't like.

@danmar
Owner Author

danmar commented Jan 2, 2025

releasenotes: #7166
