Replies: 2 comments
-
Thank you for your questions; I'll try to answer each individually.
The short answer is yes. Even once we can move these protocol definitions to being automatically generated through the compiler, I'd still want each of the crates to be able to have manual additions, like convenience methods, that make them easier to use correctly.
The only hard requirements for the protocol libraries are that they remain "Sans I/O" and no_std, meaning there's no requirement on specific I/O methods such as TcpSocket/UdpSocket. no_std is a little more flexible: I'm okay adding some std features, as long as they're behind a feature flag and not required, and features should be no_std when possible.
In regards to opinionated translations, I think that would depend on the opinion. I would not want to lock a user out of doing something that is allowed by the spec; for example, one use case for these libraries is testing other implementations, so generating things that are allowed by the spec but potentially invalid should continue to be allowed. However, I don't have an issue with making it easier for users to generate the correct data through additional types.
When crates like snmp were created we didn't have support for these types at all. If the spec defines smaller integer types we should use them, but at least right now this won't have much of an effect on performance, because BER/CER/DER don't have the concept of fixed-width integer encoding, so for now it will still have to go through an Integer type. Optimisations like these could be added at the codec level, but they haven't been yet; we can still move to stricter types today.
Similar to what I said above, I would want to preserve the field being an octet string (or, if fixed, the fixed-size equivalent); however, we could add helper methods to make it easy to encode and decode the right data from these fields. For example:

```rust
pub struct HeaderData {
    // ...
    pub flags: u8,
    // ...
}

bitflags! {
    pub struct HeaderDataFlags: u8 {
        const AUTH = 0b00000001;
        const PRIV = 0b00000010;
        const REPORTABLE = 0b00000100;
    }
}

impl HeaderData {
    fn decode_flags(&self) -> Result<HeaderDataFlags>;
    fn encode_flags(&mut self, flags: HeaderDataFlags) -> Result<()>;
}
```
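To make the intent concrete, here is a rough sketch of how those helpers might be implemented on top of the declarations above (a hypothetical illustration, not rasn's actual API; the `InvalidFlags` error type is a placeholder, and `encode_flags` is shown as infallible since writing a `u8` back can't fail):

```rust
/// Placeholder error for a flags byte that sets bits outside the defined set.
#[derive(Debug)]
pub struct InvalidFlags(pub u8);

impl HeaderData {
    /// Interpret the raw `flags` byte as the defined flag bits,
    /// rejecting any unknown bits.
    pub fn decode_flags(&self) -> Result<HeaderDataFlags, InvalidFlags> {
        HeaderDataFlags::from_bits(self.flags).ok_or(InvalidFlags(self.flags))
    }

    /// Write the flag bits back into the raw byte.
    pub fn encode_flags(&mut self, flags: HeaderDataFlags) {
        self.flags = flags.bits();
    }
}
```

Callers then get type-checked access to the flags, while the raw byte stays available for spec-level (or deliberately invalid) values.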
I think I answered this above.
I don't think it's intentional; I think it was more a result of fixed-size values not being supported (and, when it was written, const generics were not available), so it wasn't possible to create constants that were octet strings.
-
Thanks for the detailed reply!
On Fri, Mar 15 2024 at 02:49:49 AM -0700, XAMPPRocky ***@***.***> wrote:
> I know you're looking to move to auto-generating protocol
> definitions, are you still interested in PRs against the current
> manually implemented versions?
>
The short answer is yes. Even once we can move these protocol
definitions to being automatically generated through the compiler,
I'd still want each of the crates to be able to have manual additions,
like convenience methods, that make them easier to use correctly.
Great to hear, I'll probably have a small PR with some suggestions
sometime next week.
> do you have an opinion on if you want the protocol libraries to stay
> low level and only be concerned with the protocol encoding and
> decoding that can be automatically derived from the ASN.1 definition
> files or if you want to include handmade types based on textual
> descriptions and opinionated 'translations' of the types into more
> friendly structs that don't line up 1:1 with how the specs define
> them?
>
The only hard requirements for the protocol libraries are that they
remain "Sans I/O" and no_std, meaning there's no requirement on
specific I/O methods such as TcpSocket/UdpSocket. no_std is a little
more flexible: I'm okay adding some std features, as long as they're
behind a feature flag and not required, and features should be no_std
when possible.
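For readers unfamiliar with that setup, a minimal sketch of the "no_std by default, std behind a feature flag" pattern might look like this; the feature name `std` and every item below are illustrative assumptions, not rasn's actual code:

```rust
// lib.rs of a hypothetical sans-I/O protocol crate.
#![cfg_attr(not(feature = "std"), no_std)]

/// Stand-in for a decoded protocol message.
#[derive(Debug)]
pub struct Message {
    pub version: u8,
}

/// Stand-in decode error that only needs `core`.
#[derive(Debug)]
pub struct DecodeError;

impl core::fmt::Display for DecodeError {
    fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
        f.write_str("failed to decode message")
    }
}

/// "Sans I/O": the library only transforms bytes the caller already has;
/// how those bytes arrived (UDP, TCP, a file, a test vector) is not its concern.
pub fn decode_message(bytes: &[u8]) -> Result<Message, DecodeError> {
    match bytes.first() {
        Some(&version) => Ok(Message { version }),
        None => Err(DecodeError),
    }
}

// std-only conveniences are additive and gated, never required.
#[cfg(feature = "std")]
impl std::error::Error for DecodeError {}
```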
In regards to opinionated translations, I think that would depend on
the opinion. I would not want to lock a user out of doing something
that is allowed by the spec; for example, one use case for these
libraries is testing other implementations, so generating things that
are allowed by the spec but potentially invalid should continue to be
allowed. However, I don't have an issue with making it easier for
users to generate the correct data through additional types.
For sure, I wasn't considering anything that would prevent possibly
valid uses. I just really like when APIs do the whole "make the usual
case obvious, but always make the other case possible" thing. (And I
consider dealing with octet strings directly to be how you'd handle the
"possible" case.)
> Do you have plans for getting allowed value ranges into the types?
> (integer values & string sizes)
>
When crates like snmp were created we didn't have support for these
types at all. If the spec defines smaller integer types we should use
them, but at least right now this won't have much of an effect on
performance, because BER/CER/DER don't have the concept of fixed-width
integer encoding, so for now it will still have to go through an
Integer type. Optimisations like these could be added at the codec
level, but they haven't been yet; we can still move to stricter types
today.
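As a hand-rolled illustration of what a stricter, range-limited type could look like for something like snmpEngineTime (specified as INTEGER (0..2147483647), i.e. it fits in 31 bits), here's a sketch; the names and API are placeholders rather than rasn's constraint mechanism:

```rust
/// snmpEngineTime wrapper that only admits values in 0..=2147483647.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct EngineTime(u32);

impl EngineTime {
    pub const MAX: u32 = 2_147_483_647; // the largest value the MIB allows

    /// Accept only values inside the allowed range.
    pub fn new(seconds: u32) -> Option<Self> {
        (seconds <= Self::MAX).then_some(Self(seconds))
    }

    pub fn get(self) -> u32 {
        self.0
    }
}
```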
Gotcha. I'm personally not really too worried about performance at the
moment. I'm literally encoding the same message twice to be able to do
the auth hash that in SNMP is for some reason _inside_ the message that
I have to hash. (-_-) (On that note, LMK if you have any suggestions
for modifying the security params in an already DER-encoded message.
Or really just figuring out the offset of a particular field.)
I was more coming from the perspective of wanting the types to
"document" allowed values so I only have to look it up once.
> There's a bunch of disconnects between types where something is just
> an OctetString. Does rasn have any way to link those to the structs
> that are encoded into those blobs?
>
Similar to what I said above, I would want to preserve the field
being an octet string; however, we could add helper methods to make it
easy to encode and decode the right data from these fields. For
example:
```rust
pub struct HeaderData {
    // ...
    pub flags: OctetString,
    // ...
}

bitflags! {
    pub struct HeaderDataFlags: u8 {
        const AUTH = 0b00000001;
        const PRIV = 0b00000010;
        const REPORTABLE = 0b00000100;
    }
}

impl HeaderData {
    fn decode_flags(&self) -> Result<HeaderDataFlags>;
    fn encode_flags(&mut self, flags: HeaderDataFlags) -> Result<()>;
}
```
I hadn't thought of doing it that way, but I think we're probably more
or less on the same page.
I had been thinking something like:
```rust
pub struct HeaderData {
    // ...
    pub flags: HeaderDataFlags,
    // ...
}

pub enum HeaderDataFlags {
    NoAuthNoPriv = 0,
    AuthNoPriv = 1,
    AuthPriv = 3,
    Raw(OctetString),
}
```
(Or, better yet, something more general like having the field be a
`TypedOctetString<HeaderDataFlags>` which accomplishes the same thing
without having a `Raw` variant in absolutely everything.)
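A rough sketch of what such a wrapper might look like (with `Vec<u8>` standing in for `OctetString`, and every name here an illustrative assumption rather than anything in rasn):

```rust
use core::marker::PhantomData;

/// Opaque bytes on the wire, tagged with the higher-level type they
/// are usually decoded into.
pub struct TypedOctetString<T> {
    raw: Vec<u8>,
    _marker: PhantomData<T>,
}

impl<T> TypedOctetString<T> {
    pub fn from_raw(raw: Vec<u8>) -> Self {
        Self { raw, _marker: PhantomData }
    }

    /// The spec-level escape hatch: the bytes are always accessible as-is.
    pub fn raw(&self) -> &[u8] {
        &self.raw
    }

    /// Decode into the associated type with a caller-supplied decoder
    /// (e.g. one of the codec entry points).
    pub fn decode_with<E>(&self, decode: impl FnOnce(&[u8]) -> Result<T, E>) -> Result<T, E> {
        decode(&self.raw)
    }
}
```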
But that's probably more in the weeds than I want to get at the moment
anyway.
> There's a bunch of low level object definitions that aren't defined
> in rasn-snmp, like 'engineId' in all its various forms. Is that an
> intentional choice or just an omission?
>
I don't think it's intentional; I think it was more a result of
fixed-size values not being supported (and, when it was written,
const generics were not available), so it wasn't possible to create
constants that were octet strings.
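As a small illustration of what const generics now make expressible (a hypothetical wrapper type with placeholder bytes, not a real engine ID):

```rust
/// A fixed-size octet string; with const generics the length is part of
/// the type, so values like this can finally be `const`.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct FixedOctets<const N: usize>(pub [u8; N]);

// Placeholder bytes, purely to show that such constants are now possible.
pub const EXAMPLE_ENGINE_ID: FixedOctets<5> = FixedOctets([0x80, 0x00, 0x00, 0x00, 0x01]);
```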
👍 I may play around with adding a couple of those.
---
Again thanks for sharing your thoughts with me and for creating the
library in the first place.
-
I'm working on a minimal SNMPv3 manager (and maybe eventually v1/v2c and agent) library. I've actually already got it "working" (just in a really rough state at the moment) using `rasn` and `rasn-snmp` for encoding and decoding. It's working great for me, so thanks! I probably wouldn't have attempted this if I had to do the ASN.1 parsing manually myself too. My biggest struggle was figuring out what needed to go into all the `OctetString`s.

I see that you're generally interested in outside contributions, which I'm hoping I can provide, but I wanted to get a feel for what you do and don't want the rasn libraries to be before I start firing code at you. I know you're looking to move to auto-generating protocol definitions, are you still interested in PRs against the current manually implemented versions?

Assuming so...

Pretty much all of my questions more or less come down to: do you have an opinion on whether you want the protocol libraries to stay low level and only be concerned with the protocol encoding and decoding that can be automatically derived from the ASN.1 definition files, or whether you want to include handmade types based on textual descriptions and opinionated 'translations' of the types into more friendly structs that don't line up 1:1 with how the specs define them? I want the higher level things in a manager(/client) library, and if you want them in rasn, I'm happy to contribute some of what I need.

Some specific questions:

Do you have plans for getting allowed value ranges into the types? (integer values & string sizes) Right now, all the integer types use a BigInt type, despite (as far as I can tell) SNMP types always being limited to 32 or 64 bit values. Is that something you're open to changing so I could use smaller integer sizes directly? E.g. `snmpEngineTime` fits in 31 bits; it would be nice to have that as some sort of range-limited `i32`/`u32`.

There's a bunch of disconnects between types where something is just an OctetString. Does rasn have any way to link those to the structs that are encoded into those blobs? Relatedly, are you looking to keep the protocol libraries purely to the types that are explicitly defined in the protocols' ASN.1 definitions(?), or are you open to defining types for objects that are officially an OctetString, but are further defined in other RFCs or a TEXTUAL-CONVENTION? (Obviously they'd still need an OctetString escape hatch for forward compatibility.)

There's a bunch of low level object definitions that aren't defined in `rasn-snmp`, like 'engineId' in all its various forms. Is that an intentional choice or just an omission? Internally it's just an OctetString, but even if the answer to the last question is "it should just be an OctetString", would it make sense to have a new-type wrapper for named object(? type?) definitions like `SnmpEngineID` (scroll up a few lines from there)?

I'm sure I've had many much more specific questions, but that's probably a good start. Let me know if you've got thoughts on any of the above.
Thanks!