Using `Delay` via shared references (#435)

Comments
Thanks for opening this issue. We discussed it a bit in today's meeting (here). The traits take a mutable reference because some implementations may need to ensure they have unique access, or otherwise mutate stored data, to implement the trait methods.

I think ideally HALs should offer some kind of cloneable or otherwise multiply-obtainable `Delay`. In some frameworks like embassy there's a continuous timer, and all the delays can simply check against it (it's also used for power-saving by configuring interrupts at the right wake-up times). Most HALs aren't designed to provide something like that, but perhaps something similar - consuming a hardware timer, configuring it as free-running, and then generating as many `Delay` instances off it as required - could be done by HALs.

We discussed possibly having cortex-m provide some examples of this using the SysTick peripheral, and perhaps even the `asm::delay` method, so that there's some source of multiple `Delay`s (at least on Cortex-M platforms).

For right now, either the RefCell wrapper, or writing your own `Delay` source using a HAL-provided timer, are probably your best immediate options. I don't think we'd want to change the traits to take `&self`.
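For concreteness, here is a minimal sketch of that RefCell wrapper idea, assuming embedded-hal 1.0's blocking `DelayNs` trait (older releases name the delay traits differently); `SharedDelay` is a made-up name, not anything the crate provides:

```rust
use core::cell::RefCell;
use embedded_hal::delay::DelayNs;

// One HAL delay, shareable by handing out `&SharedDelay<D>` references.
pub struct SharedDelay<D> {
    inner: RefCell<D>,
}

impl<D: DelayNs> SharedDelay<D> {
    pub fn new(delay: D) -> Self {
        Self { inner: RefCell::new(delay) }
    }
}

// Implementing the trait for the *shared reference* means `&SharedDelay<D>`
// can itself be passed wherever an `impl DelayNs` is expected, as many
// times as needed.
impl<D: DelayNs> DelayNs for &SharedDelay<D> {
    fn delay_ns(&mut self, ns: u32) {
        // Panics if the delay is already borrowed - the usual RefCell
        // caveat, fine in straight-line, non-interrupt code.
        self.inner.borrow_mut().delay_ns(ns);
    }
}
```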
Thanks for the detailed response, the transcript was a good read. I'll bring up the issue on the HAL I use (atsamd). It seems like in general for drivers, the better option is to take a […]
My guess is that most drivers that just want to wait a millisecond here or there in generally-blocking operations will keep taking `Delay`, which is perfect for that, and other drivers might instead go async and use the async `Delay` from embedded-hal-async. So, it probably doesn't make sense to swap to `&self`.
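As a hedged illustration of that split, a driver written against embedded-hal-async's `DelayNs` might look like the following; `AsyncSensor` and its `reset` method are invented for the example:

```rust
use embedded_hal_async::delay::DelayNs;

// Hypothetical async driver that owns its delay, like a blocking driver
// would, but awaits it instead of busy-waiting.
struct AsyncSensor<D: DelayNs> {
    delay: D,
}

impl<D: DelayNs> AsyncSensor<D> {
    async fn reset(&mut self) {
        // Awaiting yields to the executor rather than spinning, which is
        // why async drivers don't need to share a blocking Delay at all.
        self.delay.delay_ms(10).await;
    }
}
```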
That all makes sense, thanks again for the details.
@adamgreig do you by chance have an outline of how you would plan to do this from the cortex-m side? I brought up the idea on the atsamd repository (atsamd-rs/atsamd#670), but there was no clear way forward, so I'm curious what a good solution might look like. Current implementation of delay in atsamd_hal: https://github.com/atsamd-rs/atsamd/blob/7d85efd1f0c8ce136a9bdcb96cc4d0fe672bed6b/hal/src/delay.rs#L52-L74
I had three thoughts in mind:

- a `Delay` driven by the SysTick peripheral, run free-running so any number of instances can check against it
- a `Delay` built on `asm::delay`, i.e. a calibrated busy-loop scaled by the core clock (sketched below)
- a `Delay` based on the DWT cycle counter (CYCCNT), where the hardware has one

Maybe one or two or all of them could be provided. Any of them should let you have multiple `Delay` instances.
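A rough sketch of the `asm::delay` variant, assuming embedded-hal 1.0's `DelayNs` and cortex-m's `asm::delay` (which blocks for at least the requested number of cycles); the clock-scaling here is approximate, for the reasons raised in the next comments:

```rust
use cortex_m::asm;
use embedded_hal::delay::DelayNs;

// Copy means the application can hand out as many of these as it likes.
#[derive(Clone, Copy)]
pub struct AsmDelay {
    // Core clock in Hz, supplied by the application.
    sysclk_hz: u32,
}

impl AsmDelay {
    pub const fn new(sysclk_hz: u32) -> Self {
        Self { sysclk_hz }
    }
}

impl DelayNs for AsmDelay {
    fn delay_ns(&mut self, ns: u32) {
        // cycles = ns * (Hz / 1e9), computed in u64 to avoid overflow.
        let cycles = (ns as u64 * self.sysclk_hz as u64) / 1_000_000_000;
        asm::delay(cycles as u32);
    }
}
```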
Great, thank you for the direction. I will pass it on.
I don't think it's a good idea to provide such an implementation, due to how brittle it is. There are issues with flash prefetch on various MCUs where the timing depends on page alignment, flash region (as you mentioned), and other factors. An assembly-only loop in RAM is better, but still iffy.
Yea, but this isn't something a driver should be using - it's something the end application pulls in, in lieu of anything better. With some docs suggesting what scaling factor you might want on different hardware and pointing out these troublesome aspects, I think it could still be a useful tool. Plus, on the higher-end platforms that tend to have more variability in how long the loop might take, you probably have CYCCNT available, which is immune to those problems and could be used instead.
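For illustration, a CYCCNT-based delay along those lines might look like this sketch; it assumes a Cortex-M part that has the DWT cycle counter (most M3/M4/M7) and uses the cortex-m crate's `DWT::cycle_count` API:

```rust
use cortex_m::peripheral::{DCB, DWT};
use embedded_hal::delay::DelayNs;

// The cycle counter runs continuously once enabled, so any number of
// copies of this type can poll it independently.
#[derive(Clone, Copy)]
pub struct CycleDelay {
    sysclk_hz: u32,
}

impl CycleDelay {
    // Enable the cycle counter once at startup, then copy freely.
    pub fn new(dcb: &mut DCB, dwt: &mut DWT, sysclk_hz: u32) -> Self {
        dcb.enable_trace();
        dwt.enable_cycle_counter();
        Self { sysclk_hz }
    }
}

impl DelayNs for CycleDelay {
    fn delay_ns(&mut self, ns: u32) {
        let cycles = (ns as u64 * self.sysclk_hz as u64 / 1_000_000_000) as u32;
        let start = DWT::cycle_count();
        // wrapping_sub handles the counter rolling over mid-wait.
        while DWT::cycle_count().wrapping_sub(start) < cycles {}
    }
}
```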
This is mentioned in rust-embedded/book#246: what is the recommended way to share a `Delay`?

Currently all the `Delay` methods take a mutable self reference, e.g. `Delay::delay_x(&mut self, val: u32)`. This makes it pretty tricky to use a delay in more than one place, since you only get one (at least in atsamd-hal). A couple of possible solutions I see:

1. Have `Delay` be cloneable (possibly !Send/!Sync in these cases), or otherwise provide a way to get more than one `impl Delay` object
2. Provide an `impl Delay` type that is a `RefCell` around a `Delay`. That is what I currently plan on doing, but it's kind of icky (10+ extra steps to `borrow_mut` a delay is deadly)
3. Change the delay methods to take `&self` rather than `&mut`, and have HAL crates internally keep a RefCell or something to make this work. Also icky, since your delays become fallible.

So, I'm just curious what the recommendation is.