Question: Curious why you chose to premix the sound files? #4
Thanks for the comment and feedback @deckarep! This is a really great suggestion and warrants more investigation. The reason for premixing was primarily simplicity. However, now that I think about it, there's also the question of synchronization: if a beat's tracks are played independently, they need to be precisely aligned in time, and I'm unsure how much control rodio (which delegates to cpal for playback) provides over playback timing. Alternatively, if a user wanted to change a beat on the fly, the beat's source could be remixed in the background and swapped in when ready. Any interest in investigating? 😃
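For concreteness, here's a minimal sketch of what premixing can look like. This is illustrative only (the function name and shape are assumptions, not the crate's actual API): each track's PCM samples are summed into one buffer before playback, which is why the tracks can never drift out of sync.

```rust
/// Sum per-track PCM sample buffers into a single premixed buffer.
/// Illustrative sketch; not the crate's actual API.
fn premix(tracks: &[Vec<f32>]) -> Vec<f32> {
    // The mixed buffer is as long as the longest track.
    let len = tracks.iter().map(|t| t.len()).max().unwrap_or(0);
    let mut mixed = vec![0.0f32; len];
    for track in tracks {
        for (i, &sample) in track.iter().enumerate() {
            mixed[i] += sample;
        }
    }
    // Clamp to [-1.0, 1.0] to avoid clipping artifacts on playback.
    for s in &mut mixed {
        *s = s.clamp(-1.0, 1.0);
    }
    mixed
}

fn main() {
    let kick = vec![0.5, 0.5, 0.0, 0.0];
    let hat = vec![0.25, 0.0, 0.25, 0.0];
    let mixed = premix(&[kick, hat]);
    println!("{:?}", mixed); // [0.75, 0.5, 0.25, 0.0]
}
```

The resulting single buffer can then be fed to the audio backend as one source, so alignment is baked in rather than enforced at playback time.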
@jonasrmichel - thanks for taking the time to respond. I'm actually trying to figure this out now in my own project: essentially, how to build a drum machine with precise timing. Some of the frameworks I've used so far don't cut it in terms of precision, so I'm having to find a more real-time language/framework to pull it off. Most recently I may have found a framework that would allow me to do it. It's also written in Rust and uses cpal under the hood, but my solution requires a GUI component as well. I will say this: your solution of premixing the track sounds incredibly tight compared to anything I've built so far!
Sounds really interesting and fun! Please share a link if you plan to open source it; I'd be interested in taking a look.
Looking through the code, I can see that you wrote some code to premix the files into a single "track". This is cool, and it never occurred to me that this approach could work so well (which it clearly does!).
I just have a general curiosity/architecture question for you:
Why did you choose to premix in the first place versus playing the individual samples in real time? If they were managed individually, you could potentially build a UI/command-line component where you could view the sequencer and turn each instrument on or off per note.
Really cool lib btw!
-deckarep
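The real-time alternative raised above could be sketched as a step-sequencer loop. The key precision detail is sleeping toward absolute deadlines (start + n × step) rather than a fixed interval per iteration, so timing error doesn't accumulate across the pattern. Everything here is illustrative: the `trigger` callback stands in for actually starting sample playback (e.g. via a rodio `Sink`).

```rust
use std::thread;
use std::time::{Duration, Instant};

/// Step through a pattern on a fixed tempo grid, invoking `trigger`
/// for each active (instrument, step) pair. Illustrative sketch only.
/// Rows are instruments and are assumed to have equal length.
fn run_sequencer(pattern: &[Vec<bool>], bpm: u64, trigger: &mut dyn FnMut(usize, usize)) {
    let step = Duration::from_millis(60_000 / (bpm * 4)); // 16th-note grid
    let steps = pattern.first().map_or(0, |row| row.len());
    let start = Instant::now();
    for s in 0..steps {
        for (inst, row) in pattern.iter().enumerate() {
            if row[s] {
                trigger(inst, s); // start playback of instrument `inst`
            }
        }
        // Sleep toward an absolute deadline so drift doesn't accumulate.
        let deadline = start + step * (s as u32 + 1);
        if let Some(wait) = deadline.checked_duration_since(Instant::now()) {
            thread::sleep(wait);
        }
    }
}

fn main() {
    // Two instruments, four steps: kick on steps 0 and 2, hat on every step.
    let pattern = vec![
        vec![true, false, true, false],
        vec![true, true, true, true],
    ];
    let mut hits = Vec::new();
    run_sequencer(&pattern, 480, &mut |inst, s| hits.push((inst, s)));
    println!("{:?}", hits);
}
```

Even with absolute deadlines, `thread::sleep` only guarantees a minimum wait, which is one reason precise triggering may ultimately need to happen inside the audio callback rather than on a control thread.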