
Conversation

@roderickvd
Member

Add buffer_size() method to Stream trait that returns the number of frames passed to each data callback invocation (actual size or upper limit depending on platform).

AAudio improvements:

  • BufferSize::Default now explicitly configures using optimal burst size from AudioManager, following Android low-latency audio best practices
  • buffer_size() query falls back to burst size if frames_per_data_callback was not explicitly set
  • Refactored buffer configuration to eliminate code duplication

Addresses #1042
Relates to #964, #942
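
For illustration, the added API might look roughly like this; the `Option<FrameCount>` return and the exact shape of the method are my guesses, not lifted from the diff:

```rust
// Illustrative sketch only — the actual signature in the PR may differ.
use cpal::FrameCount; // cpal's frame-count alias (u32)

pub trait StreamTrait {
    // ...existing methods such as play() and pause()...

    /// Number of frames passed to each data callback invocation.
    /// Depending on the platform this is the actual size or an upper limit.
    fn buffer_size(&self) -> Option<FrameCount>;
}
```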

@roderickvd
Member Author

@will3942 @marcpabst @j-n-f as you've requested or contributed to similar features in the past, would you give your view?

@pgaskin

pgaskin commented Dec 28, 2025

Note that the burst size can be smaller than the actual buffer size, and on some devices (e.g., the Pixel 9), it can be low enough that using it will cause underruns in many cases.

cf. cmus/cmus#1386

@roderickvd
Member Author

> Note that the burst size can be smaller than the actual buffer size, and on some devices (e.g., the Pixel 9), it can be low enough that using it will cause underruns in many cases.
>
> cf. cmus/cmus#1386

Thanks for that nudge. With this default buffer thing, it seems that no matter how you try to do it right, you never do it right.

While I can see an 80 ms minimum working for many scenarios, setting minimums is precisely what we've been trying to steer away from since v0.17, because in the end they are rather arbitrary and use-case dependent.

It seems that the best thing to do would be to dynamically optimize the buffer by monitoring underruns, as described here: https://developer.android.com/ndk/guides/audio/aaudio/aaudio#tuning-buffers (sketched below).

I'll save that for another PR - I think this PR won't make things on the Pixel 9 better or worse than what we already had. Can you confirm?
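
For the record, a minimal sketch of that underrun-driven tuning loop. The extern functions are the real AAudio NDK C API; the `Tuner` wrapper and where exactly cpal would call it are hypothetical:

```rust
use std::ffi::c_void;

type AAudioStream = c_void;

extern "C" {
    fn AAudioStream_getXRunCount(stream: *mut AAudioStream) -> i32;
    fn AAudioStream_getFramesPerBurst(stream: *mut AAudioStream) -> i32;
    fn AAudioStream_getBufferSizeInFrames(stream: *mut AAudioStream) -> i32;
    fn AAudioStream_getBufferCapacityInFrames(stream: *mut AAudioStream) -> i32;
    fn AAudioStream_setBufferSizeInFrames(stream: *mut AAudioStream, num_frames: i32) -> i32;
}

#[derive(Default)]
struct Tuner {
    previous_xruns: i32,
}

impl Tuner {
    /// Call periodically (e.g. once per data callback). If new underruns
    /// occurred since the last check, grow the active buffer by one burst,
    /// capped at the stream's capacity.
    unsafe fn maybe_grow(&mut self, stream: *mut AAudioStream) {
        let xruns = AAudioStream_getXRunCount(stream);
        if xruns > self.previous_xruns {
            self.previous_xruns = xruns;
            let burst = AAudioStream_getFramesPerBurst(stream);
            let current = AAudioStream_getBufferSizeInFrames(stream);
            let capacity = AAudioStream_getBufferCapacityInFrames(stream);
            let target = (current + burst).min(capacity);
            // Returns the size actually set (or a negative error); ignored here.
            let _ = AAudioStream_setBufferSizeInFrames(stream, target);
        }
    }
}
```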

@roderickvd force-pushed the feat/query-buffer-size branch from b3270f3 to f9eb872 on December 29, 2025 at 13:41
@will3942
Contributor

@roderickvd Will this set the same buffer size as AAudio would default to if we don't specify a buffer size (as in the latest master)? If so then I'm happy, but I can't test until 12 Jan.

Agreed that we need to adjust in response to underruns, or give the developer the ability to. At the moment we've measured an acceptable buffer size for our application in production and set that manually.
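
For reference, pinning a measured buffer size manually with cpal's existing API looks like this; the device choice and the 1024-frame figure are placeholders, not a recommendation:

```rust
use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};
use cpal::{BufferSize, SampleRate, StreamConfig};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let host = cpal::default_host();
    let device = host.default_output_device().expect("no output device");

    // A buffer size measured as acceptable for the application,
    // set explicitly instead of relying on BufferSize::Default.
    let config = StreamConfig {
        channels: 2,
        sample_rate: SampleRate(48_000),
        buffer_size: BufferSize::Fixed(1024),
    };

    let stream = device.build_output_stream(
        &config,
        move |data: &mut [f32], _| {
            // Fill `data` with samples; silence here for brevity.
            data.iter_mut().for_each(|s| *s = 0.0);
        },
        |err| eprintln!("stream error: {err}"),
        None, // timeout
    )?;
    stream.play()?;
    // Keep `stream` alive for as long as playback should run.
    Ok(())
}
```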

@roderickvd
Member Author

> @roderickvd Will this set the same buffer size as AAudio would default to if we don't specify a buffer size (as in the latest master)? If so then I'm happy, but I can't test until 12 Jan.

That's something I'd need someone to confirm for me as well.

@pgaskin

pgaskin commented Dec 30, 2025

> Will this set the same buffer size as AAudio would default to if we don't specify a buffer size

Not entirely sure. In cmus, I ended up taking a shortcut and just choosing a relatively high buffer size based on a fixed period, to reduce CPU usage and avoid underruns (especially as newer devices are capable of lower-latency streams), since I wasn't concerned about latency.

When testing AAudio with cmus, the default buffer size on the Pixel 9 was 3 times the burst size, but on the Pixel 8 it was 2 times the burst size.

I haven't fully followed how it's set in AOSP, but some random thoughts:

  • In AAudioServiceEndpointPlay, it defaults to getAAudioMixerBurstCount or a hardcoded value of 2. This is multiplied by the burst size from the HAL to set the buffer size.
  • In AudioSystem, getAAudioMixerBurstCount ends up calling getAAudioMixerBurstCountFromSystemProperty, which uses aaudio.mixer_bursts (set in the vendor props on some devices) or defaults to 2 (see the worked example below).
  • Although not directly relevant, the logic used in the OpenSLES implementation might be interesting to look at.

You're getting the value of PROPERTY_OUTPUT_FRAMES_PER_BUFFER, which comes from getPrimaryOutputFrameCount. You can see the HAL frame count in dumpsys media.audio_flinger under "HAL frame count". I think a default of 256 is a bit low, as my devices are all at least double that.
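
To make the arithmetic above concrete, a worked example; the burst size and sample rate here are illustrative, not measured:

```rust
// AOSP's default buffer size is mixer_bursts * frames_per_burst.
fn main() {
    let frames_per_burst = 256; // e.g. a HAL burst of 256 frames
    let mixer_bursts = 2; // aaudio.mixer_bursts default
    let buffer_frames = mixer_bursts * frames_per_burst; // 512 frames
    let latency_ms = buffer_frames as f64 / 48_000.0 * 1000.0;
    println!("{buffer_frames} frames ≈ {latency_ms:.1} ms at 48 kHz"); // ≈ 10.7 ms
}
```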

@roderickvd
Member Author

roderickvd commented Jan 2, 2026

>> Will this set the same buffer size as AAudio would default to if we don't specify a buffer size
>
> Not entirely sure.

You may be right. Your pushback makes me think that we probably cannot reliably make assumptions about what devices do when we set nothing at all.

> When testing AAudio with cmus, the default buffer size on the Pixel 9 was 3 times the burst size, but on the Pixel 8 it was 2 times the burst size.
>
> I haven't fully followed how it's set in AOSP, but some random thoughts: [...]

These are very valuable resources, thank you very much for pointing to these lines so specifically. The key pattern seems to be that nobody actually calls setFramesPerDataCallback at all - they only set the buffer capacity to mixer_bursts * framesPerBurst and let AAudio decide the optimal callback size.

Google seems to set a large capacity but start with a minimal active size, then grow it on underruns. So our current implementation is likely over-constraining AAudio by explicitly setting the callback size, when we should just be setting the capacity and staying out of the way.

This seems to be confirmed over at https://developer.android.com/ndk/reference/group/audio:

> For the lowest possible latency, do not call this function. AAudio will then call the dataProc callback function with whatever size is optimal. That size may vary from one callback to another.
>
> Only use this function if the application requires a specific number of frames for processing. The application might, for example, be using an FFT that requires a specific power-of-two sized buffer.
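
A sketch of that capacity-only configuration: the extern function is the real AAudio NDK builder call, while the surrounding glue is illustrative:

```rust
use std::ffi::c_void;

type AAudioStreamBuilder = c_void;

extern "C" {
    fn AAudioStreamBuilder_setBufferCapacityInFrames(
        builder: *mut AAudioStreamBuilder,
        num_frames: i32,
    );
}

/// Request enough capacity for later tuning, but deliberately do NOT call
/// AAudioStreamBuilder_setFramesPerDataCallback, so AAudio is free to pick
/// (and vary) the optimal callback size itself.
unsafe fn configure_capacity(
    builder: *mut AAudioStreamBuilder,
    mixer_bursts: i32,
    frames_per_burst: i32,
) {
    AAudioStreamBuilder_setBufferCapacityInFrames(builder, mixer_bursts * frames_per_burst);
}
```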

So I implemented the equivalent in 5656daa; what do you think?

On top of this, we'd still want the dynamic buffer tuning.

> I think a default of 256 is a bit low, as my devices are all at least double that.

Yeah, I know. I took that 256 default from https://developer.android.com/ndk/guides/audio/audio-latency#buffer-size. Oboe uses 16 as a floor.

@roderickvd force-pushed the feat/query-buffer-size branch 2 times, most recently from f3fcca3 to 5656daa on January 2, 2026 at 16:33
@roderickvd
Member Author

Now that I'm working on a tuning implementation, I'm realizing that we should be setting buffer size instead of capacity.
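
(For context, the distinction in AAudio terms: AAudioStreamBuilder_setBufferCapacityInFrames fixes the maximum at open time, while AAudioStream_setBufferSizeInFrames adjusts the active threshold on a running stream - which is what an underrun-driven tuner needs to touch.)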

@roderickvd force-pushed the feat/query-buffer-size branch from 90cfd27 to 3cb1c06 on January 2, 2026 at 23:04
Replace `Stream` enum with a struct containing `inner: Arc<Mutex<AudioStream>>`
and `direction: DeviceDirection` fields. This eliminates code duplication while
maintaining the same functionality.
@roderickvd force-pushed the feat/query-buffer-size branch from f4d1b0e to 400a8ed on January 3, 2026 at 11:07