Microphone stream package recommendations?

I’m currently working on an app for iOS/macOS that acts as a music visualizer based on sound picked up by the microphone.

However I can’t find a package that reliably handles microphone input on Apple platforms specifically. I’ve tried both record and mic_stream, but neither of them allows me to reliably set/get the sampling rate of the input. Both have open issues regarding this, but they haven’t seen much work done to that end.

Does anyone have a tested, reliable package for this use?

1 Like

Hi @kerberjg, I am working on the flutter_recorder plugin. It is available on all platforms and uses the miniaudio C library under the hood.
It is possible to choose the input device, sample rate, and number of channels. It is also possible to get audio and FFT data (by polling, not as a stream); a Stream of them is missing for now.
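
For context, here is a rough sketch of how choosing the format and polling FFT data could look with the plugin. The names below (`Recorder.instance`, `init`, `getFft`, and their parameters) are indicative only and may differ from the published API, so check the package docs before copying:

```dart
import 'dart:async';
// Hypothetical import; the actual package path may differ.
import 'package:flutter_recorder/flutter_recorder.dart';

Future<void> startVisualizerCapture() async {
  final recorder = Recorder.instance;

  // Pick sample rate (and, per the post, device and channel count) up front.
  await recorder.init(sampleRate: 44100);
  recorder.start();

  // No Stream yet: poll FFT data on a timer and feed it to the UI.
  Timer.periodic(const Duration(milliseconds: 16), (_) {
    final fft = recorder.getFft(); // magnitude bins for the visualizer
    // ...update visualizer state with `fft`...
  });
}
```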

The plugin can record to WAV and, as an option, it can skip silence while recording.

I haven’t tested it thoroughly on macOS and iOS, so if you would like to do that I’d be grateful!

I think that should still be enough for your visualizer. I will publish it on pub.dev soon.

7 Likes

Hi, that looks nice! Very cool that you have FFT implemented there as well :hugs:
I could look into testing the macOS side, but first what would be really useful is a Stream of sample data, rather than having to poll for it manually.
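
Until the plugin exposes one natively, a polling-based bridge can fake a Stream on the Dart side. A minimal sketch, where the `poll` function and interval are placeholders for whatever getter and refresh rate you use:

```dart
import 'dart:async';

/// Wraps a synchronous poll function into a broadcast Stream by sampling
/// it on a fixed interval. The timer only runs while there are listeners.
Stream<T> pollAsStream<T>(T Function() poll, Duration interval) {
  late final StreamController<T> controller;
  Timer? timer;
  controller = StreamController<T>.broadcast(
    onListen: () {
      timer = Timer.periodic(interval, (_) {
        if (!controller.isClosed) controller.add(poll());
      });
    },
    onCancel: () => timer?.cancel(),
  );
  return controller.stream;
}
```

Usage would then look like `pollAsStream(() => recorder.getFft(), const Duration(milliseconds: 16)).listen(updateVisualizer);` (with `getFft` standing in for the plugin's actual getter). Note this samples rather than guarantees delivery of every chunk, so it is a stopgap, not a replacement for a native stream.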

If that is solved I’ll be more than happy to contribute! :blue_heart:

3 Likes

I’d love to add streaming of audio data. A lovely use case would be to “talk” to a Text-to-Speech AI (send PCM data to an API service and get back the PCM data for the reply) or a voice-chat GPT.
What would your use case be?

I’m working on a music-based light show app! I have very tight latency limits to fit within for this to work.
Your use case sounds interesting too!

3 Likes

Streaming audio data from miniaudio into Dart could be tricky. A while back I tried to get audio DSP working in Dart (I was doing a music synth project, so I needed the other direction: generate PCM samples in Dart and provide them to the miniaudio rendering callback). The roadblock I ran into was the lack of support for synchronous callbacks into Dart via FFI, plus the need to share memory between worker and main Isolates. At the time, Dart team members suggested one possibility: just have Dart access shared memory (via an FFI pointer) in both Isolates. I guess this could also work with miniaudio: its audio callback fills a ring buffer with incoming audio, and then even just the main Dart thread reads from that buffer and exposes it to the rest of your Dart code as a stream.
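
To make the ring-buffer scheme concrete, here is a minimal single-producer/single-consumer sketch. In the real setup the storage would be FFI-allocated native memory written by the miniaudio callback; a `Float32List` stands in for it here, and overruns simply drop the oldest samples:

```dart
import 'dart:typed_data';

/// Minimal SPSC ring buffer: the producer (native audio callback in the
/// real setup) writes samples, the consumer (main isolate) drains them.
class RingBuffer {
  RingBuffer(int capacity) : _data = Float32List(capacity);

  final Float32List _data;
  int _read = 0; // next index to read
  int _write = 0; // next index to write

  int get available => (_write - _read + _data.length) % _data.length;

  /// Producer side: append samples, dropping the oldest on overrun.
  void write(List<double> samples) {
    for (final s in samples) {
      _data[_write] = s;
      _write = (_write + 1) % _data.length;
      if (_write == _read) {
        _read = (_read + 1) % _data.length; // overrun: drop oldest
      }
    }
  }

  /// Consumer side: take up to [count] samples for the Dart stream.
  Float32List read(int count) {
    final n = count < available ? count : available;
    final out = Float32List(n);
    for (var i = 0; i < n; i++) {
      out[i] = _data[_read];
      _read = (_read + 1) % _data.length;
    }
    return out;
  }
}
```

A real shared-memory version would also need the read/write indices to be updated with atomic or otherwise synchronized accesses, since the producer runs on a native audio thread.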

1 Like

Hi @maks, nice to see you here too!

Given that I have a circular buffer ready under the hood as a fallback :), I am now trying to send the audio data directly from the mic audio thread to Dart and then into a Stream. I do not yet have a control system that verifies that all the chunks arrive, but listening to the result it sounds OK to me, even with a 44100 Hz, 2-channel, f32 format.
All this without using an Isolate, although one could easily be used; profiling it, I do not notice any performance degradation on Linux or Android (Samsung S20).

I am using the audio_stream branch for this, and it would be wonderful if someone would like to try it!

All this work was born to achieve a way to voice chat with an AI, i.e.:

  • listen to the mic and send in real-time the audio to the AI APIs
  • then, with flutter_soloud, use the BufferStream to receive and play the response while the data is still arriving (the BufferStream now supports buffering with auto play/pause)

That said, I agree that to have a DSP it would take more work and maybe some more low-level features on the Dart side. But it could be a starting point.

1 Like

I’m using the record package and it works well; IMO it’s decent if you pair it with an LLM that can analyze sound, and it’s fairly easy to use.

I’m calculating the amplitude because I need to transform the data and update the UI on the fly.

This is how I initialize it; I can’t share the whole thing because we have some proprietary code.

      final stream = await _recorder.startStream(
        RecordConfig(
          autoGain: true,
          noiseSuppress: true,
          // Enable echo cancellation on every platform except iOS.
          echoCancel: TargetPlatform.iOS != defaultTargetPlatform,
          encoder: AudioEncoder.pcm16bits, // stream emits 16-bit PCM chunks
          bitRate: bitRate,
          sampleRate: sampleRate,
          numChannels: channels,
        ),
      );
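
For the amplitude part mentioned above, here is a minimal sketch of one way to do it (this is my own illustration, not the poster's proprietary code), assuming the stream yields 16-bit little-endian PCM chunks as `Uint8List`, which is what `AudioEncoder.pcm16bits` produces:

```dart
import 'dart:math' as math;
import 'dart:typed_data';

/// Computes the RMS amplitude (roughly 0.0 to 1.0) of a chunk of
/// 16-bit little-endian PCM audio, e.g. one event from the stream.
double rmsAmplitude(Uint8List chunk) {
  // Reinterpret the bytes as signed 16-bit samples.
  final samples = chunk.buffer.asInt16List(
    chunk.offsetInBytes,
    chunk.lengthInBytes ~/ 2,
  );
  if (samples.isEmpty) return 0.0;
  var sumSquares = 0.0;
  for (final s in samples) {
    final x = s / 32768.0; // normalize to [-1.0, 1.0)
    sumSquares += x * x;
  }
  return math.sqrt(sumSquares / samples.length);
}
```

Hooked up to the stream above, it could drive the UI with something like `stream.listen((chunk) => setState(() => _level = rmsAmplitude(chunk)));`.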

Demo:

What OS are you running this on? I’m encountering a known bug on macOS where the format/sample rate settings are ignored.

iOS, Android & Web atm but I’ll give it a shot on macOS and let you know.

Awesome work @MarcoB ! I should have known you’d already be into that idea! :+1::+1:

Your approach sounds like exactly what I had in mind when I wrote that, brilliant stuff! Thanks so much for continuing to work on it. I’m mostly working in embedded microcontroller land at the moment, but I’m super excited to check out your work. I’ll pull your branch to try it out and will report back to let you know.

2 Likes

I have the same issue on my end so I guess it’s not usable on that platform yet.

Thank you for checking! Seems like I’ll have to just stick with iOS for testing directly since it’s the intended target platform anyway, at least for the time being.