I’ve been trying to stream audio chunks (`Uint8List`) from an OpenAI `tts-1` model in a Flutter Web app and play them in sequence. The idea is to progressively fetch and buffer the audio as it’s generated (e.g., from a TTS endpoint) and then play each buffered portion as soon as the previous one finishes.
What’s Happening:
- The first chunk plays successfully.
- Subsequent chunks fail to start, leading to errors like `DEMUXER_ERROR_COULD_NOT_OPEN` and `NotSupportedError`.
Errors:
```
Playing audio... PlayerState.completed
AudioPlayers Exception: AudioPlayerException(
  BytesSource(bytes: 7b444, mimeType: audio/mpeg),
  PlatformException(WebAudioError, Failed to set source. For troubleshooting, see https://github.com/bluefireteam/audioplayers/blob/main/troubleshooting.md,
  MediaError: DEMUXER_ERROR_COULD_NOT_OPEN: FFmpegDemuxer: open context failed (Code: 4), null)
Error setting audio source: PlatformException(WebAudioError, Failed to set source. For troubleshooting, see https://github.com/bluefireteam/audioplayers/blob/main/troubleshooting.md,
  MediaError: DEMUXER_ERROR_COULD_NOT_OPEN: FFmpegDemuxer: open context failed (Code: 4), null)
Error: PlatformException(WebAudioError, Failed to set source.
NotSupportedError: The element has no supported sources.
```
What I’ve Tried:
- audioplayers: Using `setSourceBytes` on each buffered chunk works for the first chunk but fails on subsequent chunks.
- just_audio: I attempted to use `just_audio`, but a streaming source is not available on the web.
- JS interop for streaming: On the web, I’m not using `http` or `dio` for fetching. Instead, I rely on the browser’s Fetch API via JS interop (`getReader()`) to continuously read chunks as they become available. These chunks are then added to a queue and played in sequence; a sketch of this fetch loop is below.
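Roughly, the fetch loop looks like this. It's a simplified sketch assuming a recent `package:web` / `dart:js_interop` (member types vary slightly by version), and `streamTts` is an illustrative stand-in for my actual `TTSServiceWeb.tts`, which is in the repo linked below:
```dart
import 'dart:convert';
import 'dart:js_interop';
import 'dart:typed_data';

import 'package:web/web.dart' as web;

/// Illustrative stand-in for TTSServiceWeb.tts: POSTs the request and
/// yields each chunk of the response body as it arrives.
Stream<Uint8List> streamTts(
    String url, String apiKey, Map<String, Object?> payload) async* {
  final response = await web.window
      .fetch(
        url.toJS,
        web.RequestInit(
          method: 'POST',
          headers: {
            'Authorization': 'Bearer $apiKey',
            'Content-Type': 'application/json',
          }.jsify()!,
          body: jsonEncode(payload).toJS,
        ),
      )
      .toDart;

  // response.body is a ReadableStream; pull chunks until the stream ends.
  final reader =
      response.body!.getReader() as web.ReadableStreamDefaultReader;
  while (true) {
    final result = await reader.read().toDart;
    if (result.done) break;
    yield (result.value! as JSUint8Array).toDart;
  }
}
```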
Relevant Code:
```dart
import 'dart:async';
import 'dart:collection';
import 'dart:typed_data';

import 'package:example/audio_player_controller.dart';
import 'package:example/tts_service_web.dart';
import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      home: AudioStreamScreen(),
    );
  }
}

class AudioStreamScreen extends StatefulWidget {
  const AudioStreamScreen({super.key});

  @override
  State<AudioStreamScreen> createState() => _AudioStreamScreenState();
}

class _AudioStreamScreenState extends State<AudioStreamScreen> {
  // TODO: Add your API key
  final openAIKey = 'YOUR_OPENAI_API_KEY';

  final Queue<Uint8List> _bufferQueue = Queue();
  final BytesBuilder _currentBuffer = BytesBuilder();
  bool _isPlaying = false;
  final int _bufferSize = 64 * 1024; // Adjust this as needed

  AudioPlayerController? _controller;

  @override
  void initState() {
    super.initState();
    _controller = AudioPlayerController(onError: (e, s) {
      debugPrint('Error: $e');
    });
  }

  @override
  void dispose() {
    _controller?.dispose();
    super.dispose();
  }

  Future<void> _fetchAndPlayAudio() async {
    final stream = TTSServiceWeb(openAIKey).tts(
      'https://api.openai.com/v1/audio/speech',
      {
        'model': 'tts-1',
        'voice': 'alloy',
        'speed': 1,
        'input': 'Lorem ipsum ...',
        'response_format': 'opus',
        'stream': true,
      },
    );

    try {
      await for (final chunk in stream) {
        _addToBuffer(chunk);
        if (_currentBuffer.length >= _bufferSize) {
          debugPrint(
              'New Buffer: ${_currentBuffer.toBytes().lengthInBytes} / $_bufferSize');
          _flushBufferToQueue();
        }
        debugPrint('Last chunk: ${chunk.lengthInBytes / 1024} KB');
        _playNextInQueue();
      }
      _flushBufferToQueue(finalFlush: true);
    } catch (e) {
      debugPrint('Error fetching audio: $e');
    }
  }

  void _addToBuffer(Uint8List chunk) {
    _currentBuffer.add(chunk);
  }

  void _flushBufferToQueue({bool finalFlush = false}) {
    if (_currentBuffer.isNotEmpty) {
      _bufferQueue.add(_currentBuffer.toBytes());
      _currentBuffer.clear();
    }
    if (finalFlush) {
      _playNextInQueue();
    }
  }

  Future<void> _playNextInQueue() async {
    if (_isPlaying || _bufferQueue.isEmpty) return;
    final nextChunk = _bufferQueue.removeFirst();
    _isPlaying = true;
    try {
      debugPrint('Playing chunk: ${nextChunk.lengthInBytes / 1024} KB');
      await _controller?.play(nextChunk);
    } catch (e) {
      debugPrint('Error playing chunk: $e');
    } finally {
      _isPlaying = false;
      if (_bufferQueue.isNotEmpty) {
        _playNextInQueue();
      }
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('Audio Stream Example'),
      ),
      body: Center(
        child: ElevatedButton(
          onPressed: _fetchAndPlayAudio,
          child: const Text('Play Audio'),
        ),
      ),
    );
  }
}
```
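`AudioPlayerController` isn't shown above; it's a thin wrapper around `audioplayers`. The real one is in the repo linked below, but here is a minimal sketch of the part that matters (assuming an audioplayers version where `setSourceBytes` accepts a `mimeType`, as the logs above indicate):
```dart
import 'dart:async';
import 'dart:typed_data';

import 'package:audioplayers/audioplayers.dart';

class AudioPlayerController {
  AudioPlayerController({required this.onError});

  final void Function(Object error, StackTrace stack) onError;
  final AudioPlayer _player = AudioPlayer();

  /// Sets the chunk as the source, starts playback, and completes
  /// once the player reports PlayerState.completed.
  Future<void> play(Uint8List bytes) async {
    try {
      // mimeType matches the BytesSource shown in the logs above.
      await _player.setSourceBytes(bytes, mimeType: 'audio/mpeg');
      await _player.resume();
      await _player.onPlayerComplete.first;
    } catch (e, s) {
      onError(e, s);
      rethrow;
    }
  }

  Future<void> dispose() => _player.dispose();
}
```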
Video Demo: Vimeo link
GitHub Repository: flutter_audio_streaming_prototype
Questions:
- Has anyone successfully implemented real-time streaming and playback of audio chunks on Flutter Web?
- Are there alternative libraries or approaches that can handle a continuous stream of audio data on both web & mobile platforms?
Any insights, suggestions, or code examples would be greatly appreciated!