I am currently working on a project that consists of a chart showing audio levels picked up by another device. The charts are made through the flot API, and I have zooming and selecting capabilities so that I can select a time range on the chart and zoom into that region. My next step is to allow the user to listen to the audio that corresponds to that region of the chart. The audio files are stored on a shared server as individual, minute-by-minute RAW data files. I have no experience with using audio in a webpage and am currently struggling with how to complete this task. As far as I have found, the <audio> HTML tag is incapable of processing RAW data files for playback. I have been looking into the Web Audio API but am confused about how it works and how to implement it.
My first question is: how do I decode RAW audio files from a server and play them back on an HTML page for a client to listen to?
My second task is to grab all of the audio files corresponding to the selected range and combine them into one audio output. For example, if the client selected a time range of 1:00pm - 1:50pm, I would need to access 50 RAW audio files, each a minute in length, and combine them to produce a single playback. So my second question is whether anyone knows a way to accomplish this smoothly.
Thank you for any help anyone has to offer!
asked May 29, 2020 by CollegeStudent; edited May 29, 2020 by anthumchris
2 Answers
RAW files are already decoded PCM audio, but Audio elements can't play PCM directly. You'll need to prepend a RIFF/WAV header to the PCM bytes first. Multiple RAW files can be combined by concatenating their bytes and setting the total sample/frame length in the header. Note that 50 minutes of decoded audio will take up a lot of memory in the browser, so keep an eye on that and measure/optimize accordingly.
initAudio()

async function initAudio() {
  // specify your file and its audio properties
  const url = 'https://dev.anthum./audio-worklet/audio/decoded-left.raw'
  const sampleRate = 48000
  const numChannels = 1  // mono or stereo
  const isFloat = true   // integer or floating point
  const buffer = await (await fetch(url)).arrayBuffer()

  // create WAV header
  const [type, format] = isFloat ? [Float32Array, 3] : [Uint8Array, 1]
  const wavHeader = new Uint8Array(buildWaveHeader({
    numFrames: buffer.byteLength / type.BYTES_PER_ELEMENT,
    bytesPerSample: type.BYTES_PER_ELEMENT,
    sampleRate,
    numChannels,
    format
  }))

  // create WAV file with header and downloaded PCM audio
  const wavBytes = new Uint8Array(wavHeader.length + buffer.byteLength)
  wavBytes.set(wavHeader, 0)
  wavBytes.set(new Uint8Array(buffer), wavHeader.length)

  // show audio player
  const audio = document.querySelector('audio')
  const blob = new Blob([wavBytes], { type: 'audio/wav' })
  audio.src = URL.createObjectURL(blob)
  document.querySelector('#loading').hidden = true
  audio.hidden = false
}
// adapted from https://gist.github./also/900023
function buildWaveHeader(opts) {
  const numFrames = opts.numFrames;
  const numChannels = opts.numChannels || 2;
  const sampleRate = opts.sampleRate || 44100;
  const bytesPerSample = opts.bytesPerSample || 2;
  const format = opts.format;
  const blockAlign = numChannels * bytesPerSample;
  const byteRate = sampleRate * blockAlign;
  const dataSize = numFrames * blockAlign;
  const buffer = new ArrayBuffer(44);
  const dv = new DataView(buffer);
  let p = 0;

  function writeString(s) {
    for (let i = 0; i < s.length; i++) {
      dv.setUint8(p + i, s.charCodeAt(i));
    }
    p += s.length;
  }

  function writeUint32(d) {
    dv.setUint32(p, d, true);
    p += 4;
  }

  function writeUint16(d) {
    dv.setUint16(p, d, true);
    p += 2;
  }

  writeString('RIFF');              // ChunkID
  writeUint32(dataSize + 36);       // ChunkSize
  writeString('WAVE');              // Format
  writeString('fmt ');              // Subchunk1ID
  writeUint32(16);                  // Subchunk1Size
  writeUint16(format);              // AudioFormat
  writeUint16(numChannels);         // NumChannels
  writeUint32(sampleRate);          // SampleRate
  writeUint32(byteRate);            // ByteRate
  writeUint16(blockAlign);          // BlockAlign
  writeUint16(bytesPerSample * 8);  // BitsPerSample
  writeString('data');              // Subchunk2ID
  writeUint32(dataSize);            // Subchunk2Size

  return buffer;
}
body {
  text-align: center;
  padding-top: 1rem;
}

[hidden] {
  display: none;
}

audio {
  display: inline-block;
}
<div id="loading">Loading...</div>
<audio hidden controls></audio>
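For the multi-file case, the same header-building idea extends naturally: concatenate the PCM bytes of all the minute files in chronological order and write a single header whose data size covers the whole run. A minimal sketch, assuming 32-bit float mono PCM at 48 kHz as in the snippet above (the function name and defaults are illustrative; adjust them to match your recordings):

```javascript
// Combine several RAW (headerless PCM) buffers into one playable WAV file.
// `buffers` is an array of ArrayBuffers in chronological order.
function combineRawToWav(buffers, { sampleRate = 48000, numChannels = 1, bytesPerSample = 4, format = 3 } = {}) {
  const dataSize = buffers.reduce((n, b) => n + b.byteLength, 0)
  const blockAlign = numChannels * bytesPerSample
  const wav = new Uint8Array(44 + dataSize)
  const dv = new DataView(wav.buffer)
  let p = 0
  const str = s => { for (let i = 0; i < s.length; i++) dv.setUint8(p++, s.charCodeAt(i)) }
  const u32 = d => { dv.setUint32(p, d, true); p += 4 }
  const u16 = d => { dv.setUint16(p, d, true); p += 2 }

  // standard 44-byte RIFF/WAV header covering the combined data size
  str('RIFF'); u32(dataSize + 36); str('WAVE')
  str('fmt '); u32(16); u16(format); u16(numChannels)
  u32(sampleRate); u32(sampleRate * blockAlign); u16(blockAlign); u16(bytesPerSample * 8)
  str('data'); u32(dataSize)

  // append each PCM chunk in order, right after the header
  let offset = 44
  for (const b of buffers) {
    wav.set(new Uint8Array(b), offset)
    offset += b.byteLength
  }
  return wav
}
```

You would fetch each minute file for the selected range, pass their array buffers to this function, then wrap the result in a `Blob` with type `audio/wav` and assign it to `audio.src` exactly as in the single-file example.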
An alternative that might be a bit easier is to use Web Audio directly: do basically the same as above, but skip the Audio element. If necessary, convert the raw audio data to a float array, say f, and do something like this:
// Only need to do this once when setting up the page
let c = new AudioContext();
// Do this for each clip:
let b = new AudioBuffer({length: f.length, sampleRate: c.sampleRate});
b.copyToChannel(f, 0);
let s = new AudioBufferSourceNode(c, {buffer: b});
s.connect(c.destination);
s.start();
This is a rough sketch of how to use Web Audio for playback. It can be refined to reuse AudioBuffers, and you have to take care to call s.start() with the right time values. But I hope this is enough to get you started. If not, please ask more questions.
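On the "right time values" point: to play the minute-long clips back to back without gaps, each AudioBufferSourceNode can be scheduled at an absolute time on the AudioContext clock rather than chained with onended callbacks. A sketch under the assumption that each element of `clips` is already a mono Float32Array (the helper name and variables are illustrative):

```javascript
// Compute absolute start times so that each clip begins exactly where
// the previous one ends on the AudioContext clock.
function clipStartTimes(baseTime, clipLengths, sampleRate) {
  const times = []
  let t = baseTime
  for (const len of clipLengths) {
    times.push(t)
    t += len / sampleRate  // clip duration in seconds
  }
  return times
}

// Browser-only playback sketch (guarded so the helper stays usable elsewhere)
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext()
  const clips = []  // fill with one Float32Array per minute file
  const starts = clipStartTimes(ctx.currentTime + 0.1, clips.map(c => c.length), ctx.sampleRate)
  clips.forEach((f, i) => {
    const buf = new AudioBuffer({ length: f.length, sampleRate: ctx.sampleRate })
    buf.copyToChannel(f, 0)
    const src = new AudioBufferSourceNode(ctx, { buffer: buf })
    src.connect(ctx.destination)
    src.start(starts[i])  // absolute time on the context clock
  })
}
```

The small lead-in offset (0.1 s here) just gives the first clip time to be scheduled before it is due to play.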