javascript - scriptNode.onaudioprocess is deprecated, any alternative? - Stack Overflow
I want to get the audio buffer while talking. I used the method below to detect it, but I get a message that onaudioprocess is deprecated and is not fired. Is there an alternative to it, with an example?
audioContext = new AudioContext({ sampleRate: 16000 });
scriptNode = (audioContext.createScriptProcessor || audioContext.createJavaScriptNode).call(audioContext, 1024, 1, 1);

scriptNode.onaudioprocess = function (audioEvent) {
    if (recording) {
        input = audioEvent.inputBuffer.getChannelData(0);

        // convert float audio data to 16-bit PCM
        var buffer = new ArrayBuffer(input.length * 2);
        var output = new DataView(buffer);

        for (var i = 0, offset = 0; i < input.length; i++, offset += 2) {
            var s = Math.max(-1, Math.min(1, input[i]));
            output.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
        }

        ws.send(buffer);
    }
};
- How are you capturing your audio? Are you intending to capture the sound from your microphone and send it to a websocket? – Emiel Zuurbier Commented Dec 25, 2020 at 10:49
- @EmielZuurbier yes, it's from my microphone to a websocket. – developer Commented Dec 25, 2020 at 12:32
- Could you give some feedback on the answer below? – Emiel Zuurbier Commented Jan 8, 2021 at 22:33
2 Answers
Sidenote: Although my earlier answer did help some people, it didn't provide an alternative to the deprecated onaudioprocess event and the ScriptProcessorNode interface. This answer should provide that alternative.
The alternative is to use Audio Worklets, which enable us to create custom audio processing nodes that can be used like a regular AudioNode.
The AudioWorkletNode interface of the Web Audio API represents a base class for a user-defined AudioNode, which can be connected to an audio routing graph along with other nodes. It has an associated AudioWorkletProcessor, which does the actual audio processing in a Web Audio rendering thread.
It works by extending the AudioWorkletProcessor class and providing the mandatory process method. The process method receives the inputs, the outputs, and the parameters declared in the static parameterDescriptors getter.
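For completeness, here is a minimal sketch of such a getter. The processor below uses port messages for the recording flag instead, so this variant is optional; the registration name 'recording-param-example' and the parameter name 'recording' are only examples:

registerProcessor('recording-param-example', class extends AudioWorkletProcessor {
    // A sketch: exposing the recording flag as a custom AudioParam
    // instead of the port messages used in the processor below.
    static get parameterDescriptors() {
        return [{ name: 'recording', defaultValue: 0, minValue: 0, maxValue: 1 }];
    }

    process(inputs, outputs, parameters) {
        // parameters.recording is a Float32Array with 1 or 128 values
        // per render quantum.
        const isRecording = parameters.recording[0] === 1;
        // ... same PCM conversion logic as in the processor below ...
        return true;
    }
});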
Inside process you can insert the same logic as in the onaudioprocess callback, but you do have to make some modifications for it to work properly.
One catch of using worklets is that you have to include the script as a separate file through the worklet interface. This means that a dependency like the ws variable can't simply be handed to the processor: the WebSocket API isn't available inside the worklet's global scope, and the processor runs on the audio rendering thread, away from the main thread. Instead, every AudioWorkletNode and AudioWorkletProcessor pair shares a MessagePort, so the processor posts each converted chunk through its port and the main thread sends it over the socket.
Note: the process method needs to return a boolean to let the browser know whether the audio node should be kept alive.
// buffer-detector.js
registerProcessor('buffer-detector', class extends AudioWorkletProcessor {
    #isRecording = false;

    constructor() {
        super();

        // The main thread toggles the recording flag through the port.
        this.port.onmessage = ({ data }) => {
            if (data && typeof data.recording === 'boolean') {
                this.#isRecording = data.recording;
            }
        };
    }

    process(inputs, outputs, parameters) {
        // First input, first channel. Worklets process 128 frames per call.
        const samples = inputs[0][0];

        if (this.#isRecording && samples) {
            // Convert float audio data to 16-bit PCM.
            const buffer = new ArrayBuffer(samples.length * 2);
            const view = new DataView(buffer);

            for (let i = 0, offset = 0; i < samples.length; i++, offset += 2) {
                const s = Math.max(-1, Math.min(1, samples[i]));
                view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
            }

            // Hand the chunk to the main thread; transferring avoids a copy.
            this.port.postMessage(buffer, [buffer]);
        }

        // Keep the node alive.
        return true;
    }
});
Now all we have to do is include the worklet in your script and create an instance of the node. We can do this with the addModule method that exists on the BaseAudioContext.audioWorklet property.
Important: Adding the module only works in secure (HTTPS) contexts.
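If you want to fail fast rather than run into a cryptic error, a quick guard along these lines could help; a minimal sketch:

// A sketch: detect up front whether Audio Worklets can be used at all,
// e.g. on plain-HTTP pages or in older browsers.
if (!window.isSecureContext || !('audioWorklet' in AudioContext.prototype)) {
    console.warn('AudioWorklet is unavailable; a fallback is needed.');
}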
When the module has been added successfully, create the new node with the AudioWorkletNode constructor. Hook up the port's message handler to forward the PCM chunks to the WebSocket, post the recording flag through the port, and you're good to go.
const ws = new WebSocket('ws://...');
// Match the 16 kHz sample rate from the question.
const audioContext = new AudioContext({ sampleRate: 16000 });

(async () => {
    try {
        // Capture the microphone.
        const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
        const source = new MediaStreamAudioSourceNode(audioContext, {
            mediaStream: stream
        });

        // Register the worklet.
        await audioContext.audioWorklet.addModule('buffer-detector.js');

        // Create our custom node.
        const bufferDetectorNode = new AudioWorkletNode(audioContext, 'buffer-detector');

        // Forward each PCM chunk from the processor to the WebSocket.
        bufferDetectorNode.port.onmessage = ({ data }) => {
            if (ws.readyState === WebSocket.OPEN) {
                ws.send(data);
            }
        };

        // Start recording.
        bufferDetectorNode.port.postMessage({ recording: true });

        // Connect the node.
        source.connect(bufferDetectorNode);
    } catch (error) {
        console.error(error);
    }
})();
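Because the recording flag lives inside the processor, pausing and resuming is just another message through the port; for example:

// Pause streaming without tearing down the audio graph.
bufferDetectorNode.port.postMessage({ recording: false });

// Resume streaming later.
bufferDetectorNode.port.postMessage({ recording: true });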
With the MediaStream Recording API and the MediaDevices.getUserMedia() method you're able to capture audio from your microphone and stream it into a recorder. The recorder can then send Blob objects through the WebSocket whenever its dataavailable event fires.
The function below creates a stream and passes it to a MediaRecorder instance. That instance records your microphone audio and is able to send it to your WebSocket. The MediaRecorder instance is returned so you can control the recorder.
async function streamMicrophoneAudioToSocket(ws) {
    let stream;

    const constraints = { video: false, audio: true };

    try {
        stream = await navigator.mediaDevices.getUserMedia(constraints);
    } catch (error) {
        throw new Error(`
            MediaDevices.getUserMedia() threw an error.
            Stream did not open.
            ${error.name} -
            ${error.message}
        `);
    }

    const recorder = new MediaRecorder(stream);

    recorder.addEventListener('dataavailable', ({ data }) => {
        ws.send(data);
    });

    // Emit a Blob of recorded audio every 250 ms. Without a timeslice,
    // dataavailable would only fire once, when the recorder is stopped.
    recorder.start(250);

    return recorder;
}
That way you can also stop recording whenever you'd like by calling the stop() method on the recorder.
(async () => {
    const ws = new WebSocket('ws://yoururl.');
    const recorder = await streamMicrophoneAudioToSocket(ws);

    // Stop recording on the first click anywhere in the document.
    document.addEventListener('click', () => {
        recorder.stop();
    });
})();
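To also release the microphone (so the browser's recording indicator disappears), stop the tracks of the captured stream as well. A small sketch using the recorder returned above; the helper name is just an example:

function stopRecorderAndMicrophone(recorder) {
    // Stop the recorder; this fires one final dataavailable event.
    recorder.stop();

    // Stop every track on the captured stream to release the microphone.
    recorder.stream.getTracks().forEach(track => track.stop());
}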