The bug preventing getting microphone input per http://code.google.com/p/chromium/issues/detail?id=112367 is now fixed in Chrome Canary. That part does seem to be working: I can assign the mic input to an audio element and hear the result through the speakers.
But I'd like to connect an analyser node in order to do an FFT. The analyser node works fine if I set the audio source to a local file. The problem is that when it's connected to the mic audio stream, the analyser node just returns the base value, as if it had no audio stream at all. (It's -100 over and over again, if you're curious.)
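For reference, -100 dB is an AnalyserNode's default minDecibels, the floor that getFloatFrequencyData reports when no signal is reaching the node, which you can confirm against the analyser from the code below:
console.log(analyser.minDecibels); // -100 by default; silence or a disconnected input reads as this floor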
Anyone know what's up? Is it not implemented yet? Is this a Chrome bug? I'm running 26.0.1377.0 on Windows 7, have the getUserMedia flag enabled, and am serving through localhost via Python's SimpleHTTPServer so the page can request permissions.
Code:
var aCtx = new webkitAudioContext();
var analyser = aCtx.createAnalyser();
var audio = document.getElementById('audio'); // the <audio> element on the page

if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        // audio.src = "stupid.wav"
        audio.src = window.URL.createObjectURL(stream);
    }, onFailure);
}

$('#audio').on("loadeddata", function() {
    source = aCtx.createMediaElementSource(audio);
    source.connect(analyser);
    analyser.connect(aCtx.destination);
    process();
});
Again, if I set audio.src to the commented-out local file, it works, but with the microphone it does not. process() contains:
var FFTData = new Float32Array(analyser.frequencyBinCount);
analyser.getFloatFrequencyData(FFTData);
console.log(FFTData[0]);
I've also tried using createMediaStreamSource and bypassing the audio element, as in example 4 of https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/webrtc-integration.html. Also unsuccessful. :(
if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        var microphone = context.createMediaStreamSource(stream);
        microphone.connect(analyser);
        analyser.connect(aCtx.destination);
        process();
    }, onFailure);
}
I imagine it might be possible to write the MediaStream to a buffer and then use dsp.js or something to do the FFT (a rough sketch follows), but I wanted to check here first before I go down that road.
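For what it's worth, here is that fallback route sketched out, tapping raw samples with a ScriptProcessorNode (createScriptProcessor; older builds called it createJavaScriptNode) and reusing the microphone node from the snippet above. The FFT call itself is left as a placeholder:
var tap = aCtx.createScriptProcessor(2048, 1, 1); // bufferSize, input channels, output channels
tap.onaudioprocess = function(e) {
    var samples = e.inputBuffer.getChannelData(0); // Float32Array of raw PCM for this block
    // hand samples to dsp.js (or similar) for the FFT here
};
microphone.connect(tap);
tap.connect(aCtx.destination); // some browsers won't fire onaudioprocess without a sink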
Asked Jan 9, 2013 at 8:44 by Newmu
- Have you inspected the entire FFTData array? I noticed you're just console.logging the first element. The AnalyserNode should work with microphone input... – Matt Diamond, Jan 9, 2013 at 18:58
- The process function further passes it on to a canvas visualizer; all the values are -100. I'm checking things like variable scoping and am going to try it on a Mac soon. Really don't know what's happening. – Newmu, Jan 9, 2013 at 20:38
1 Answer
It was a variable scoping issue. In the second example, I was defining microphone locally and then trying to access its stream from the analyser in another function. I just made all the Web Audio API nodes global for peace of mind. Note that it also takes a few seconds for the analyser node to start reporting values other than -100. Working code for those interested:
// Globals, so every function can reach the audio graph
var aCtx;
var analyser;
var microphone;

if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        aCtx = new webkitAudioContext();
        analyser = aCtx.createAnalyser();
        microphone = aCtx.createMediaStreamSource(stream);
        microphone.connect(analyser);
        // analyser.connect(aCtx.destination);
        process();
    });
}

function process() {
    setInterval(function() {
        var FFTData = new Float32Array(analyser.frequencyBinCount);
        analyser.getFloatFrequencyData(FFTData);
        console.log(FFTData[0]);
    }, 10);
}
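Side note for visualizer use: getByteFrequencyData returns magnitudes already scaled to 0-255 (with 0 corresponding to minDecibels), which can be handier to feed a canvas than raw dB values:
var byteData = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(byteData); // each bin is 0-255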
If you would like to hear the live audio, you can connect the analyser to the destination (speakers), as commented out above. Watch out for some lovely feedback, though!
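For anyone landing here years later: the prefixed webkitAudioContext and navigator.getUserMedia are long deprecated. A minimal sketch of the same pipeline with the unprefixed APIs (this assumes a secure context, which current browsers require for microphone access):
var ctx = new AudioContext();
var analyser = ctx.createAnalyser();
navigator.mediaDevices.getUserMedia({audio: true}).then(function(stream) {
    var mic = ctx.createMediaStreamSource(stream);
    mic.connect(analyser);
    var data = new Float32Array(analyser.frequencyBinCount);
    setInterval(function() {
        analyser.getFloatFrequencyData(data);
        console.log(data[0]);
    }, 10);
});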