Platform
RK3288 + Android 7.1
Development
Before starting, check the current state of the audio devices:
List the sound cards currently present (look for card0 .. cardN):
```
rk3288:/proc/asound # ll
total 0
lrwxrwxrwx 1 root root 5 2020-04-15 13:51 Camera -> card1
dr-xr-xr-x 5 root root 0 2020-04-15 13:51 card0
dr-xr-xr-x 3 root root 0 2020-04-15 13:51 card1
-r--r--r-- 1 root root 0 2020-04-15 13:51 cards
-r--r--r-- 1 root root 0 2020-04-15 13:51 devices
-r--r--r-- 1 root root 0 2020-04-15 13:51 hwdep
-r--r--r-- 1 root root 0 2020-04-15 13:51 pcm
lrwxrwxrwx 1 root root 5 2020-04-15 13:51 rockchipes8316c -> card0
-r--r--r-- 1 root root 0 2020-04-15 13:51 timers
-r--r--r-- 1 root root 0 2020-04-15 13:51 version
```
Check the contents of cards:
```
rk3288:/proc/asound # cat cards
 0 [rockchipes8316c]: rockchip_es8316 - rockchip,es8316-codec
                      rockchip,es8316-codec
 1 [Camera         ]: USB-Audio - USB Camera
                      Generic USB Camera at usb-ff540000.usb-1.2, high speed
```
Details of the sound card currently in use:
```
rk3288:/proc/asound # tinypcminfo -D 0
Info for card 0, device 0:

PCM out:
      Access: 0x000009
   Format[0]: 0x000044
   Format[1]: 00000000
 Format Name: S16_LE, S24_LE
   Subformat: 0x000001
        Rate: min=8000Hz  max=96000Hz
    Channels: min=2       max=2
 Sample bits: min=16      max=32
 Period size: min=32      max=65536
Period count: min=2       max=4096

PCM in:
      Access: 0x000009
   Format[0]: 0x000044
   Format[1]: 00000000
 Format Name: S16_LE, S24_LE
   Subformat: 0x000001
        Rate: min=8000Hz  max=96000Hz
    Channels: min=2       max=2
 Sample bits: min=16      max=32
 Period size: min=32      max=65536
Period count: min=2       max=4096
```
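From the "PCM in" capabilities above (S16_LE, 2 channels, 8-96 kHz) you can work out the data rate the capture loop has to keep up with. A minimal sketch of that arithmetic (the class and method names here are mine, not part of any platform API):

```java
public class PcmRate {
    // Bytes per second of interleaved PCM: rate * channels * (bits / 8).
    static int bytesPerSecond(int sampleRate, int channels, int bitsPerSample) {
        return sampleRate * channels * (bitsPerSample / 8);
    }

    public static void main(String[] args) {
        // S16_LE stereo at 44100 Hz, within the tinypcminfo ranges above.
        int bps = bytesPerSecond(44100, 2, 16);
        System.out.println(bps); // 176400 bytes/s
        // How much audio a 2048-byte read buffer holds, in milliseconds.
        System.out.println(2048 * 1000L / bps); // ~11 ms per read
    }
}
```

At 176400 bytes/s a 2048-byte buffer covers only about 11 ms of audio, which is why the capture loop later in this article must call read() continuously.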
Capture API: AudioRecord
```java
// audioSource:       MediaRecorder.AudioSource.CAMCORDER, MediaRecorder.AudioSource.MIC, etc.
//                    (these two cover most cases)
// sampleRateInHz:    common values: 8000, 11025, 16000, 22050, 44100, 96000
// channelConfig:     AudioFormat.CHANNEL_CONFIGURATION_DEFAULT; mono or stereo
// audioFormat:       AudioFormat.ENCODING_PCM_16BIT; this test captures raw PCM,
//                    other formats were not investigated
// bufferSizeInBytes: obtained from
//                    AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat)
public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig,
        int audioFormat, int bufferSizeInBytes)
```
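A buffer filled by AudioRecord.read() with ENCODING_PCM_16BIT holds little-endian 16-bit samples. A quick way to sanity-check that real audio (not silence) is arriving is to decode those bytes and look at the peak amplitude; the sketch below does this in plain Java so it can be tried off-device (the helper name is my own):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmDecode {
    // Interpret a buffer filled by AudioRecord.read() (ENCODING_PCM_16BIT)
    // as little-endian 16-bit samples and return the peak absolute amplitude.
    static int peakAmplitude(byte[] pcm, int size) {
        ByteBuffer bb = ByteBuffer.wrap(pcm, 0, size).order(ByteOrder.LITTLE_ENDIAN);
        int peak = 0;
        while (bb.remaining() >= 2) {
            peak = Math.max(peak, Math.abs((int) bb.getShort()));
        }
        return peak;
    }

    public static void main(String[] args) {
        // Two samples: bytes {0x02,0x01} -> 0x0102 = 258, {0x00,0xFF} -> -256.
        byte[] buf = { 0x02, 0x01, 0x00, (byte) 0xFF };
        System.out.println(peakAmplitude(buf, buf.length)); // 258
    }
}
```

A peak that stays at or near zero while you speak into the microphone is a strong hint that the wrong audio source or device is selected (see the notes on USB microphones below).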
Playback API: AudioTrack
```java
// streamType:        AudioManager.STREAM_MUSIC; the output stream, ALARM etc. also possible
// sampleRateInHz:    common values: 8000, 11025, 16000, 22050, 44100, 96000
// channelConfig:     AudioFormat.CHANNEL_CONFIGURATION_DEFAULT; mono or stereo
// audioFormat:       AudioFormat.ENCODING_PCM_16BIT; this test plays raw PCM,
//                    other formats were not investigated
// bufferSizeInBytes:
// mode:              AudioTrack supports two modes, MODE_STATIC and MODE_STREAM.
//   MODE_STREAM means the application repeatedly pushes data into the AudioTrack
//   via write(), much like sending data over a socket: the app obtains PCM data
//   from somewhere (e.g. a decoder) and writes it to the AudioTrack. The downside
//   is the constant crossing between the Java layer and the native layer, which
//   costs efficiency.
//   MODE_STATIC means the whole clip is placed in a fixed buffer up front and
//   handed to the AudioTrack once; no further write() calls are needed, and the
//   AudioTrack plays the buffer by itself. This suits short, latency-sensitive
//   sounds such as ringtones that easily fit in memory.
public AudioTrack(int streamType, int sampleRateInHz, int channelConfig,
        int audioFormat, int bufferSizeInBytes, int mode)
```
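To test AudioTrack in isolation (without a microphone), you can synthesize the PCM yourself. The sketch below generates a 16-bit little-endian mono sine tone, exactly the kind of buffer one would pass to AudioTrack.write() in MODE_STREAM; the generator itself is plain Java and is my own illustration, not an Android API:

```java
public class SineWave {
    // Generate 16-bit little-endian mono PCM for a sine tone: the kind of
    // buffer an application would write() to an AudioTrack in MODE_STREAM.
    static byte[] tone(double freqHz, int sampleRate, int numSamples, double amplitude) {
        byte[] pcm = new byte[numSamples * 2];
        for (int i = 0; i < numSamples; i++) {
            short s = (short) (amplitude * Short.MAX_VALUE
                    * Math.sin(2 * Math.PI * freqHz * i / sampleRate));
            pcm[2 * i]     = (byte) (s & 0xFF);        // low byte first (LE)
            pcm[2 * i + 1] = (byte) ((s >> 8) & 0xFF); // high byte second
        }
        return pcm;
    }

    public static void main(String[] args) {
        // One second of A4 (440 Hz) at half scale, 44100 Hz mono.
        byte[] pcm = tone(440.0, 44100, 44100, 0.5);
        System.out.println(pcm.length); // 88200 bytes
    }
}
```

Feeding such a buffer to the PlayThread below in place of microphone data is a simple way to confirm the output path works before debugging the capture path.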
Capture-while-playing PCM code (core parts only; not a complete program):
```java
// Audio parameters
static class AudioParam {
    static final int FREQ_8000  = 8000;
    static final int FREQ_11025 = 11025;
    static final int FREQ_16000 = 16000;
    static final int FREQ_22050 = 22050;
    static final int FREQ_44100 = 44100;
    static final int FREQ_96000 = 96000;

    int device;
    int channel;
    int bitFormat;
    int buffSize;
    int buffSizeOut;
    int rate;

    public AudioParam(int device, int rate, int channel, int bitFormat) {
        this.device = device;
        this.rate = rate;
        this.channel = channel;
        this.bitFormat = bitFormat;
        initBufferSize();
    }

    private void initBufferSize() {
        buffSize = AudioRecord.getMinBufferSize(rate, channel, bitFormat);
        buffSizeOut = AudioTrack.getMinBufferSize(rate, channel, bitFormat);
        Logger.d("AudioPCM", "buffSize(" + buffSize + "), buffSizeOut(" + buffSizeOut + ")");
        if (buffSizeOut < 0) buffSizeOut = buffSize;
    }

    static AudioParam getDefaultInParam() {
        return new AudioParam(MediaRecorder.AudioSource.MIC, FREQ_44100,
                AudioFormat.CHANNEL_CONFIGURATION_DEFAULT, AudioFormat.ENCODING_PCM_16BIT);
    }

    static AudioParam getDefaultOutParam() {
        return new AudioParam(AudioManager.STREAM_MUSIC, FREQ_44100,
                AudioFormat.CHANNEL_CONFIGURATION_DEFAULT, AudioFormat.ENCODING_PCM_16BIT);
    }
}

// Thread that captures audio
class CaptureThread extends Thread {
    @Override
    public void run() {
        AudioRecord mic = new AudioRecord(audioParamIn.device, audioParamIn.rate,
                audioParamIn.channel, audioParamIn.bitFormat, audioParamIn.buffSize);
        mic.startRecording();
        byte[] pcmBuffer = new byte[2048];
        while (!Thread.interrupted()) {
            int size = mic.read(pcmBuffer, 0, pcmBuffer.length);
            Logger.d(TAG, "read " + size + " bytes");
            if (size <= 0) {
                break;
            } else {
                if (playThd != null) {
                    playThd.write(pcmBuffer, size);
                }
            }
        }
        mic.stop();
        mic.release();
    }
}

// Playback side
class PlayThread {
    private AudioTrack mAudioTrack;

    PlayThread() {
        try {
            audioParamOut = new AudioParam(AudioManager.STREAM_MUSIC, AudioParam.FREQ_44100,
                    AudioFormat.CHANNEL_CONFIGURATION_DEFAULT, AudioFormat.ENCODING_PCM_16BIT);
            createAudioTrack();
            mAudioTrack.play();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    void write(byte[] bs, int size) {
        Logger.d(TAG, "write " + size + " bytes");
        if (mAudioTrack != null) {
            mAudioTrack.write(bs, 0, size);
        }
    }

    void stop() {
        if (mAudioTrack != null) {
            mAudioTrack.stop();
        }
    }

    private void createAudioTrack() throws Exception {
        // STREAM_ALARM:      alarm sounds
        // STREAM_MUSIC:      music playback
        // STREAM_RING:       ringtones
        // STREAM_SYSTEM:     system sounds
        // STREAM_VOICE_CALL: phone-call audio
        mAudioTrack = new AudioTrack(audioParamOut.device, audioParamOut.rate,
                audioParamOut.channel, audioParamOut.bitFormat,
                audioParamOut.buffSizeOut, AudioTrack.MODE_STREAM);
    }
}
```
Notes
You need to confirm that the MIC input of the USB camera is actually recognized (getting this right took several detours).
During testing it turned out that the devices behind MediaRecorder.AudioSource.CAMCORDER and MediaRecorder.AudioSource.MIC are not fixed:
With no USB camera attached, MediaRecorder.AudioSource.MIC maps to the on-board MIC.
Once a USB camera is attached, MediaRecorder.AudioSource.MIC maps to the MIC on the USB camera.
References
Recording verification of USB-Audio (USB microphone) devices
Android USB audio recording
The tinyalsa audio test tools on Android (recording, playback, sound card info)
Audio capture on Android