Binaural Audio Rendering Using Head Tracking
Track head orientation by fusing data received from an IMU, and then control the direction of arrival of a sound source by applying head-related transfer functions (HRTF).
In a typical virtual reality setup, the IMU sensor is attached to the user's headphones or VR headset so that the perceived position of a sound source is relative to a visual cue, independent of head movements. For example, if the sound is perceived as coming from the monitor, it remains that way even if the user turns their head to the side.
Required Hardware
Arduino Uno
InvenSense MPU-9250
Hardware Connection
First, connect the InvenSense MPU-9250 to the Arduino board. For more details, see the Sensor Fusion and Tracking Toolbox documentation.
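As a reference, a typical I2C hookup between an MPU-9250 breakout and the Uno is sketched below. Pin labels and supply voltage vary between breakout boards, so verify this against your board's documentation rather than treating it as definitive.

% Typical MPU-9250 to Arduino Uno I2C wiring (verify against your breakout board):
%   VCC -> 3.3V (some breakouts accept 5V via an onboard regulator)
%   GND -> GND
%   SDA -> A4 (SDA)
%   SCL -> A5 (SCL)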
Create Sensor Object and IMU Filter
Create an arduino object.

a = arduino;
Create the InvenSense MPU-9250 sensor object.

imu = mpu9250(a);
Create and set the sample rate of the Kalman filter.

Fs = imu.SampleRate;
imufilt = imufilter('SampleRate',Fs);
Load the ARI HRTF Dataset
When sound travels from a point in space to your ears, you can localize it based on interaural time and level differences (ITD and ILD). These frequency-dependent ITDs and ILDs can be measured and represented as a pair of impulse responses for any given source elevation and azimuth. The ARI HRTF dataset contains 1550 pairs of impulse responses which span azimuths over 360 degrees and elevations from -30 to 80 degrees. You use these impulse responses to filter a sound source so that it is perceived as coming from a position determined by the sensor's orientation. If the sensor is attached to a device on a user's head, the sound is perceived as coming from one fixed place despite head movements.
First, load the HRTF dataset.

ARIDataset = load('ReferenceHRTF.mat');
Then, get the relevant HRTF data from the dataset and put it in a useful format for our processing.

hrtfData = double(ARIDataset.hrtfData);
hrtfData = permute(hrtfData,[2,3,1]);
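After the permute, hrtfData has one row per measured position, one column per ear, and one page per impulse-response sample, which is the N-by-2-by-L layout that interpolateHRTF expects. As a quick check against the dataset description above:

[numPositions,numEars,irLength] = size(hrtfData) % expect 1550 positions, 2 ears
assert(numPositions == 1550 && numEars == 2)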
Get the associated source positions. Angles should be in the same range as the sensor. Convert the azimuths from [0,360] to [-180,180].

sourcePosition = ARIDataset.sourcePosition(:,[1,2]);
sourcePosition(:,1) = sourcePosition(:,1) - 180;
Load Monaural Recording
Load an ambisonic recording of a helicopter. Keep only the first channel, which corresponds to an omnidirectional recording. Resample it to 48 kHz for compatibility with the HRTF data set.

[heli,originalSampleRate] = audioread('Heli_16ch_ACN_SN3D.wav');
heli = 12*heli(:,1); % keep only one channel

sampleRate = 48e3;
heli = resample(heli,sampleRate,originalSampleRate);
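To audition the mono source before any spatialization, you can play a short excerpt directly. This is an optional check; soundsc scales the signal to the full output range, and the min guard assumes nothing about the recording's length.

% Optional: listen to up to three seconds of the mono helicopter recording.
soundsc(heli(1:min(end,3*sampleRate)),sampleRate)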
Load the audio data into a dsp.SignalSource object. Set the SamplesPerFrame to 0.1 seconds of audio.

sigsrc = dsp.SignalSource(heli, ...
    'SamplesPerFrame',sampleRate/10, ...
    'SignalEndAction','Cyclic repetition');
Set Up the Audio Device
Create an audioDeviceWriter with the same sample rate as the audio signal.

deviceWriter = audioDeviceWriter('SampleRate',sampleRate);
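If playback should go to a specific output, you can list the devices available to audioDeviceWriter and select one by name before streaming. The 'Headphones' name below is a hypothetical placeholder; pick one from the returned list.

devices = getAudioDevices(deviceWriter)   % list available output devices
% deviceWriter.Device = 'Headphones';     % hypothetical device name; choose from the list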
Create FIR Filters for the HRTF Coefficients

Create a pair of FIR filters to perform binaural HRTF filtering.

FIR = cell(1,2);
FIR{1} = dsp.FIRFilter('NumeratorSource','Input port');
FIR{2} = dsp.FIRFilter('NumeratorSource','Input port');
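Because the numerator is supplied at the input port, each call can use a freshly interpolated pair of impulse responses. As a quick sanity check (a sketch, not part of the example itself), a unit-impulse numerator makes the filter a passthrough, so the output should equal the input. Release the object afterward so it can relock to the frame size and coefficient length used in the processing loop.

x = randn(16,1);
y = FIR{1}(x,[1 zeros(1,15)]); % unit-impulse numerator: y should equal x
max(abs(y - x))                % expected to be 0
release(FIR{1})                % unlock before the real processing loop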
Initialize the Orientation Viewer
Create an object to perform real-time visualization for the orientation of the IMU sensor. Call the IMU filter once and display the initial orientation.

orientationScope = HelperOrientationViewer;
data = read(imu);
qimu = imufilt(data.Acceleration,data.AngularVelocity);
orientationScope(qimu);
Audio Processing Loop
Execute the processing loop for 30 seconds. This loop performs the following steps:

1. Read data from the IMU sensor.
2. Fuse IMU sensor data to estimate the orientation of the sensor. Visualize the current orientation.
3. Convert the orientation from a quaternion representation to pitch and yaw in Euler angles.
4. Use interpolateHRTF to obtain a pair of HRTFs at the desired position.
5. Read a frame of audio from the signal source.
6. Apply the HRTFs to the mono recording and play the stereo signal. This is best experienced using headphones.
imuOverruns = 0;
audioUnderruns = 0;
audioFiltered = zeros(sigsrc.SamplesPerFrame,2);
tic
while toc < 30

    % Read from the IMU sensor.
    [data,overrun] = read(imu);
    if overrun > 0
        imuOverruns = imuOverruns + overrun;
    end

    % Fuse IMU sensor data to estimate the orientation of the sensor.
    qimu = imufilt(data.Acceleration,data.AngularVelocity);
    orientationScope(qimu);

    % Convert the orientation from a quaternion representation to pitch and yaw in Euler angles.
    ypr = eulerd(qimu,'ZYX','frame');
    yaw = ypr(end,1);
    pitch = ypr(end,2);
    desiredPosition = [yaw,pitch];

    % Obtain a pair of HRTFs at the desired position.
    interpolatedIR = squeeze(interpolateHRTF(hrtfData,sourcePosition,desiredPosition));

    % Read a frame of audio from the signal source.
    audioIn = sigsrc();

    % Apply the HRTFs.
    audioFiltered(:,1) = FIR{1}(audioIn,interpolatedIR(1,:)); % Left
    audioFiltered(:,2) = FIR{2}(audioIn,interpolatedIR(2,:)); % Right
    audioUnderruns = audioUnderruns + deviceWriter(squeeze(audioFiltered));
end
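After the loop finishes, the counters show whether processing kept up in real time: nonzero values mean IMU samples were dropped or the sound card was starved. A quick way to report them:

fprintf('IMU overruns: %d, audio underruns: %d\n',imuOverruns,audioUnderruns)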
Cleanup

Release resources, including the sound device.

release(sigsrc)
release(deviceWriter)
clear imu a