Recently I’ve been toying around with a YouTube VR project that places the viewer in the middle of a square room with a different video playing on each of the four walls. The idea is that the videos have some connective tissue, but the material ranges from somewhat to very different, allowing the viewer to focus on whichever side they want at their leisure.
I recently uploaded a video that beta-tests this concept. It’s me playing through a Beat Saber song, showcasing three different difficulties: the Hard chart on the left, the Expert chart in the middle, and the Expert+ chart on the right.
The problem I’m encountering has to do with audio isolation. My concept for the project is that when someone faces a particular wall, they hear only that wall’s audio, with all other audio muted. That’s not how 360/ambisonic audio is generally treated: it’s easy enough to make whatever direction you’re facing the strongest audio in a 360 mix, but you still hear all of the other audio coming from its respective direction, and there doesn’t seem to be a built-in process to change the levels you hear depending on which way you face.
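As far as I can tell there’s no native support for this kind of hard gating, but the behavior I’m after could be described as a yaw-gated mixer: each wall owns a 90-degree slice of head rotation, and a wall’s audio is at full volume only while you face it, with a short crossfade at the boundaries so the cut isn’t jarring. Here’s a minimal Python sketch of that gain logic (the wall angles, the 45-degree half-window, and the fade width are all my own assumptions, not anything YouTube exposes):

```python
import math

# Hypothetical yaw-gated mixer: four wall streams, and the listener's
# head yaw (in degrees) decides which wall's audio is audible.
# Wall centers in yaw: front=0, right=90, back=180, left=270.
WALL_CENTERS = {"front": 0.0, "right": 90.0, "back": 180.0, "left": 270.0}

def angular_distance(a, b):
    """Smallest absolute difference between two yaw angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def wall_gains(yaw_deg, fade_deg=15.0):
    """Per-wall gain: 1.0 while facing a wall dead-on, 0.0 once you've
    turned past its 45-degree half-window, with a linear crossfade of
    2 * fade_deg centered on the boundary between adjacent walls."""
    gains = {}
    edge = 45.0  # each wall owns a 90-degree slice of yaw
    for wall, center in WALL_CENTERS.items():
        d = angular_distance(yaw_deg, center)
        if d <= edge - fade_deg:
            gains[wall] = 1.0      # squarely facing this wall
        elif d >= edge + fade_deg:
            gains[wall] = 0.0      # facing some other wall: fully muted
        else:
            # inside the crossfade band straddling the wall boundary
            gains[wall] = (edge + fade_deg - d) / (2.0 * fade_deg)
    return gains
```

So facing straight ahead (`wall_gains(0.0)`) gives the front wall a gain of 1.0 and mutes the other three, and turning to exactly 45 degrees splits front and right at 0.5 each. In practice these gains would have to be applied per-frame by whatever player renders the video, which is exactly the hook that seems to be missing.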
For the Beat Saber video above, this isn’t a huge deal, since the audio is for all intents and purposes the same on all three sides. But if I were to create a video highlighting, say, my favorite three Beat Saber tracks in one room, it would be an aural mess.
I’m not sure what the solution is yet, but I’ll continue to do research. In the meantime, these Beat Saber vids work out pretty well, so I’ll be uploading more of them periodically in the immediate future.