Apple Vision Pro has been available to a select group of customers on the bleeding edge of technology. With it come a lot of technical advances in remote working, VR work enhancements, and entertainment consumption. Currently, audio is rendered either open-ear (played out loud from the device) or through AirPods/Beats.
If you use in-ear wearables (buds), you're occluding a lot of environmental audio, which may or may not be something you want as a user. I have a few thoughts about these particular users: people who may want the audio experience tailored to their conditions.
- You want beam-formed voice pickup using the microphones on the AVP, with some Apple Intelligence to sculpt the signal (a rough sketch of the idea follows this list).
- You'd like to selectively remove certain sound categories from the audio rendering while maintaining transparency to your surroundings, using Apple Intelligence and digital signal processing (see the second sketch below).
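On the beam-forming idea, here is a minimal toy sketch in Swift of a delay-and-sum beamformer. Everything in it is an assumption for illustration: the linear array geometry, mic spacing, and steering math are mine, not Apple's (the AVP's actual mic layout isn't public), and any "Apple Intelligence" sculpting would sit on top of something like this.

```swift
import Foundation

/// A minimal delay-and-sum beamformer over a hypothetical linear mic
/// array. Nothing here reflects the AVP's real microphone geometry or
/// any visionOS API; it just illustrates the steering idea.
struct DelayAndSumBeamformer {
    let sampleRate: Double   // e.g. 48_000 Hz
    let micSpacing: Double   // meters between adjacent mics
    let speedOfSound: Double = 343.0  // m/s at room temperature

    /// Per-mic delays (in samples) that steer the beam toward
    /// `steeringAngle` (radians from broadside; 0 = straight ahead).
    /// Negative angles are omitted here for brevity.
    func delays(micCount: Int, steeringAngle: Double) -> [Int] {
        (0..<micCount).map { m in
            let seconds = Double(m) * micSpacing * sin(steeringAngle) / speedOfSound
            return Int((seconds * sampleRate).rounded())
        }
    }

    /// Shift each channel by its steering delay, then average, so sound
    /// arriving from the look direction adds coherently while sound
    /// from other directions partially cancels.
    func process(channels: [[Float]], steeringAngle: Double) -> [Float] {
        guard let frameCount = channels.first?.count else { return [] }
        let d = delays(micCount: channels.count, steeringAngle: steeringAngle)
        let maxDelay = d.max() ?? 0
        let outCount = max(0, frameCount - maxDelay)
        var out = [Float](repeating: 0, count: outCount)
        for i in 0..<outCount {
            var sum: Float = 0
            for (m, channel) in channels.enumerated() {
                sum += channel[i + d[m]]
            }
            out[i] = sum / Float(channels.count)
        }
        return out
    }
}
```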
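And on selective sound removal: the sketch below gates a transparency feed frame by frame using a stand-in classifier. The classifier closure, category names, and ducking threshold are all hypothetical; a real implementation might lean on a trained sound-classification model (the kind behind Apple's SoundAnalysis framework) plus smoother DSP to avoid gating artifacts.

```swift
import Foundation

/// Sketch of selective transparency: pass ambient audio through, but
/// duck frames that a sound classifier labels as unwanted. The
/// `classify` closure is a stand-in for a real model and is assumed.
struct SelectiveTransparencyFilter {
    /// Categories the user has asked to suppress, e.g. ["traffic", "crowd"].
    var suppressed: Set<String>
    /// How hard to duck a suppressed frame (0 = mute, 1 = unchanged).
    var duckGain: Float = 0.1
    /// Hypothetical classifier: returns (label, confidence) for a frame.
    var classify: ([Float]) -> (label: String, confidence: Float)

    /// Process one frame of the transparency feed.
    func process(frame: [Float]) -> [Float] {
        let (label, confidence) = classify(frame)
        // Only duck when the classifier is reasonably sure.
        let gain: Float = (suppressed.contains(label) && confidence > 0.6) ? duckGain : 1.0
        return frame.map { $0 * gain }
    }
}

// Usage: suppress traffic noise while letting everything else through.
let filter = SelectiveTransparencyFilter(
    suppressed: ["traffic"],
    classify: { _ in ("traffic", 0.9) }  // stubbed model output
)
let duckedFrame = filter.process(frame: [Float](repeating: 0.5, count: 480))
```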
Here is a concept video of some low-fidelity ideas I prototyped in Unity for the Quest 2. The audio in the prototypes is spatial, but playback in the recording is plain stereo (just an FYI). These are edge cases involving users who want AR experiences, staying aware of something in their environment, while wearing the AVP.
The video has been heavily compressed to make it more streaming-friendly, but I think it communicates the gist of the story.