
RiccardoTheBeAst

Grizzled Veteran
Sep 19, 2009
Italy
Guys, if you have decent stereo headphones, please listen to this:

YouTube - Virtual Barber Shop (Audio...use headphones, close ur eyes)

So, this is not 5.1 or 7.1 sound. This is plain stereo. And just as Avatar's 3D used cameras with two lenses each (like human eyes), this video was recorded with two microphones (like human ears), and as you can see (hear), the result is something extremely realistic and exceptional.

So, could something like that be created in RO2? Is it possible? Currently I have Sennheiser HD555 headphones and an Asus Xonar DX, so I can emulate surround sound, but it's not the same thing. I have also tried a Zalman 5.1 headset, but again, the 3D sound is not as convincing as in the video.

Let's discuss it :D
 
The point is that the sound reaches each of your ears at a slightly different time, and your brain processes those differences and can tell exactly where the sound is coming from.

Is this possible with UE3?

It sounds like it should be possible with any engine, really. When an entity in the game emits a sound, the game records that entity's location and the server sends it to the client. The client then traces a straight line from that location to the player's current position. Based on the direction of that line relative to the direction the player is facing, the sound is played at a different amplitude and with a different time offset on each channel.
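The idea above can be sketched with a little math. This is a toy model, not what any particular engine actually ships: the head radius is an assumed average, the interaural time difference (ITD) uses Woodworth's spherical-head approximation, and the level difference is a simple constant-power pan.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air
HEAD_RADIUS = 0.0875     # m, assumed average human head

def itd_ild(azimuth_deg):
    """Return (left_delay_s, right_delay_s, left_gain, right_gain) for a
    source at the given azimuth (0 = straight ahead, +90 = hard right).

    ITD: Woodworth's spherical-head formula, r/c * (sin(az) + az).
    ILD: approximated with a constant-power pan law."""
    az = math.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(az) + az)
    # Positive azimuth = source on the right, so the LEFT ear hears it later.
    left_delay = max(itd, 0.0)
    right_delay = max(-itd, 0.0)
    # Map azimuth to a 0..1 pan position, then apply constant-power gains.
    pan = (math.sin(az) + 1.0) / 2.0          # 0 = full left, 1 = full right
    left_gain = math.cos(pan * math.pi / 2.0)
    right_gain = math.sin(pan * math.pi / 2.0)
    return left_delay, right_delay, left_gain, right_gain
```

For a source at 90° this gives an ITD of roughly 0.65 ms, which is in line with the commonly cited maximum interaural delay for a human head.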
 
Upvote 0
What RiccardoTheBeAst is talking about, perhaps without knowing it, is HRTF, and what's sometimes called binaural processing. To answer your question, Riccardo: yes, it can be done, but not with UE3's default audio backend; Tripwire would need to use FMOD or code their own interface, and then create, license, or use existing HRTF models for that interface, or license middleware like QSound or Rapture 3D.

I like Blue Ripple Sound's Rapture 3D post-processor and driver better than QSound, because it includes five HRTF models: two from the CIAIR HRTF data set, one from the IRCAM LISTEN HRTF data set, and two more from the MIT KEMAR data set, and it gives credit to whose data it's using for its models...

The problem with HRTF/binaural processing is that a given model will only 'work' and create precise spatial positioning for individuals whose head shape is similar to the model's; but when it does work, it is the most accurate positional system of any surround technology in terms of fidelity, positional accuracy, and sheer wow factor, and well worth the effort to implement.
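Mechanically, binaural rendering boils down to filtering a mono source through a measured head-related impulse response (HRIR) pair, one filter per ear. Here is a minimal sketch; the toy impulse responses below are invented for illustration (a real pair would come from a measured set such as MIT KEMAR or IRCAM LISTEN):

```python
def convolve(x, h):
    """Direct-form FIR convolution -- what an HRIR filter does per ear."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

def binaural_render(mono, hrir_left, hrir_right):
    """Filter a mono signal through a left/right HRIR pair.
    Returns a (left_samples, right_samples) stereo tuple."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy HRIRs for a source hard left at 44.1 kHz: the left ear gets the
# direct sound, the right ear a delayed, attenuated copy.
delay_samples = 28                       # ~0.63 ms at 44.1 kHz
hrir_l = [1.0]
hrir_r = [0.0] * delay_samples + [0.5]
```

A real HRIR additionally encodes the frequency-dependent shadowing of the head and the filtering of the outer ear, which is what gives front/back and vertical cues beyond simple delay and attenuation.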

Even more amazing is how long it's been around...

:)
 
Upvote 0
Yes; when the HRTF model used closely approximates your head shape, spatial precision is nearly exact, and close-proximity sounds can make it feel like something is touching your face, and even make some people sneeze. Use a really elaborate HRTF model that includes auricle (outer-ear) data, and you get vertical localization precision as well.

The really cool thing about this for game sound is that games offer a nearly idealized environment where you can get more spatial precision and better fidelity than even the most elaborate physical surround system, and almost for free.

QSound uses a good generic model that works for a lot of people; but if the recording above didn't work for you, try a Google search for binaural processing and/or binaural recording. You can find a lot of sound files that, depending on the model used, may give you the nearly exact 'you are there' experience that others are describing and that you missed with the virtual barber shop recording.

:)
 
Upvote 0
So, could something like that be created in RO2? Is it possible? Currently I have Sennheiser HD555 headphones and an Asus Xonar DX, so I can emulate surround sound, but it's not the same thing.
They recorded this with an artificial head and microphones where the ears are.
The calculation power needed to simulate this in real time could be very high.

But I always like it when this technology is used on music albums: listening with headphones, you think the sound is in the room around you.
 
Upvote 0
They recorded this with an artificial head and microphones where the ears are.
Yes.

:)

The calculation power needed to simulate this in real time could be very high.
Actually it's not; the DSP needed to crunch regular stereo into simplified ORTF, or even one of the more detailed HRTF head models, is rather simple math and low cost. It's also supported directly on most audio hardware, so the CPU impact is no more than rendering ordinary stereo.
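Some back-of-envelope arithmetic supports the low-cost claim. The numbers below (128-tap HRIRs, 44.1 kHz, 32 simultaneous sources) are illustrative assumptions, not measurements:

```python
def hrir_macs_per_second(taps=128, sample_rate=44100, sources=32):
    """Multiply-accumulates per second for brute-force time-domain HRIR
    convolution: `taps` MACs per sample, times 2 ears, per source."""
    return taps * 2 * sample_rate * sources

# 128 taps * 2 ears * 44100 Hz * 32 sources = ~361 million MACs/s
```

Even this brute-force worst case, roughly 360 million multiply-adds per second, is a small fraction of what a modern CPU can do, and FFT-based (partitioned) convolution or hardware offload shrinks the cost much further.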

:)
 
Upvote 0