There are two components to VR which Godot currently lacks: stereoscopic cameras and head tracking. But if the furthest your mind has gone on the matter is making a Slenderman survival horror game then you're either a visionary or very childish. In any case, I'm working on doing some VR stuff using what Godot is currently capable of and it's coming along rather well, so in my infinite generosity I'll do my best to explain how this works to you and anyone else who is interested. Also this is a fun topic. Sorry if I sound like a dick. I'm a dick.
First: Stereoscopic cameras. Stereo just means you have 2 at once. In Godot the scene hierarchy is such that every Viewport can only have one active camera, so you make 2 Viewports, one that covers the left half of the screen and one that covers the right. In each Viewport you make a Spatial, which is an empty 3D object. This Spatial serves as the pivot point between the left camera and the right camera, so both Spatials should be in the exact same place. Remember, Viewports do not inherit 3D data from parent nodes, so each Spatial is effectively the root of its own scenegraph. (English translation: when you move a 3D object that contains a Viewport, the Viewport won't move, nor will any of its child nodes.) Within each Spatial you put a Camera, positioning one camera to the left and one camera to the right. So it looks like this:
ViewportL
- Spatial
- - Camera
ViewportR
- Spatial
- - Camera
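Before wiring up movement, each Viewport needs its half of the screen and each Camera needs to sit half the eye separation away from its pivot. Here's a minimal setup sketch using the 2.x-era calls this post is written against; the landscape screen split and the 0.065 eye separation (roughly average human IPD in metres) are my assumptions, so tune them to your device and world scale. Put it in a script on whatever node owns the two Viewports (the root Spatial we're about to add is a natural home):

const EYE_SEPARATION = 0.065  # made-up number: average IPD in metres

func _ready():
    var size = OS.get_video_mode_size()
    # Left eye draws to the left half of the screen, right eye to the right.
    get_node( 'ViewportL' ).set_rect( Rect2( 0, 0, size.x / 2, size.y ) )
    get_node( 'ViewportR' ).set_rect( Rect2( size.x / 2, 0, size.x / 2, size.y ) )
    # Offset each camera half the eye separation from its pivot Spatial.
    get_node( 'ViewportL/Spatial/Camera' ).set_translation( Vector3( -EYE_SEPARATION / 2, 0, 0 ) )
    get_node( 'ViewportR/Spatial/Camera' ).set_translation( Vector3( EYE_SEPARATION / 2, 0, 0 ) )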
So now you need to be able to move the whole rig around as one, but like quarreling siblings the nodes won't (or can't) cooperate with each other in this hierarchy. So what do you do? You make a third Spatial outside of the Viewports in the exact same position as the others, then parent the Viewports to it and give it a script. Your hierarchy should look like below:
Spatial (with a script)
- ViewportL
- - Spatial
- - - Camera
- ViewportR
- - Spatial
- - - Camera
The script should basically position the child Spatials wherever it is, every frame, so something like:
onready var SpatialL = get_node( 'ViewportL/Spatial' )
onready var SpatialR = get_node( 'ViewportR/Spatial' )

func _ready():
    set_process( true )

func _process( d ):
    # Player/camera movement code goes here
    # Copy this node's transform onto both pivots so the eyes follow it.
    var trans = get_global_transform()
    SpatialL.set_global_transform( trans )
    SpatialR.set_global_transform( trans )
And now you have stereo cameras suitable for VR. You might want to put a black vignette around each eye to sort of round it out and not have 2 ugly rectangles side by side, but I'll let you figure that part out on your own. Basically it's just TextureFrames on top of your viewports.
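For the curious, here's a rough sketch of that vignette, assuming a hypothetical res://vignette.png (black frame, transparent middle) and the same 2.x-era calls as before. Depending on tree order you might need to shove the TextureFrames into a CanvasLayer to get them drawing on top, so treat this as a starting point, not gospel:

func add_vignettes():
    var size = OS.get_video_mode_size()
    var tex = load( 'res://vignette.png' )  # hypothetical texture path
    for i in range( 2 ):
        var frame = TextureFrame.new()
        frame.set_texture( tex )
        # Stretch one frame over each eye's half of the screen.
        frame.set_pos( Vector2( i * size.x / 2, 0 ) )
        frame.set_size( Vector2( size.x / 2, size.y ) )
        add_child( frame )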
That was the easy part. The hard part is getting head tracking working. I'm still working on that code for my own project (though I've temporarily put it aside in favor of other things). The basic idea is that every phone or Rift or other doohickey has some combination of three sensors: accelerometer, magnetometer, and gyroscope (in order of most to least common). The data from these sensors gets combined using some magical code that you yourself may have to write, a technique called sensor fusion (which basically just means math). But before you get to fuse anything you actually need to be able to get data from these sensors. (PROTIP: This isn't going to work on your laptop. Try testing on your phone.)
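To give you a taste of what the "math" actually looks like, here's about the simplest fusion trick there is: a low-pass filter that blends each raw reading into a running estimate. Filter the raw sensor vectors before converting them to angles and you never have to deal with angle wrap-around inside the filter. The blend factor is a made-up number, so consider this a sketch:

var filtered = Vector3( 0, 0, 0 )
const BLEND = 0.1  # made-up number: smaller = smoother but laggier

func low_pass( raw ):
    # Keep most of the old estimate, mix in a little of the new reading.
    filtered = filtered.linear_interpolate( raw, BLEND )
    return filtered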
Godot currently only supports one of these sensors: the accelerometer. That's because those with commit access in the repository aren't hip with fresh ideas. Don't worry, I'm sure they'll be impeached soon. Oh wait, it's a dictatorship. You're powerless to stop them. Gyroscopes add stability, but only the magnetometer can tell you where you're facing relative to Earth's axis, because it's a compass. So you need to combine data from Input.get_accelerometer() with data from Input.get_magnetometer(), hopefully smoothing it out in the process unless you're trying to make an earthquake simulator. The accelerometer gives you roll and pitch while the magnetometer gives you yaw. Here's the current discussion on magnetometers, which I think reflects just how little VR is on everyone's minds right now: https://github.com/godotengine/godot/pull/4154
Godot is not an easy program to compile, so I'd wait until this gets merged, but when it does, support will only be there for Android, because I'm the one who wrote the magnetometer code and I've only ever developed on Android. I'm very sorry if Android is not you. Someone is working on (or planning to work on) the same thing for iOS, but that still doesn't cover the plethora of other mobile operating systems. Blackberry, PalmOS, Windows Phone, Symbian OS, NokiOS, and CheeriOS are out of luck. But if you just so happen to be an Android developer (the poorest, most unemployed kind of developer) then just say the word and I can upload some slightly outdated Android export template APKs containing magnetometer support, because somebody with commit access decided the only thing missing from his life was a powerful enemy.
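If you want your game to degrade gracefully in the meantime, you can at least check whether the magnetometer is feeding you anything. This is an assumption on my part, so treat it as a sketch: on builds without support the sensor just comes back as a zero vector.

func has_magnetometer():
    # Assumption: unsupported platforms return a zero vector.
    return Input.get_magnetometer().length() > 0.0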
Like I said, I'm still working out the bugs in my own sensor fusion implementation (mainly to do with the smoothing), but here's a quick function which returns a full set of rotation angles from the accelerometer and magnetometer without any smoothing. I'm not sure how necessary the wrap function is but I drink a lot so IDGAF.
# Rotate vct around the X axis by angle (radians).
func xRotateVector( vct, angle ):
    var ss = sin( angle )
    var cc = cos( angle )
    return Vector3( vct.x, vct.y * cc - vct.z * ss, vct.y * ss + vct.z * cc )

# Rotate vct around the Z axis by angle (radians).
func zRotateVector( vct, angle ):
    var ss = sin( angle )
    var cc = cos( angle )
    return Vector3( vct.x * cc - vct.y * ss, vct.x * ss + vct.y * cc, vct.z )

# Wrap an angle back into a single revolution.
func wrap( val ):
    var circ = PI + PI
    return val - ( int( val / circ ) * circ )

# Returns Euler angles in radians: x = pitch, y = yaw, z = roll.
func quick_orientation():
    var a = Input.get_accelerometer()
    var m = Input.get_magnetometer()
    # Roll from the direction of gravity in the screen plane.
    var v = Vector3( 0.0, 0.0, wrap( PI + atan2( a.x, a.y ) ) )
    # Un-roll gravity, then read pitch from what's left of it.
    a = zRotateVector( a, v.z )
    v.x = wrap( PI / 2 + atan2( a.y, a.z ) )
    # Level out the magnetic field vector, then read yaw from it.
    m = xRotateVector( zRotateVector( m, v.z ), v.x )
    v.y = wrap( atan2( m.x, m.z ) )
    return v
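To actually drive the rig, something like this in the root Spatial's _process from earlier would do. It's a sketch: without smoothing, expect it to shake like that earthquake simulator I mentioned, and depending on how your phone is mounted you may need to negate or reorder the axes.

func _process( d ):
    # Raw, unsmoothed orientation straight from the sensors.
    set_rotation( quick_orientation() )
    var trans = get_global_transform()
    SpatialL.set_global_transform( trans )
    SpatialR.set_global_transform( trans )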
Now I've given you everything. Sorry I didn't bring lube.