C# Corner

Virtual Reality in .NET, Part 3: 3D With Distortion and Head Tracking

Go deeper into the Oculus Rift SDK.

Welcome to Part 3 of this series on Virtual Reality programming with Oculus Rift in the .NET Framework. In Part 1, I did an overview of the Oculus SDK, and in Part 2 of the series I showed how to render a stereoscopic 3D scene.

Since those articles were published, a lot has changed in the SDK. For example, distortion correction has moved from a pixel shader to a mesh-based approach driven by a vertex shader. The SDK also now supports two rendering paths: SDK rendering and client-side rendering. The SDK renderer handles the details of setting up both the pixel and vertex shaders, as well as the final flush and sync draw calls, and is now the recommended path for game integration.

Luckily for us, there's a new .NET Framework wrapper around Oculus SDK 0.3.2 called SharpOVR. This article will demonstrate how to use SharpOVR in conjunction with the SharpDX Toolkit to quickly create a 3D scene with distortion and head tracking.

To get started, open up Visual Studio (either 2012 or 2013) and install the SharpDX Visual Studio Extension, as seen in Figure 1.

[Click on image for larger view.] Figure 1. Installing SharpDX Toolkit for Visual Studio.

Next, you'll need to create a new SharpDX project using the newly installed Toolkit Game template (Figure 2).

[Click on image for larger view.] Figure 2. Creating a toolkit game.

Next you'll be prompted to select the options for the Toolkit Game: check the 3D Model checkbox, as seen in Figure 3.

[Click on image for larger view.] Figure 3. Selecting toolkit sample options.

Once the game project's been created, install the SharpOVR NuGet package, as shown in Figure 4.

[Click on image for larger view.] Figure 4. Installing the SharpOVR NuGet package.
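If you prefer the Package Manager Console over the NuGet UI, the equivalent command (using the package ID shown in Figure 4) is:

Install-Package SharpOVR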

Now it's time to update the game sample to support the Oculus Rift. First open up your game class file, then add an HMD member variable:

private readonly HMD _hmd;
Then add an eye render viewport of type Rect[]:
private Rect[] _eyeRenderViewport;
Next add an eye texture array:
private D3D11TextureData[] _eyeTexture;
Then add a 2D render target:
private RenderTarget2D _renderTarget;
Now, add a shader resource view and a depth stencil buffer:
private ShaderResourceView _renderTargetSrView;
private DepthStencilBuffer _depthStencilBuffer;
Then add an eye render description array:
private EyeRenderDesc[] _eyeRenderDesc;
Next add the eye position vector:
private readonly Vector3 _eyePos = new Vector3(0, 0, 7);
Then comes the eye yaw value:
private const float EyeYaw = (float)Math.PI;
Now add a keyboard manager that will be used to get the current state of the keyboard later:
private readonly KeyboardManager _kbKeyboardManager;
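For reference, the complete set of fields added to the game class should now look like this (the _view and _projection matrices used later in Listing 4 should already be present in the template-generated code):

private readonly HMD _hmd;
private Rect[] _eyeRenderViewport;
private D3D11TextureData[] _eyeTexture;
private RenderTarget2D _renderTarget;
private ShaderResourceView _renderTargetSrView;
private DepthStencilBuffer _depthStencilBuffer;
private EyeRenderDesc[] _eyeRenderDesc;
private readonly Vector3 _eyePos = new Vector3(0, 0, 7);
private const float EyeYaw = (float)Math.PI;
private readonly KeyboardManager _kbKeyboardManager;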
It's time to initialize the OVR SDK in the game constructor and create an HMD object for the Rift. If no Rift is attached, a debug HMD object is created instead:
OVR.Initialize();
_hmd = OVR.HmdCreate(0) ?? OVR.HmdCreateDebug(HMDType.DK1);
After that, set the preferred back buffer width and height to the resolution of the created HMD:
graphicsDeviceManager.PreferredBackBufferWidth = _hmd.Resolution.Width;
graphicsDeviceManager.PreferredBackBufferHeight = _hmd.Resolution.Height;
Finally, create the KeyboardManager for the game:
_kbKeyboardManager = new KeyboardManager(this);

The completed game constructor should look like Listing 1.

Listing 1: The Constructor
public VSMVrGame()
{
    var graphicsDeviceManager = new GraphicsDeviceManager(this);
    Content.RootDirectory = "Content";
    OVR.Initialize();
    _hmd = OVR.HmdCreate(0) ?? OVR.HmdCreateDebug(HMDType.DK1);
    graphicsDeviceManager.PreferredBackBufferWidth = _hmd.Resolution.Width;
    graphicsDeviceManager.PreferredBackBufferHeight = _hmd.Resolution.Height;
    _kbKeyboardManager = new KeyboardManager(this);
}
Next update the Initialize method to initialize the HMD via the InitHmd method that will be defined later:
InitHmd();
Then set the window position to the suggested position from the SDK:
var window = this.Window.NativeWindow as Form;
if (window != null)
{
    window.SetDesktopLocation(_hmd.WindowPos.X, _hmd.WindowPos.Y);
}

The completed Initialize method should now be like Listing 2.

Listing 2: The Initialize Method
protected override void Initialize()
{
    // Modify the title of the window
    Window.Title = "VSMVrGame";

    InitHmd();

    var window = this.Window.NativeWindow as Form;
    if (window != null)
    {
        window.SetDesktopLocation(_hmd.WindowPos.X, _hmd.WindowPos.Y);
    }

    base.Initialize();
}

Now it's time to implement the InitHmd method, which prepares the Rift for rendering and initializes its sensor. First, get the default render target size reported by the HMD:

var renderTargetSize = _hmd.GetDefaultRenderTargetSize();

Then create the render target at that size:

_renderTarget = RenderTarget2D.New(GraphicsDevice, renderTargetSize.Width, renderTargetSize.Height,
    new MipMapCount(1), PixelFormat.R8G8B8A8.UNorm);
Next, get the shader resource view for the render target:
_renderTargetSrView = _renderTarget;
Then create the depth stencil buffer with the same size as the render target:
_depthStencilBuffer = DepthStencilBuffer.New(GraphicsDevice, renderTargetSize.Width,
    renderTargetSize.Height, DepthFormat.Depth32, true);
Now update renderTargetSize to the actual dimensions of the render target that was created:
renderTargetSize.Width = _renderTarget.Width;
renderTargetSize.Height = _renderTarget.Height;
Then create the eye render viewport array, with each entry covering half the width of the render target (one half per eye):
_eyeRenderViewport = new Rect[2];
_eyeRenderViewport[0] = new Rect(0, 0, renderTargetSize.Width / 2, renderTargetSize.Height);
_eyeRenderViewport[1] = new Rect((renderTargetSize.Width + 1) / 2, 0, _eyeRenderViewport[0].Width,
    _eyeRenderViewport[0].Height);
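For example, with a hypothetical 2000 x 1000 render target, the left eye would get the viewport (0, 0, 1000, 1000) and the right eye (1000, 0, 1000, 1000).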
Next, create the eye textures:
_eyeTexture = new D3D11TextureData[2];
_eyeTexture[0].Header.API = RenderAPIType.D3D11;
_eyeTexture[0].Header.TextureSize = renderTargetSize;
_eyeTexture[0].Header.RenderViewport = _eyeRenderViewport[0];
_eyeTexture[0].pTexture = ((SharpDX.Direct3D11.Texture2D)_renderTarget).NativePointer;
_eyeTexture[0].pSRView = _renderTargetSrView.NativePointer;
_eyeTexture[1] = _eyeTexture[0];
_eyeTexture[1].Header.RenderViewport = _eyeRenderViewport[1];
In this case both eyes share the same render target texture, with each eye drawing into its own half via its viewport. Next, get the graphics device:
var device = (Device)GraphicsDevice;
Then create the Direct3D 11 rendering configuration:
var d3D11Cfg = new D3D11ConfigData();
d3D11Cfg.Header.API = RenderAPIType.D3D11;
d3D11Cfg.Header.RTSize = _hmd.Resolution;
d3D11Cfg.Header.Multisample = 1;
d3D11Cfg.pDevice = device.NativePointer;
d3D11Cfg.pDeviceContext = device.ImmediateContext.NativePointer;
d3D11Cfg.pBackBufferRT = ((RenderTargetView)GraphicsDevice.BackBuffer).NativePointer;
d3D11Cfg.pSwapChain = ((SharpDX.DXGI.SwapChain)GraphicsDevice.Presenter.NativePresenter).NativePointer;
Next, I get the eye render configuration data through the ConfigureRendering SDK method, requesting chromatic aberration correction. If the call fails, I throw an exception:
_eyeRenderDesc = new EyeRenderDesc[2];
if (!_hmd.ConfigureRendering(d3D11Cfg, DistortionCapabilities.Chromatic,
    _hmd.DefaultEyeFov, _eyeRenderDesc))
{
    throw new Exception("Failed to configure rendering");
}
Last, I enable the Rift's low-persistence display mode and start the HMD sensor, indicating that the application supports orientation tracking and yaw correction, and that orientation tracking is required at a minimum:

_hmd.SetEnabledCaps(HMDCapabilities.LowPersistence);
_hmd.StartSensor(SensorCapabilities.Orientation | SensorCapabilities.YawCorrection,
    SensorCapabilities.Orientation);

The completed InitHmd method is in Listing 3.

Listing 3: The InitHmd Method
private void InitHmd()
{
    var renderTargetSize = _hmd.GetDefaultRenderTargetSize();
    _renderTarget = RenderTarget2D.New(GraphicsDevice, renderTargetSize.Width, renderTargetSize.Height,
        new MipMapCount(1), PixelFormat.R8G8B8A8.UNorm);
    _renderTargetSrView = _renderTarget;

    _depthStencilBuffer = DepthStencilBuffer.New(GraphicsDevice, renderTargetSize.Width,
        renderTargetSize.Height, DepthFormat.Depth32, true);

    renderTargetSize.Width = _renderTarget.Width;
    renderTargetSize.Height = _renderTarget.Height;

    _eyeRenderViewport = new Rect[2];
    _eyeRenderViewport[0] = new Rect(0, 0, renderTargetSize.Width / 2, renderTargetSize.Height);
    _eyeRenderViewport[1] = new Rect((renderTargetSize.Width + 1) / 2, 0, _eyeRenderViewport[0].Width,
        _eyeRenderViewport[0].Height);

    _eyeTexture = new D3D11TextureData[2];
    _eyeTexture[0].Header.API = RenderAPIType.D3D11;
    _eyeTexture[0].Header.TextureSize = renderTargetSize;
    _eyeTexture[0].Header.RenderViewport = _eyeRenderViewport[0];
    _eyeTexture[0].pTexture = ((SharpDX.Direct3D11.Texture2D)_renderTarget).NativePointer;
    _eyeTexture[0].pSRView = _renderTargetSrView.NativePointer;

    _eyeTexture[1] = _eyeTexture[0];
    _eyeTexture[1].Header.RenderViewport = _eyeRenderViewport[1];

    var device = (Device)GraphicsDevice;
    var d3D11Cfg = new D3D11ConfigData();
    d3D11Cfg.Header.API = RenderAPIType.D3D11;
    d3D11Cfg.Header.RTSize = _hmd.Resolution;
    d3D11Cfg.Header.Multisample = 1;
    d3D11Cfg.pDevice = device.NativePointer;
    d3D11Cfg.pDeviceContext = device.ImmediateContext.NativePointer;
    d3D11Cfg.pBackBufferRT = ((RenderTargetView)GraphicsDevice.BackBuffer).NativePointer;
    d3D11Cfg.pSwapChain = ((SharpDX.DXGI.SwapChain)GraphicsDevice.Presenter.NativePresenter).NativePointer;

    _eyeRenderDesc = new EyeRenderDesc[2];
    if (!_hmd.ConfigureRendering(d3D11Cfg, DistortionCapabilities.Chromatic,
        _hmd.DefaultEyeFov, _eyeRenderDesc))
    {
        throw new Exception("Failed to configure rendering");
    }

    _hmd.SetEnabledCaps(HMDCapabilities.LowPersistence);
    _hmd.StartSensor(SensorCapabilities.Orientation | SensorCapabilities.YawCorrection,
        SensorCapabilities.Orientation);
}
It's now time to update the Update method to read and react to the user's keyboard input. First, I get the current keyboard state:
var kbState = _kbKeyboardManager.GetState();
Next, I check whether the user has pressed the Escape key and, if so, exit the game:
if (kbState.IsKeyDown(Keys.Escape))
{
    Exit();
}
Then I check whether the user has pressed the space bar and, if so, reset the sensor:
if (kbState.IsKeyDown(Keys.Space))
{
    _hmd.ResetSensor();
}

The completed Update method should now look like Listing 4.

Listing 4: The Update Method
protected override void Update(GameTime gameTime)
{
    base.Update(gameTime);
    _view = Matrix.LookAtRH(new Vector3(0.0f, 0.0f, 7.0f), new Vector3(0, 0.0f, 0), Vector3.UnitY);
    _projection = Matrix.PerspectiveFovRH(0.9f,
        (float)GraphicsDevice.BackBuffer.Width / GraphicsDevice.BackBuffer.Height, 0.1f, 100.0f);

    var kbState = _kbKeyboardManager.GetState();
    if (kbState.IsKeyDown(Keys.Escape))
    {
        Exit();
    }

    if (kbState.IsKeyDown(Keys.Space))
    {
        _hmd.ResetSensor();
    }
}
Next, I add the DrawModel method, which renders the spaceship model at { 0, -1.5, 2.0 } with a scale of 0.003 and a time-based rotation around the y-axis:
protected virtual void DrawModel(Model model, GameTime gameTime)
{
    var time = (float)gameTime.TotalGameTime.TotalSeconds;

    // Scale the ship down, spin it around the y-axis over time and
    // position it slightly below and in front of the viewer.
    var world = Matrix.Scaling(0.003f) *
                Matrix.RotationY(time) *
                Matrix.Translation(0, -1.5f, 2.0f);
    model.Draw(GraphicsDevice, world, _view, _projection);
    base.Draw(gameTime);
}
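To show where DrawModel fits, here's a rough sketch of a per-eye Draw override using the SDK rendering path configured above. Treat the SharpOVR member names used here (BeginFrame, EyeRenderOrder, GetEyePose, EndFrame and the PoseF type) as assumptions based on the underlying 0.3.x C API; check the SharpOVR sample for the exact signatures.

protected override void Draw(GameTime gameTime)
{
    // Assumed wrapper call for ovrHmd_BeginFrame.
    _hmd.BeginFrame(0);

    // Draw the scene once per eye into the shared render target,
    // restricting each pass to that eye's half via _eyeRenderViewport.
    GraphicsDevice.SetRenderTargets(_depthStencilBuffer, _renderTarget);
    GraphicsDevice.Clear(Color.CornflowerBlue);

    var renderPose = new PoseF[2];
    for (var i = 0; i < 2; i++)
    {
        var eye = _hmd.EyeRenderOrder[i];
        renderPose[(int)eye] = _hmd.GetEyePose(eye);

        // ... set the viewport to _eyeRenderViewport[(int)eye] and build the
        // per-eye view/projection from renderPose and _eyeRenderDesc ...

        DrawModel(model, gameTime); // 'model' is the Toolkit template's loaded Model
    }

    // Assumed wrapper call for ovrHmd_EndFrame, which applies the distortion
    // mesh for each eye and presents the frame.
    _hmd.EndFrame(renderPose, _eyeTexture);
}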
