The February London VR Meetup

February 17th saw the fifth London VR Developer Meetup, hosted at the Inition offices, and it was a special one for me. This was the first time that the Oculus Rift port of Proteus was shown to the public, but more on that later.

This month was also interesting for other reasons. We had a presentation on what was discussed at the recent Steam Dev Days conference, along with two hardware demonstrations. The first piece of hardware was presented by Mike Nichols, formerly an Executive Producer on the original Kinect and now VP of Content and Applications at SoftKinetic.

The busy Inition offices

His demo concerned a ToF (time-of-flight) camera developed at SoftKinetic and perched on top of a Rift. Once you’re wearing the Rift, the depth camera’s field of view is trained on the area your hands occupy when you hold them up in front of you.

The camera and the Rift are spatially calibrated so that you see your hands through the Rift in exactly the location they occupy in real space. This allows for some extremely natural interactions with the virtual environment. What you lack in haptic feedback is made up for with proprioception.
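To make that concrete, the alignment boils down to a chain of rigid-body transforms: a fixed camera-to-headset offset from calibration, composed with the headset’s tracked pose each frame. Here’s a minimal sketch of the idea in Python (the offset values are illustrative, and none of this is SoftKinetic’s code):

```python
import numpy as np

# The depth camera is rigidly mounted on the headset, so a hand point seen
# by the camera can be mapped into world space by composing two transforms:
# camera-to-HMD (fixed, from calibration) and HMD-to-world (updated each
# frame from head tracking). The numbers here are illustrative.
CAMERA_TO_HMD = np.array([
    [1.0, 0.0, 0.0,  0.00],   # no rotation in this toy example
    [0.0, 1.0, 0.0,  0.05],   # camera sits 5 cm above the HMD origin
    [0.0, 0.0, 1.0, -0.03],   # and slightly forward of it
    [0.0, 0.0, 0.0,  1.00],
])

def hand_in_world(hand_in_camera, hmd_to_world):
    """Map a hand position from depth-camera space to world space.

    hand_in_camera: (x, y, z) point reported by the depth camera, in metres.
    hmd_to_world:   4x4 pose matrix from the headset's tracker this frame.
    """
    p = np.append(np.asarray(hand_in_camera, dtype=float), 1.0)  # homogeneous
    return (hmd_to_world @ CAMERA_TO_HMD @ p)[:3]
```

Because both transforms are known, the virtual hands land exactly where proprioception says they should be, which is what makes the interactions feel so natural.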

SoftKinetic offer a middleware library called iisu to help prototype with this arrangement of camera and headset. I’m not sure whether the Unity code they were showing is part of the package, but it was an impressive system, especially as the latency was low enough for it to be easily usable.

The Avegant Glyph

The next demo was for a current Kickstarter project, the Glyph. As I write this there are 15 hours left in the campaign, and they have $1.4M from over 3100 backers. The unit itself looks and feels like a fairly bulky set of headphones, with the idea that you can use them purely for audio when you’re not using them for video.

Once the headband is situated in front of your eyes, the image is bright and clear, with only a tiny amount of chromatic aberration from the current prototype lenses. The field of view, however, is extremely narrow, so it’s definitely not in the same league as the Oculus Rift. The Glyph is also significantly more expensive, weighing in at $200 more than the Rift’s Kickstarter prototype.

They’re obviously chasing a different market, and I could see these working in space-constrained environments like planes and trains. Having tried them, I find the full-periphery devices a more enjoyable experience, but it’s a fascinating bit of technology and one to watch over the next 12 months.

Proteus

In a darkened room at the back, two of us regulars were demonstrating games on the Rift. Albert Bentall was there with his excellent Sandman, a hybrid of a canoe simulator and character-based narrative story set in a beautifully realised fantasy world. It’s mad and brilliant, and I can’t wait for the final game.

And last but not least I got a chance to show off the hard work that Aubrey and I have put in over the last year, and that is the working demo of Proteus!

Days since last report of motion sickness: 1̶5̶ 0

People were generally very positive, with only one report of motion sickness (although I can possibly blame that on the debug build, which enables sprinting via the shift key!). This build didn’t demo any of our control-system experiments, and it wasn’t rendering the full-size generative landscape, but we’ll be addressing both of those areas after the upgrade to SDL2.

One of the areas Aubrey and I share a passion for is how you control your experience while wearing the Rift. Aside from the technical achievement of getting Proteus to render stereoscopically with the right distortion shaders, the real challenge of VR comes in how we move and interact in the virtual world.
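For the curious, that distortion is a radial “barrel” warp applied per pixel: texture coordinates are pushed outward by a polynomial in their squared distance from the lens centre, cancelling the pincushion effect of the Rift’s lenses. Here’s a rough Python sketch of the warp function; the coefficients follow the defaults the early Oculus SDK shipped for the DK1, but treat the rest as schematic rather than Proteus’s actual shader:

```python
import numpy as np

# Radial "barrel" warp: each texture coordinate is scaled outward by a
# polynomial in its squared distance from the lens centre. K follows the
# early Oculus SDK defaults; the rest is a schematic sketch.
K = (1.0, 0.22, 0.24, 0.0)

def warp(uv, centre=(0.5, 0.5)):
    """Barrel-distort a normalised texture coordinate around the lens centre."""
    theta = np.asarray(uv, dtype=float) - centre  # offset from lens centre
    r2 = theta @ theta                            # squared radius
    scale = K[0] + r2 * (K[1] + r2 * (K[2] + r2 * K[3]))
    return centre + theta * scale

# A point halfway out from the centre gets pushed noticeably outward.
print(warp((0.75, 0.5)))  # -> roughly [0.754, 0.5]
```

In the real pipeline this runs in a fragment shader over the rendered frame for each eye, but the maths is the same.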

I’m particularly interested in what happens when you turn your smartphone into a single-touch, six-axis controller. The next stage I want to get Proteus to is a companion app running on iOS that lets you use swipes or taps on the screen to indicate movement in the game world. This is similar to, but not quite the same as, the system used in Papa Sangre.

In Papa Sangre, the entire game takes place aurally, so the controls presented on the phone are simplified to the point where you needn’t look down at the phone at all during play. Exactly the same constraint applies in VR: with the Rift on, you can’t see the phone anyway.
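To give a flavour of what I mean, here’s a hypothetical sketch of the game side of such a companion app: the phone fires off small JSON datagrams describing swipes and taps, and the game maps them to movement without the player ever needing to look at the screen. The message format and port are invented for illustration:

```python
import json
import socket

# Hypothetical game-side receiver for a phone companion controller. The
# phone sends small JSON datagrams over the local network; the game turns
# them into movement commands. Protocol and port are invented.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9000))

def poll_movement():
    """Block for one touch event and map it to a movement command."""
    data, _addr = sock.recvfrom(1024)
    event = json.loads(data)
    if event["type"] == "swipe":
        # dx/dy are normalised to [-1, 1]: vertical swipes drive forward
        # and back, horizontal swipes strafe. Heading comes from the Rift.
        return {"forward": -event["dy"], "strafe": event["dx"]}
    if event["type"] == "tap":
        return {"action": "interact"}  # a single tap triggers an interaction
    return None
```

The key design point is that every gesture is relative, so there’s no on-screen layout to aim for and nothing that requires looking down.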

This is one potential solution for movement, but there are plenty of others, and interaction poses another layer of complexity. Do you stick with crosshairs and pointers? Do you require virtual proximity? Can you easily change the type of interaction?
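One sketch of how crosshairs and proximity could combine: cast a ray along the player’s gaze (the Rift gives you head orientation for free) and only allow the interaction when the target is also within reach. This is a hypothetical illustration, not code from Proteus:

```python
import numpy as np

# Gaze-plus-proximity interaction test: the target must sit near the centre
# of the view (the "crosshair") and within arm's reach (the "virtual
# proximity"). Thresholds are invented for illustration.
INTERACT_RANGE = 2.0   # metres: the virtual-proximity requirement
GAZE_TOLERANCE = 0.95  # cosine of the maximum angle off the crosshair

def can_interact(eye_pos, gaze_dir, target_pos):
    """True if the target is close enough and near the centre of the view.

    eye_pos, target_pos: (x, y, z) world positions; gaze_dir: unit vector.
    """
    to_target = np.asarray(target_pos, dtype=float) - eye_pos
    distance = np.linalg.norm(to_target)
    if distance > INTERACT_RANGE:
        return False
    alignment = np.dot(to_target / distance, gaze_dir)  # both unit vectors
    return alignment > GAZE_TOLERANCE
```
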

This is something I’ll be returning to in future updates, and I’ll also be at the next London VR meetup.