(by Jon Collins, on behalf of Soma Games and Code-Monkeys)
This article is part of a series that documents our ‘Everyman’ experience with the new RealSense hardware and software being developed by Intel.
Full disclosure, Intel does pay us for some of this stuff but one of my favorite aspects of working with them is that they aren’t asking us to write puff-pieces. Our honest, sometimes critical, opinions are accepted…and even seem to be appreciated…so we got that going for us.
A First Look
There’s no denying that the pre-alpha SDK is exactly what it says on the box: a pre-alpha. That said, there’s a surprising amount of useful functionality to be gleaned by looking deeper into the C# samples that are included and applying lessons learned from previous SDKs.
First off, the kit includes a Unity3D sample (there is just the one in the current package): the Nine Cubes sample, found within the frameworks folder of the samples directory structure.
This gives us a good starting point for learning how to take advantage of the camera and SDK. A few red herrings are present, which may be hangovers from development versions, but it gave us enough of an idea to explore further and adapt some of the separate C# samples, bringing that functionality into our initial Unity3D project. (CS: We use Unity3D almost exclusively here at Soma Games, so having this bridge to RealSense was a practical prerequisite for us to consider adopting RealSense.)
For this exercise we were primarily concerned with being able to track and record finger joint positioning within Unity3D. The available methods and documentation suggest there is a planned ability to load, save, and recognize gestures from a pre-defined library, but after a little digging and running questions up to the dev team it appears that feature has been ‘delayed’ 🙁 So, with our hopes dashed at not finding a C# gesture viewer sample, we wanted to see how, or even if, we could access the joints ourselves and explore developing our own approach to logging finger and hand poses.
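To make “logging poses” a little more concrete, here’s a minimal sketch of the kind of record we had in mind: a timestamped snapshot of every tracked joint position for one hand. The type and field names below are our own illustration, not anything provided by the SDK; the 0x20 joint count simply matches what the C# samples use.

using UnityEngine;

// Illustrative only: one logged sample of a hand pose.
// 0x20 (32) matches the joint count used in the SDK's C# samples.
[System.Serializable]
public struct HandPoseSample
{
    public float timestamp;          // Time.time when the sample was taken
    public Vector3[] jointPositions; // one entry per tracked joint

    public HandPoseSample(float time, Vector3[] joints)
    {
        timestamp = time;
        jointPositions = joints;
    }
}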
Rolling Our Own
We’ll now look at several pertinent pieces of code we found to be helpful in our quest to track the finger joints.
The first useful clue in our investigation was the hand’s center-of-mass tracking in SenseInput.cs from the Nine Cubes Unity project:
/* Retrieve hand tracking data if ready */
PXCMHandAnalysis hand = sm.QueryHandAnalysis();
//hand.LoadGesturePack ("test");
if (hand != null)
{
    PXCMHandAnalysis.HandData data;
    pxcmStatus sts = hand.QueryHandData(
        PXCMHandAnalysis.AccessOrder.AccessOrder_ByTime, 0, out data);
    if (sts >= pxcmStatus.PXCM_STATUS_NO_ERROR)
        OnHandData(data);
}
OnHandData is passed pretty much all the pertinent data for the actual hand, and from that we were able to derive the hand’s center of mass.
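For example, inside OnHandData the center of mass can be read straight off the HandData struct and dropped onto a Unity transform. A minimal sketch follows; the massCenterWorld field name is an assumption on our part (check PXCMHandAnalysis.HandData in your SDK drop), and the scale/axis mapping is just an example value for a Unity scene.

// Sketch only: move this MonoBehaviour's object to the hand's center of mass.
// massCenterWorld is an assumed field name from the pre-alpha headers.
void OnHandData(PXCMHandAnalysis.HandData data)
{
    PXCMPoint3DF32 center = data.massCenterWorld;

    // The camera reports positions in its own space; the axis flip and
    // scale factor here are example values, not SDK-mandated ones.
    transform.position = new Vector3(center.x, center.y, -center.z) * 10f;
}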
And from Handrecognizer.cs in the C# samples we found the following:
/* Displaying current frames hand joints */
private void DisplayJoints(PXCMHandAnalysis handAnalysis)
{
    if (this.form.GetJointsState() || this.form.GetSkeletonState())
    {
        //Iterate hands
        PXCMHandAnalysis.JointData[][] nodes = new PXCMHandAnalysis.JointData[][]
        {
            new PXCMHandAnalysis.JointData[0x20],
            new PXCMHandAnalysis.JointData[0x20]
        };
        for (uint i = 0; i < handAnalysis.GetNumberOfHands(); i++)
        {
            //Get hand by time of appearance
            PXCMHandAnalysis.HandData handData = new PXCMHandAnalysis.HandData();
            if (handAnalysis.QueryHandData(
                    PXCMHandAnalysis.AccessOrder.AccessOrder_ByTime,
                    i, out handData) == pxcmStatus.PXCM_STATUS_NO_ERROR)
            {
                //Iterate Joints
                for (int j = 0; j < 0x20; j++)
                {
                    nodes[i][j] = handData.trackedJoints[j];
                }
            }
        }
        this.form.DisplayJoints(nodes);
    }
    else
    {
        this.form.DisplayJoints(null);
    }
}
This provided us with the clues we needed to start implementing on-screen feedback for the various finger joints being tracked.
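As a rough preview of where this is heading, here’s a minimal Unity-flavored sketch that combines the two samples above: query a hand, walk its trackedJoints array, and park a small sphere at each joint. The SDK calls mirror the snippets shown earlier, but the positionWorld field name and the scaling are assumptions on our part and may differ in your SDK drop.

using UnityEngine;

// Sketch only: visualize tracked hand joints as spheres in a Unity scene.
public class JointMarkers : MonoBehaviour
{
    const int MaxJoints = 0x20;   // joint count used by the C# samples
    GameObject[] markers;

    void Start()
    {
        markers = new GameObject[MaxJoints];
        for (int j = 0; j < MaxJoints; j++)
        {
            markers[j] = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            markers[j].transform.localScale = Vector3.one * 0.1f;
        }
    }

    // Call each frame with the PXCMHandAnalysis instance obtained from the
    // SenseManager, as in the SenseInput.cs snippet above.
    public void UpdateMarkers(PXCMHandAnalysis hand)
    {
        PXCMHandAnalysis.HandData handData;
        if (hand.QueryHandData(
                PXCMHandAnalysis.AccessOrder.AccessOrder_ByTime,
                0, out handData) != pxcmStatus.PXCM_STATUS_NO_ERROR)
            return;

        for (int j = 0; j < MaxJoints; j++)
        {
            // positionWorld is an assumed field name; check JointData in your drop.
            PXCMPoint3DF32 p = handData.trackedJoints[j].positionWorld;
            markers[j].transform.position = new Vector3(p.x, p.y, -p.z) * 10f;
        }
    }
}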
In the next portion of this post we’ll explore how we took these various examples and built a simple script to track the positioning of the finger joints in 3D space.
Hi,
I want to do drag and drop with multiple objects. How can I do this in Unity3D with RealSense? Please help me.
great post, 🙂
I really want to know about voice recognition with Intel RealSense in Unity. Could you please give us an example of it?
thanks 🙂
Hi Irsal – you betcha. We’re using voice recognition commands in the RealSense version of Magic & Magnums (https://www.facebook.com/codemonkeys), where voice commands provide several key features like weapon selection and squad retreat.
From a programming perspective, voice recognition is some of the easiest functionality to implement. The Nuance-based system basically just needs to be started, and then you provide a list of words you want it to listen for; those words get attached to actions. As an example, we listen for the words one, two, and three. When a word is recognized, RealSense knows that “one” selects weapon one in exactly the same way as a button click.
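To show the shape of that “word list attached to actions” idea, here’s a minimal Unity-side sketch of the dispatch pattern. The dictionary and the OnWordRecognized callback are our own illustration; the actual RealSense/Nuance setup call that delivers the recognized word isn’t shown here, and its API names vary between SDK releases.

using System;
using System.Collections.Generic;
using UnityEngine;

// Sketch of the command-dispatch pattern described above. Wire
// OnWordRecognized up to whatever speech callback your SDK drop provides.
public class VoiceCommands : MonoBehaviour
{
    Dictionary<string, Action> commands;

    void Start()
    {
        // The same words we hand to the recognizer as its listening list.
        commands = new Dictionary<string, Action>(StringComparer.OrdinalIgnoreCase)
        {
            { "one",     () => SelectWeapon(1) },
            { "two",     () => SelectWeapon(2) },
            { "three",   () => SelectWeapon(3) },
            { "retreat", SquadRetreat }
        };
    }

    // Called with the recognized word; behaves exactly like a button click.
    public void OnWordRecognized(string word)
    {
        Action action;
        if (commands.TryGetValue(word, out action))
            action();
    }

    void SelectWeapon(int slot) { Debug.Log("Selected weapon " + slot); }
    void SquadRetreat()         { Debug.Log("Squad retreat!"); }
}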
I will say, however, that voice recognition can be tricky in real-world settings. It’s not uncommon for the system to overhear the conversation at the next table and trigger actions you didn’t want. Or, in other places, it has a hard time recognizing certain words and you wind up shouting at it (as if that makes it better). So I think it’s smart to do some field testing to find out where voice recognition is a good fit and where it can cause unintended issues.
Either way – welcome to RealSense!
Hey, it’s time that you post the next part! Go 3D!