In a recent post I described the long and winding road that got us to the launch of Magic & Magnums and how its weird development path allowed us some atypical freedoms we wouldn't have had if we were concerned with things like…oh, say…making money.
The biggest effect on gameplay came from our work with RealSense and the effort to really reimagine a spatial game interface. But all of that has been covered elsewhere so I won’t do it again here. (But yes, there will be a RealSense version whenever that hardware hits the streets…hopefully that’s RealSoon.) I want this blog to be about the money stuff.
Monetization and the Curse of Free2Play
We’re in no way out in front of this conversation. Many folks, including some friends, have written on their decision to forsake the Free To Play model. But it’s worth saying that I don’t hate F2P. There are F2P games that I quite enjoy. But despite liking them and investing time in them, I realize that it’s vanishingly rare that I convert to a paying customer. Now of course I don’t feel guilty about it…much. After all, it’s the developer’s choice to offer the fruit of their hard labor for free – right? So it’s not exactly like I’m stealing or taking advantage of them…am I?
(This white paper is also published in Intel Developer Zone here)
If virtual input devices like those that can be created with Intel® RealSense™ technology, merged with an appropriate Natural User Interface (NUI), are to compete with, or replace, established physical inputs like the mouse and keyboard, they must address the matter of text input. Current-gen NUI technology has done a reasonable job of competing with the mouse, a visual and spatially contextualized input method, but it has fallen notably short of competing with the keyboard.
The primary problem facing a virtual keyboard replacement is speed of input, and speed is necessarily related to accuracy. However, as sensor accuracy continues to improve, we can see other challenges arise that may prove more difficult to address.
In this paper I start with the assumption that sensor accuracy does, or soon will, allow the detection of small hand movements of a degree similar to that of keystrokes. Then I examine the opportunities, challenges, and some possible solutions for virtualized textual input without the need for a physical peripheral beyond the camera sensor.
A Few Ground Rules For This Discussion
For the sake of this paper, I’ll be discussing the potential replacement of a western style keyboard. The specific configuration, QWERTY or otherwise, is irrelevant to the main point. With that in mind, keyboards can be considered in a hierarchy of complexity, from a numeric 10-key through extended computer keyboards that include letters, numbers, punctuation, and various macro and function keys.
As noted above, the factors most likely to make or break any proposed replacement for the keyboard are speed, followed by accuracy. Sources disagree on the average typing speed of a modern tech employee, and Words Per Minute (WPM) may be an improper measure of keyboarding skills for code writing, but it will serve as a useful comparative metric. I will assume that 40 WPM is a reasonable speed, and methods that cannot realistically reach that goal given an experienced user should be discarded.
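As a concrete point of reference, WPM is conventionally computed by counting every five characters (spaces included) as one "word." A minimal sketch of that benchmark, where the 40 WPM threshold comes from the assumption above and the five-characters-per-word convention is the only other assumption:

```python
# Conventional WPM benchmark: five characters (including spaces) = one "word".
def words_per_minute(chars_typed: int, seconds: float) -> float:
    """Return typing speed in words per minute."""
    return (chars_typed / 5) / (seconds / 60)

def meets_threshold(wpm: float, threshold: float = 40.0) -> bool:
    """True if a proposed input method reaches the target speed."""
    return wpm >= threshold

# Example: 300 characters in 90 seconds works out to exactly 40 WPM.
speed = words_per_minute(300, 90)
print(speed, meets_threshold(speed))  # -> 40.0 True
```

By this measure, a virtual input method would need to sustain roughly 200 characters per minute of accurate input to be worth considering.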
I will also be focused on using the Latin alphabet; however, other alphabetic languages are similar in application. What is not considered here, though well worth exploring, is the virtualization of logographic input. It’s conceivable that gestural encoding of conceptual content would be faster than, and superior to, gestures that encode phonemes, and may even represent a linguistic evolutionary advance from logographic systems like Kanji. That said, a very likely use case for this kind of technology is writing computer code, in which case letter-by-letter input is an overarching consideration.
This week we wrapped up V 1.0 of a new game and soon, Lord willing, Code-Monkeys will launch Magic & Magnums. (Dec 5 update: we locked up beta late last night and submitted to iOS. Now we’ll get various other builds wrapped up and submitted in the following days…just in time for Christmas…yippee!) It’s a goofy, spoofy, arcade game that is also an evolution of our game Santa’s Giftship (on iOS and Kindle) from 2011. And, as a sad matter of fact, this is the first real game release we’ve had since Suitor Shooter (iOS) in 2012…yikes!
But we have a good excuse…we really do…and it’s all connected.
Oh SG1 Gunship, where art thou?
Shortly after Suitor Shooter was in the store we got a call from a friend who was working on a project that sounded pretty fantastic. He had inked a deal to make a few games based on the Stargate SG1 TV show and he needed a quick and simple game that could be set in the Stargate world and be ready for ComicCon…in 3 weeks. It was meant to be a wham-bam quick project for marketing purposes, not a thoughtful, deep exploration of game design principles. Given the parameters we thought an SG1 version of Suitor Shooter could be put together quickly and off we went on a 3-week sprint without any expectation of it going any farther. ComicCon went off as planned but our friend wasn’t able to put the rest of the details together in that short time and wound up staying out of it – and so did our game. That was the first of a long chain of ‘almost launch’ events for what was then being called Stargate Gunship…a game that would, in the end, never come to be.
IDF 2014 was not the first time we’ve been honored to have a tech demo on the floor, and having been to this rodeo before, we’ve learned a thing or two about floor demos. First among those things: Keep It Simple Sherlock. So with that lesson in mind we created Cloak&Badger – a very simple game mechanic that splurged on the eye candy (a wonderfully animated badger guard) and did exactly one thing: it used the recently updated RealSense (Beta) SDK and its emotion sensing API. The entire game worked as you made faces at the camera…that’s it…and it was a blast!
Cueing the player to what emotion drove the RPG-style dialog tree in a particular direction was straightforward, and folks had tons of fun trying on the various emotive states that the API supports. (These include Joy, Sadness, Disgust, Contempt, and Surprise, plus the more general “sentiments”: Positive, Neutral, and Negative.) By the rules of our KISS constraint it was an unqualified success, and we had tons of smiles, laughs, and genuine fun all week.
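To sketch how that kind of cueing can work: the emotion label reported by the sensor simply selects a branch in the dialog tree. The emotion labels below mirror the API list above, but the `DialogNode` structure and the sample lines are invented for illustration, not the actual Cloak&Badger code or RealSense SDK:

```python
# Hypothetical sketch: routing a detected emotion to a dialog-tree branch.
# Emotion labels mirror the RealSense (Beta) emotion API list; the
# DialogNode structure and sample dialog are invented for illustration.
from dataclasses import dataclass, field

EMOTIONS = {"joy", "sadness", "disgust", "contempt", "surprise"}
SENTIMENTS = {"positive", "neutral", "negative"}

@dataclass
class DialogNode:
    line: str
    branches: dict = field(default_factory=dict)  # emotion label -> DialogNode

    def next_node(self, detected: str) -> "DialogNode":
        """Follow the branch for the detected emotion, or stay put."""
        return self.branches.get(detected, self)

# A tiny tree: the badger guard reacts to the face the player makes.
guard = DialogNode("Halt! Who goes there?")
guard.branches["joy"] = DialogNode("A friendly face! You may pass.")
guard.branches["contempt"] = DialogNode("I don't like that look one bit...")

node = guard.next_node("joy")
print(node.line)  # -> "A friendly face! You may pass."
```

The appeal of the mechanic is visible even in this toy version: the player's face is the controller, and unrecognized expressions safely leave the conversation where it is.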
Soma Games wrote our first line of game code at the tail end of 2008, just as the iPhone was really blowing up and as it happened, we were in the right place at the right time. It wasn’t on purpose, it was partly opportunistic, but it worked out. We rode that mobile wave for years and were part of the Indie Game Renaissance it helped generate. (See: Polygon, GDC, Wired)
One Rig to Rule Them All…
What made mobile so attractive was, of course, the low barrier to entry, but that was only what got us interested. What kept us interested was the demonstrated market for indie games. Hardware constraints initially leveled the playing field: big studios had a much less pronounced quality and scope advantage, so nimble little shops like Soma Games could compete and still land a feature from Apple or get covered by Kotaku.
Now it’s 2014 and as far as I can tell, the mobile space is no longer interesting for indies. I’m not the only one saying so either. (See: Gamasutra, and this…for a start.) In fact, just about every other indie mobile developer I’ve gotten to know over the last several years is coming to the same conclusion. Mobile is over, let’s do PC games.
IDF is always a great place to get a glimpse of upcoming technology, and while some portion of what you see there never quite makes it to the real market, a trained eye can start to sense which ideas really have legs and are likely to keep going. This year, the stars that caught my attention were the consumer scale robots with Edison tech, and wireless everything.
It was sixteen months ago that we posted our first blog regarding Redwall, or Project Mouseworks. Shortly thereafter we launched our AbbeyCraft kickstarter, it funded, and then roughly a year ago this month AbbeyCraft was released. All going well so far. The plan at that point, as far as we could see it, though shrouded in some pretty dense fog, was to wrap up a modest private funding effort, build a modest adventure game and then see what happened. It was a pretty straightforward plan and while Redwall was obviously a big thing, our goals were fairly short term and limited. But something happened on the way to that pivot and while it’s cost us some time I hope you’ll see it as something overall quite positive – I know we do.
Setback #1: If I’m honest, I was just horribly naive about how the private funding world works. I’d never done it before but, all things considered, it felt like the right play as opposed to either a traditional publishing deal or taking a second draught at the crowd funding trough. I’ll certainly write more about this experience in the future but suffice it to say that I underestimated the time this was going to take. On its surface that sounds like a bad thing, and it was certainly wretchedly frustrating at times, but as I’ll describe below I think it was actually a blessing in disguise.
As part of the continuing series covering our experience with the RealSense technology from Intel, I’ve been thinking about gestures…
I’ve been saying for a long time that one of the keys to Apple’s success in getting developer buy-in for iOS was the very approachable and well designed tool kit they provided in X-Code. It was as if they polled 100 random potential coders and asked, “If you made an iPhone app, what’s the first thing you would want to tinker with?” and then made all of those APIs easy to find and easy to use. The result was a tool kit that rewarded you early for modest effort and thereby encouraged developers to try more, to get better, and to keep exploring the tool kit for the next cool thing. It made adoption of something totally new feel manageable and rewarding. That not only encouraged the curiosity crowd, but also the business-minded crowd who have to ask, “How long will it take to adopt this tech? And is it likely to be worth it?” So long as the first answer is “not too much,” the second question is less acute. The point being: it enabled early adopters to show off quickly. That drew in the early followers and the dominoes fell from there.
RealSense would benefit greatly from this lesson. Hardware appears to be in the pipe and we’re adequately impressed by the capability – check. A Unity3d SDK (among several others) is looking really sharp – check. So now I’m thinking about the question, “…What’s the first thing I want to tinker with?” and probably 75% of my ideas revolve around gestures. In fact, gestures are probably the essential component of this input schema and as such, it will be make-or-break for Intel to make gestures easy to get started with and also deep enough to explore, experiment, and mod. But Easy needs to come first…
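What "easy to get started" might look like in practice: a small registry that maps named gestures to callbacks, so a developer's first tinkering session is just one registration call and a handler. This is a sketch of the developer experience being argued for, not the RealSense SDK; the gesture names and event payload here are assumptions:

```python
# Hypothetical gesture-to-action registry -- the kind of minimal API surface
# that rewards a first tinkering session. The gesture names ("swipe_left")
# and the event payload are invented, not the actual RealSense SDK.
class GestureRouter:
    def __init__(self):
        self._handlers = {}

    def on(self, gesture: str, callback) -> None:
        """Register a callback to fire when the named gesture is detected."""
        self._handlers[gesture] = callback

    def dispatch(self, gesture: str, **event) -> bool:
        """Fire the handler for a detected gesture; report whether one fired."""
        handler = self._handlers.get(gesture)
        if handler is None:
            return False
        handler(**event)
        return True

router = GestureRouter()
router.on("swipe_left", lambda confidence: print(f"page back ({confidence:.0%})"))
router.dispatch("swipe_left", confidence=0.92)  # prints "page back (92%)"
```

The deeper layer (custom gesture training, raw joint data) can live underneath an entry point like this, but the two-line registration is what gets a curious developer to their first win.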
Coming back recently from CGDC has me thinking again about something I always think about at CGDC – whether or not we’re the “black sheep” of that group…and if we are, is that a good thing or a bad thing.
Last year at the end-of-conference Town-Hall part, where everybody can basically bring up anything they want, Mikee Bridges from GameChurch said something that brought this idea back to the front of my mind. I don’t remember exactly what he said but it was something along the lines of “Are all of our [game projects] actually serving the function of outreach?”
It’s a perfect question for Mikee. After all, GameChurch’s mission statement is one of outreach – specifically an outreach to gamers. But I was surprised at how quickly my mouth popped open and I said “that’s not what we’re doing…” And I’ve been pondering that brief exchange ever since.
Continuing our series on Intel’s new/upcoming RealSense technology: we recently got the alpha build of their Unity3D enabled SDK and a much improved version of the camera. While the package is cool and opens up a lot of interesting theoretical possibilities, it got us thinking about the practical questions surrounding this tech.
RealSense is, at bottom, an input device. In that sense it will be measured against things like joysticks, mice, and game controllers, and as developers trying to make a living with this tech, we’ll be looking at several things beyond the “cool” factor. Things like:
Typical hardware profile
Time/cost to implement
When we’re being compensated to experiment and do basic R&D (and – full disclosure again – we are) we can ignore basically all of these considerations, but when we move past that and start to explore actually deploying such tech…suddenly the calculus changes dramatically.