Since I already have a series going about Music Games, I decided that it can’t hurt to start another one; this new series will revolve around the experimental systems I have made as a game designer / developer. These articles will be a little more technical than my past ones, but they will still cover the critical design ideas behind each prototype. So without further ado, here is my intent statement for this first project:
“Using only audio loops and inputs, I intend to make a meditative sound experience based solely on the player’s own timing and triggering of music. By doing so I expect to present players with a more immersive sound experience taking place in their mind with no visuals; in essence, creating a game that blind people can play with their ears.”
I think it’s fitting that the first article in this series is about music games! As you all know from past articles, I have been researching music games for a while. My main goal in constructing this experience was to make a rhythm game that didn’t rely on the heavy visual cues that most music games depend on. To accomplish this I spent most of my time designing a system that has no visuals but still indicates to players when to press a key: this led to the audio serving as the primary means of communicating with players, rather than just carrying supporting information.
I started by setting a few rules to help shape the experience. The first was to lock the BPM (Beats Per Minute) of all the audio tracks to be produced at 120. In addition, all audio loops would span 2 seconds — at 120 BPM that is four beats, or exactly one 4/4 measure — which would in theory make it easier to cue players in to the timing of the loops. In the process of doing this, though, I realized that there wouldn’t be much of a challenge after the player figures out the initial mechanic. I later decided to break this strict time-span rule for testing purposes.
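The timing arithmetic behind that rule is simple enough to sketch. This is an illustrative snippet, not the actual Unity code; all names here are hypothetical:

```python
def loop_timing(bpm: float = 120.0, beats_per_measure: int = 4):
    """Return (seconds per beat, seconds per measure) for a given tempo."""
    seconds_per_beat = 60.0 / bpm                               # 120 BPM -> 0.5 s per beat
    seconds_per_measure = seconds_per_beat * beats_per_measure  # 4 beats -> 2.0 s
    return seconds_per_beat, seconds_per_measure

beat, measure = loop_timing()
# A 2-second loop therefore spans exactly one 4/4 measure at 120 BPM.
```

Locking every loop to that 2-second grid is what lets loops start and stop against each other without drifting out of phase.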
With all this in place, I still needed a way to make sure that players could easily identify when to press the correct inputs. Having dabbled in spatialized audio in VR, I decided to build the game around left- and right-ear cues, which gave the game a headphone requirement. It also helped inform how players would play, as I could divide the input up into left- and right-hand keys.
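The core of the left/right cue idea can be sketched like so. This is a hypothetical stand-in for the real implementation (which lives in Unity, where hard-panning maps to something like `AudioSource.panStereo`):

```python
from dataclasses import dataclass

@dataclass
class Cue:
    side: str  # "left" or "right"

    @property
    def pan(self) -> float:
        # Stereo pan in the common -1.0 (fully left ear) .. +1.0 (fully
        # right ear) convention. Each cue is panned hard to one ear so the
        # player knows which hand should answer it.
        return -1.0 if self.side == "left" else 1.0
```

Hard-panning rather than subtle spatialization is deliberate: with no visuals, the ear assignment has to be unambiguous.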
A side goal for this game was to make it accessible for blind people. I chose to map the starting keys to [ F ] and [ J ], as those carry the tactile homing bumps on a standard QWERTY keyboard and are the easiest keys to find by touch.
At this point, I had a pretty good understanding of how the game would work: you press a key with your right hand, wait for the audio cue to bounce to your left ear, press the corresponding key with your left hand, and try to play all the patterns you can. Originally I had eight loops planned, but as each hand only has five fingers, I decided to cut that in half (this isn’t supposed to be a full game anyway).
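The call-and-response loop described above reduces to an alternating sequence of sides the player must follow. A minimal sketch, with purely illustrative names:

```python
def bounce_sequence(first_side: str, loop_count: int = 4):
    """Yield the alternating ear/hand sides for one bounced pattern.

    A press on one side triggers a cue that bounces to the opposite ear,
    where the matching key on that hand completes the exchange.
    """
    side = first_side
    for _ in range(loop_count):
        yield side
        side = "left" if side == "right" else "right"

print(list(bounce_sequence("right")))  # ['right', 'left', 'right', 'left']
```

The challenge, then, isn’t *which* key to press — the ear tells you that — but *when*, since the only metronome is the loop itself.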
So, all this being said, I sat down to prototype it. I knew this would be a real pain to hard-code if I ever wanted to change any of the audio tracks (which I did, eventually), so instead of programming a game, I programmed a tool that would help me make the game. Using the Unity Editor, I constructed a state machine with four variables covering the customization I deemed essential to building this experience.
The customization functionality I decided upon was the ability to set an audio clip, mute an audio clip, and re-position the object based on a selected time (usually in accordance with the audio clip’s length). Note that I am using an object field for the audio clip; this technically breaks the game if left unset, so I implemented a warning system to remind users (even though only I would be using it) that the game will not function if the clip is unset.
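The per-state settings can be sketched roughly as follows. The real tool is a Unity Editor extension in C#; these field and function names are hypothetical stand-ins:

```python
from dataclasses import dataclass
from typing import Optional
import warnings

@dataclass
class AudioState:
    clip: Optional[str] = None   # the assigned audio clip (an object field in Unity)
    muted: bool = False          # muted states let longer clips play over them
    start_time: float = 0.0      # when to re-position the object, in seconds

    def validate(self) -> bool:
        # Mirrors the warning system: an unset clip silently breaks playback,
        # so flag it loudly at edit time instead of failing at runtime.
        if self.clip is None:
            warnings.warn("AudioState has no clip set; the game will not function.")
            return False
        return True
```

Validating at edit time like this is the whole point of building a tool rather than hard-coding the patterns: mistakes surface while authoring, not mid-playthrough.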
This is what it looks like in the Unity Editor. With this simple framework, I could make just about any pattern with minimal effort. Note that some states are muted because the first audio clip is long enough to simply play over them. With this framework I could make some genuinely complicated bouncing sounds for the player if the experience needed it. Fortunately, it didn’t.
So how did I do? While the majority of people understood the game, I feel that in order to prioritize the zen quality I originally wanted the game to have, I would need to majorly rework how the mechanic and tutorial are presented to the player — something less game-like and more free-form, like a toy. The good news is that the feedback I received was incredibly focused; testers from the focus group session were keen to point out that the game was close to hitting the mark of being tranquil rather than frustrating.
Overall, the experience is a pretty solid representation of what I set out to do, as it gives players audio loops and rules to build patterns from without relying on visuals. The game succeeds in proving that players can recognize timing without visual indicators. If I were to pursue this idea further, I might create some visualization of the gameplay while making sure the game remains completable with audio alone. Additionally, dropping the ‘game’ tag and making it a freeform experience without punishment might be a better path for this prototype.
More interesting systems to come!