The open-source code released today will allow developers to experiment with interactive art installations. While previously these interactive experiences were developed from the ground up, Google’s Interactive Spaces will give developers the tools they need to easily innovate in the space. Read the excerpt from the Google Open Source blog below:
Today, we announce the release of Interactive Spaces, a new API and runtime which allows developers to build interactive applications for physical spaces. Imagine walking into a room where the room recognizes where you are and responds based on your position.
You can see an example above. There are cameras in the ceiling doing blob tracking; in this case, the blobs are people walking on the floor. The floor then responds to the blobs by having colored circles appear underneath the feet of anyone standing on it and then having those circles follow that person around.
Interactive Spaces works by having “consumers” of events, like the floor, connect to “producers” of events, like those cameras in the ceiling. Any number of “producers” and “consumers” can be connected to each other, making it possible to create quite complex behavior in the physical space.
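To make that wiring concrete, here is a minimal sketch in plain Java of one producer (the ceiling camera) feeding events to a connected consumer (the floor). The class and method names below are invented for illustration only; they are not the actual Interactive Spaces API.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/** An event carrying a tracked blob's position on the floor. (Hypothetical type.) */
class BlobEvent {
    final int blobId;
    final double x, y;
    BlobEvent(int blobId, double x, double y) {
        this.blobId = blobId;
        this.x = x;
        this.y = y;
    }
}

/** A "consumer" of events, like the floor. */
interface EventConsumer {
    void onEvent(BlobEvent event);
}

/** A "producer" of events, like the ceiling cameras; any number of consumers can connect. */
class EventProducer {
    private final List<EventConsumer> consumers = new CopyOnWriteArrayList<>();

    void connect(EventConsumer consumer) {
        consumers.add(consumer);
    }

    void publish(BlobEvent event) {
        for (EventConsumer consumer : consumers) {
            consumer.onEvent(event);
        }
    }
}

public class SpaceDemo {
    public static void main(String[] args) {
        EventProducer ceilingCamera = new EventProducer();

        // The floor reacts by drawing a circle under each tracked person.
        EventConsumer floor = event ->
            System.out.printf("circle for blob %d at (%.1f, %.1f)%n",
                              event.blobId, event.x, event.y);

        ceilingCamera.connect(floor);

        // Simulate the camera reporting one person walking across the room.
        ceilingCamera.publish(new BlobEvent(1, 2.0, 3.5));
        ceilingCamera.publish(new BlobEvent(1, 2.4, 3.6));
    }
}
```

Because producers broadcast to a list of consumers rather than to a single hard-wired target, any number of each can be connected, which is what makes the more complex behaviors possible.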
Interactive Spaces is written in Java, so it can run on any operating system that supports Java, including Linux and OS X, and soon Windows.
Interactive Spaces provides a collection of libraries for implementing the activities that will run in your interactive space. Implementing an activity can require anything from a few lines in a simple configuration file to implementing the necessary interfaces entirely from scratch. The former gets you off the ground very quickly but limits what your activity can do, while the latter gives you the most power at the cost of more complexity. Interactive Spaces also provides a runtime environment for activities, allowing you to deploy, start, and stop the activities running on multiple computers from a central web application on your local network.
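As a rough illustration of the "from scratch" end of that spectrum, here is a toy Java sketch of an activity with lifecycle hooks, plus a controller that deploys, starts, and stops activities by name, loosely mirroring what the central web application does across machines. Every name here is hypothetical, not the real framework's interfaces.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Lifecycle hooks an activity implements; the runtime calls these. (Hypothetical.) */
interface Activity {
    void onStartup();
    void onShutdown();
}

/** An example activity written "from scratch" against the lifecycle interface. */
class FollowCirclesActivity implements Activity {
    public void onStartup()  { System.out.println("FollowCircles: started"); }
    public void onShutdown() { System.out.println("FollowCircles: stopped"); }
}

/** A toy stand-in for the central controller that deploys, starts, and stops activities. */
class ActivityController {
    private final Map<String, Activity> activities = new LinkedHashMap<>();

    void deploy(String name, Activity activity) { activities.put(name, activity); }
    void start(String name)  { activities.get(name).onStartup(); }
    void stop(String name)   { activities.get(name).onShutdown(); }
}

public class ControllerDemo {
    public static void main(String[] args) {
        ActivityController controller = new ActivityController();
        controller.deploy("follow-circles", new FollowCirclesActivity());
        controller.start("follow-circles");
        controller.stop("follow-circles");
    }
}
```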
Sound like fun? Check it out on Google Code.
Here’s famed TED talker Jane McGonigal’s latest TED talk. In it she recounts the very personal tale of her recovery from a concussion and how she designed what would eventually become “Super Better”, a game played in real life designed to help you build four types of resilience, tied to post-traumatic growth, in your everyday life.
Some of it is quite interesting: her four types of resilience, practiced as a core activity, will obviously help you live longer, but I maintain certain doubts about the ability of Super Better, as a game, to facilitate them. Such broad-stroke activities and desired outcomes begin to fall apart when the mechanics of the game are tied to the activity only through diligent user input. Nike’s FuelBand succeeds as a feedback mechanism because it is on you all the time, measuring your ‘activity’ in the background (of course, it fails in its game design, hardware durability, and platform support). Before we can begin discussing the possibility of a game like Super Better helping you live longer, we need a more in-depth understanding of how a game can tie itself to real-world action without laborious user input.
Of course, Jane McGonigal’s determining driver was her understandably extreme desire to recover from the post-traumatic stress of a concussion, but for many average billy-bobs that factor just doesn’t exist. Lengthening one’s life, or even living a more eco-friendly life, is hard to define and harder to measure. The other side of games that isn’t covered by McGonigal’s behaviorist approach to game design is the fictional factor: games can exaggerate progress expressed through statistics, aesthetics, or mechanics. McGonigal is headed in the right direction; I just hope Super Better gets super better.