The open-source code released today will allow developers to experiment with interactive art installations. While these interactive experiences were previously built from the ground up, Google’s Interactive Spaces gives developers the tools they need to innovate easily in the space. Read the excerpt from the Google Open Source blog below:
Today, we announce the release of Interactive Spaces, a new API and runtime which allows developers to build interactive applications for physical spaces. Imagine walking into a room where the room recognizes where you are and responds based on your position.
You can see an example above. Cameras in the ceiling do blob tracking; in this case, the blobs are people walking on the floor. The floor responds by making colored circles appear underneath the feet of anyone standing on it and having those circles follow each person around.
Interactive Spaces works by having “consumers” of events, like the floor, connect to “producers” of events, like those cameras in the ceiling. Any number of “producers” and “consumers” can be connected to each other, making it possible to create quite complex behavior in the physical space.
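The many-to-many wiring of “producers” and “consumers” described above can be sketched in plain Java. Note that none of these class or method names come from the actual Interactive Spaces API; this is only a hypothetical illustration of the routing pattern, with ceiling cameras as producers and the floor as a consumer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class EventRouting {

    /** A producer publishes events, e.g. a blob position from a ceiling camera. */
    static class Producer {
        private final List<Consumer<double[]>> consumers = new ArrayList<>();

        void connect(Consumer<double[]> consumer) {
            consumers.add(consumer);
        }

        void publish(double[] position) {
            // Fan out each event to every connected consumer.
            for (Consumer<double[]> c : consumers) {
                c.accept(position);
            }
        }
    }

    /** A consumer reacts to events, e.g. the floor drawing a circle under a person. */
    static class Floor implements Consumer<double[]> {
        final List<String> drawn = new ArrayList<>();

        @Override
        public void accept(double[] position) {
            drawn.add("circle at (" + position[0] + ", " + position[1] + ")");
        }
    }

    public static void main(String[] args) {
        Producer cameraA = new Producer();
        Producer cameraB = new Producer();
        Floor floor = new Floor();

        // Any number of producers can feed the same consumer, and vice versa.
        cameraA.connect(floor);
        cameraB.connect(floor);

        cameraA.publish(new double[] {1.0, 2.0});
        cameraB.publish(new double[] {3.0, 4.0});

        System.out.println(floor.drawn);
    }
}
```

Because connections are just lists of callbacks, adding a second consumer (say, a sound system) or a third camera requires no changes to the existing wiring, which is what makes complex behavior in the physical space cheap to compose.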
Interactive Spaces is written in Java, so it can run on any operating system that supports Java, including Linux and OS X, with Windows support coming soon.
Interactive Spaces provides a collection of libraries for implementing the activities that will run in your interactive space. Implementing an activity can require anything from a few lines in a simple configuration file to writing the necessary interfaces entirely from scratch. The former gets you off the ground very quickly but limits what your activity can do, while the latter gives you the most power at the cost of more complexity. Interactive Spaces also provides the activities’ runtime environment, allowing you to deploy, start, and stop activities running on multiple computers from a central web application on your local network.
Sound like fun? Check it out on Google Code.
Here’s famed TED speaker Jane McGonigal’s latest TED talk. In it she recounts the very personal tale of her recovery from a concussion and how she designed what would eventually become “Super Better,” a game played in real life designed to help you leverage four types of post-traumatic-growth resilience in your everyday life.
Some of it is quite interesting: her four types of resilience, practiced as a core activity, will obviously help you live longer, but I maintain certain doubts about the ability of Super Better, as a game, to facilitate them. Such broad-stroke activities and desired outcomes begin to fall apart when the mechanics of the game are tied to the activity only through diligent user input. The FuelBand succeeds as a feedback mechanism because it is on you all the time, measuring your ‘activity’ in the background (of course, it fails in its game design, hardware durability, and platform support). Before we can begin discussing whether a game like Super Better can help you live longer, we need a more in-depth understanding of how a game can tie itself to real-world action without laborious user input.
Of course, Jane McGonigal’s determining driver was her understandably extreme desire to recover from the post-traumatic stress of a concussion, but for the average person that factor just doesn’t exist. Lengthening one’s life, or even living a more eco-friendly life, is hard to define and harder to measure. The other side of games that isn’t covered by McGonigal’s behaviorist approach to game design is the fictional factor: games can exaggerate progress expressed through statistics, aesthetics, or mechanics. McGonigal is headed in the right direction; I just hope Super Better gets super better.
Really not sure what this is doing for anyone, apart from saying some rich director chump burned a hole in the sky on Nike marketing dollars. Not impressed.
A Kansas City sports stadium might be the last place I’d expect a next-gen CRM platform. Yet its new Chief Information Officer is taking the analog sports-viewing experience and layering it with social technology to augment and enhance your viewing pleasure. There are several initiatives the CIO has integrated to boost ad, merch, and ticket sales.
Seat Check-Ins: QR codes on seats let you check into KC’s in-house social network. Link your credit card to order merch or food for pick-up, and earn points redeemable online or at the stadium.
Sporting Explore App: A live-play app that lets fans earn Sporting Club points by guessing what will happen next in the game. A series of similar apps is planned to further enhance the fan experience.
Jumbotron: A live stream of tweets tagged #sportingKC. Mentions are up 25%.
Foot Traffic: KCSS tracks stadium foot traffic, which is sold to stadium designers, and handset data, which is sold to phone carriers. Ethical questions aside, this has helped raise ad dollars for the venue.
Command Center: To track online fans, KCSS built a dashboard that monitors fans’ online activity during the game – as many as 1,700 people are on their phones at once tweeting, facebooking, and sharing photos.
What’s revolutionary about this stadium is not the check-ins or the QR codes – it’s the feedback system this CIO has developed, one that can effectively monitor and understand consumer behavior during the game and respond to it. Different ad units within the 350 screens across the stadium are sold to different advertisers based on the attributes of the fans. This feedback system lets them flexibly monitor and respond to the actual online activity of users within the stadium. And in no business is a CRM platform more relevant than a stadium, where the core business is fans and loyalty.
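The core of that feedback loop can be sketched in a few lines: aggregate the fan attributes observed during a time window, then let each screen pick the ad unit matching the dominant attribute. Everything here is invented for illustration; nothing is drawn from Sporting KC’s actual system.

```java
import java.util.HashMap;
import java.util.Map;

public class AdFeedback {

    /** Counts of fan attributes observed this window (hashtags, check-in categories, etc.). */
    static final Map<String, Integer> attributeCounts = new HashMap<>();

    /** Record one observation of fan activity, e.g. a check-in at a merch stand. */
    static void observe(String attribute) {
        attributeCounts.merge(attribute, 1, Integer::sum);
    }

    /** Choose which advertiser's unit a screen shows: the most-seen attribute wins. */
    static String selectAdUnit() {
        return attributeCounts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("default");
    }

    public static void main(String[] args) {
        observe("food");
        observe("merch");
        observe("merch"); // merch activity dominates this window
        System.out.println(selectAdUnit());
    }
}
```

A real system would obviously segment by screen location and reset counts per window, but the shape is the same: measure behavior continuously, then let the measurement drive what the 350 screens show.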