A glimpse at the 2006/2007 Interactive Multimedia Program at Sheridan College; exploring Interactivity beyond Navigation

Thursday, March 29, 2007

Education Trail

My time in educational institutions is coming to an end (for now at least). I took a trip down memory lane and visited the schools I once attended - St. Thomas More, Blessed Kateri, John Paul II, The University of Western Ontario, and Sheridan College.
I experimented with the colour levels in Premiere to show how I remember these schools. (The song is "Young Folks" by Peter Bjorn and John.)


For this Vlog I really just wanted to take advantage of some interesting footage that I took in London, Ontario and the effects in Premiere. I see it as a short commentary on pathways of personal growth.

Tuesday, March 27, 2007

My trip to Australia

I had such a phenomenal time working and travelling in Australia, and I have always wanted to compile some of the footage I captured on my digital camera. Here is a short Vlog of some of my experiences.

New Zealand Road Trip

I was lucky enough to spend 10 days in New Zealand with my friend Nicole. We rented a car and were quickly astonished at the beauty of this country. We loved the friendly people, the unpredictable wild animals and the beautiful views!

Saturday, February 10, 2007

Mobile Technologies – Flash Lite on the Go

James Eberhardt visited the IMM group this past Thursday and offered a wealth of information about current mobile technologies and about creating Flash Lite applications for the small computers we carry in our pockets.

James works for a company specializing in mobile media that is currently working on many interesting projects. In collaboration with other mobile industry associations, it is working towards the creation of 10 short films for distribution on cell phones. Another remarkable project is deafplanet.com, the first TV show and website in American Sign Language. The website features plenty of videos, games and experiments, with the unique option of having the cursor turn into a hand that spells out the feature in sign language when you roll over a button.

The remainder of James' presentation covered the evolution of the mobile landscape and building Flash Lite applications for mobile devices such as cell phones, PDAs, iPods and mp3 players. Flash Lite ActionScript and WML (an XML-based markup language for mobile browsers, analogous to HTML) are two of the languages James discussed, noting that these technologies are easy for web developers to migrate to. The applications are also easy to use for people who are not technically savvy. However, for Flash to be more successful on mobiles, data prices need to decrease and network speeds need to increase. For information and examples of Flash Lite, please visit: http://www.adobe.com/devnet/devices/flashlite.html#examples
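As an aside, here is what a minimal WML page (a "deck" containing one "card") looks like. This is my own toy example, not one from James' presentation:

```xml
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <!-- A deck holds one or more cards; the phone displays one card at a time -->
  <card id="home" title="Hello">
    <p>Hello from a mobile browser!</p>
  </card>
</wml>
```

The tag structure is close enough to HTML that, as James noted, web developers can pick it up quickly.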

The future of cell phone technology seems quite promising. As James mentioned, Steve Jobs, the co-founder and CEO of Apple, assures everyone that the iPhone will ship with the capability to browse YouTube (seeing as YouTube uses Flash video). Of course, developers still need to be respectful of the end user's resolution and bandwidth, since users pay for each download. Check out these YouTube videos: one featuring the iPhone's video capability and the other a Mad TV spoof about Steve Jobs and the iPhone's current publicity.

All of these new cell phone features are innovative and useful, but on the flip side, there is also a spontaneous and social quality of life that could potentially diminish. Playing games and enjoying the features of a cell phone is usually done in our spare time or while traveling from point A to point B. If a vast majority of us become engrossed in these hand-held devices, will we socialize or even make eye contact with the people around us? Don't get me wrong: I myself enjoy the ease of text messaging. It is a very useful and convenient way to send quick messages instead of spending quite a bit more on voice conversations. Nevertheless, texting lacks a human quality; we miss out on a friend's tone of voice, sound of laughter or conveyance of emotion.

Additionally, if I had GPS or location recognition on my phone, would I ever choose to ask for directions or learn about a particular location from an actual person? I could just read about it on my phone and never get lost again. Although I rarely approach strangers in day-to-day life, I have while traveling, and I have met some interesting people and learnt some interesting facts by doing so.

James talked briefly about the popularity of SMS and MMS – text and multimedia messaging. Although he did not bring up any social issues with mobile technologies, there are many critiques concerning the younger generation and their linguistic abilities as a result of SMS. For instance, as mentioned in the article Generation Txt? The sociolinguistics of young people's text-messaging, the youth of today “are often understood to be - or rather accused of - reinventing and/or damaging the (English) language. As a dialect, text ('textese'?) is thin and unimaginative. It is bleak, bald, sad shorthand. Drab shrinktalk. The dialect has a few hieroglyphs (codes comprehensible only to initiates) and a range of face symbols. … Linguistically it's all pig's ear. … Texting is penmanship for illiterates.”

I don’t mean to sound too skeptical or negative about this technology. I think it’s wonderful and ‘forward-moving’, and there is always a choice in how and when we use mobile devices. It may even be increasing our communication with one another because of its convenience. There are two sides to every coin, and I just wanted to make note of some possible outcomes.

Sunday, January 28, 2007

Web 2.0 Presentation - Wayne MacPhail

The IMM-ers are well into the second term and are enjoying a new set of practical and hands-on challenges. It’s always refreshing to learn a bit of theory and participate in discussions based on what is currently occurring in the technological world. Last Thursday in Multimedia Pioneering, Wayne MacPhail joined our class and presented a thorough description of the Web 2.0 movement. He concentrated fully on the societal point of view, bringing a human interactive side to our technical practices.

Wayne explained Web 2.0 as a marketing term for a suite of websites that encourage community, content creation and collaboration, most often between total strangers. Aesthetically, these websites combine simple design elements such as centered orientation, limited use of 3D effects, soft neutral background colours, a few “cute” icons, plenty of white space and large text. For example, the blogging site Vox.com applies these ‘fresh’ designs, perhaps with the intention of offering a friendly and simple way for people to create and share information without unnecessary distractions or complicated web elements.

Apart from design, Wayne talked more about the fundamental societal power involved in tagging, social bookmarking and RSS feeds. It was interesting when Wayne pointed out the organic quality of these notions, in particular how humans naturally generate fads through mere communication. For example, armed with nothing more than an email application, people make internet fads spread like wildfire. A video on YouTube called Interfad illustrates this perfectly. It is a compilation of some of the many internet fads and jokes, many of which I recognized, shared and accepted as entertainment made by ‘ordinary’ people all over the world.

Apart from entertaining videos or animations, these efforts often involve more serious communal work, such as experts combining their intellects to create a larger intelligence. For instance, Wayne and a group of five other experts from the Alzheimer’s Society built their own news network, creating a feed that pulls in important articles about the disease. In general, there is an enormous appreciation of anything that is time-saving, immediate and non-authoritative. Dynamic websites and web tools allow people to create, share and learn, all in real time.
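To make the mechanics concrete: a feed like the one Wayne described boils down to a small XML file that readers' aggregators poll for new items. This is a hypothetical fragment of my own, not the Society's actual feed:

```xml
<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Alzheimer's Research Digest</title>
    <link>http://example.org/digest</link>
    <description>Important articles about the disease, hand-picked by experts</description>
    <!-- Each new article becomes an item; subscribers' aggregators pick it up automatically -->
    <item>
      <title>Promising results in early-diagnosis study</title>
      <link>http://example.org/articles/early-diagnosis</link>
      <pubDate>Mon, 22 Jan 2007 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```

Publishing a new article is just a matter of appending another item element, which is exactly what makes the format so time-saving and non-authoritative.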

This creation of content has been important for years. In an article posted in 2000, usability expert Jakob Nielsen anticipated the next phase of the internet: “To take the Internet to the next level, users must begin posting their own material rather than simply consuming content or distributing copyrighted material. Unfortunately most people are poor writers and even worse at authoring other media. Solutions include structured creation, selection-based media, and teaching content creation in schools.” So currently we are seeing an influx of these creation tools – blogs, RSS feed capabilities, even the sharing of design creations in virtual worlds such as Second Life. As Wayne stated, we have moved from learning what we shouldn’t do with the web to learning what we can do with it.

I was introduced to Second Life, the 3D virtual world built and owned by over 3 million residents from all over the world. Its stated aim combines a number of things: allowing you to "explore a boundless world of surprise and adventure, create anything you can imagine, connect with new and exciting people and compete for fame, fortune or victory.”
It is interesting that the company embraces content sharing and open source standards, so much so that it has recently announced the availability of its client source code. It invites the world to help build the back-end of this virtual world: “releasing the source now is our next invitation to the world to help build this global space for communication, business, and entertainment. We are eager to work with the community and businesses to further our vision of our space”. (Second Life Blog)

Wayne's presentation was incredibly informative, anticipating the future of the internet (and other communication mediums) as a dynamic tool to “listen and speak, share and create”.

Saturday, October 28, 2006

Visualization Design Institute

Today we visited the Visualization Design Research Institute at Sheridan College. The research team consists of a number of designers and programmers who work collaboratively on projects dealing with scientific, medical, engineering, educational, cultural and environmental research. The team often takes complex concepts and tries to model the topics visually in order to aid educational or research development. In today's visual culture, this practice is certainly popular and useful, especially for e-learning and entertainment.

We were introduced to some of the research team’s key projects, many of them education-based; the team seems to be heavily involved in creating e-learning applications. Interactive multimedia is a fairly new addition to the learning environment but is a very useful teacher’s aid. Us Mob is a great example of an engaging multimedia application; it won the 2005 Australian Interactive Media Industry Award for Best Learning. The website “uses online characters and friendships to spark an exchange of culture, creativity and experience between Indigenous and non-Indigenous young people”. A student’s interaction with these characters will certainly aid in social development. In addition, the website allows young Indigenous people to explore the web and encourages them to develop new media skills. The seven-part choose-your-own-adventure series is set in the Australian desert, so the interface and character backdrops are aesthetically pleasing environments as well as visualizations of the students’ familiar surroundings. By creating a familiar environment, students will likely feel a greater sense of comfort and ease with the technology presenting this interactive story.

Song Ho Ahn showed us the Immersion Theatre, which involved a large, multiple-screen display of a video where we could make choices about the story’s direction and character outcomes. This kind of experience keeps the audience constantly engaged and likely holds their attention longer. Unfortunately there were some technical difficulties during the presentation, and the video we watched was created quite a few years ago. But with the growing popularity of Flash video, I can see this idea of branching being pushed a lot further. For instance, audience members could be recorded before the movie begins and then watch themselves displayed on the screen, acting out the movie. Video no longer has to be visualized in a linear fashion; it can branch out and encourage a lot more involvement from the audience.
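Under the hood, branching video is just a graph of clips. Here is a minimal sketch of my own (the clip names and structure are hypothetical, not the Immersion Theatre's actual system) showing how audience choices select a path through the story:

```python
# Each node names a video clip and maps audience choices to the next node.
STORY = {
    "intro": {"clip": "intro.flv", "choices": {"run": "chase", "hide": "attic"}},
    "chase": {"clip": "chase.flv", "choices": {}},  # an ending: no choices left
    "attic": {"clip": "attic.flv", "choices": {}},
}

def play_path(start, decisions):
    """Follow the audience's decisions through the story graph.

    Returns the ordered list of clips that would be played.
    """
    node = start
    played = [STORY[node]["clip"]]
    for pick in decisions:
        node = STORY[node]["choices"][pick]  # branch on the audience's choice
        played.append(STORY[node]["clip"])
    return played

print(play_path("intro", ["run"]))  # ['intro.flv', 'chase.flv']
```

The same structure scales to any number of branches, which is why a Flash video player driven by a graph like this can make the audience feel in control of the narrative.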

As speculated in the article “Homes of the future to be totally immersive multimedia experience”, our domestic spaces will eventually be transformed into immersive, constantly changing TV and projection screens. “TV on demand would mean that viewers can decide on plot twists in a soap opera and cast new virtual actors. Little chips worn on your clothes can encode what programme you're watching, so that screens change to keep up with your movements from room to room; they would also change the wallpaper effects, lighting, and object positioning on the walls to your personal preferences.” This would definitely change the way we live from day to day. Given that researchers are envisioning these sorts of changes in living, there must be various innovative projects in the works. What is described here as a very unrealistic way of living could quickly become a reality.

As we have seen in class, Flash video can certainly be used as a tool for visualizing complex concepts. I came across a very interesting Flash web movie, Epic 2015, created by the Museum of Media History. In my opinion, Epic 2015 is an elegant Flash feature simulating a documentary-like style. It outlines the history of the internet and its large enterprises, and concludes with a forecast up to the year 2015, with potential implications of the direction in which the web is heading. There are obvious social commentaries and cultural assumptions presented. The authors of the video are using the authoritative character of video to create credibility and believability for their stance on the future of the web and its news wars. Even though the visual elements in this piece are quite historical and logo-based, I think the narration coupled with the simplicity of the visualization is very effective, in large part because the commentary itself is complex enough.

Thursday, October 26, 2006

Gesture Technology

I can’t think of a better way to motivate interaction with multimedia than to let users do it naturally. Using a keyboard, a mouse and a small screen is becoming a thing of the past. Now multimedia pioneers are working in larger formats and initiating the use of human gestures to experience the technology.

Our second Multimedia Pioneering guest speaker experience involved a class trip to GestureTek in downtown Toronto. We explored the GestureTek showroom and viewed many different examples of this technology projected on multiple surfaces. This innovative company emerged in the mid eighties and was co-founded by Vincent John Vincent, who is the creative force behind many of GestureTek’s pioneering technologies. Mr. Vincent was kind enough to meet with us, and provided us with a thorough background of the company and described the interactive pieces in the showroom.

A number of us were able to immerse ourselves in the gaming and artistic platforms, all of which involved video camera control technology. By placing our real-time video captures into the GestureTek environments and gaming interfaces, we immediately became completely consumed in the simulated environments. This engaging and interactive experience will be quite powerful in the future. I could see it even becoming popular as a way to motivate the completion of everyday, mundane tasks. For instance, a physically involved, fast-paced gesture activity could lead to an exciting exercise routine. Another example is illustrated by the GroundFX applications, where the consumer becomes part of a large advertising floor projection. We are apt to just brush advertisements aside; they often become an invisible and dull part of our reality. But by displaying interactive advertisements in a large floor format, the consumer will pay more attention to something that is so dynamic and that they feel they have control over. (See "Let's Get Physical: Gesture Technology Engages Consumers", an article by Vincent John Vincent.)
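To get a feel for how camera-based interaction can work at its simplest, here is a toy sketch of my own (not GestureTek's actual algorithm): compare consecutive video frames and flag the pixels that changed beyond a threshold, which is a common starting point for detecting that someone has moved in front of the camera.

```python
def diff_mask(prev_frame, curr_frame, threshold=20):
    """Return a binary mask of pixels that changed between two grayscale frames."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]

def presence_detected(mask, min_changed=4):
    """Crudely decide whether someone moved: enough pixels changed."""
    return sum(sum(row) for row in mask) >= min_changed

# Two tiny 4x4 grayscale "frames": a bright 2x2 blob appears in the second one.
frame_a = [[0] * 4 for _ in range(4)]
frame_b = [[0] * 4 for _ in range(4)]
for r in range(1, 3):
    for c in range(1, 3):
        frame_b[r][c] = 200

mask = diff_mask(frame_a, frame_b)
print(presence_detected(mask))  # True: the 2x2 blob exceeds the threshold
```

A real system would do far more (tracking the silhouette over time, mapping it into the game world), but even this crude difference mask shows how a plain video camera becomes an input device with no extra hardware on the user.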

As Mr. Vincent mentioned, this type of immersive interaction also becomes important in the physical rehabilitation sector. LIRT, the Laboratory for Innovations in Rehabilitation Technology, believes that virtual reality is extremely relevant and useful for improvements in cognitive and motor rehabilitation techniques. They are collaboratively researching and implementing the Gesture Xtreme VR system, which involves the same real-time video gesture process we saw at GestureTek, where the user is completely engaged in a simulated task. By making the experience as natural as possible (i.e. no extra electronic extensions to wear), there is little potential for the confounding factor of disorientation in patients who already lack balance and coordination skills. LIRT cites other benefits of using a gesture-based technology for rehabilitation: the user actively participates rather than watching a representation of him/herself; the user controls movements in a natural and intuitive manner; the stimulation of multiple body parts, or the restriction to a specific problem area, can be addressed; and the simulated scenarios are meaningful and functionally relevant to daily performance skills.

A key aspect of gesture technology that interests me is its underlying duality between art and science. The technology is a great example of how the two worlds can work together, enhancing and complementing each other. An article about the SIGGRAPH 2005 Computer Graphics and Interactive Techniques conference touches upon this relationship in regard to a presented technology, TouchLight. TouchLight, like other gesture platforms, “blurs the boundaries between art and science as it challenges future designers to think about the relationship between the user/observer and the image, principally the domain of art, and the display. Similarly, it challenges our traditional idea of the shell. Does the future really include the kind of interface seen in 'Minority Report'?”

The interface seen in Minority Report has become a starting point for many researchers and developers. A video on YouTube entitled "Minority Report becomes reality" presents a remarkable demonstration by Jeff Han of a gesture-based technology. He amazes the audience with fluid gestures, creating anything from appealing lava-lamp patterns to photograph sorting, selecting and zooming. The features he pitches to this trade-show audience include high resolution, low cost and scalability, along with important capabilities such as completely intuitive, multi-user operation.

I think it's great that technology is becoming increasingly user-friendly and intuitive while giving control back to the user. Many individuals are still very intimidated by, and even fear, the thought of touching a computer. With gesture interaction, perhaps the experience could be a more natural and comfortable one. Not to mention, it is extremely visually impressive! We can see this in Jeff Han’s demonstration of World Wind, NASA’s open-source answer to Google Earth. The seamless, three-dimensional visualization of NASA’s collected data is not only visually stimulating; it also looks like a fun way to interact with data that might otherwise just sit in a bunch of tables in a database.

Tuesday, October 17, 2006

Big Interactivity

In today's Multimedia Pioneering class, we heard an informative presentation by Dorian Lebreux, the studio assistant at InterAccess Electronic Media Arts Centre. Dorian compiled some great examples of large-scale electronic art, both historical and current. An important point, mentioned repeatedly, was that the definitions and categories of electronic art are often blurred or overlapping. A key focus of InterAccess exhibits seems to be how electronic art constantly pushes boundaries to create meaning and reach target audiences.

One of the examples I found interesting was David Rokeby's Very Nervous System (1986-2004). The video documentation shows how the technology creates sounds (and possibly music) depending on particular movements of particular body parts. The possibilities inherent in interacting with this piece are endless; in a sense, this large number of possibilities could be the "big" part of classifying it as big interactivity. But perhaps a more interesting possibility was raised by Dorian: what would happen if a large number of people synchronized or randomized their body movements to make music? That would definitely extend the possibilities of the piece.

Dorian structured her talk around some key categories of electronic art, one of them being robotics. Norman White's piece Helpless Robot is an interactive work in which a large robot responds to the user’s interaction with synthesized speech in various emotive tones. The more you interact with it, the more persistent and annoying it becomes.

I came across an interesting parallel to this idea within the realm of big interactive advertising. In the article “Big Brother says: Buy this!”, Krysten Crawford discusses “The Human Locator”, a technology developed by a Canadian advertising agency, Freeset Interactive. The technology detects when humans are close by, tracks their movement and broadcasts messages tailored specifically to them on large surfaces (along the same lines as the Minority Report ads tracking Tom Cruise). The ads can even move along with an individual, begging “don’t go” as they launch the next marketing pitch. As Crawford says, this particular technology “can’t yet identify obese pedestrians and bombard them with images of cheeseburgers”. As horrifying as that may sound, it is likely possible by now, and would probably be even more annoying than Norman White’s needy robot. We are bombarded with copious visual advertisements daily, but we are used to ignoring them, so it is understandable that the advertising industry is turning to big, interactive, personally localized ads in order to appear fresh and current.

Another category discussed today was public-space big interactivity. We saw the merging of old and new art forms in examples of media technology mixing with architecture. Allowing the public to text a message that is later displayed on the side of a large building is quite remarkable: it is not only visually stunning, but it offers the public more than ‘fifteen characters of fame’, along with an immediate sense of interaction with the media. A similar public use of big interactivity is outlined in Smart Mobs – Interactive Wall Display for Community Info. Dynamo is a technology developed in England with the hope that in public places of leisure, where people are likely to have only small mobile devices on hand, they can plug in their personal devices and interact with a large shared wall/table/bulletin-board display of their files. Files, music and pictures could be shared amongst people while “keeping a level of spontaneity and fluidity” in these places of leisure.

Technological advancements are occurring extremely quickly, so it is hard to predict what could happen in the future, when the future is basically within the next hour. OLED technology seems to be the way of the future. Most websites describing the features and possibilities of OLED display screens offer very practical future uses: the flexibility of the material points towards wearable computing, displays that conform to different surfaces such as aircraft and car windshields, and roll-up displays like a daily refreshable newspaper. Other practical applications would be on office walls, windows or partitions, or in supermarkets as electronic shelf pricing. I can see the benefits of all of these uses for large displays, especially in public places where interacting with them could save time and provide useful information.

Wired magazine constantly features new technologies, and every year the company puts on an interactive festival called Wired NextFest, displaying technologies that “are transforming our world”. It forecasts the future of areas such as communication, design, entertainment and medicine. For example, the future of design involves projections of holographic displays, in the hope of “see[ing] how form and function can create new realities” with large displays. In addition, the future of entertainment involves ultimate immersive experiences such as digital climbing walls, kinetic karate trainers and virtual spherical treadmills. It is refreshing to see a move towards increased physical activity within interactive multimedia. Interacting with electronics has long been associated with passively sitting in one position for long periods of time. It would be great to get some exercise as we explore these large media forms.