In traditional storytelling, we rely on words to conjure images in our minds. But what happens when we’re provided with visuals that represent each of the story’s words, but not its larger context? And what if the story itself is collaborative and nonlinear—and the images that represent it keep changing?
The exquisite corpse model is rooted in the surrealist movement, and we are inspired by how many experiments currently in public domain play with its framework (or lack thereof). Our take on the model—in which we essentially asked a group of collaborators to submit sentences/fragments—was to create a dynamic visualization for the “exquisite” story our writers had crafted. These collective fragments formed a base on which we layered sensory artifacts, from voice-over to tagged visuals, and we were curious as to how far we could take the experience.
Check out the site here.
*** This application makes heavy use of the Flickr API, close to the maximum call rate Yahoo allows. Please be patient. ***
Find out more about the background of the project, how it works, and our learnings:
- The key elements that created the site:
- An event: We hosted an event in Boston with IDEO co-founder and Cooper Hewitt director Bill Moggridge, where he presented a talk on social media and the power of crowds.
- 150 people: We invited our guests to submit a Twitter-length sentence with their digital RSVP to the event.
- 1600 words: Roughly the length of that three-page essay you once submitted.
- One stream of consciousness: The fragments were assembled in the order of the 150 sequential submissions.
- One voice: The team enlisted voice-over talent to bridge the fragments.
- Visuals: Flickr’s database was queried to fuel the story as an ever-changing illustration of key words.
How it works
[Image: a key word alongside the words that were not searched]
We built an application using openFrameworks, an open-source C++ programming toolkit, that allowed us to sample and sync each spoken word of the story to its written counterpart as it appeared onscreen in a web browser. We clicked each word as we heard it play back; if we skipped over a few words, the timestamps in between were interpolated based on the length of each word, which sped up the process of keying out the whole story. Our app then created a timestamp for every word in the audio file. Think karaoke utility.
[Screenshot: the application used to tab out each word in the story]
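The interpolation step above can be sketched roughly as follows. This is a minimal illustrative port in Python (the actual tool was built in openFrameworks/C++, and the function name and data shapes here are assumptions): given the timestamps of the words that were actually clicked, the words skipped in between get timestamps interpolated in proportion to their length.

```python
def interpolate_timestamps(words, keyed):
    """words: list of word strings in story order.
    keyed: dict {word index: seconds} for the words that were clicked.
    Assumes the first and last words were keyed. Returns a timestamp
    (in seconds) for every word in the story."""
    times = dict(keyed)
    anchors = sorted(keyed)
    # Fill in each gap between two clicked words.
    for a, b in zip(anchors, anchors[1:]):
        span = keyed[b] - keyed[a]          # audio time between the anchors
        total = sum(len(words[i]) for i in range(a, b))  # weight by word length
        elapsed = 0
        for i in range(a + 1, b):
            elapsed += len(words[i - 1])
            times[i] = keyed[a] + span * elapsed / total
    return [times[i] for i in range(len(words))]
```

Longer words are assumed to take proportionally longer to speak, which is a crude but serviceable model for keeping playback in sync.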
This data was then loaded into Adobe Flash, where we built a visualization of the story. Using Flickr, we searched for the most recent photograph uploaded with a tag that matched each word in the story. We then played back our voice-over in sync with the images, creating a dynamic movie. Every time someone watches the movie, it changes based on the latest corresponding tagged photos posted to Flickr.
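The Flickr lookup can be sketched as below. This is a hedged illustration, not the project's actual code (which lived in Flash): `flickr.photos.search` and its `tags`, `sort`, and `safe_search` parameters are part of the real Flickr REST API, but `API_KEY` is a placeholder you must supply, and the helper names are our own.

```python
import urllib.parse

FLICKR_REST = "https://api.flickr.com/services/rest/"

def search_params(word, api_key="API_KEY"):
    """Query string asking for the newest safe photo tagged with `word`."""
    return urllib.parse.urlencode({
        "method": "flickr.photos.search",
        "api_key": api_key,          # placeholder: supply your own key
        "tags": word,
        "sort": "date-posted-desc",  # most recent upload first
        "safe_search": 1,            # filter out adult-themed imagery
        "per_page": 1,
        "format": "json",
        "nojsoncallback": 1,
    })

def photo_url(photo):
    """Build a static image URL from a photo record in the API response."""
    return "https://live.staticflickr.com/{server}/{id}_{secret}.jpg".format(**photo)
```

Because the query always asks for the most recently posted match, the same word can resolve to a different image on every viewing, which is exactly what keeps the movie in flux.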
The end result is a never-ending visual story.
Learnings: Building to Think
We believe that building upon various elements of any project expands our conceptual and tactile thought processes. In creating our exquisite corpse, each step of the process challenged our grasp on what makes a story “complete”. We considered:
- The collective influence of a particular crowd: Submissions came from people with a common interest (they wanted to attend an event hosted by Bill Moggridge). Did that influence the tone/stream of the final corpse’s consciousness?
- The power of curation: We wondered what effect curating has on crowd-sourced content. It seems that contributions solicited for a specific purpose encourage a level of similarity and richness—such as multiple perspectives on the single topic—whereas undirected, open-ended contributions lead to more free-form ideas.
- The impact of crowdsourcing: The project left us wondering how we might use crowd-sourced visualizations in traditional stories, and whether giving writers “exquisite” direction could produce a coherent story through this model. Does the expression need to be static in order to generate meaning?
- Not all visuals are ready for prime time: We had to apply a few filters to make sure we were pulling imagery suitable for the average viewer. Who knew our tags would yield so much adult-themed imagery?