
May You Live Until You Die


Conducting an Online Seance for Lord Huron

The pandemic has greatly reduced the opportunity to conduct séances around the world, so you can imagine my excitement when Lord Huron reached out about conducting one online in support of their incredible new record Long Lost. The band and I have history. It was only three years ago that Republic Records and I were building the Follow The Emerald Star geo listening campaign. The world-building Lord Huron produces is incredible fuel for developing marketing concepts, and this record is no exception.

After speaking to the client, I envisioned a simple app which brought together learnings from previous campaigns for Behemoth, Trivium, and Slipknot. The app would invite fans to recite an incantation a specified number of times until a long lost broadcast was heard, all the while a séance-inspired visual evolved and became more chaotic. We set our goal conservatively at 5,000 utterances, but the fans blew us away, reciting the incantation over 20,000 times in 15 minutes. The first séance is now complete, but you can still hear the transmission by visiting the app.

The project involved a bunch of interesting technologies, from IBM Watson's Speech to Text service to the 3D JavaScript library Three.js. Read on to learn how some of the key components came together.

The Incantation

Incantation UX example

At the core of any well conducted séance is an incantation participants should recite to make contact with the spirits. Our incantation was the following lyric used throughout Long Lost marketing efforts:

“May you live until you die.”

In order to confirm that a participant recited a certain number of words in the phrase, we employed IBM Watson's Speech to Text service. Speech-to-text transcription requires that we gain access to the user's microphone via WebRTC, send an audio recording to the STT service, and wait for a transcription result. It's a complicated ritual. Luckily, IBM Watson has developed a Speech JS SDK to handle the heavy lifting. Check out that SDK link for a bunch of great documentation; here I'll just briefly discuss our app's configuration and handling of results.

First, we'll need to generate an access token for each call to the Speech to Text service. I'm using a serverless function for this and have provided my exact code in this gist. Once you've added that function (and the required environment variable), you can simply fetch the access token and make it part of your SDK configuration.
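Here's a minimal sketch of fetching that token on the client; the endpoint path is an assumption and depends on where the serverless function is deployed.

// Fetch a fresh access token from our serverless function (hypothetical path)
let token = await fetch('/.netlify/functions/token').then(response => response.json())

With the token in hand, we pass it to the SDK's recognizeMic method along with our keyword-spotting configuration.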
import recognizeMic from 'watson-speech/speech-to-text/recognize-microphone'

// Request mic access and stream audio to Watson, spotting our incantation keywords
this.incantation = recognizeMic(Object.assign(token, {
  keywords: ["may", "you", "live", "until", "die"],
  keywordsThreshold: 0,
  inactivityTimeout: 5,
  maxAlternatives: 10,
  objectMode: true,
  wordAlternativesThreshold: 0
}))

First, we tell the SDK the exact array of words we are trying to spot in the audio: our incantation. We then reduce the threshold for keyword spotting to 0 so the results are very forgiving, making it easier for users to participate. Next, we lower the inactivity timeout to 5 seconds so the connection closes quickly when silence is detected (this saves us money). Then, we increase the number of alternative transcripts the service returns and lower the word alternatives threshold to, again, increase the likelihood that the app will spot the keywords.

Next, we need to listen for results and decide if the user successfully uttered our incantation. To do this, we check to make sure the result is final, create a unique array of all matched words, and see if the number of matches is over a threshold we specify. In the case of our experience, we checked to make sure the participant said more than 50% of the words.

this.incantation.on('data', data => {
  // Grab the latest result and make sure it's final
  let results = data.results[0]
  if (results && results.final) {
    // Count the unique keywords Watson spotted
    let matches = Object.keys(results.keywords_result).length
    // Check to see if enough words were said
    // (keywords is the same array we passed to recognizeMic)
    if (matches / keywords.length > 0.5) {
      // Success, create utterance
    } else {
      // Failure, inform user
    }
  }
})

If the participant successfully uttered the phrase, we add an entry to our DynamoDB database, increment the total number of utterances for our séance, and then inform all users of the updated count in realtime.
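Here's a minimal sketch of the server-side increment, using the aws-sdk DocumentClient; the table name, key, and attribute names are assumptions.

const AWS = require('aws-sdk')
const client = new AWS.DynamoDB.DocumentClient()

// Atomically increment the séance's utterance counter and return the new total
// (table, key, and attribute names are hypothetical)
const incrementUtterances = async () => {
  const result = await client.update({
    TableName: 'seances',
    Key: { id: 'long-lost' },
    UpdateExpression: 'ADD utterances :one',
    ExpressionAttributeValues: { ':one': 1 },
    ReturnValues: 'UPDATED_NEW'
  }).promise()

  return result.Attributes.utterances
}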

Our Crystal Television

The TV UV map

Traditionally, séances employ crystal balls or other spiritual artifacts to channel the energy necessary to make contact, but we decided it would be more appropriate for our campaign to use a vintage television. There are many vintage 3D TV models for purchase on various services, but I was adamant about keeping the app's overall footprint small and decided to model the object in Blender out of only two shapes: a rectangle for the TV and a plane for the screen. I found this YouTube tutorial by Darren Lile on UV mapping in Blender to be very helpful in wrapping my head around the basics of texturing objects. Since users would only see the front and sides of the television, I kept the texture simple with a nice vintage TV photo on the front and wooden paneling on all other sides. Once I was happy with the TV in Blender, I exported it as a glTF for future importing into my Three.js scene using the GLTF loader.
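Here's a minimal sketch of that last step; the model path is an assumption.

import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js'

// Load the exported television model and add it to the scene
// (the file path is hypothetical)
const loader = new GLTFLoader()
loader.load('/models/television.glb', gltf => {
  scene.add(gltf.scene)
})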

Séance Counter

Telethon inspired incantation counter

At this point, we had a sweet vintage TV in our Three.js scene, but the screen was blank. We knew we wanted to visualize the incantation count on this screen, but I wasn't entirely sure how it should look. So, I thought about our variables: vintage broadcast television, group events, counters… what other content has these variables in common? How about game shows and telethons? Yes, telethons! I found some great telethon graphics by researching the Jerry Lewis telethon series, and the above graphic was born. Luckily, Ben and the band were on board as soon as I presented the concept.

In order to make the image dynamic, I used HTML5 canvas to update the graphic with the newest séance count using fillText() as soon as new data came through in realtime. I thought about animating the numbers using Anime.js or GSAP, but in reality, fans recited the incantation so fast that it already looked like an animated counter. I could then use this dynamic canvas as a texture for my screen plane using the CanvasTexture functionality of Three.js. As someone who lives for HTML5 canvas, this functionality of Three.js is amazing, and I used it again as part of the map background. One little easter egg is that the clock in the graphic is actually accurate to your local time.
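Here's a minimal sketch of that approach, assuming a telethonGraphic image and a screenMaterial for the TV screen plane (both names are hypothetical).

// Draw the counter graphic to an offscreen canvas
const canvas = document.createElement('canvas')
canvas.width = 1024
canvas.height = 768
const context = canvas.getContext('2d')

// Use that canvas as the texture for the screen plane
const texture = new THREE.CanvasTexture(canvas)
screenMaterial.map = texture

// Whenever a new count arrives, redraw the graphic and flag the texture for upload
const updateCounter = count => {
  context.drawImage(telethonGraphic, 0, 0, canvas.width, canvas.height)
  context.font = '120px monospace'
  context.fillStyle = '#ffffff'
  context.fillText(count.toLocaleString(), 320, 420)
  texture.needsUpdate = true
}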

Participants Map

In addition to the dynamic TV screen, we wanted to incorporate a map which displayed where utterances were originating. This would further help visualize the group effort and global impact of our experience. Each new utterance added to the database was accompanied by a set of coordinates, provided by MaxMind's GeoIP service (which converts a user's IP address to a set of coordinates). Once again, as this data came into the app in realtime, we generated a dynamic graphic using HTML5 canvas. First, a square mercator map image was drawn onto the canvas. Then, we converted each of the utterance coordinates into screen pixels.
// Convert a [longitude, latitude] coordinate to pixels on a square mercator map
let coordinateToPixel = (height, width, coordinate) => {
  let x = (coordinate[0] + 180) * (width / 360)
  let latRad = coordinate[1] * Math.PI / 180
  let mercN = Math.log(Math.tan((Math.PI / 4) + (latRad / 2)))
  let y = (height / 2) - (width * mercN / (2 * Math.PI))

  return [x, y]
}

Finally, we used the x and y pixel positions to add simple white gradient circles for each of the utterance locations.

// Create gradient
let g = context.createRadialGradient(p[0], p[1], 0, p[0], p[1], 3)

// Add stops
g.addColorStop(0, 'rgba(255,255,255,1)')
g.addColorStop(1, 'rgba(255,255,255,0)')

// Set style
context.fillStyle = g

// Draw coordinate
context.beginPath()
context.arc(p[0], p[1], 3, 0, 2 * Math.PI)
context.fill()

In order to prevent the visual from falling over due to the volume of new utterances coming through the app, I established a max number of utterances to display at any given time and handled things accordingly.

// Add utterance
utterances.push(utterance)

// Shift off the oldest utterance once we're over the max
if (utterances.length > maxUtterances) {
  utterances.shift()
}

Long Lost Objects

Three.js séance scene

The visuals of our campaign were inspired by the Twilight Zone intro and have serious Haunted Mansion vibes. As such, we knew we wanted to have some objects floating around the television. I mean, is it really a séance if you don't have floating objects? As it turns out, the band had already conceptualized many of these long lost objects and sold them on Craigslist as part of the world-building. Things like Tubb's hat and the hefty lefty drumsticks. All we needed to do was add them to our Three.js scene.

Building off the genius of the Twilight Zone intro, I knew that 2D images of each object, rendered at reduced opacity and always facing the participant, would be fine. Luckily, Three.js provides a Sprite object for exactly this purpose: a plane that always faces the camera. So, we loaded all the textures and then created a Sprite object and material for each. Originally, I thought these should simply fly vertically past the scene on occasion, but then I had the thought that different objects should appear, orbiting the TV, as the séance progressed.
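Here's a minimal sketch of that setup for a single object; the texture path, opacity, and position are assumptions.

// Load an object texture and wrap it in a Sprite so it always faces the camera
// (the texture path is hypothetical)
const textureLoader = new THREE.TextureLoader()
const hatTexture = textureLoader.load('/textures/hat.png')

const hatMaterial = new THREE.SpriteMaterial({
  map: hatTexture,
  transparent: true,
  opacity: 0.5
})

const hat = new THREE.Sprite(hatMaterial)
hat.position.set(2, 1, 0)
scene.add(hat)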

The question was: how the hell can I make these things orbit the TV? Luckily, I was able to find an incredibly informative post by TheJim01 which described exactly what we needed. Here's his explanation of the technique:
  1. Subtract the rotation point position from the object’s position.
  2. Use the object's orbit speed and angle to update the temp position.
  3. Add the rotation point position back to the object’s position.

We even adjusted the orbiting speed as the séance progressed. Thanks Jim!
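Here's a minimal sketch of those three steps applied each frame; the userData.orbitSpeed property and tvPosition vector are assumptions.

// Rotate a sprite around the TV on the Y axis
const orbit = (sprite, tvPosition, delta) => {
  // 1. Subtract the rotation point position from the object's position
  const temp = sprite.position.clone().sub(tvPosition)

  // 2. Use the object's orbit speed and angle to update the temp position
  temp.applyAxisAngle(new THREE.Vector3(0, 1, 0), sprite.userData.orbitSpeed * delta)

  // 3. Add the rotation point position back to the object's position
  sprite.position.copy(temp.add(tvPosition))
}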

Vintage Processing

The final touch to completing our experience visually was making the entire scene black and white and adding a bit of noise so it looked like a vintage broadcast. I was very excited to learn about the post-processing capabilities of Three.js. As it turns out, Three already had a FilmPass effect that achieved exactly what we required. All we had to do was set up a little rendering pipeline to replace the current rendering process.

// Initialize effects composer
let composer = new EffectComposer(renderer)

// Add new render pass
let renderPass = new RenderPass(scene, camera)
composer.addPass(renderPass)

// Add film pass (noise intensity, scanline intensity, scanline count, grayscale)
let filmPass = new FilmPass(0.5, 0.0, 1024, true)
composer.addPass(filmPass)

And then render with the composer instead.

composer.render(deltaTime)
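For context, deltaTime would typically come from a THREE.Clock inside the animation loop; a minimal sketch:

// Render the composed, post-processed scene every frame
const clock = new THREE.Clock()

const animate = () => {
  requestAnimationFrame(animate)
  composer.render(clock.getDelta())
}

animate()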

This was a pretty basic use case of post-processing, but I can't wait to learn more about this feature of Three.js.

Thanks

Lord Huron

What can I say? Ben and the rest of the band are absolute class acts. They are the kind of artists who are WILDLY talented both sonically and visually but welcome the kind of creative collaboration Republic Records, LoyalT Management, and I live for. This interactive experience is just one small piece of an incredible campaign everyone can be super proud of. I feel very lucky to be a small part of it. Oh, and, the music is really fucking good too! Stream Long Lost now and may you live until you die.

