Dawn of an Old Age - Dance performance using Theta V

Dawn of an Old Age is a live projection dance piece that premiered at Glitch in October.

It used the Theta V in the first section to provide low-latency live streaming of performers in another location. I’m happy to answer any questions about how the technology was set up.


Very cool. Are you using something like OBS to stream the THETA V to Glitch? Would love to learn how you put this together.

Is the performer in the front reacting to the video only using visual cues from the screen?

How did you get the black and white “shadow” effect?

What type of interaction is going on below?

Hi Craig! Long post incoming…

Are you using something like OBS to stream the THETA V to Glitch?

I used a custom setup. The basics are: modified Theta livestreaming app → Golang RTMP server → UE4 plugin

The modified plugin produces raw dual-fisheye images (vs. stitched equirectangular images) because of temperature issues. I wrote a Go RTMP server that runs locally on a private network, and the camera streams to that RTMP address. A custom client plugin for UE4 connects to the RTMP server and feeds the frames into textures. The textures are then post-processed in a UE4 material to stitch the fisheye halves back together. The stitching isn’t perfect (and not as good as the Theta’s internal stitching), as you can see in this image:

But doing the stitching later in the rendering engine substantially cuts down on the heat the Theta produces, and makes it much less likely to shut down after 5 minutes of streaming during a performance :slight_smile:
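In case it helps to picture what the material does, here is a rough CPU-side sketch in Go of the per-pixel mapping (the real version runs in a UE4 material on the GPU). It assumes an equidistant fisheye model, roughly 190° of field of view per lens, and a frame with the two fisheye circles side by side; seam blending is omitted and the lens parameters are guesses, so treat it as a sketch of the idea rather than the production shader.

```go
package main

import (
	"fmt"
	"math"
)

// fisheyeUV maps an equirectangular output pixel (u, v in [0,1]) to a
// sample position in a side-by-side dual-fisheye frame. Assumptions (not
// the actual UE4 material): equidistant projection, ~190 degrees of field
// of view per lens, front lens in the left half of the frame, back lens
// in the right half, no blending across the seam.
func fisheyeUV(u, v float64) (x, y float64) {
	const fov = 190.0 * math.Pi / 180.0 // per-lens field of view (assumed)

	lon := (u - 0.5) * 2 * math.Pi // longitude in [-pi, pi]
	lat := (0.5 - v) * math.Pi     // latitude in [-pi/2, pi/2]

	// Direction vector for this equirectangular pixel.
	dx := math.Cos(lat) * math.Sin(lon)
	dy := math.Sin(lat)
	dz := math.Cos(lat) * math.Cos(lon)

	// Pick the lens whose optical axis (+z front, -z back) faces the ray.
	axisZ, centerX := 1.0, 0.25 // front fisheye circle centered in the left half
	if dz < 0 {
		axisZ, centerX = -1.0, 0.75 // back fisheye circle in the right half
	}

	theta := math.Acos(axisZ * dz)  // angle from the lens axis
	rad := theta / (fov / 2)        // equidistant: radius grows linearly with theta
	phi := math.Atan2(dy, axisZ*dx) // azimuth around the lens axis

	// Each fisheye circle spans half the frame width and the full height.
	x = centerX + rad*0.25*math.Cos(phi)
	y = 0.5 + rad*0.5*math.Sin(phi)
	return x, y
}

func main() {
	// The center of the equirectangular image should land at the center
	// of the front fisheye circle.
	fmt.Println(fisheyeUV(0.5, 0.5)) // -> 0.25 0.5
}
```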

You may have noticed in the video that I am using a long (obnoxiously long) Ethernet cable for the streaming. The performance venue had many competing Wi-Fi signals, and streaming over Wi-Fi became impossible: frequent disconnects and slow reconnects. Craig, I used information from your post to help get the wired setup working, so thank you!

I was able to control the camera almost entirely over the network via automated curl requests, including starting the streaming plugin, starting streaming, and stopping streaming (but not stopping the plugin). Controlling the camera this way seems like a simple task, but this is where 90% of the effort on the UE4 plugin code went, because it needed to be bulletproof. Since performers can accidentally shut off the camera (which would be detrimental to the performance), I needed a finite state machine that knew how to navigate between the different states and put the camera back into a “desired” state (like streaming). So if the Ethernet cable fell out, or someone shut the plugin off, the system would heal itself back to the correct state.
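To make the “self-healing” idea concrete, here is a minimal sketch of that control loop in Go rather than the actual UE4 plugin code. It assumes the standard THETA web API endpoints (/osc/state and /osc/commands/execute); the state names, the recovery steps, and the command payloads are placeholders for illustration, not the real protocol.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

const cameraURL = "http://192.168.1.1" // assumed camera address on the private network

// execute posts a command to the camera. Real commands also carry a
// "parameters" object (omitted here for brevity).
func execute(name string) error {
	body := []byte(fmt.Sprintf(`{"name": %q}`, name))
	resp, err := http.Post(cameraURL+"/osc/commands/execute",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

// currentState classifies the camera. The real system parses the JSON
// returned by /osc/state (and the plugin's own status) to decide between
// "idle", "plugin-running", "streaming", and "unreachable"; the parsing
// is elided here.
func currentState() string {
	resp, err := http.Post(cameraURL+"/osc/state", "application/json", nil)
	if err != nil {
		return "unreachable"
	}
	resp.Body.Close()
	return "idle" // placeholder
}

func main() {
	// Desired state is "streaming"; every tick, take one step toward it.
	for {
		switch currentState() {
		case "streaming":
			// Already where we want to be.
		case "plugin-running":
			execute("camera.startCapture") // placeholder command name
		case "idle":
			execute("camera._pluginControl") // boot the streaming plugin
		case "unreachable":
			// Cable fell out or the camera was turned off: keep retrying,
			// and the loop will walk it back up once it responds again.
		}
		time.Sleep(2 * time.Second)
	}
}
```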

Is the performer in the front reacting to the video only using visual cues from the screen?

It is mostly improvised, but some of it is choreographed.

What type of interaction is going on below?

Are you referring to her turning? With the turning, she is controlling the orientation of the live feed with the orientation of her body…you can see that when she turns, the view turns as well. You’ll notice that when she is facing the audience, the view from the camera is always toward the person holding the selfie stick. This is possible because the dancer is “inside” a virtual sphere, and her rotation in the real world controls her rotation in the game engine.
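As a toy illustration of that mapping (the names and the additive sign convention here are assumptions for illustration, not the actual engine code): the yaw measured on the dancer's body, plus a one-time calibration offset, becomes the yaw of the virtual sphere the live feed is mapped onto.

```go
package main

import (
	"fmt"
	"math"
)

// viewYaw turns the dancer's measured yaw (degrees) into the yaw applied
// to the virtual sphere. The calibration offset aligns "facing the
// audience" with "looking at the selfie-stick holder".
func viewYaw(dancerYawDeg, calibrationDeg float64) float64 {
	yaw := math.Mod(dancerYawDeg+calibrationDeg, 360)
	if yaw < 0 {
		yaw += 360
	}
	return yaw
}

func main() {
	fmt.Println(viewYaw(450, -90)) // -> 0
}
```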

How did you get the black and white “shadow” effect?

Are you referring to the silhouette around the performer? Since the projector is in front of the performer, the black shadow you see around her is cast by the projector itself. I am projecting white onto her outline so that it looks like she is “in front of” the projection. This allows us to do things like put the performer “in” water (the last section of the piece) and have it look somewhat convincing. Tracking the dancer in order to do this is part of a larger framework I’ve written (of which the Theta V client/server code is a part). I’m happy to share more information if you have specific questions about it!

Would love to learn how you put this together.

I am in the process of putting together more comprehensive documentation about how tech artists can build work like this. I am currently trying to gauge interest in the kind of framework I have built, so that I can share it with others. I will be happy to share more information about it with you (or here, or both!) if you think it is something that will be valuable to others. In the meantime, here is the PDF of the program for our show, which has a very high-level overview of the hardware involved:


This is fantastic. I’m going to share this with the THETA product manager. I have a meeting with him tomorrow. So cool.

There are several artists on this forum. The trick is to get the information about your platform to them. People tend to jump in and out of the community, and are obviously more active during a project.

In addition to the artistic framework, there’s also the base technology of streaming dual-fisheye to reduce the heat, relaying it through a Go RTMP server, and stitching it in UE4. Very clever way to get around the heat problem!

IMO, the stitching you have is good enough for this type of artistic event. It may also be useful for other types of events.

Thanks for sharing this!

Thanks, Craig! I’ve lurked in many of your tutorial and explanation threads, and I would not have been successful with the Theta if it weren’t for your work here :slight_smile:

I would love to do more work with the Theta in the future, and I hope my framework can allow other artists to leverage it creatively as well.


Does the diagram below reflect the project architecture and implementation steps?

How are you getting the body orientation of the dancer? Is the dancer wearing a position sensor?

For the Go RTMP server, I used https://github.com/nareix/joy4
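If anyone wants a starting point for the server side, here is a stripped-down relay in the spirit of joy4's bundled RTMP server example: the camera publishes to it, and any local client (the UE4 plugin, or ffplay for testing) plays the stream back. It's a sketch of the approach, not the exact server used in the show.

```go
package main

import (
	"sync"

	"github.com/nareix/joy4/av/avutil"
	"github.com/nareix/joy4/av/pubsub"
	"github.com/nareix/joy4/format"
	"github.com/nareix/joy4/format/rtmp"
)

func init() {
	format.RegisterAll()
}

func main() {
	server := &rtmp.Server{} // listens on :1935 by default

	var lock sync.RWMutex
	queues := map[string]*pubsub.Queue{} // one queue per stream path

	// The camera publishes to rtmp://<server-ip>/<path>; copy its packets
	// into a queue that local players can read from.
	server.HandlePublish = func(conn *rtmp.Conn) {
		streams, _ := conn.Streams()

		que := pubsub.NewQueue()
		que.WriteHeader(streams)

		lock.Lock()
		queues[conn.URL.Path] = que
		lock.Unlock()

		avutil.CopyPackets(que, conn)

		lock.Lock()
		delete(queues, conn.URL.Path)
		lock.Unlock()
		que.Close()
	}

	// A player (the UE4 plugin in my setup) connects to the same path and
	// receives the latest frames from the matching queue.
	server.HandlePlay = func(conn *rtmp.Conn) {
		lock.RLock()
		que := queues[conn.URL.Path]
		lock.RUnlock()

		if que != nil {
			avutil.CopyFile(conn, que.Latest())
		}
	}

	server.ListenAndServe()
}
```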

In Unreal, one additional detail is that the stitching code I put together runs on the GPU, so it is very performant. UE4 is also where I handle all of the camera state control, so I can turn camera streaming on and off via a button on an Xbox 360 controller.

The dancer wears a small Android phone on her upper back. I wrote an Android app that runs on it, gathers the accelerometer data, and streams it over the network to the main rendering computer. I do positional tracking with an external camera, so the Android phone just handles orientation, and I combine the position and orientation in the game engine logic.

Everything else is accurate, looks good!
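If it helps picture the orientation link, on the rendering side it is just a small listener that turns network packets into angles the engine can use. The sketch below assumes a made-up wire format (three little-endian float32s for yaw, pitch, and roll, sent over UDP to port 9000); this is not the actual app's protocol, just an illustration of the shape of the receiver.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
	"net"
)

func main() {
	// Listen for orientation packets from the phone on the dancer's back.
	conn, err := net.ListenPacket("udp", ":9000")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	buf := make([]byte, 12)
	for {
		n, _, err := conn.ReadFrom(buf)
		if err != nil || n < 12 {
			continue
		}
		// Assumed packet layout: yaw, pitch, roll as little-endian float32s.
		yaw := math.Float32frombits(binary.LittleEndian.Uint32(buf[0:4]))
		pitch := math.Float32frombits(binary.LittleEndian.Uint32(buf[4:8]))
		roll := math.Float32frombits(binary.LittleEndian.Uint32(buf[8:12]))

		// In the real system these values are combined with the external
		// camera's positional tracking inside the game engine.
		fmt.Printf("yaw=%.1f pitch=%.1f roll=%.1f\n", yaw, pitch, roll)
	}
}
```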


Here’s an example of an early test with the orientation device:


The more I learn about this project, the cooler it gets! Really nice job with this. It was gutsy to use it in a live performance with artists. I’m glad that everything was a success.

CUDA!
https://www.nvidia.com/content/newsletters/web/CUDA_Week_in_Review_May_21_10.html

Did you use the CUDA SDK for “Dawn of an Old Age”?

We’re starting to experiment with NVIDIA Jetson.

Have you looked at the Jetson JetPack SDK?
