
Broadercasting test 2/16/18

Published on Feb 19, 2018

Broadercasting Test 1:

The first evaluation of the broadercasting system took place on February 16, 2018, during the Media Lab Talk on quantified forgiveness. The goal of this test was to establish the functionality of the VR system and its compatibility with professional broadcast equipment. The VR broadcasting studio was connected to four live camera feeds and one audio feed, and the operator spent over an hour in VR producing a 2D linear video, switching between camera feeds and graphics while monitoring and displaying tweets.

The Media Lab Talks is a series in which Joi Ito has a conversation with a guest or group of guests about their work. Usually the guest presents their work, Joi talks with them, and finally questions are opened up to the audience. The event is typically broadcast live online and recorded. Two studios, Studio 125 and Diginovations, work together to accomplish this. This presented a convenient and instructive opportunity to evaluate the broadercasting system.

The crew of Studio 125 set up four cameras on site and dedicated one additional feed to graphics. All of these signals are transmitted over SDI at 1080i resolution with a field rate of 59.94 Hz. One camera is focused entirely on the speaker, the second on Joi Ito, the third is a wider shot framing both, and the fourth roams the audience. The Studio 125 crew directs the camera operators to zoom in and out, usually on Joi Ito and the guests.
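As a rough sanity check (not part of the original writeup), the raw data rate of one such feed can be estimated from the signal format: 1080i at 59.94 fields/s is 29.97 frames/s, and SDI broadcast video is conventionally 10-bit 4:2:2, i.e. 20 bits per pixel. For comparison, the HD-SDI link rate is 1.485 Gb/s and 3G-SDI is 2.97 Gb/s.

```python
# Back-of-envelope active-picture payload for one 1080i59.94 feed.
# Assumes 10-bit 4:2:2 sampling; SDI links also carry blanking and
# ancillary data, so the link rate is higher than this payload figure.
ACTIVE_WIDTH = 1920          # active pixels per line
ACTIVE_HEIGHT = 1080         # active lines per frame
FRAME_RATE = 30000 / 1001    # 29.97 frames/s (59.94 fields/s, interlaced)
BITS_PER_PIXEL = 20          # 4:2:2 chroma subsampling at 10 bits/sample

payload_bps = ACTIVE_WIDTH * ACTIVE_HEIGHT * FRAME_RATE * BITS_PER_PIXEL
print(f"one feed:   {payload_bps / 1e9:.3f} Gbit/s")   # ~1.243 Gbit/s
print(f"four feeds: {4 * payload_bps / 1e9:.2f} Gbit/s")  # ~4.97 Gbit/s
```

Four uncompressed feeds at roughly 5 Gbit/s of picture payload is a substantial amount of data for a single capture card and host machine to move every second.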

To capture the feeds, we used a Blackmagic DeckLink Duo 2 capture card, which can digitize up to four HD-SDI (3G) camera feeds. During this test we did not have a motherboard with enough room (due to video-card crowding) to capture all five feeds, so the roaming camera feed was dropped. The feeds were brought into Unity using the AVPro Live Camera Unity asset plugin. Audio was captured through an XLR-to-3.5 mm adapter into the motherboard's sound card. The adapter made poor contact, adding a scratchy noise to the captured audio.
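The switching logic at the heart of the system is conceptually simple: each frame, whichever input the operator has selected becomes the program output. A minimal toy sketch of that idea (hypothetical names; the real system does this inside Unity on textures supplied by the AVPro plugin):

```python
from dataclasses import dataclass, field

@dataclass
class Switcher:
    """Toy video switcher: one of N inputs is routed to program out."""
    num_inputs: int
    selected: int = 0
    cut_log: list = field(default_factory=list)  # (from, to) pairs, for review

    def cut_to(self, source: int) -> None:
        """Hard cut to another input; record the transition."""
        if not 0 <= source < self.num_inputs:
            raise ValueError(f"no such input: {source}")
        self.cut_log.append((self.selected, source))
        self.selected = source

    def program_frame(self, input_frames):
        """Given this tick's frame from each input, return the program frame."""
        return input_frames[self.selected]

# Example with four inputs mirroring this test's setup
sw = Switcher(num_inputs=4)
sw.cut_to(2)  # cut from the speaker camera to the wide shot
frames = ["speaker", "joi", "wide", "graphics"]
print(sw.program_frame(frames))  # -> "wide"
```

In the actual system the "frames" are live GPU textures rather than strings, and the cut is triggered by a controller gesture in VR, but the routing decision itself is this one-line lookup.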

Two operators tried the system: the first spent approximately 60 minutes in the system, and the second broadcast the last 20 minutes. The experiment led to the following observations:

Switching between camera feeds and the browser was very easy, and seeing what was coming through on the live cameras was intuitive. Controlling the web browser to navigate to various Twitter pages, however, was difficult: it relied on a combination of head pose and mouse pointing that was neither intuitive nor effective. In addition, the displayed tweets were cropped on the left and right sides.

The largest hurdle was the attention problem. Monitoring four screens and a Twitter feed in real time was a daunting task for a single operator. Often, while searching through tweets, the operator would not pay attention to what was happening on the other screens. As a result, the operator would fail to switch away from a screen when the guests were doing uninteresting things, such as checking their phones.

Another issue in this experiment was the lack of communication with the camera operators. While the Studio 125 crew could tell the camera operators to focus on a specific person or shot, the operator in the VR system could only react to the changes. This led to abrupt cutaways as soon as a camera started moving.

Overall, it was promising to observe that the broadercasting system was capable of producing content from a live event. That said, there was no clear indication that the experimental setup introduced something new to the broadcasting task.
