What kinds of applications is Deep Live Hub designed for?
The solution for broadcasting, webcasting, offline and hybrid events.
Deep Live Hub can be used for several applications where live transcripts, subtitles and translations are useful. Typical applications include:
1. Broadcast:
Broadcasters use Deep Live Hub to generate subtitles and translated subtitles for live TV.
Here's how it works:
- The broadcaster sends an audio track of the video stream to Deep Live Hub using RTMP or WebRTC.
- Original-language subtitles are created with a delay as low as 3 seconds and can be pulled via HLS egress.
- Translated subtitles are available after 5 seconds via HLS egress; for faster results, HLS-realtime can deliver partial subtitles in the input language with sub-second delay.
In a typical broadcast workflow, the broadcaster uses only the subtitles generated by Deep Live Hub while retaining control of the original video signal. Once the subtitles are created, the broadcaster merges the original video stream with the subtitle text stream before airing the final output. This lets broadcasters offer subtitles in several languages simultaneously.
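The subtitle stream pulled from the HLS egress can be handled like any WebVTT payload. The sketch below parses cues from such a segment; the exact segment layout and egress URL scheme are assumptions, not documented API.

```python
# Minimal sketch: parse a WebVTT subtitle segment as it might be pulled
# from the HLS egress. The cue format follows the WebVTT standard; how
# segments are fetched from the egress is an assumption for illustration.

def parse_vtt_cues(vtt_text):
    """Return a list of (start, end, text) tuples from a WebVTT payload."""
    cues = []
    for block in vtt_text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        for i, line in enumerate(lines):
            if "-->" in line:
                start, _, end = line.partition("-->")
                text = " ".join(lines[i + 1:])
                cues.append((start.strip(), end.strip(), text))
                break
    return cues

sample = """WEBVTT

00:00:03.000 --> 00:00:05.500
Welcome to the evening news.

00:00:05.500 --> 00:00:08.000
Here are today's headlines."""

cues = parse_vtt_cues(sample)
print(cues[0])  # ('00:00:03.000', '00:00:05.500', 'Welcome to the evening news.')
```

Parsed cues like these are what the broadcaster's playout system merges with the original video signal.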
2. Webcast:
Deep Live Hub supports various webcast scenarios. The most common setup involves streaming video with closed captions or subtitles rendered directly into the video. Here's the process:
- The video is sent from the encoder to Deep Live Hub.
- The AI transcribes the audio track and generates subtitles.
- A second AI algorithm translates the subtitles into the desired languages.
- The subtitles are then rendered into the video stream.
For small audiences, the video can be consumed directly via the RTMP egress using a video player. However, for larger audiences, the video is typically pushed via RTMP to a content delivery network (CDN) or video streaming platform for distribution.
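The last step of that workflow, relaying the subtitle-rendered stream to a CDN, is typically an ffmpeg stream copy. The sketch below assembles such a command; both URLs are placeholders, since the actual endpoints come from your Deep Live Hub and CDN configuration.

```python
# Sketch: assemble an ffmpeg command that relays the subtitle-rendered
# RTMP egress to a CDN ingest point. Both URLs below are hypothetical.

def build_relay_command(egress_url, cdn_ingest_url):
    """Build an ffmpeg argument list that copies the stream unchanged."""
    return [
        "ffmpeg",
        "-i", egress_url,   # pull the video with burned-in subtitles
        "-c", "copy",       # no re-encoding; subtitles are already rendered
        "-f", "flv",        # RTMP transport uses the FLV container
        cdn_ingest_url,
    ]

cmd = build_relay_command(
    "rtmp://hub.example.com/live/event-with-subs",  # hypothetical egress
    "rtmp://cdn.example.com/ingest/stream-key",     # hypothetical CDN ingest
)
print(" ".join(cmd))
```

Stream copy (`-c copy`) avoids a second encode, which keeps the added latency of the relay step negligible.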
3. Offline and Hybrid Events:
For offline or hybrid events, Deep Live Hub can generate live transcripts and translations for both on-site and online audiences. Here's how this works:
- A video or audio stream from the event is sent to Deep Live Hub, where it is converted into a live transcript.
- The transcript can be accessed by the audience via the Aiconix Live Viewer, an auto-generated website that displays the transcript in real time.
- The live transcript can also be processed into subtitles for use in webcasts or broadcasts.
This setup is commonly used for press conferences, parliament sessions, town hall meetings, and conferences where both on-site and online participants require access to live transcripts or subtitles.
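Live transcripts are typically delivered as a stream of partial results that are later replaced by final ones. The sketch below folds such updates into the text a viewer would display; the `(kind, text)` update format is an assumption for illustration, not the Live Viewer's actual protocol.

```python
# Sketch: maintain the text shown by a live-transcript viewer as partial
# and final recognition results arrive. The update format is hypothetical.

def apply_updates(updates):
    """Fold a stream of ('partial'|'final', text) updates into display text."""
    finals = []
    partial = ""
    for kind, text in updates:
        if kind == "final":
            finals.append(text)  # final segments are locked in
            partial = ""
        else:
            partial = text       # a new partial replaces the previous one
    return " ".join(finals + ([partial] if partial else []))

updates = [
    ("partial", "Good"),
    ("partial", "Good evening"),
    ("final", "Good evening everyone."),
    ("partial", "Welcome to"),
]
print(apply_updates(updates))  # Good evening everyone. Welcome to
```

This partial/final pattern is why sub-second results can be shown immediately while the finalized transcript remains stable for later reuse as subtitles.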
Note:
All application areas also support human-in-the-loop editing, which allows users to make corrections before the subtitles are published. In the future, we will also support LLM integration with Deep Live Hub for contextual tasks.
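A human-in-the-loop pass can be as simple as applying an editor's correction map to the cues before publication. The cue structure and correction format below are illustrative assumptions, not the editing interface itself.

```python
# Sketch: apply human corrections to subtitle cues before publishing.
# Cues are (start, end, text) tuples; corrections map wrong -> right phrases.

def apply_corrections(cues, corrections):
    """Replace misrecognized phrases in each cue's text."""
    fixed = []
    for start, end, text in cues:
        for wrong, right in corrections.items():
            text = text.replace(wrong, right)
        fixed.append((start, end, text))
    return fixed

cues = [("00:00:01.000", "00:00:03.000", "Welcome to the town whole meeting.")]
corrections = {"town whole": "town hall"}
print(apply_corrections(cues, corrections)[0][2])
# Welcome to the town hall meeting.
```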