
Demystifying SMPTE 2110 with Leigh Whitcomb at AI-Media Summit 2025
Demystifying SMPTE 2110: How Live Production is Moving to the Cloud, and What It Means for Metadata and Captions
As live production rapidly evolves, traditional workflows based on SDI (Serial Digital Interface) are reaching their limits. In a cloud-first, virtualised world, where flexibility and scalability are critical, SDI’s tightly integrated streams of video, audio, and metadata are proving to be more of a bottleneck than a benefit.
In his recent presentation at the AI-Media Annual Summit 2025, Leigh Whitcomb of Whitcomb Consulting broke down how SMPTE 2110 – the next-generation standard for professional media – is redefining live production. He explored how 2110 manages metadata, improves workflows, and crucially, what this means for accessibility services like captions and subtitling.
Here’s what you need to know.
The Problem with Traditional SDI Workflows
In SDI workflows, everything – video, audio, captions, control data – is bundled together into a single, complex stream. For physical broadcast environments, this worked well enough. But as media companies increasingly move toward cloud processing and virtualised environments, SDI’s limitations become glaring.
Take captions, for example. If you want to generate closed captions for a UHD stream, you’re dealing with a full 12G SDI interface just to extract the audio and metadata – even though the captions themselves are only a tiny portion of the data.
As Whitcomb explained:
“While SDI works okay-ish with pizza-box products, as I move to virtual processing or the cloud, this isn’t very friendly. I’ve got a 12-gig interface that’s not very cloud-friendly, and interfaces that are not standard in the IT space.”
In short: SDI was never designed for the cloud era.
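The scale of that mismatch is easy to quantify. A back-of-the-envelope comparison, assuming an uncompressed stereo PCM audio stream of the kind ST 2110-30 carries (48 kHz, 24-bit; the figures are illustrative, not from the talk):

```python
# Rough bandwidth comparison: a full 12G-SDI feed vs. an audio-only stream.
# Figures are illustrative estimates, not measurements from the session.

SDI_12G_BPS = 12e9  # a 12G-SDI interface carries roughly 12 Gb/s

# Uncompressed PCM audio as carried by ST 2110-30 (AES67-style):
sample_rate = 48_000      # samples per second
bit_depth = 24            # bits per sample
channels = 2              # stereo
audio_bps = sample_rate * bit_depth * channels  # payload only, no RTP/IP overhead

print(f"Audio payload: {audio_bps / 1e6:.2f} Mb/s")
print(f"12G-SDI is roughly {SDI_12G_BPS / audio_bps:,.0f}x larger")
```

The audio a captioning system actually needs is on the order of a few megabits per second, thousands of times smaller than the 12 Gb/s interface SDI forces you to terminate.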
Enter SMPTE 2110: Separate Streams, Simplified Workflows
SMPTE 2110 flips the SDI model on its head by breaking out video, audio, and metadata into separate IP-based streams. This architectural shift is a game changer.
With 2110, devices can subscribe to only the streams they need. A captioning system, for example, doesn’t have to process the full video feed – it can just pull in the audio stream and relevant metadata.
This separation not only reduces bandwidth requirements but also makes the workflow far more cloud-friendly and scalable.
Whitcomb summarised:
“If I have an audio-only device, I can route only audio packets to it. I don’t need that 12-gig interface anymore.”
Beyond operational simplicity, 2110 also leverages standard IT technologies, making it easier to integrate with cloud services and virtualised environments.
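In practice, each 2110 essence travels as its own RTP stream, typically on its own multicast group, and receivers tell streams apart by destination address and RTP payload type. As a minimal sketch of the subscription idea (the payload-type numbers below are hypothetical dynamic assignments for illustration, not values fixed by the standard), an audio-only device can parse the 12-byte RTP header and keep just the packets it cares about:

```python
import struct

# Hypothetical dynamic payload-type assignments for this example only;
# real values (96-127) are negotiated per stream, e.g. via SDP.
PT_VIDEO = 96   # e.g. an ST 2110-20 video stream
PT_AUDIO = 97   # e.g. an ST 2110-30 audio stream
PT_META = 100   # e.g. an ST 2110-40 ancillary-data stream

def rtp_payload_type(packet: bytes) -> int:
    """Extract the 7-bit payload type from a fixed 12-byte RTP header."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    # Byte 1 holds the marker bit (MSB) plus the payload type (low 7 bits).
    return packet[1] & 0x7F

def keep_audio_only(packets):
    """Simulate an audio-only device: drop everything but audio packets."""
    return [p for p in packets if rtp_payload_type(p) == PT_AUDIO]

def make_packet(pt: int, seq: int, payload: bytes = b"") -> bytes:
    """Build a minimal RTP packet: version 2, no padding/extension/CSRC."""
    return struct.pack("!BBHII", 0x80, pt, seq, 0, 0) + payload

stream = [make_packet(PT_VIDEO, 1), make_packet(PT_AUDIO, 1),
          make_packet(PT_META, 1), make_packet(PT_AUDIO, 2)]
audio = keep_audio_only(stream)
print(f"kept {len(audio)} of {len(stream)} packets")  # kept 2 of 4 packets
```

In a real deployment the filtering happens even earlier: the device simply never joins the video stream's multicast group, so those packets never reach it at all.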
Metadata and Captions in a 2110 World
One of the most exciting elements of 2110 is how it handles metadata – the crucial data that drives captions, SCTE triggers, HDR signalling, and more.
In the 2110 family of standards, 2110-40 specifically deals with metadata streams. By moving metadata to its own stream, workflows like closed captioning become more efficient and less dependent on specialised hardware.
Looking ahead, newer parts of the 2110 standard are set to unlock even greater capabilities:
- 2110-41 (Fast Metadata Framework): Designed for real-time, dynamic metadata transport.
- 2110-43 (Timed Text Markup Language): Provides native support for captions and subtitles using TTML, which is particularly exciting for accessibility and localisation workflows.
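To make the 2110-43 piece concrete, here is a sketch of the kind of minimal TTML document such a stream could carry. The cue text and timings are invented for illustration; a real sender would build documents like this and transport them in RTP payloads:

```python
import xml.etree.ElementTree as ET

# The standard TTML namespace; register it as the default so the
# output serialises without prefixes.
TTML_NS = "http://www.w3.org/ns/ttml"
XML_NS = "http://www.w3.org/XML/1998/namespace"
ET.register_namespace("", TTML_NS)

def make_ttml(cues):
    """Build a minimal TTML document from (begin, end, text) cues.
    Cue content here is illustrative, not from the session."""
    tt = ET.Element(f"{{{TTML_NS}}}tt", {f"{{{XML_NS}}}lang": "en"})
    body = ET.SubElement(tt, f"{{{TTML_NS}}}body")
    div = ET.SubElement(body, f"{{{TTML_NS}}}div")
    for begin, end, text in cues:
        p = ET.SubElement(div, f"{{{TTML_NS}}}p", {"begin": begin, "end": end})
        p.text = text
    return ET.tostring(tt, encoding="unicode")

doc = make_ttml([
    ("00:00:01.000", "00:00:03.000", "Welcome to the summit."),
    ("00:00:03.500", "00:00:06.000", "SMPTE 2110 separates essence streams."),
])
print(doc)
```

Because the captions are a structured text document rather than bits packed into ancillary video data, downstream systems can restyle, translate, or reposition them without touching the video at all.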
As Whitcomb pointed out:
“ST 2110-40 got us going but was mostly a legacy thing. These newer standards give us a framework for much more future-proof workflows.”
Practical Benefits for Captioning and Accessibility
For anyone working in captioning, subtitling, or live accessibility services, SMPTE 2110 represents a major step forward.
- Cloud-readiness: Captioning solutions can now operate entirely in the cloud, using lightweight streams rather than bulky SDI infrastructure.
- Scalability: Multiple captioning or metadata devices can subscribe to the same stream, simplifying multi-language and multi-region productions.
- Futureproofing: With 2110-43 and TTML support, broadcasters can deliver rich, flexible captions natively within their IP workflows.
Tools for Debugging and Optimising 2110 Workflows
Moving to IP brings new tools and approaches for monitoring and debugging. Whitcomb highlighted several valuable options:
- Wireshark: A free, powerful packet analyser that can dissect 2110 streams with the right configuration.
- Specialist analysers: Tools like Bridge Technologies’ VB440 and Telestream’s Prism 2110 offer high-level stream analysis, helping engineers quickly identify and resolve issues.
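One of the most common faults these tools surface is packet loss, which shows up as gaps in a stream's 16-bit RTP sequence numbers. A toy version of that check (no substitute for the analysers above, and it treats reordering as loss for simplicity) is straightforward:

```python
def find_sequence_gaps(seq_numbers):
    """Report gaps in a stream of 16-bit RTP sequence numbers as
    (first_missing_seq, count) pairs. Handles wraparound at 65535 -> 0;
    reordered packets are counted as a gap here, unlike a real analyser."""
    gaps = []
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        expected = (prev + 1) & 0xFFFF  # sequence numbers wrap at 2**16
        if cur != expected:
            missing = (cur - expected) & 0xFFFF
            gaps.append((expected, missing))
    return gaps

# A stream that wraps cleanly at 65535 -> 0, then drops one packet:
seqs = [65533, 65534, 65535, 0, 1, 3]
print(find_sequence_gaps(seqs))  # [(2, 1)]
```

Specialist analysers go much further, checking timing against the ST 2110-21 traffic-shaping models, but sequence continuity is the first thing most engineers look at when a stream misbehaves.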
He emphasised that understanding these tools is critical as teams transition from SDI to IP environments.
Beyond Broadcast: The Rise of IPMX
One of the more forward-looking topics in Whitcomb’s session was IPMX – a new standard built on SMPTE 2110 but designed specifically for the broader ProAV (Professional Audio Visual) market.
While SMPTE 2110 was built for high-end broadcast operations, IPMX aims to simplify things for larger, more varied AV environments, including corporate events, education, and venues. This extension of the standard could help accelerate adoption and standardisation well beyond traditional media.
Final Thoughts: A Future-Ready Standard for Live Production
SMPTE 2110 isn’t just an evolution of SDI – it’s a rethinking of how we design live production for the future. By separating streams, embracing IP protocols, and providing better tools for metadata and accessibility services, 2110 opens the door to scalable, cloud-ready workflows that meet the demands of modern audiences.
Whether you’re a broadcaster looking to modernise, a live event producer, or a captioning provider focused on accessibility, now is the time to explore what 2110 can do for your workflows.
“As we move to 2110, it makes it much easier to go to the cloud or go to virtualised products.” – Leigh Whitcomb
Interested in how SMPTE 2110 can modernise your captioning and translation workflows? Get in touch with our team today to learn more.
Big thanks to Leigh Whitcomb for his expertise and insight on this topic, and for sharing his time with us at the recent AI-Media Annual Summit 2025.
Full session on Vimeo: https://vimeo.com/1080440730?share=copy#t=0