SMPTE ST 2110 is far from our industry's first attempt to transition production to IP. A number of proprietary solutions have been available for years, and we have also seen standardized approaches such as SMPTE 2022-6 that map SDI onto IP. But with SMPTE ST 2110, we finally have a standard designed from the ground up to separate video, audio and ancillary data and enable truly flexible workflows.
At Net Insight, we receive countless questions about NMOS and SMPTE 2110 and how to handle the transition from SDI to IP and ST 2110. We try to answer some of them in this series of articles. In the first article, we concluded that the move to IP, and in particular to SMPTE 2110, is about more than just new technology. But that does not mean we can ignore the technical aspects: even if we stick to today's workflows and simply replace SDI with SMPTE 2110, there will be significant technical changes and challenges. For this reason, this article covers the basics of the SMPTE 2110 and NMOS specifications and lays a foundation for understanding the new challenges that emerge in the WAN.
The big picture - elementary streams and asynchronous behavior
First, SMPTE 2110 is designed to be video format independent, supporting 720p, 1080i/p, 4K, progressive, interlaced, HDR, HFR and more. Standards exist for both compressed and uncompressed audio and video workflows, although the first round of work focused on uncompressed workflows. The industry discussion has therefore so far been heavily oriented towards studios and production facilities. We are now fully committed to filling the gap around SMPTE 2110 in the WAN.
Comparison between SMPTE 2110, SDI and SMPTE 2022-6
Compared to SDI and SMPTE 2022-6, where SDI was simply mapped onto IP, the big news is that SMPTE 2110 splits audio, video, and ancillary data into separate elementary streams. This provides flexibility by allowing you to route and process the different streams independently. In addition, ST 2110 also describes how to carry SDI (i.e. SMPTE 2022-6) when that makes more sense.
And like any IP-based production format, ST 2110 recognizes that the underlying infrastructure is asynchronous. A key enabler for separating audio, video, and data streams over an asynchronous infrastructure is timing: each elementary stream is timestamped, and the timing reference is distributed out of band. In ST 2110 this is done with PTP (IEEE 1588).
In addition to synchronization, another challenge in moving to an asynchronous infrastructure is burstiness. In a synchronous infrastructure there is no concept of bursting, because traffic is delivered continuously. This is no longer the case with IP. Because IP is packet based, every device in the traffic path contains buffers that run independently of each other, resulting in traffic being delivered in bursts rather than as a continuous stream. For this reason, ST 2110 defines several sender and receiver profiles that describe how large bursts are acceptable in different environments.
Finally, SMPTE 2110 really only describes an IP-based media data plane. The control plane is left to its sibling, the NMOS specification set. The NMOS specifications describe how devices on a network can discover each other, which streams are available, and how two devices are connected. They are the glue that makes an ST 2110-based infrastructure manageable, and in many cases they are even more interesting than ST 2110 itself.
ST 2110 for audio (ST 2110-30 and ST 2110-31)
In SMPTE 2110, audio transport is based on AES67 and specifies how uncompressed 48 kHz PCM audio is conveyed. Up to 8 channels can be combined in one stream, and both 16 and 24 bit depths are supported. In addition, ST 2110-31 specifies how AES3 (AES/EBU) audio, including compressed formats such as Dolby E, is transmitted over IP.
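To get a feel for the numbers involved, the sketch below computes the payload size and bitrate of a typical ST 2110-30 stream. The 1 ms packet time and the 8-channel/24-bit parameters are illustrative defaults, not values mandated for every device.

```python
# Sketch: payload size and bitrate of an ST 2110-30 (AES67-style) audio stream.
# Parameters are illustrative; real devices may use other packet times.

def audio_stream_params(channels=8, sample_rate=48_000, bit_depth=24,
                        packet_time_s=0.001):
    samples_per_packet = int(sample_rate * packet_time_s)   # 48 samples at 1 ms
    payload_bytes = samples_per_packet * channels * bit_depth // 8
    payload_bps = channels * sample_rate * bit_depth        # audio payload only
    return samples_per_packet, payload_bytes, payload_bps

samples, payload, bps = audio_stream_params()
print(samples, payload, bps)  # 48 samples, 1152-byte payload, ~9.2 Mbps
```

Note that roughly 9 Mbps is tiny compared to video, which is why loss protection for audio needs different treatment in the WAN, as discussed below.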
For elementary streams, the key challenge in transmitting audio over the WAN is protection against loss. This is usually done with Forward Error Correction (FEC) and/or 1+1 protection, but FEC on low bandwidth services like audio introduces too much latency. The solution is a WAN architecture that can combine multiple streams into a high bandwidth packet to which FEC can be applied.
ST 2110 for video (ST 2110-20 and ST 2110-22)
In addition to the RTP wrapper, another innovation in uncompressed video transmission is that only the active part of the image, i.e. the pixels actually used, is transmitted. Compared to SDI and SMPTE 2022-6, this results in bandwidth savings of 15-30%.
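A rough back-of-the-envelope calculation shows where those savings come from. The sketch below compares the active-picture bitrate against the nominal SDI link rate; the 10-bit 4:2:2 assumption (20 bits per pixel) is a common production format, used here for illustration.

```python
# Sketch: rough bandwidth saving from sending only the active picture
# (ST 2110-20) versus a full SDI raster (as carried by SMPTE 2022-6).
# Assumes 10-bit 4:2:2 sampling = 20 bits per pixel; SDI rate is nominal.

def active_saving(width, height, fps, sdi_link_bps, bits_per_pixel=20):
    active_bps = width * height * bits_per_pixel * fps
    return 1 - active_bps / sdi_link_bps

# 1080i59.94 (29.97 frames/s) over 1.485 Gbps HD-SDI
saving = active_saving(1920, 1080, 30000 / 1001, 1.485e9)
print(f"{saving:.0%}")  # roughly 16%, within the 15-30% range quoted above
```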
Defined to support resolutions up to 32,000 × 32,000 pixels, ST 2110 is future proof in terms of upcoming formats and high resolution specifications. Color sampling and color depth support is flexible and includes HDR.
ST 2110 for ancillary data (ST 2110-40)
Ancillary data has been used for a variety of purposes over the years, some closely tied to the video stream and some not. ST 2110-40 describes a general way to encapsulate ancillary data in IP so that it can be transmitted independently of audio and video.
ST 2110 for synchronization, metadata and traffic shaping (ST 2110-10 and ST 2110-21)
All ST 2110 essence streams are based on RTP, a proven technology for transmitting time-sensitive data over IP using UDP packets. Each packet contains a timestamp that is used to align multiple essence streams, enabling frame-accurate live switching.
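The timestamp mechanism can be sketched as follows: the sender maps the common PTP time into each stream's media clock (90 kHz for video, 48 kHz for audio in ST 2110) and truncates it to RTP's 32-bit timestamp field. The PTP time value below is an arbitrary example.

```python
# Sketch: deriving RTP timestamps from PTP time, as ST 2110-10 describes.
# Two streams stamped from the same PTP clock can be realigned at the receiver.

def rtp_timestamp(ptp_time_s: float, media_clock_hz: int) -> int:
    return int(ptp_time_s * media_clock_hz) % 2**32

ptp_now = 1_700_000_000.5  # seconds since the PTP epoch (example value)
video_ts = rtp_timestamp(ptp_now, 90_000)   # video media clock
audio_ts = rtp_timestamp(ptp_now, 48_000)   # audio media clock
```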
PTP (ST 2059 / IEEE 1588) is used to synchronize devices in both frequency and time. The challenge with PTP is that it requires very low jitter to achieve the accuracy needed in broadcast environments. In a studio, where PTP passes through one or a few switches dedicated to live media traffic, this is less of an issue, especially since switches in studio environments often provide PTP support to improve accuracy. But in the WAN, where distances are longer and the hop count higher, PTP accuracy becomes a challenge. In our experience you need a PTP-capable WAN, which means you either cannot lease the infrastructure, or you need an overlay solution that provides PTP support on top of a non-PTP-enabled infrastructure.
In addition to RTP and PTP for synchronization, ST 2110-10 also describes how each stream carries a set of metadata that tells the receiver how to interpret it. The metadata is described using the Session Description Protocol (SDP). However, the metadata is actually distributed by a separate control system, described in the NMOS specifications.
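As a concrete illustration, the sketch below extracts key parameters from an SDP description. The SDP fragment is an illustrative example of the kind of attributes an ST 2110-20 sender advertises (format, resolution, PTP reference clock); real SDPs carry more fields.

```python
# Sketch: pulling key ST 2110 parameters out of an SDP description.
# The SDP fragment below is a simplified, illustrative example.

SDP = """\
m=video 50000 RTP/AVP 96
a=rtpmap:96 raw/90000
a=fmtp:96 sampling=YCbCr-4:2:2; width=1920; height=1080; depth=10
a=ts-refclk:ptp=IEEE1588-2008:ec-46-70-ff-fe-00-ff-0e
"""

def parse_fmtp(sdp: str) -> dict:
    """Return the format parameters from the first a=fmtp line."""
    for line in sdp.splitlines():
        if line.startswith("a=fmtp:"):
            params = line.split(" ", 1)[1]
            return dict(p.strip().split("=", 1) for p in params.split(";"))
    return {}

fmt = parse_fmtp(SDP)
print(fmt["width"], fmt["height"], fmt["sampling"])
```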
Finally, ST 2110-21 addresses the fact that IP traffic is inherently bursty, which matters all the more now that software-based solutions on standard transport infrastructure are becoming common. It describes a set of timing profiles that define how large bursts of packets the sender may emit and the receiver must handle. Note that with multiple timing profiles defined by the standard, you may encounter situations where the receiver only accepts the "narrow" profile (4 packets in a burst), while the sender uses a "wide" profile (20 packets in a burst). This means the receiver will drop packets when the incoming bursts are too large.
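The narrow/wide mismatch can be illustrated with a toy model: if a receiver can only absorb bursts up to its profile limit, anything beyond that is lost. The 4- and 20-packet burst sizes follow the example above; the real ST 2110-21 receiver buffer model (VRX) is considerably more detailed.

```python
# Toy model: packets lost when a sender's burst exceeds what the
# receiver's timing profile can absorb. Illustrative only; the actual
# ST 2110-21 VRX buffer model drains continuously over time.

def dropped(burst_size: int, receiver_max_burst: int) -> int:
    """Packets lost from a single burst that exceeds the receiver limit."""
    return max(0, burst_size - receiver_max_burst)

print(dropped(burst_size=4, receiver_max_burst=4))    # 0  -> narrow sender, no loss
print(dropped(burst_size=20, receiver_max_burst=4))   # 16 -> wide sender, loss
```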
But more importantly, in WAN environments the behavior of the sending device matters less than the jitter introduced by the network. Bursts are cumulative, and with many hops and often leased infrastructure, the WAN makes the flows much burstier. This means you need WAN technology that smooths the bursts back down to the levels specified by ST 2110.
NMOS for discovering, registering and connecting media streams (IS-04 and IS-05)
As mentioned earlier, the NMOS specifications describe a control plane that makes an ST 2110-based infrastructure easier to manage and use. In IS-04, NMOS describes how devices register in a common registry and how they can query the registry for information about other devices. It supports both central registration and peer-to-peer discovery for smaller configurations. Sending and receiving devices register their capabilities, and sending devices such as cameras or encoders register their available flows for receivers to subscribe to.
Connecting those flows is what IS-05 describes. It specifies how media flows can be established or removed between a sending and a receiving device, regardless of the actual protocol used to transport the flow. It supports both unicast and multicast flows, and connections can be set up immediately or scheduled for the future.
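In practice, an IS-05 connection is made by a broadcast controller sending an HTTP PATCH to the receiver's "staged" endpoint. The sketch below only builds the request shape; the UUIDs and base URL are placeholders, and the field names follow the IS-05 connection API as we understand it.

```python
# Sketch: the shape of an IS-05 request connecting a receiver to a sender.
# Placeholders throughout; no request is actually sent here.
import json

receiver_id = "1e2f3a4b-0000-0000-0000-000000000001"  # placeholder UUID
sender_id = "1e2f3a4b-0000-0000-0000-000000000002"    # placeholder UUID
base = "http://node.example/x-nmos/connection/v1.1"   # placeholder base URL

url = f"{base}/single/receivers/{receiver_id}/staged"
body = {
    "sender_id": sender_id,
    "master_enable": True,
    # "activate_immediate" switches now; scheduled modes also exist in IS-05
    "activation": {"mode": "activate_immediate"},
}
print("PATCH", url)
print(json.dumps(body, indent=2))
```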
WAN production distribution means that the NMOS registry needs to be aware of devices and flows available in multiple locations. The solution is typically a multi-domain setup with a registry at each location and a master broadcast controller connected to each of these registries to control individual devices and flows.
NMOS for controlling network connections (IS-06)
To ensure that broadcast controllers can communicate with network controllers and request connectivity in a standardized manner, the IS-06 specification was created. It covers how to discover network resources, how to authorize and secure their use, and how to monitor usage.
The broadcast controller learns the network topology from the network controller and can create, modify, and delete unicast and multicast flows and listeners. Configuring flows also includes reserving needed network resources, such as bandwidth and flow prioritization.
Multiple broadcast controllers can connect to a single network controller and network fabric, allowing multi-tenant configurations where a broadcast controller only has access to specific resources.
In a distributed environment, you typically have one network controller per site managing the local switch fabric. In addition, a dedicated WAN controller is needed for the WAN to manage resources and bandwidth. To leverage leased or shared infrastructure, you need a WAN solution that can reserve bandwidth and isolate services end-to-end.
SMPTE 2110 is designed to support more flexible workflows than before. It does this by splitting audio, video, and data into separate elementary streams, timestamped and synchronized with PTP. It also handles IP transition challenges such as burstiness.
Its sibling, the NMOS specification set, describes how devices can discover each other and the available streams, how sender-to-receiver flows are configured, and how a broadcast controller can manage network resources as needed.
As discussed in this article, the transition from SDI to SMPTE 2110 presents many challenges, especially in wide area networks. We will cover these challenges in more detail in the next post in this SMPTE 2110 series.
ALL SMPTE 2110-POSTS
- Part 1 - How is SMPTE 2110 changing live production?
- Part 2 - What are SMPTE 2110 and NMOS?
- Part 3 - SMPTE 2110 Challenges in Wide Area Networks (WAN)
- Part 4 - SMPTE 2110 Benefits and Business Case
SMPTE ST 2110 GLOSSARY
SMPTE ST 2110: A set of standards for handling digital media in an IP network. ST 2110 defines the separation of audio, video and ancillary data into elementary streams and is intended for use in equipment for the production and distribution of television programs.
SMPTE: The Society of Motion Picture and Television Engineers is the worldwide professional association of engineers, technologists and executives working in the media and entertainment industry.