You want to select specific tracks from your input? You want to make sure that some components are present in your stream? Follow the guide!

Before knowing where to go, you need to know where you are
The first step is to understand the type of components carried in your input stream. When you ingest an SRT stream, it carries a Transport Stream (as per ISO/IEC 13818-1, the MPEG-2 Transport Stream, or MPEG2-TS). As Wikipedia puts it, TS is a packet-based media container format for transmitting video, audio, and program data.
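To give a concrete idea of how TS packets are identified, here is a minimal Python sketch (an illustration, not part of Quortex I/O) that extracts the PID from a 188-byte TS packet header, as defined by ISO/IEC 13818-1:

```python
def ts_pid(packet: bytes) -> int:
    """Extract the 13-bit PID from a 188-byte MPEG2-TS packet."""
    # A TS packet is 188 bytes and starts with the sync byte 0x47.
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a valid TS packet")
    # The PID spans the low 5 bits of byte 1 and all 8 bits of byte 2.
    return ((packet[1] & 0x1F) << 8) | packet[2]

# The PAT (Program Association Table) always travels on PID 0.
pat = bytes([0x47, 0x40, 0x00]) + bytes(185)
print(ts_pid(pat))  # → 0
```

Every packet of a given component carries the same PID, which is why the Analysis tab can list one row per PID.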
You can also monitor the SRT stream health; check this article to learn more.

Quortex I/O offers an overview of all the components (essentially the media tracks) contained in the TS stream. To do so, simply click on an input and select the "Analysis" tab.

Analysis Overview
As you can see, each row corresponds to a TS component (a "PID"), and the information is made of five columns:
The PID is the TS PID Value. It is basically a unique identifier for the component.
The Component describes the media type (thanks to the icons), plus the detected language (in case of audio), plus the page/magazine (for teletext) and the detected codec.
In case of teletext input, the same PID can carry several languages. In that case, you will see the same PID used in more than one row.
The Bitrate shows the TS bitrate of the component. Note that it includes the elementary stream, but also the TS overhead.
The Status indicates whether an error was found when parsing the component.
Last, but not least: the "Track language" is key for this article. It indicates the language code that will be used in the "Processing" tab.
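The five columns above can be pictured as a simple record per PID. The sketch below is purely illustrative (the field names do not come from any Quortex I/O API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TsComponent:
    # Illustrative field names mirroring the five Analysis columns;
    # this is not an actual Quortex I/O data structure.
    pid: int                      # unique TS identifier for the component
    media_type: str               # "video", "audio", "teletext", ...
    codec: str                    # detected codec
    bitrate_bps: int              # elementary stream + TS overhead
    status: str                   # parsing status
    track_language: Optional[str] = None  # used later by the Processing tab

row = TsComponent(pid=256, media_type="audio", codec="aac",
                  bitrate_bps=128_000, status="OK", track_language="eng")
print(row.track_language)  # → eng
```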

In case of audio, an "Audio Description" icon is shown whenever it is signaled in the Transport Stream. For subtitles, the hearing-impaired icon is shown whenever signaled.

Advanced Analysis
Quortex I/O can also report a more thorough analysis, in JSON format. This can be done by clicking on "View full analysis".

This view is perfect if you are a TS black belt and want to know more about the PMT, PCR, and other fancy stuff. You can also share the exported JSON with our support in case something goes wrong.

The output mapping
Now that we know what our input stream is made of, let's see how we can map these streams to an output. To do so, let's jump to the "Processing" tab, and more precisely to the audio tab of the processing section.
The rule of thumb is that the number of rows corresponds to the number of audio outputs in your HLS or DASH stream. For each of these rows, you can decide which language to track; you can also decide which output language code will be carried (by default, the same as the tracked language). You can also add the audio description flag if need be, and modify the codec or the bitrate.

Keep in mind that the processing applies to a whole pool; this is why tracking by audio language makes much more sense than tracking by PID (it is very unlikely that all streams in a pool share the exact same PID mapping).

The "Track language" reported above is extremely important: it is used as the identifier to track a component in the processing.
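The selection logic can be sketched as follows. This is a hypothetical illustration of why language is a more robust key than PID across a pool (the dictionaries and the `find_by_language` helper are made up for this example):

```python
def find_by_language(components, language):
    """Return the first component whose track language matches, else None."""
    for comp in components:
        if comp.get("track_language") == language:
            return comp
    return None

# Two streams from the same pool: same languages, different PID mappings.
stream_a = [{"pid": 256, "track_language": "eng"},
            {"pid": 257, "track_language": "fra"}]
stream_b = [{"pid": 481, "track_language": "fra"},
            {"pid": 482, "track_language": "eng"}]

# Tracking by language finds the French audio in both streams,
# even though its PID differs.
print(find_by_language(stream_a, "fra")["pid"])  # → 257
print(find_by_language(stream_b, "fra")["pid"])  # → 481
```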

Audio groups
If more than one audio track shares the same bitrate, an audio group is created in HLS, containing all the languages declared at that bitrate.
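As an illustration of what such a group looks like in a resulting HLS master playlist (the names, URIs, and bitrates below are made up, and the exact output depends on your configuration), two 128 kbps audio tracks would share one group, per RFC 8216:

```
#EXTM3U
# Two languages sharing one bitrate end up in the same audio group
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio-128k",LANGUAGE="eng",NAME="English",DEFAULT=YES,URI="audio_eng.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio-128k",LANGUAGE="fra",NAME="Français",DEFAULT=NO,URI="audio_fra.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=2500000,CODECS="avc1.64001f,mp4a.40.2",AUDIO="audio-128k"
video_2500k.m3u8
```

The `AUDIO="audio-128k"` attribute on the variant stream is what ties the video rendition to the whole group of languages.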