
Does anyone know what communication standards are being used to detect camera hardware for use with getUserMedia?

I presume it's MTP or something like that, and I expect the implementation differs per browser/OS, but I've been searching for two days and can't find any solid information on this.
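For context, the browser-facing side of camera detection is just `navigator.mediaDevices.enumerateDevices()`; the OS-level protocol underneath is never exposed to the page. A minimal sketch (the `selectCameras` helper and the sample device list are my own illustration, not part of any spec):

```javascript
// Hypothetical helper: pick the camera entries out of a
// MediaDeviceInfo-like list (cameras have kind === 'videoinput').
function selectCameras(devices) {
  return devices.filter((d) => d.kind === 'videoinput');
}

// In a browser you would feed it the real enumeration result:
//   const cams = selectCameras(await navigator.mediaDevices.enumerateDevices());
// Here we use a fabricated sample so the helper runs anywhere:
const sample = [
  { kind: 'audioinput',  label: 'Built-in Microphone' },
  { kind: 'videoinput',  label: 'Integrated Camera' },
  { kind: 'audiooutput', label: 'Speakers' },
];
console.log(selectCameras(sample).map((d) => d.label)); // [ 'Integrated Camera' ]
```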

Asked Jul 17, 2018 at 9:01 by Nanhydrin
  • If you are talking only about the hardware detection protocol, then I think you are right: each browser uses methods defined in the particular OS to access camera hardware. But if you are looking for the media transport protocols used in WebRTC, you can go through innoarchitech.com/what-is-webrtc-and-how-does-it-work and w3.org/TR/webrtc. – Sandip Nirmal, Jul 25, 2018 at 9:12

2 Answers


I searched for a long time for the answer to your question. First I found this on the W3C WebRTC site:

This document defines a set of ECMAScript APIs in WebIDL to allow media to be sent to and received from another browser or device implementing the appropriate set of real-time protocols. This specification is being developed in conjunction with a protocol specification developed by the IETF RTCWEB group and an API specification to get access to local media devices developed by the Media Capture Task Force.

Then, on the page "Media transport and use of RTP", I found the following information:

5.2.4. Media Stream Identification:

WebRTC endpoints that implement the SDP bundle negotiation extension will use the SDP grouping framework 'mid' attribute to identify media streams. Such endpoints MUST implement the RTP MID header extension described in [I-D.ietf-mmusic-sdp-bundle-negotiation].

This header extension uses the [RFC5285] generic header extension framework, and so needs to be negotiated before it can be used.
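The 'mid' values referred to above live in plain-text SDP lines of the form `a=mid:<id>`. As an illustration only (the SDP fragment below is fabricated, and this is not a full SDP parser), extracting them is simple string work:

```javascript
// Illustrative only: pull the media stream identification tags
// (a=mid:...) out of an SDP blob.
function extractMids(sdp) {
  return sdp
    .split(/\r?\n/)
    .filter((line) => line.startsWith('a=mid:'))
    .map((line) => line.slice('a=mid:'.length).trim());
}

// Fabricated fragment of what a bundled offer might contain:
const sdp = [
  'v=0',
  'a=group:BUNDLE 0 1',
  'm=audio 9 UDP/TLS/RTP/SAVPF 111',
  'a=mid:0',
  'm=video 9 UDP/TLS/RTP/SAVPF 96',
  'a=mid:1',
].join('\r\n');

console.log(extractMids(sdp)); // [ '0', '1' ]
```

In a live page you could run the same function over `peerConnection.localDescription.sdp` to see which 'mid' tags your browser negotiated.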

12.2.1. Media Source Identification:

Each RTP packet stream is identified by a unique synchronisation source (SSRC) identifier. The SSRC identifier is carried in each of the RTP packets comprising a RTP packet stream, and is also used to identify that stream in the corresponding RTCP reports. The SSRC is chosen as discussed in Section 4.8. The first stage in demultiplexing RTP and RTCP packets received on a single transport layer flow at a WebRTC Endpoint is to separate the RTP packet streams based on their SSRC value; once that is done, additional demultiplexing steps can determine how and where to render the media.

RTP allows a mixer, or other RTP-layer middlebox, to combine encoded streams from multiple media sources to form a new encoded stream from a new media source (the mixer). The RTP packets in that new RTP packet stream can include a Contributing Source (CSRC) list, indicating which original SSRCs contributed to the combined source stream.

As described in Section 4.1, implementations need to support reception of RTP data packets containing a CSRC list and RTCP packets that relate to sources present in the CSRC list. The CSRC list can change on a packet-by-packet basis, depending on the mixing operation being performed.

Knowledge of what media sources contributed to a particular RTP packet can be important if the user interface indicates which participants are active in the session. Changes in the CSRC list included in packets needs to be exposed to the WebRTC application using some API, if the application is to be able to track changes in session participation. It is desirable to map CSRC values back into WebRTC MediaStream identities as they cross this API, to avoid exposing the SSRC/CSRC name space to WebRTC applications.

If the mixer-to-client audio level extension [RFC6465] is being used in the session (see Section 5.2.3), the information in the CSRC list is augmented by audio level information for each contributing source. It is desirable to expose this information to the WebRTC application using some API, after mapping the CSRC values to WebRTC MediaStream identities, so it can be exposed in the user interface.
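The demultiplexing step described above boils down to reading fixed-position fields out of the RTP header (RFC 3550, Section 5.1). In the browser this information surfaces through `RTCRtpReceiver.getContributingSources()`; the sketch below instead parses a raw packet directly, as an illustration only (no extension, padding, or payload handling, and the packet bytes are fabricated):

```javascript
// Read the SSRC and CSRC list out of an RTP fixed header.
function parseRtpHeader(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const csrcCount = bytes[0] & 0x0f;        // CC: low 4 bits of the first octet
  const payloadType = bytes[1] & 0x7f;      // PT: low 7 bits of the second octet
  const ssrc = view.getUint32(8);           // synchronisation source
  const csrcs = [];
  for (let i = 0; i < csrcCount; i++) {
    csrcs.push(view.getUint32(12 + 4 * i)); // contributing sources follow the SSRC
  }
  return { payloadType, ssrc, csrcs };
}

// Fabricated packet: version 2, CC=1, PT=96, SSRC=0x11223344, one CSRC.
const pkt = new Uint8Array([
  0x81, 0x60, 0x00, 0x01, // V=2, CC=1 | M=0, PT=96 | sequence number = 1
  0x00, 0x00, 0x00, 0x00, // timestamp
  0x11, 0x22, 0x33, 0x44, // SSRC
  0xaa, 0xbb, 0xcc, 0xdd, // CSRC[0]
]);
console.log(parseRtpHeader(pkt));
```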

(Quoted from the Internet-Draft "RTP for WebRTC", Perkins et al., March 2016.)

All transports for WebRTC are listed on that site.

You can find all documents from the IETF RTCWEB group on the "Real-Time Communication in WEB-browsers (rtcweb)" page.


For further information:

  • Media Capture (with links to all documents)
  • MediaStream API (all methods which are used in this API)
  • Real-time Transport Protocol (RTP)
  • Session Description Protocol (SDP)


My conclusion:

  1. Session Description Protocol (SDP)
  2. Real-time Transport Protocol (RTP) (likely as well)

The webrtc library has a set of platform-specific glue modules, which can be found here and also in the Chrome tree. On Windows this uses the Media Foundation APIs (with a fallback to DirectShow), Video4Linux on Linux, and AVFoundation on macOS.
