Getting started with WebRTC
Remarks
WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. The WebRTC components have been optimized to best serve this purpose.
Check out the following links for more information about WebRTC:
- webrtc.org
- WebRTC architecture
- Live demo
Setting up a WebRTC-based communication system
To set up a WebRTC-based communication system, you need three main components (a minimal sketch of how they fit together follows the list):
- A WebRTC signaling server

  To establish a WebRTC connection, peers need to contact a signaling server, which then provides the address information the peers require to set up a peer-to-peer connection. Examples of signaling servers are:
  - signalmaster: Lightweight, JavaScript-based signaling server
  - NextRTC: Java-based signaling server
  - Kurento: Comprehensive WebRTC framework
  - Janus: General purpose WebRTC Gateway
- A WebRTC client application

  The client accesses either a browser's WebRTC implementation through a JavaScript API or uses a WebRTC library (e.g. as part of a desktop or mobile app). To establish the connection to a peer, the client first needs to connect to the signaling server. Examples of WebRTC clients are:
  - OpenWebRTC: A cross-platform client with a mobile focus
  - Peer.js: A browser-based client (Peer.js also provides a lightweight server)
- A STUN/TURN server

  Session Traversal Utilities for NAT (STUN) enables peers to exchange address information even if they are behind routers employing Network Address Translation (NAT). If network restrictions prevent peers from communicating directly at all, the traffic is routed via a Traversal Using Relays around NAT (TURN) server. You can find a detailed and graphical explanation of STUN and TURN at https://www.avaya.com/blogs/archives/2014/08/understanding-webrtc-media-connections-ice-stun-and-turn.html. Examples of WebRTC STUN/TURN servers are:
  - coturn combines STUN and TURN and is typically part of a fully-fledged WebRTC infrastructure.
  - Janus WebRTC Gateway comes with an integrated STUN/TURN server.
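The sketch below shows how these components fit together from the caller's side. It assumes a hypothetical WebSocket-based signaling server at wss://signaling.example.com that relays JSON messages between the two peers, and it uses a public Google STUN server; the message format is illustrative, not part of any standard.

// Caller-side sketch: the signaling server relays the offer/answer and ICE
// candidates; STUN lets the peers discover their public addresses.
const signaling = new WebSocket('wss://signaling.example.com'); // hypothetical server

const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

// forward ICE candidates discovered via STUN to the remote peer
pc.onicecandidate = ({ candidate }) => {
  if (candidate) {
    signaling.send(JSON.stringify({ type: 'candidate', candidate }));
  }
};

// apply the answer and candidates relayed back by the signaling server
signaling.onmessage = async ({ data }) => {
  const msg = JSON.parse(data);
  if (msg.type === 'answer') {
    await pc.setRemoteDescription(msg);
  } else if (msg.type === 'candidate') {
    await pc.addIceCandidate(msg.candidate);
  }
};

// create an offer and send it through the signaling channel
// (in a real app, add media tracks or a data channel before creating the offer)
signaling.onopen = async () => {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ type: 'offer', sdp: offer.sdp }));
};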
Introduction to WebRTC
WebRTC is an open framework for the web that enables Real Time Communications in the browser. It includes the fundamental building blocks for high-quality communications on the web, such as network, audio and video components used in voice and video chat applications.
These components, when implemented in a browser, can be accessed through a JavaScript API, enabling developers to easily implement their own RTC web app.
The WebRTC effort is being standardized on an API level at the W3C and at the protocol level at the IETF.
- A key factor in the success of the web is that its core technologies – such as HTML, HTTP, and TCP/IP – are open and freely implementable. Currently, there is no free, high-quality, complete solution available that enables communication in the browser. WebRTC enables this.
- Already integrated with best-of-breed voice and video engines that have been deployed on millions of endpoints over the last 8+ years. Google does not charge royalties for WebRTC.
- Includes and abstracts key NAT and firewall traversal technology, using STUN, ICE, TURN, RTP-over-TCP and support for proxies.
- Builds on the strength of the web browser: WebRTC abstracts signaling by offering a signaling state machine that maps directly to PeerConnection. Web developers can therefore choose the protocol that best suits their usage scenario (for example, SIP or XMPP/Jingle).
Read more about WebRTC at webrtc.org.
Get access to your audio and video using the getUserMedia() API: Hello WebRTC!
navigator.mediaDevices is the common entry point to getUserMedia() in both Chrome and Firefox. It returns a promise that resolves with the local stream on success:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then((stream) => {
    // attach the stream to the window object so you can reuse it later
    window.localStream = stream;
    // your code to use the stream
  })
  .catch((err) => {
    console.log(err);
  });
You can pass audio and video constraints to getUserMedia() to control capture settings such as resolution, frame rate, device preference, and more.
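For example, here is a small sketch requesting roughly 720p video at up to 30 frames per second (the specific values are illustrative):

// ask for 720p at up to 30 fps; "ideal" lets the browser pick the closest
// supported settings instead of failing, while "max" is a hard cap
navigator.mediaDevices.getUserMedia({
  audio: true,
  video: {
    width: { ideal: 1280 },
    height: { ideal: 720 },
    frameRate: { max: 30 }
  }
})
  .then((stream) => {
    window.localStream = stream;
  })
  .catch((err) => {
    console.log(err);
  });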
Attach the stream to a video element
// Assuming a <video> element exists in the page
const videoElement = document.querySelector('video');
// Set the video element to autoplay to ensure the video appears live
videoElement.autoplay = true;
// Mute the local video so the audio output doesn't feed back into the input
videoElement.muted = true;
// Attach the stream to the video element
videoElement.srcObject = window.localStream;
Stop both video and audio
localStream.getTracks().forEach((track) => {
  track.stop();
});
Stop only audio
localStream.getAudioTracks()[0].stop();
Stop only video
localStream.getVideoTracks()[0].stop();
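Note that a stopped track cannot be restarted. If you only want to mute the microphone or hide the camera temporarily, toggle the track's enabled flag instead (a small sketch using the stream captured above):

// mute the microphone without releasing it (set back to true to unmute)
localStream.getAudioTracks()[0].enabled = false;

// hide the camera feed without stopping the track (set back to true to resume)
localStream.getVideoTracks()[0].enabled = false;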