CESAR GUIRAO: Today, I'm going to talk about Android development with WebRTC.

The summary of the talk is: we are going to see the options that we have for integrating WebRTC into an Android application.

And we are going to have a live demo on how to use WebRTC using the Java APIs provided by WebRTC, and the bindings that they have.

So, let's start with options.

The options here are from the point of view of using WebRTC.

So, at the end, if you are using another language, or another framework, whatever, they are going to use these three options.

Maybe you are not using some of them directly, but the framework will use one of these.

So, we have the Android WebView, we are going to see that, the Native Java API for WebRTC, and the C++ API.

Let's start with the Android WebView.

The WebView that is based on Chrome was added to Android some time ago.

It replaced the old WebViewthat was already available in Android using WebKit.

And this new WebView uses Chrome, and it has WebRTC support.

It was introduced in Android 4.4, but the problem is that it was introduced using a very old version of Chrome, and it did not support WebRTC well.

So, at the end, it is only usable in Lollipop and higher.

So the good thing now, on new devices, is that it is updated very frequently with every version of Chrome, so it is one of the options.

But it has some limitations.

The problem with working only on Lollipop is that, at the end, there are a lot of devices that you are not targeting if you are using the WebView.

And Lollipop and newer versions are only 50% of the Android market share.

The other thing is that the WebView is a standard component on Android, so it's updated outside your application.

That means that it's a good thing, maybe, but also it can break your app.

You don't control when the WebView is updated.

That means that sometimes people can be running your application on a WebView version that you have not tested, or maybe you need them to update the WebView.

And people are just not doing that, so you cannot force that.

And the other thing is that all the video views are inside the WebView.

So there would be applications where everything is contained in the WebView.

So if you want to mix the video views with other UI components from Android, it is something that is not easy to do.

As an alternative to the WebView, the most popular one is Crosswalk.

It is an open source project where they compile Chromium, and they provide the Chromium build for you to use as the WebView.

The good things about that, the pros, are that it has Android 4.0 support, so it works on more devices, and you can embed the binary inside your application, so you can decide when to update.

That is a good thing, depending on your use case.

And you always have the latest version of Chrome.

That is also a good thing.

As cons, as you embed the binary inside your application, the binary size is going to increase a lot.

Chromium is a very big project.

And also, another thing: even though you manually upgrade the version of Chrome, Google can force you to update your application at some point if there is some security issue or something like that.

Maybe sometimes this can happen, and you have to update, or your application can be removed from Google Play.

So, let's go to the second option, that is the Java APIs.

And WebRTC already provides some bindings for Java, so it is something that you can already use.

The good thing with that is that all the video is rendered using native views, so you can integrate all the UI with your video views.

So, you can mix them, and create your application as you want.

And also it is manually updated.

The problem here is that WebRTC is a big project.

So, there are prebuilts available. Pristine.io was one of the most popular ones, but it's outdated now, and they are not maintaining it anymore.

And the other option is to compile it from the source.

That is not an easy task.

I think that other people are going to talk about that today.

For example, for Android, it doesn't compile on Mac or Windows, so you need a dedicated box using a [INAUDIBLE] to compile it.

So, it's not an easy thing to maintain.

The other thing about the Native Java API is that the API is more complex than the JavaScript one.

Mainly because Java, the language, is different, but there are other things from Android that make it a bit more complicated.

Also, using this approach, the binary size is very big; it is like embedding the WebView.

For example, when you create something with Pristine.io, you have 20 megabytes of APK size.

Here, in the Java option, you have other alternatives, like TokBox and other platform providers.

They provide a Java API that you can use, and they abstract all the PeerConnection APIs, so you don't have to care about that.

And also, you don't have to do all the work of [INAUDIBLE] everything there.

And the last option is the C++ API.

WebRTC is done in C++, so you can access all the APIs in C++.

That makes sense if your codebase is already in C++, which is maybe not the common case, but for portability, it is a good option.

You still need Java access for the capturing/rendering, and in Android, most of those APIs are in Java.

So, you would need JNI to have access to the camera, or there are new APIs, but it depends on the version.

So, at the end, it is complicated.

But the good thing with that is you have the maximum portability.

The same C++ code can run on iOS, Android, desktop, whatever, on other platforms.

But it is very complex to maintain.

The C++ API is not as stable as the others, so they are changing the API.

And sometimes, if you want to upgrade to a new version of WebRTC, you have to modify your code, so it is hard to maintain.

Let's start with using the Java API to create an application.

This is the fun part, I think.

Well, at least for me.

The setup.

We create a single activity application using Android Studio.

The thing is, to start with WebRTC, we need to decide something about the signaling mechanism.

You can use WebSocket, PubNub; this is up to you.

You can use SMS if you want; this doesn't matter.

In the example, I'm going to use Socket.IO.

It is very easy to set up a server for an app with that.

To get WebRTC, for the example, the easiest way is to get a prebuilt, like Pristine.io.
You add that to your Gradle file, and you already have all the APIs needed to use WebRTC; it is the only line that you have to add.
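As a sketch, assuming the Pristine.io artifact coordinates (the version number is just an example of one of their published builds), that single line in build.gradle looked like:

```groovy
dependencies {
    // Prebuilt WebRTC (libjingle) from Pristine.io, no longer maintained
    compile 'io.pristine:libjingle:11139@aar'
}
```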

The problem with that is it's not updated.

The last version is from December of last year, so that's an issue, maybe.

And don't forget to add the permissions needed to access the camera, the internet, and the microphone.
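The three manifest entries mentioned here would look like this in AndroidManifest.xml:

```xml
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```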

And we can start with the WebRTC initialization.

WebRTC is C++ code, so, at the end, this is the thing that I was commenting on before, that you need to access all the APIs from Java.

So, there is this initial method, a starting method, where you pass the context to access all the hardware APIs, something like that.

And also to decide if you want to use audio, your own video, and if you want to use hardware acceleration here.

This is something that is needed for the C++ code to access the Java APIs.

With this you create a PeerConnection factory.

That is the object that is used to create the PeerConnection.
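As a minimal sketch of that initialization using the libjingle-era Java bindings (the exact signature of `initializeAndroidGlobals` changed between revisions, so treat the flags here as assumptions):

```java
// Wire WebRTC's C++ core up to the Android context and hardware APIs.
PeerConnectionFactory.initializeAndroidGlobals(
        context, // application context
        true,    // initialize audio
        true,    // initialize video
        true);   // use hardware video acceleration

// The factory is then the entry point for creating sources, tracks,
// streams, and PeerConnections.
PeerConnectionFactory factory = new PeerConnectionFactory();
```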

So, the next step is the video capturer.

The good thing is, with WebRTC, they already provide all the setup to start capturing.

So, you don't have to know how to use the camera APIs, or something like that.

So, [INAUDIBLE] has these two lines.

You can get the name of the front-facing device, and you can create the video capturer using the name.

They already provide implementations for the camera APIs, so you don't have to deal with that.
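Those two lines, using the capturer implementation that the libjingle-era bindings shipped:

```java
// Ask WebRTC for the device name of the front camera, then build a
// capturer for it; no direct Camera API calls are needed.
String frontCameraName = VideoCapturerAndroid.getNameOfFrontFacingDevice();
VideoCapturer videoCapturer = VideoCapturerAndroid.create(frontCameraName);
```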

Also, they added, some weeks ago, the possibility to create a video capturer using screen sharing.

So, you can screen-share the view from the application.

So, there are some interesting use cases using that.

That is a good thing, but it's not available in the Pristine.io compilation yet.

I mean, they are not maintaining it.

Here is the line to create the video capturer, and the next step is to add this video capturer to something.

In WebRTC, we have the concept of the media stream.

That is what we use to send the video to the other peer, so we need to add two tracks to the stream: the audio track and the video track.

When we create the tracks, we set up the video source using the video capturer that we created before.

So after that, we create the audio source and the audio track, and we have the local media stream.

With this local media stream, we can start showing the preview of the local video in the application.
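A sketch of building that stream (the track and stream labels such as "ARDAMS" are arbitrary strings; they just have to be unique):

```java
// The capturer feeds a video source, which feeds a video track.
VideoSource videoSource =
        factory.createVideoSource(videoCapturer, new MediaConstraints());
VideoTrack videoTrack = factory.createVideoTrack("ARDAMSv0", videoSource);

AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
AudioTrack audioTrack = factory.createAudioTrack("ARDAMSa0", audioSource);

// The local media stream is what gets attached to the PeerConnection.
MediaStream localStream = factory.createLocalMediaStream("ARDAMS");
localStream.addTrack(videoTrack);
localStream.addTrack(audioTrack);
```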

For the rendering.

Here, to see the preview of the local video, there are several options available.

In WebRTC, they already provide these two: GLSurfaceView and SurfaceViewRenderer.

The difference between them is that the GLSurfaceView option is a single GLSurfaceView that is shared by all the renderers in the same conference.

So you can overlap the video there, but you have to add the renderers in the order that you want to place them on the screen.

And all of them share the same SurfaceView.

Maybe it's OK for some applications, but there is the other one, SurfaceViewRenderer, which uses a different view for every video.

So you can place the views in the layout in any way.

This last one is more flexible, but the SurfaceView views in Android have layout issues.

At the end, in the implementation of SurfaceView in Android, they are not really views; they are like windows over the real window, so maybe you can have some layout issues.

And WebRTC provides all the APIs to create your own renderer.

So, at the end, if you have issues with one of them, you can create your own renderer using a TextureView, or maybe, for example, if you have a game, you can integrate the texture or the video frame inside your game.

You have the possibility to do it.

So, in the sample code, in this example, we are going to use the GLSurfaceView.

We get the SurfaceView from the layout.

And at the end, we create the renderers.

This is the other peer's renderer, which we are creating first, to cover all of the GLSurfaceView.

And we are creating our preview renderer, covering only one part, one square, of the view.

And if we add the renderer to the local video track, we start seeing our preview.
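A sketch of that setup with `VideoRendererGui`, the helper in the old bindings that drives a shared GLSurfaceView (coordinates are percentages of the surface; the layout values here are just an example):

```java
// All renderers created through VideoRendererGui share this one surface.
GLSurfaceView glView = (GLSurfaceView) findViewById(R.id.glview);
VideoRendererGui.setView(glView, null);

// Remote renderer first, covering the full surface...
VideoRenderer.Callbacks remoteRender = VideoRendererGui.create(
        0, 0, 100, 100, VideoRendererGui.ScalingType.SCALE_ASPECT_FILL, false);
// ...then the local preview in one corner, drawn on top.
VideoRenderer.Callbacks localRender = VideoRendererGui.create(
        70, 70, 28, 28, VideoRendererGui.ScalingType.SCALE_ASPECT_FILL, true);

// Attaching the renderer to the local track starts the preview.
videoTrack.addRenderer(new VideoRenderer(localRender));
```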

So, with that, we have half of the [INAUDIBLE].

The next thing is to create the PeerConnection.

It's not tricky, but the thing is, usually, when you create the PeerConnection, people add some STUN server from Google, but at the end, you need to provide your own TURN and STUN servers.

This is something that you need for your deployment.

If you don't have that, the PeerConnection maybe works in the local network, but it is very probable that it's not going to work outside your local network.

Because, at the end, sometimes there are firewalls, or NATs, that make you have issues connecting from one peer to the other.

So, this is very important.

So, this is where the third parties are important, because, for example, at TokBox, we provide all these things so you don't have to deploy your own.

So, we create the PeerConnection using the PeerConnection factory that we created before, and we add our local stream to the PeerConnection.

That way, when the PeerConnection is connected to the other peer, they will see our video.
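A sketch of the PeerConnection creation (the TURN entry is a hypothetical placeholder; you would substitute your own deployment):

```java
// ICE servers: a public STUN server for the demo, plus your own
// TURN server for connections that cannot go peer-to-peer.
List<PeerConnection.IceServer> iceServers = new ArrayList<>();
iceServers.add(new PeerConnection.IceServer("stun:stun.l.google.com:19302"));
// iceServers.add(new PeerConnection.IceServer(
//         "turn:turn.example.com:3478", "user", "password"));

PeerConnection peerConnection = factory.createPeerConnection(
        iceServers, new MediaConstraints(), peerConnectionObserver);

// Adding the local stream is what lets the other peer see our video
// once the connection is established.
peerConnection.addStream(localStream);
```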

And here the SDP negotiation starts.

The SDP negotiation is the same as in JavaScript.

Maybe it is a bit more verbose because of the way that you have to do it in Java, but you have to implement the PeerConnection.Observer and SdpObserver to get notifications.

They are the listeners that get the notifications from the PeerConnection when something happens.
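The offer side of that might look like this as a sketch (`sendOfferOverSignaling` is a hypothetical helper standing in for the signaling send):

```java
peerConnection.createOffer(new SdpObserver() {
    @Override
    public void onCreateSuccess(SessionDescription sdp) {
        // Apply the SDP locally, then relay it to the other peer
        // through the signaling channel.
        peerConnection.setLocalDescription(this, sdp);
        sendOfferOverSignaling(sdp); // hypothetical helper
    }
    @Override public void onSetSuccess() {}
    @Override public void onCreateFailure(String error) {}
    @Override public void onSetFailure(String error) {}
}, new MediaConstraints());
```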

We are sending, in our sample code, the SDP over Socket.IO.

That is our signaling protocol.

And this is the last step to have media flowing.

You have to remember also, when you get the stream from the other peer, to add the renderer there to see the video from the other peer.

This is the diagram of how the SDP negotiation works.

It seems very complicated, but it's not so much.

At the end, both clients connect to the server using Socket.IO, and the server decides to send the createOffer, which is the start message, to one of them.

So, one of them, using the PeerConnection API, creates the offer and sets the local description on its PeerConnection object.

Then that generates the offer SDP that you'll send to the other peer.

The server is just relaying the message.

And client 2, when it receives the message, sets the remote description and creates the answer that travels the other way, and with that they have the information about the codecs, the sizes, and so on.

So, in this process, there are also other things that are called candidates.

Whenever you start doing the SDP negotiation, the PeerConnection automatically starts trying to guess which IPs you have.

So, all the candidates are options to connect to your host.

So, if you have several [INAUDIBLE] servers, maybe you would see more candidates here, because there are more options to connect to you, or something like that.
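The candidate exchange, as a sketch: each candidate the PeerConnection discovers is reported to your observer and relayed over signaling, and candidates arriving from the peer are added back (`sendCandidateOverSignaling` is a hypothetical helper):

```java
// In your PeerConnection.Observer implementation:
@Override
public void onIceCandidate(IceCandidate candidate) {
    // Relay each discovered candidate to the other peer.
    sendCandidateOverSignaling(candidate); // hypothetical helper
}

// When a candidate message arrives from the other peer:
peerConnection.addIceCandidate(
        new IceCandidate(sdpMid, sdpMLineIndex, candidateSdp));
```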

And at the end of this process, we have the media established.

And the server is something that, if you are doing Android development, maybe you are not used to doing.

But the server is something as easy as that.

This Socket.IO server is only 30 lines of code.

It doesn't have any kind of logic.

When there is a message, an offer message, it sends the message to the other peers.

Answer, the same; and the candidates, the same.

So, for the first one, it sends the createOffer to start the process, but the server is very easy.

At the end, the application is also very easy.

It is very small.

260 lines of code for the Android application.

That is the minimum thing needed to have video working.

There is no error handling and other things, but it is something that is easy to understand.

And the server is very small, as you have seen.

Here, you can find all the source code that I uploaded to GitHub, so feel free to use it and to test it.

And some Android tips, to finalize.

The binary size of the application, using WebRTC, is very big.

I recommend using the split mechanism, creating different APKs for every architecture.

At the end, the size is something important for the final application, if you're creating a commercial application or something like that.
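The ABI split mechanism is configured in build.gradle; a sketch:

```groovy
android {
    splits {
        abi {
            enable true
            reset()
            // One APK per architecture instead of one fat APK.
            include 'armeabi-v7a', 'arm64-v8a', 'x86'
            universalApk false
        }
    }
}
```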

Remember to stop the camera and the microphone.

This is something that WebRTC is not doing for you.

WebRTC doesn't have access to the events of the application.

This is usually handled by the activity, so you have to take care of stopping the camera when you go to the background, or stopping the microphone when you receive a phone call.

These are important things to remember.
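A sketch of that lifecycle handling in the activity, assuming the `videoSource` created earlier (the libjingle-era `VideoSource` exposed `stop()` and `restart()` for exactly this):

```java
@Override
protected void onPause() {
    super.onPause();
    // WebRTC never sees lifecycle events, so release the camera
    // ourselves when the app goes to the background.
    videoSource.stop();
}

@Override
protected void onResume() {
    super.onResume();
    videoSource.restart();
}
```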

Audio routing: this is something that seems easy, when you connect the headset, or a Bluetooth headset, something like that.

It's not as easy, if you look at the implementation of the AppRTC application.

That is the example code that is provided with WebRTC.

It's not as easy.

There are a lot of edge cases, and it is better to look at that implementation and do something similar.

And at the end, maybe you want to try new codecs, like VP9 or H264; by default it is using VP8.

But there is no easy way to select them, so you have to modify the SDP, and reorder the codecs there to use one or the other.

Maybe in the future it will be a bit easier, but now it is a bit complicated.
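Reordering the codecs means editing the m=video line of the SDP so the preferred codec's payload types come first. A minimal, self-contained sketch of that string manipulation (plain Java, not a WebRTC API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SdpCodecPreference {
    /**
     * Returns the SDP with the given codec's payload types moved to the
     * front of the m=video line, which is how a codec gets "preferred".
     */
    public static String preferVideoCodec(String sdp, String codecName) {
        String[] lines = sdp.split("\r\n");

        // Collect the payload types that rtpmap assigns to the codec,
        // e.g. "a=rtpmap:100 VP9/90000" -> "100".
        List<String> payloads = new ArrayList<>();
        for (String line : lines) {
            if (line.startsWith("a=rtpmap:")
                    && line.contains(" " + codecName + "/")) {
                payloads.add(line.substring("a=rtpmap:".length(),
                        line.indexOf(' ')));
            }
        }

        // Rewrite the m=video line: keep the first three fields, then
        // the preferred payload types, then everything else.
        for (int i = 0; i < lines.length; i++) {
            if (lines[i].startsWith("m=video")) {
                List<String> parts =
                        new ArrayList<>(Arrays.asList(lines[i].split(" ")));
                List<String> reordered = new ArrayList<>(parts.subList(0, 3));
                List<String> rest =
                        new ArrayList<>(parts.subList(3, parts.size()));
                rest.removeAll(payloads);
                reordered.addAll(payloads);
                reordered.addAll(rest);
                lines[i] = String.join(" ", reordered);
            }
        }
        return String.join("\r\n", lines);
    }
}
```

The rewritten SDP would then be passed to `setLocalDescription` instead of the original one.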

And that's all for today.

Thank you.


Source: Youtube