ARIK HALPERIN: Hello, I'm Arik Halperin, and I'm going to talk about how to do iOS development with WebRTC.

So first we'll look at how to build an iOS WebRTC app using the AppRTC pod.

And then we'll talk about building your own pod and compiling your own code for WebRTC.

And we will talk about PushKit and CallKit, which are frameworks from Apple that are meant to help in building VoIP applications.

So what we will see is how to build an application which is similar to the example code in WebRTC.

It connects to an AppRTC server.

AppRTC is an open source project from Google that you can find on GitHub.

And our application will connect to that server and be able to make a video chat with another client also connected to the server.

So about the AppRTC pod.

It's listed on CocoaPods.


And it also has a GitHub repository.

It's on ISBX/AppRTC-iOS.

It contains the podspec, header files, and a demo application.

And the code there, for WebRTC, is updated from January 2016.

It's a pristine.io compilation of WebRTC.

So how do we use the pod? We create the project.

For example, we'll call it KrankyGeekDemo. And in the project directory, from the console, we run pod install.

And it will install the AppRTC pod for us.
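As a minimal sketch, a Podfile for this setup might look like the following; the target name and platform version are my assumptions, not something the talk specifies.

```ruby
# Podfile for the demo project; target name and iOS version are illustrative.
platform :ios, '9.0'

target 'KrankyGeekDemo' do
  pod 'AppRTC'
end
```

Running pod install next to this file produces the workspace with the Pods project.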

So what we get is the workspace where, in the upper part, you can see the files for the KrankyGeekDemo app.

And we also have a Pods project containing the code from the AppRTC pod, which actually has two dependencies.

One is the libjingle_peerconnection pod from pristine.io.

And the second one is the SocketRocket library.

And this is the pod itself.

What's important to look at here: first, you can see the GitHub path for the AppRTC pod.

And you can also see the frameworks that are needed in order to build an application using that pod: the iOS frameworks, the native libraries that you will compile with, and the two dependencies, libjingle_peerconnection and SocketRocket.

So to make the code work, these are the key ingredients in the application.

In our app delegate, we will have to initialize the SSL library that comes with WebRTC, that is, the libraries needed in order to be able to talk to our server in a secure way.

So on initialization, we call the SSL initialization function, and on termination we deinitialize SSL.
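In Swift, that part of the app delegate might be sketched like this; the exact symbol names depend on the WebRTC build you link against, so treat them as assumptions based on the old libjingle_peerconnection API.

```swift
import UIKit
// The WebRTC pod's module (or bridged headers) must also be imported.

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Initialize WebRTC's SSL stack once, before any connection is made.
        RTCPeerConnectionFactory.initializeSSL()
        return true
    }

    func applicationWillTerminate(_ application: UIApplication) {
        // Release the SSL resources on shutdown.
        RTCPeerConnectionFactory.deinitializeSSL()
    }
}
```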

When we start the application, what we will see is the main view, where we will input the room number we want to connect to.

When we press Start Call, we will start connecting, and what our code will do is create an ARDAppClient.

This is an object that knows how to talk to an AppRTC server.

It's part of WebRTC code.

And we will tell it to connect to our room.

The client knows that we are its delegate, and it will call us back with delegate methods on events and notifications.
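A sketch of that step in Swift; the ARDAppClient initializer and connect signature changed between WebRTC revisions, so the exact names here are assumptions.

```swift
// Create the client and ask it to join a room on the AppRTC server.
let client = ARDAppClient(delegate: self)  // self adopts ARDAppClientDelegate
client.connectToRoom(withId: roomNumber)   // roomNumber comes from the main view
```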

So a little bit on the anatomy of the application.

We'll have a CallViewController, and that CallViewController will have a video view with two subviews, one for the remote video and the other one for the local video.

And this is the process that will happen.

Our ViewController will ask the ARDAppClient to connect to the AppRTC server.

It will send the connect request.

And when it does that, it will also start the camera.

And we will get a callback from it that a local video track exists.

After it says connected, video will start streaming from the network, and it will give us another callback about the remote video track.

So for the camera: WebRTC will open the camera and create an RTCVideoSource for it.

And then it will use that one to create a local video track and give us the callback, so we can store it and point it to our local video view for rendering.

And this way, the camera image will show on our local view.

For the remote, it's similar.

An RTCVideoSource will be created for the remote video.

A remote video track will also be created by WebRTC.

And we'll get the callback from the app client.

And we will attach it to our view and show the remote side's video.

So this is how it's done in code. didReceiveLocalVideoTrack is where we attach the local view to the camera's video track, and didReceiveRemoteVideoTrack is where we attach the view to the remote video track.
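Sketched in Swift, the two delegate methods might look like this; the renderer-attachment call is an assumption about the RTCVideoTrack API of that era.

```swift
// Store each track and point it at the matching subview for rendering.
func appClient(_ client: ARDAppClient, didReceiveLocalVideoTrack localVideoTrack: RTCVideoTrack) {
    self.localVideoTrack = localVideoTrack
    localVideoTrack.add(self.localVideoView)    // camera preview in the small subview
}

func appClient(_ client: ARDAppClient, didReceiveRemoteVideoTrack remoteVideoTrack: RTCVideoTrack) {
    self.remoteVideoTrack = remoteVideoTrack
    remoteVideoTrack.add(self.remoteVideoView)  // network video in the large subview
}
```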

And there is more on video in the session right after me.

So hanging up will just call the client's disconnect function and clean up after ourselves.

So this is how to use the AppRTC pod.

But what if you want to use your own pod? There are several reasons you may want to do that.

First, you may want to use the latest code from WebRTC.

Second, you want to make changes to WebRTC and customize some stuff.

For example, later we'll talk about CallKit.

In order to work with CallKit, you can't use WebRTC as is.

You have to make some changes that are not yet in the code.

And so you will want to customize WebRTC.

And maybe you want to use a different server.

You don't want to use the AppRTC server.

You have your own server, your own protocol; you want to work in a different way.

So you want to make your own pod.

So what I did was I took AppRTC from GitHub and I cloned it.

The clone is under ArikHalperin/AppRTC-iOS.

And then I changed it to use WebRTC from last month.

It was the latest code when I prepared the pod.

It's not listed in CocoaPods, so in my Podfile, I put the GitHub path instead of just putting the name of the pod.
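In a Podfile, pointing at a GitHub repository instead of the CocoaPods index looks roughly like this, using the fork mentioned above; the target name is illustrative.

```ruby
target 'KrankyGeekDemo' do
  # Fetch the pod straight from the fork rather than the CocoaPods index.
  pod 'AppRTC', :git => 'https://github.com/ArikHalperin/AppRTC-iOS.git'
end
```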

And then I built the WebRTC code.

So how did I do that? You have the link.

It's under webrtc.org/native-code/ios, where you can get an explanation on how to build WebRTC for iOS.

So first you get the prerequisites, which are to install depot_tools and get the latest Xcode.

And then you fetch the code using the fetch command.

It's part of the depot_tools.

And you specify that you want WebRTC for iOS.

And you run gclient sync, and that's it.

You have the code for WebRTC on your computer.
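The fetch and sync steps are just two commands once depot_tools is on your PATH:

```shell
# Fetch the iOS flavor of WebRTC, then sync its dependencies.
fetch --nohooks webrtc_ios
gclient sync
```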

It's important to disable Spotlight indexing on the directory where you get it, because later when you build, it takes a long time to build.

And if you disable Spotlight, it will be a little faster.

So I built for each architecture, ARM32 and ARM64.

I didn't bother with the simulator.

The main reason was that the simulator does not have a camera, so it was not interesting.

So for each architecture, I built WebRTC.

And what happens when you build WebRTC? WebRTC is made of many modules.

And each module has its own static library file.

And it's a nightmare to link all of them into an application.

So I took them, and I put them all in one big file.

And I used libtool in order to do that.

And for the release version, I also stripped all the symbols, and this reduced the file by a factor of 10.

And using lipo, I took all the architectures and made one file that I can use with Xcode.
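A sketch of that merge step; the output directories and library names here are illustrative, not the actual scripts from the repository.

```shell
# Merge each architecture's per-module static libraries into one archive.
libtool -static -o libWebRTC-arm32.a out_arm32/Release-iphoneos/*.a
libtool -static -o libWebRTC-arm64.a out_arm64/Release-iphoneos/*.a

# For the release build, strip debugging symbols to shrink the file.
strip -S libWebRTC-arm32.a libWebRTC-arm64.a

# Glue the architectures together into a single fat library for Xcode.
lipo -create libWebRTC-arm32.a libWebRTC-arm64.a -output libWebRTC.a
```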

Then I copied all the relevant header files and the compiled library into my pod.

And the scripts that show you how to build are in the scripts directory in my GitHub repository.

So if you want to look and see how this magic is done, you can find it there.

So what have we seen until now? We looked at how to build an iOS WebRTC application using the AppRTC pod.

And then we talked about how to build your own pod and compile WebRTC from scratch.

And next, we're going to talk about PushKit.

Why do we need PushKit, and what is it? In order to get an incoming call with a VoIP app, you have to be connected to your server.

But the problem with being connected to a server is that you have to listen all the time to things that come from your server.

And the main issue with this is that it drains your battery.

You use battery all the time, even if you do nothing.

And the second problem, which you can't overcome, is that on iOS this was being done using a thing called the VoIP socket.

It was deprecated in iOS 9, and in iOS 10, it no longer works at all.

And Apple introduced the VoIP push in iOS 8.

So how does VoIP push work? We have two clients here.

And both clients, when they start working, get a token from the operating system, which identifies them in APNS, Apple's Push Notification Service.

They take this token, and they tell the server: listen, I'm client X, and this is my token.

So from now on, when the server wants to talk to client X, it knows which token to use in APNS.

Now Client1 wants to send a message to Client2 and wants to make a call.

So it sends a message to the server: call Client2.

The server looks up Client2, finds its token, and tells APNS: I want you to send a push with an incoming call to the client identified by this token.

APNS looks at the token and says, OK, I know this client is located at this IP and this port, and sends the push to the client.

And the client gets the push, wakes up, and can answer the call.

So if you want to use PushKit, you have to prepare your application for that.

In Xcode, first of all, you need to enable the background mode for VoIP notifications.

And, as usual with Apple, everything is a bit of a headache.

You have to create an iOS VoIP Services Certificate and compile the application with that certificate.

Then a little code: in your app delegate, you import PushKit.

And when your application finishes launching, you do VoIP registration.

And this is done by creating a PKPushRegistry object and telling it that the desired push type we are going to use is PKPushTypeVoIP.
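In Swift, the registration is a few lines; holding the registry in a property is my assumption, since it must outlive the launch method.

```swift
import PushKit

// Register for VoIP pushes; the delegate receives token updates and pushes.
let registry = PKPushRegistry(queue: DispatchQueue.main)
registry.delegate = self                 // self adopts PKPushRegistryDelegate
registry.desiredPushTypes = [.voIP]
self.pushRegistry = registry             // keep a strong reference (assumed property)
```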


And the main advantage of PKPushTypeVoIP is that it has a very high priority on APNS.

And so you will get it with minimal latency, as fast as possible.

So handling credentials update.

A credentials update is when iOS updates your token.

So when your application starts, or whenever iOS changes the token, you will get a callback in your app delegate, which is didUpdatePushCredentials.

And when you get that callback, you can take the token out of the credentials and tell the server: now my token is this.
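Sketched in Swift; only the delegate signature comes from PushKit, and the server call is hypothetical.

```swift
// iOS hands us a (new) token here; turn it into hex and re-register with our server.
func pushRegistry(_ registry: PKPushRegistry,
                  didUpdate pushCredentials: PKPushCredentials,
                  for type: PKPushType) {
    let token = pushCredentials.token.map { String(format: "%02x", $0) }.joined()
    myServer.register(voipToken: token)  // hypothetical server API
}
```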

And when you get an incoming push notification, you will get didReceiveIncomingPushWithPayload, and there you will get the PKPushPayload.

And in the PKPushPayload there is a special field, the UUID, which identifies the push transaction in the system. We'll soon see, when I talk about CallKit, how to use that.
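The handler might be sketched like this; the "UUID" key is the application's own convention inside the payload, not something PushKit defines, and the call handler is hypothetical.

```swift
// A VoIP push arrived; pull our UUID out of the payload and handle the call.
func pushRegistry(_ registry: PKPushRegistry,
                  didReceiveIncomingPushWith payload: PKPushPayload,
                  for type: PKPushType) {
    guard let uuidString = payload.dictionaryPayload["UUID"] as? String,
          let uuid = UUID(uuidString: uuidString) else { return }
    handleIncomingCall(uuid)             // hypothetical handler
}
```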

So CallKit.

What is CallKit? Apple is saying that CallKit is the framework that's going to elevate your third-party VoIP applications to a first-party experience.

And what does that mean? First, it means receiving and making a call on your VoIP service appears like any other native call.

You can start your VoIP calls from Contacts, Recents, or any other way native calls are started.

The incoming call screen will now look like a native call.

Anyone of you who has a VoIP application that updated to CallKit probably noticed that when you get a call, you no longer get the usual screen; you get the native one.

And I was very surprised when Skype did that for me.

The call screen also looks like a native call.

And here, you have an example.

You can see the incoming call.

And the difference between the incoming call screen here and the usual incoming call screen is that here I have my service name on the screen, instead of saying, for example, mobile or whatever.

And the call screen also has a small difference.

There is an icon for my application with my service name that I can press to change the UI to my application's UI.

So CallKit is built of two main classes.

One is the CXProvider, and one is the CXCallController.

And what are they used for? Let's look at them, one versus the other.

CXProvider is used to receive out-of-band notifications.

These are not user actions.

For example, an incoming call.

CXCallController is used for requests from your application, which are local user actions.

And these are internal events, like start call.

It also interplays with other providers in the system.

For example, if I'm in a call on regular mobile telephony and I want to start a VoIP call, I can ask CXCallController to start my call, and it will hold the current telephony call and allow my call to take place.

Example uses: we use CXProvider to report incoming calls, an outgoing call connecting, or a call ended on the remote side.

We need CXCallController to request starting an outgoing call, answering a call, or ending a call.

So CXProvider sends messages to the system via an object called CXCallUpdate.

And it receives notifications from the system with an object called CXAction.

CXCallController sends notifications to the system via CXTransactions.

Let's look at some use cases.

So we get an incoming call via push.

Our incoming call handler is called.

We tell CXProvider, via a CXCallUpdate, that there is an incoming call.

It notifies the system, and the system shows the native incoming call screen.

When the user answers, the system notifies the CXProvider with a CXAnswerCallAction, which notifies our handler.

And we now notify our VoIP server that the user answered the call.

Ending a call works similarly, but this time it's a CXEndCallAction.

So we set up CallKit in our application, in didFinishLaunchingWithOptions.

The first thing we do is create a configuration object for our VoIP provider.

And we create a CXProvider with that configuration.

The configuration contains our name and several other parameters.

And we tell the provider that our app delegate is the delegate for callbacks from this provider.

And we also create the call controller.
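The setup in Swift is short; the service name and the properties that hold the objects are assumptions.

```swift
import CallKit

// Configure and create the provider, then the call controller.
let config = CXProviderConfiguration(localizedName: "KrankyGeekDemo")
config.supportsVideo = true
provider = CXProvider(configuration: config)
provider.setDelegate(self, queue: nil)   // self adopts CXProviderDelegate
callController = CXCallController()
```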

So when we receive a call from push, the first thing we do is extract the UUID from the push, and we use that UUID to save the call in our database.

We will identify it by the UUID.

And we will create the CXCallUpdate object and report to the system, via CXProvider's reportNewIncomingCall, that the call is incoming.

And we give it the UUID.

And we may get an error.

For example, if the user sets the device to Do Not Disturb, then the call will not go forward, and CallKit will return an error to us.
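The report itself, with its error path, might be sketched like this; the handle value and the caller name are illustrative.

```swift
// Tell the system about the pushed call so it shows the native screen.
let update = CXCallUpdate()
update.remoteHandle = CXHandle(type: .generic, value: callerName)
update.hasVideo = true

provider.reportNewIncomingCall(with: uuid, update: update) { error in
    if let error = error {
        // e.g. Do Not Disturb is on; the system refused the call.
        print("Incoming call rejected: \(error)")
    }
}
```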

But if everything works well, according to Apple we need to allocate an audio controller object for the call.

But this is something you don't do when you use WebRTC.

As Chris will explain later, WebRTC handles its own audio session.

And doing this will create unexpected results.

And contrary to what Apple is saying, don't mess with the audio at all when you use CallKit.

So when the user answers the call, we first say the action is fulfilled.

And then we notify our server that the user answered the call.
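The answer handler is a CXProviderDelegate method; the server call here is hypothetical.

```swift
// The user tapped Answer on the native incoming call screen.
func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
    action.fulfill()                            // tell CallKit we handled it
    myServer.answerCall(uuid: action.callUUID)  // hypothetical server API
}
```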

And when CallKit starts audio, according to Apple, we need to start the audio on the device.

And again, since we are using WebRTC, don't do that.

Ending calls: the user presses the hangup button.

And we need to tell our server that the call has ended and fulfill the action.

Starting a call: I won't cover the case where it happens from Recents and other places.

But let's talk about when it's done from our UI.

We call the call controller, and we give it a CXTransaction, which means: start a call.

The system accepts that CXTransaction if it's possible to start the call.

Or maybe there was a telephony call going on, and the system needed to hold that call.

After it holds the call, it will give us the CXStartCallAction.

And our start call handler will notify our server that we are starting a call.

So this is how it's done in code.

We create a CXStartCallAction, and then we request the transaction from the call controller.

And when the system authorizes the starting of the call, we will get performStartCallAction, where we will tell our server that the call started and fulfill the action for CallKit.
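Both halves might be sketched in Swift like this; the callee handle and the server calls are illustrative.

```swift
// Request the outgoing call from the system.
let handle = CXHandle(type: .generic, value: calleeName)
let startAction = CXStartCallAction(call: UUID(), handle: handle)
callController.request(CXTransaction(action: startAction)) { error in
    if let error = error { print("Start call failed: \(error)") }
}

// When the system authorizes it, the provider delegate fires:
func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
    myServer.startCall(uuid: action.callUUID)   // hypothetical server API
    action.fulfill()
}
```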

So if you want to read more on CallKit, you can find it on Apple's developer site.

There's a very nice presentation there from WWDC 2016.


Source: Youtube