By Kjartan Vestvik • July 12, 2017

Tutorial: How to embed and play back videos in iOS apps (Objective-C)

This is an updated, complete tutorial on how to use the SynqObjC mobile SDK. It builds on an older blog post that explained how to add video uploading using the SynqObjC SDK. In this blog post we also cover how to add video playback functionality to your app using our SDK.

Obj-C SDK for adding video functionality to your iOS app

The SYNQ Video API takes care of all your video needs: uploading, storage, transcoding, playback and content delivery. If you would like your mobile app to interact with the SYNQ API, be it video uploading, playback or live video streaming, you should use our mobile SDKs. And if you prefer Objective-C, the SynqObjC SDK is for you. (Note: there is also a Swift version of our mobile SDK, SynqSwift; check out the blog post on that SDK.)

If you are new to SYNQ, this illustration shows how a mobile client interacts with the SYNQ ecosystem:

Overview of the client-server communication in SYNQ

The SYNQ API is designed to be accessed from a server, and your mobile client should communicate with this server, which in turn authenticates requests from the client before forwarding them to the SYNQ API. The client should never communicate directly with the API! This is for security reasons: doing so would expose your secret API key.

If you don't have your own server wired up to the SYNQ API, we've provided a NodeJS example server for you, so you can fully test the features of our SDK. We will also be using a helper library, SynqHttpLib, containing functions for communicating with this example server. We will get back to this shortly.

Before we begin, let's have a look at the topics we will go through in this guide:

  • Install SynqObjC
  • Set up the example server
  • Use the HTTP client
  • Upload a video using SynqUploader
  • Play a video

(To see a complete example app with these features added, clone the SYNQ-iOS repo, run “pod install” from the Example directory, and open the workspace in Xcode)

Installing SynqObjC

The easiest way to add the SDK to your project is by using CocoaPods (http://cocoapods.org). CocoaPods is a dependency manager for iOS projects, and there are thousands of libraries and frameworks available for you through CocoaPods. If you have not used CocoaPods before, you first need to install it by typing this into Terminal:

sudo gem install cocoapods

Now it's time to add the SYNQ SDK to your iOS app project. Adding it is easy with CocoaPods: in Terminal, move to your project directory (the directory where your .xcodeproj file is located) and type the following. (If you are already using CocoaPods in your project, you already have a Podfile and can skip this step.)

pod init

This will create a Podfile in your project folder. The Podfile describes the dependencies to use in your project. Now you need to add a line to your Podfile, between the “target 'Your Project'” line and the “end” line:

pod 'SynqObjC'
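For reference, a minimal sketch of what the resulting Podfile could look like (the target name and platform version are placeholders; use your own project's values):

```ruby
platform :ios, '8.0'

target 'Your Project' do
  pod 'SynqObjC'
end
```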

Then run this command to add the dependency to your project and configure your Xcode project accordingly:

pod install

Now you should close your Xcode project (if you have it open), and instead open the Xcode workspace that CocoaPods generated for you.

open Your-App.xcworkspace

Tip: you might want to check out the example project included in the SYNQ-iOS repo. Clone the repo at https://github.com/SYNQfm/SYNQ-iOS.git, cd into the Example directory and run pod install. Now you will have an Xcode project with a fully functional app showcasing how to use the SYNQ SDK to upload videos from the device.

Set up the NodeJS example server

To be able to upload a video to SYNQ, you need to interact with the SYNQ API. This is where our example server comes into play. But first, let's have a look at the process of uploading a video through our API.

Uploading a video to SYNQ consists of four steps:

  1. Create a video object in the SYNQ API
  2. Fetch the video's upload parameters from the SYNQ API
  3. Set the upload parameters in the SQVideoUpload object
  4. Pass the SQVideoUpload object to the uploadVideoArray: function


In the following part we will be using the example server to illustrate how steps 1 and 2 can be performed.

Let's start by cloning the GitHub repo:

git clone https://github.com/SYNQfm/SYNQ-Nodejs-example-server.git

Install Yarn if you do not have it already:

npm install -g yarn

Then cd into the cloned directory and install the dependencies by running:

yarn
You will need a SYNQ API key to access the API. Register or log in at https://www.synq.fm/register/ and create a project to get your API key. Then create a file called env.js and add a line like this, replacing API_KEY with your API key:

process.env.SYNQ_API_KEY = 'API_KEY';

Then build and run the server with these two commands:

yarn build && yarn start

If this succeeds you should see “restify listening at http://[::]:8080”. Your example server is ready for use! Now we need to set up our HTTP client, so it's time to switch to Xcode.

Use the HTTP client

You might have noticed this already: after opening the Xcode workspace generated by CocoaPods, expand the Pods group under the Pods project in the project navigator and you will see three installed pods:

  • AFNetworking - the networking library SynqObjC uses for making HTTP calls
  • SynqHttpLib - the HTTP client - this is the one we'll be using next!
  • SynqObjC - the SYNQ SDK

Let's first import the SynqHttpLib to our class and create an instance of it. Add this import statement:

#import <SynqHttpLib/SHLHttpClient.h>


And then create an instance of our HTTP client and set the base URL for server requests. In the case of our example server running locally on your machine, this should be set to “http://localhost:8080”:

SHLHttpClient *client = [[SHLHttpClient alloc] init];
// Set the client's base URL to the example server, e.g. @"http://localhost:8080"


Now we are ready to call the HTTP functions in the SHLHttpClient. First, we need to create a test user:

[client createUserWithName:@"UserName"
              successBlock:^(NSDictionary *jsonResponse) {
    // User successfully created
}
          httpFailureBlock:^(NSURLSessionDataTask *task, NSError *error) {
    // User create failed
}];


When the user is created, let's authorize this user by logging in:

[client loginUserWithName:@"UserName"
             successBlock:^(NSDictionary *jsonResponse) {
    // User successfully logged in
}
         httpFailureBlock:^(NSURLSessionDataTask *task, NSError *error) {
    // User login failed
}];


Alright, we are almost ready to start uploading a video!

Use the SynqUploader

Videos on an iOS device are represented as objects of the PHAsset class (in the Photos framework), and they can be fetched with the class method fetchAssetsWithMediaType:options:. More info can be found in Apple's documentation (https://developer.apple.com/reference/photos/phasset?language=objc).
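As a sketch, fetching the most recent video asset could look like this (the sort descriptor is just one reasonable choice; note that your app needs photo library permission, i.e. the NSPhotoLibraryUsageDescription key in Info.plist):

```objectivec
#import <Photos/Photos.h>

// Fetch all videos in the photo library, newest first
PHFetchOptions *options = [[PHFetchOptions alloc] init];
options.sortDescriptors = @[ [NSSortDescriptor sortDescriptorWithKey:@"creationDate" ascending:NO] ];
PHFetchResult *fetchResult = [PHAsset fetchAssetsWithMediaType:PHAssetMediaTypeVideo options:options];

// Use the most recent video asset
PHAsset *asset = [fetchResult firstObject];
```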

When you have fetched the desired video asset, you must first instantiate a SQVideoUpload object to represent the video:

SQVideoUpload *video = [[SQVideoUpload alloc] initWithPHAsset:asset];


Now we need to add the upload parameters to this video object before we can call the upload function. The example server provides an endpoint that will create a new video object for us and return the upload parameters for that video. That is steps 1 and 2 combined into just one call, cool! Let's create a function to handle this:

- (void)createVideoObjectForVideo:(SQVideoUpload *)sqVideo
{
    [client createVideoAndGetParamsWithSuccess:^(NSDictionary *jsonResponse) {
        // Set the upload params in the video object
        [sqVideo setUploadParameters:jsonResponse];
    }
                              httpFailureBlock:^(NSURLSessionDataTask *task, NSError *error) {
        // An error occurred
    }];
}


Once we have the upload parameters for the video set, we are ready to upload the video. The upload function requires an NSArray of video objects, so let's simply add our single video to an array:

NSArray *videoArray = [NSArray arrayWithObjects:sqVideo, nil];


Then we can start the upload:

[[SynqUploader sharedInstance] uploadVideoArray:videoArray
                            exportProgressBlock:^(double exportProgress) {
    // Report export progress to UI
}
                            uploadProgressBlock:^(double uploadProgress) {
    // Report upload progress to UI
}];


Lean back and watch the upload unfold… Several things actually happen now: first, the video is exported to a temporary file; then the video file is uploaded; and when the upload is complete, the temporary file is deleted. The result of an upload is reported through the SQVideoUploadDelegate protocol, and there are three methods used to report the result:

- (void) allVideosUploadedSuccessfully
- (void) videoUploadCompleteForVideo:(SQVideoUpload *)video
- (void) videoUploadFailedForVideo:(SQVideoUpload *)video


As the names of these methods suggest, they are called when all videos have uploaded successfully, when a single upload completes, and when an upload fails. You would normally use these methods to update the UI or do some other updates to your view controller or model.
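For illustration, a minimal sketch of a view controller implementing these delegate methods might look like this (assuming the class declares conformance to SQVideoUploadDelegate and is registered as the uploader's delegate; see the SDK docs for the exact property to set):

```objectivec
// Assumes this class conforms to SQVideoUploadDelegate

- (void) videoUploadCompleteForVideo:(SQVideoUpload *)video
{
    // One video finished uploading; e.g. mark it as done in the UI
}

- (void) videoUploadFailedForVideo:(SQVideoUpload *)video
{
    // Upload failed for this video; e.g. show an error and offer a retry
}

- (void) allVideosUploadedSuccessfully
{
    // Every video in the array is uploaded; e.g. dismiss the progress view
}
```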

Play a video

We have recently added functionality for playing back uploaded video files to the SynqObjC SDK. This has been a feature requested by our users, and we would like to accommodate that need. When it comes to video players on iOS, AVPlayerViewController in Apple's AVKit is an excellent out-of-the-box video player wrapped in a view controller. It gives you playback controls and styling matching the native system players, it will automatically adopt new features of future operating system upgrades, and it supports AirPlay. So we assessed that there is no need to create a custom video player; let's keep it simple and use the standard one.

Configuring the AVPlayerViewController is simply a matter of allocating an instance of the view controller and setting the URL of the remote video to be played. Only a couple lines of code. So this section will also cover how to fetch data about your uploaded videos from the SYNQ API, and how to get the URL of the video you intend to play. These are the steps needed to be able to play one of your uploaded videos:

  1. Get a list of videos from the API
  2. Parse the list into video objects
  3. Get the video URL and configure the video player


We will continue using our example server to get information from the API. The endpoint for getting all videos for a specific user is “/users/<userID>/videos/”, replacing userID with the ID of the current user. (The ID of a user is returned when a user is successfully logged in.) The response from the call is a JSON array of video objects belonging to the specified user. This is an example of how a video object might look:
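Based on the fields we will parse below, a video object might look roughly like this (all values are illustrative and the exact shape may differ):

```json
{
  "video_id": "1a2b3c4d5e6f7a8b9c0d",
  "created_at": "2017-07-12T09:00:00.000Z",
  "updated_at": "2017-07-12T09:05:00.000Z",
  "state": "uploaded",
  "outputs": {
    "hls": {
      "state": "complete",
      "url": "https://example.com/video/1a2b3c4d5e6f7a8b9c0d.m3u8"
    }
  }
}
```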

Now we would like this JSON data converted into something more useful: our own video objects. A video object should have properties for the video ID, the times it was created and modified, the state, and the various output formats with corresponding output URLs. To model our video objects, let's create a class named Video and add these public properties:

@property NSString *video_id;
@property NSString *created_at;
@property NSString *updated_at;
@property State state;


We need a private property for the outputs; add this to the class extension in the .m file:

@property NSDictionary *outputs;


Also, we should create enums to represent the state of the video object and the different output formats (put it in the .h file):

typedef enum {
    VideoStateCreated = 0,
    VideoStateUploading,
    VideoStateUploaded,
    VideoStateStreaming
} State;

typedef enum {
    VideoOutputFormatHls = 0
} OutputFormat;


To help us when parsing the JSON data and looking for the different values, let's also define some string constants:

#define VIDEO_ID    @"video_id"
#define CREATED_AT  @"created_at"
#define UPDATED_AT  @"updated_at"
#define STATE       @"state"
#define OUTPUTS     @"outputs"


We would like to convert the state string into the corresponding enum value, and do the same for the output format. First, we need methods that return the string representation of a given State or OutputFormat enum:

- (NSString *) getStringForVideoState:(State)state
{
    NSArray *arr = @[
                     @"created",          // empty video object
                     @"uploading",        // upload parameters requested
                     @"uploaded",         // upload completed
                     @"streaming"         // streaming parameters requested
                     ];
    return (NSString *)[arr objectAtIndex:state];
}

- (NSString *) getStringForOutputFormat:(OutputFormat)format
{
    NSArray *arr = @[
                     @"hls"               // name of the HLS output format
                     ];
    return (NSString *)[arr objectAtIndex:format];
}


Now we can create a method to get the state from a state string:

- (State) getStateForString:(NSString *)stateString
{
    if ([stateString isEqualToString:[self getStringForVideoState:VideoStateCreated]]) {
        return VideoStateCreated;
    } else if ([stateString isEqualToString:[self getStringForVideoState:VideoStateUploading]]) {
        return VideoStateUploading;
    } else if ([stateString isEqualToString:[self getStringForVideoState:VideoStateUploaded]]) {
        return VideoStateUploaded;
    } else if ([stateString isEqualToString:[self getStringForVideoState:VideoStateStreaming]]) {
        return VideoStateStreaming;
    } else {
        return VideoStateCreated;
    }
}


We can simply represent the output format as a dictionary. So now we need to create a method to get the output format dictionary from a string:

- (NSDictionary *) getOutputDictForOutputFormat:(OutputFormat)format
{
    if (!self.outputs) {
        return nil;
    }
    // Get the correct string name for the output format
    NSString *formatName = [self getStringForOutputFormat:format];
    return [self.outputs objectForKey:formatName];
}


The if test checks whether there is a value for the outputs field at all, since this field is only present in the JSON once a video has been uploaded, and its contents change as the transcoding of the video files progresses. (The URL is only present when the transcoding has finished and the state is “complete”.)

Also, make sure to add the method signature to the .h file to make the method public! We need to be able to call this method to get the output URL from our video objects.

Finally, we can write the init method that will parse the JSON dictionary:

- (id) initWithJSON:(NSDictionary *)jsonDictionary
{
    if (self = [super init]) {
        // Extract the properties from the JSON data
        self.video_id = [jsonDictionary valueForKey:VIDEO_ID];
        self.created_at = [jsonDictionary valueForKey:CREATED_AT];
        self.updated_at = [jsonDictionary valueForKey:UPDATED_AT];
        self.state = [self getStateForString:[jsonDictionary valueForKey:STATE]];
        self.outputs = [jsonDictionary valueForKey:OUTPUTS];
    }
    return self;
}


Now we can create instances of our Video class by using the JSON data returned when fetching the current user's videos. But how do we fetch the user's videos? We use our SHLHttpClient to make a request to our example server, and this is done by calling the function getUserVideosSuccessBlk:

[client getUserVideosSuccessBlk:^(NSArray *jsonResponse) {
    // Handle success
}
                httpFailureBlock:^(NSURLSessionDataTask *task, NSError *error) {
    // Handle error
}];


In the success block we do the actual parsing of the JSON array and create our Video objects:

// Iterate over each video object in the JSON array
NSMutableArray *videos = [NSMutableArray array];
for (NSDictionary *jsonObject in jsonResponse) {
    Video *aVideo = [[Video alloc] initWithJSON:jsonObject];
    // If the video state = "uploaded", add it to the videos array
    if ([aVideo state] == VideoStateUploaded) {
        [videos addObject:aVideo];
    }
}

Notice that we check the state of each video object. That is because we are only interested in the videos that have been uploaded, as these are the only ones that can be played back. We add these videos to an NSMutableArray.

Now we could display a list of all uploaded videos and let the user select one to open the player view and watch the video. We will skip this part; let's just assume that we have selected one of the videos we would like to play. Then we get the URL of the video in the desired format; let's get the HLS output:

NSDictionary *outputDict = [video getOutputDictForOutputFormat:VideoOutputFormatHls];
if ([outputDict objectForKey:@"url"] == nil) {
    return;   // no playable URL yet (transcoding may not be finished)
}
// Convert url string to NSURL
NSURL *videoURL = [NSURL URLWithString:[outputDict objectForKey:@"url"]];


We need to check that the value for “url” is not empty before we can convert the “url” string into an NSURL. Finally, we are ready to configure an instance of AVPlayerViewController, set the player URL, and present the player view controller:

#import <AVKit/AVKit.h>
#import <AVFoundation/AVFoundation.h>

AVPlayerViewController *avPlayerViewController = [[AVPlayerViewController alloc] init];
avPlayerViewController.player = [[AVPlayer alloc] initWithURL:videoURL];
// Present the player view controller and start playback
[self presentViewController:avPlayerViewController animated:YES completion:^{
    [avPlayerViewController.player play];
}];


This concludes our tutorial on the SynqObjC SDK. Please don't hesitate to get in touch with us if you have any questions or feedback as to how we can improve these guides.
