FMOD Engine User Manual 2.03

4. Platform Details | iOS

iOS Specific Starter Guide

SDK Version

FMOD is compiled using the following tools.

Compatibility

FMOD supports devices of the architectures listed below, back to iOS/tvOS 12.0. Please note that armv6, armv7, armv7s, and x86 are no longer supported by Xcode and thus are no longer supported by FMOD.

Libraries

Each lib is a universal binary containing the relevant architectures from the 'Compatibility' list above.

Core API library

Studio API library (used in conjunction with Core API library)

Apple libraries (required for the Core API and Studio API libraries)

Hardware Decoding

Via the AudioQueue codec, FMOD supports decoding AAC, ALAC, and MP3. At present, iOS devices can only decode one sound with hardware at a time (and that hardware may already be consumed by iPod music playback). At the cost of extra CPU, all iOS devices also have access to software codecs, allowing more than one sound of these formats to play simultaneously. By default FMOD will try to use hardware decoding; if the hardware is in use, a software codec will be used instead. If you want explicit control over whether hardware or software is chosen, use the FMOD_AUDIOQUEUE_CODECPOLICY enumeration provided in fmod_ios.h. This is set with FMOD_CREATESOUNDEXINFO::audioqueuepolicy via System::createSound.

When playing MP3s using the AudioQueue codec, seeking is generally slow the first time each position is visited. If you need fast random access to a file, create the sound using the FMOD_ACCURATETIME flag. This scans the file at load time to determine its accurate length, which has the added benefit of building a seek table to aid in seeking. This is a one-time upfront cost for fast seeking, versus paying the cost at runtime for each unique seek position.

All decoding performed by the AudioQueue codec is done on standalone files such as .mp3, .m4a, etc. There is no support for using AudioQueue with FSB or bank compressed audio. Any MP3 decoding for FSB files is performed by the standard cross-platform FMOD decoder.
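As a minimal sketch of the options above — assuming `system` is an initialized FMOD::System pointer and `music.mp3` is a placeholder filename — a software-only AudioQueue decode combined with an upfront seek-table scan might look like this:

```cpp
// Hypothetical sketch: force the software AudioQueue codec and pre-scan the
// file for accurate length / fast seeking. Error handling omitted for brevity.
FMOD_CREATESOUNDEXINFO exinfo = { };
exinfo.cbsize = sizeof(FMOD_CREATESOUNDEXINFO);
exinfo.audioqueuepolicy = FMOD_AUDIOQUEUE_CODECPOLICY_SOFTWAREONLY; // from fmod_ios.h

FMOD::Sound *sound = NULL;
FMOD_RESULT result = system->createSound("music.mp3", FMOD_ACCURATETIME, &exinfo, &sound);
```

Leaving audioqueuepolicy at its default restores the try-hardware-first behavior described above.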

Handling Interruptions

Unlike in previous versions of FMOD, it is now the responsibility of the developer to interact with the AudioSession APIs native to this platform. To assist with this, we provide two functions for handling interruptions: System::mixerSuspend and System::mixerResume. For more information about interruptions, please check the Apple documentation.

bool gIsSuspended = false;
bool gNeedsReset = false;
// Substitute 'coreSystem' below with your FMOD::System pointer.

[[NSNotificationCenter defaultCenter] addObserverForName:AVAudioSessionInterruptionNotification object:nil queue:nil usingBlock:^(NSNotification *notification)
{
    AVAudioSessionInterruptionType type = (AVAudioSessionInterruptionType)[[notification.userInfo valueForKey:AVAudioSessionInterruptionTypeKey] unsignedIntegerValue];
    if (type == AVAudioSessionInterruptionTypeBegan)
    {
        // Ignore deprecated warnings regarding AVAudioSessionInterruptionReasonAppWasSuspended and
        // AVAudioSessionInterruptionWasSuspendedKey, we protect usage for the versions where they are available
        #pragma clang diagnostic push
        #pragma clang diagnostic ignored "-Wdeprecated-declarations"

        // If the audio session was deactivated while the app was in the background, the app receives the
        // notification when relaunched. Identify this reason for interruption and ignore it.
        if (@available(iOS 16.0, tvOS 14.5, *))
        {
            // Delayed suspend-in-background notifications no longer exist, this must be a real interruption
        }
        #if !TARGET_OS_TV // tvOS never supported "AVAudioSessionInterruptionReasonAppWasSuspended"
        else if (@available(iOS 14.5, *))
        {
            if ([[notification.userInfo valueForKey:AVAudioSessionInterruptionReasonKey] intValue] == AVAudioSessionInterruptionReasonAppWasSuspended)
            {
                return; // Ignore delayed suspend-in-background notification
            }
        }
        #endif
        else
        {
            if ([[notification.userInfo valueForKey:AVAudioSessionInterruptionWasSuspendedKey] boolValue])
            {
                return; // Ignore delayed suspend-in-background notification
            }
        }

        coreSystem->mixerSuspend();
        gIsSuspended = true;

        #pragma clang diagnostic pop
    }
    else if (type == AVAudioSessionInterruptionTypeEnded)
    {
        NSError *errorMessage = nullptr;
        if (![[AVAudioSession sharedInstance] setActive:TRUE error:&errorMessage])
        {
            // Interruption like Siri can prevent session activation, wait for did-become-active notification
            return;
        }

        coreSystem->mixerResume();
        gIsSuspended = false;
    }
}];

[[NSNotificationCenter defaultCenter] addObserverForName:UIApplicationDidBecomeActiveNotification object:nil queue:nil usingBlock:^(NSNotification *notification)
{
    if (gNeedsReset)
    {
        coreSystem->mixerSuspend();
        gIsSuspended = true;
    }

    NSError *errorMessage = nullptr;
    if (![[AVAudioSession sharedInstance] setActive:TRUE error:&errorMessage])
    {
        if ([errorMessage code] == AVAudioSessionErrorCodeCannotStartPlaying)
        {
            // Interruption like Screen Time can prevent session activation, but will not trigger an interruption-ended notification.
            // There is no other callback or trigger to hook into after this point, we are not in the background and there is no other audio playing.
            // Our only option is to have a sleep loop until the Audio Session can be activated again.
            while (![[AVAudioSession sharedInstance] setActive:TRUE error:nil])
            {
                usleep(20000);
            }
        }
        else
        {
            // Interruption like Siri can prevent session activation, wait for interruption-ended notification.
            return;
        }
    }

    // It's possible the system missed sending us an interruption end, so recover here
    if (gIsSuspended)
    {
        coreSystem->mixerResume();
        gNeedsReset = false;
        gIsSuspended = false;
    }
}];

[[NSNotificationCenter defaultCenter] addObserverForName:AVAudioSessionMediaServicesWereResetNotification object:nil queue:nil usingBlock:^(NSNotification *notification)
{
    if ([UIApplication sharedApplication].applicationState == UIApplicationStateBackground || gIsSuspended)
    {
        // Received the reset notification while in the background, need to reset the AudioUnit when we come back to foreground.
        gNeedsReset = true;
    }
    else
    {
        // In the foreground but something chopped the media services, need to do a reset.
        coreSystem->mixerSuspend();
        coreSystem->mixerResume();
    }
}];

Lock Screen & Background Audio

No special configuration inside FMOD is required to enable the playback of audio from the lock screen or the background; however, there are two things you must configure outside of FMOD:

  1. Choose an AudioSession category that supports background / lock screen audio, see audio session basics for more details.
  2. Enable background audio functionality in your info.plist with the UIBackgroundModes key, see the iOS key reference for more details.
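For step 2, the Info.plist entry is a short fragment like the following (the `audio` value is what enables background audio):

```xml
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
```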

When playing audio on the lock screen (or during the fade-out transition to silence when locking), it is important to ensure your buffering is configured correctly to allow low-power audio playback. Please consult the Latency section of this document for further details.

Recording

Much like lock screen and background audio, recording requires a particular AudioSession category to be active at the time of System::recordStart (and it must remain active until the recording finishes). The required category is 'play and record'; see the audio session basics documentation for details. Note that FMOD is always 'playing' audio (even silence), so it is not sufficient to simply use the 'recording' category unless you are running the 'No Sound' or 'Wav Writer' output mode.

Some devices can take noticeable time to switch AudioSession category, so it is recommended to set this category at application start time to avoid any hiccups in audio playback.
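For example — a sketch, assuming this runs once during your application's launch sequence — the category could be set up front so a later System::recordStart never triggers a category switch:

```objc
// Hypothetical sketch: request the 'play and record' category at startup.
AVAudioSession *session = [AVAudioSession sharedInstance];

NSError *error = nil;
if (![session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error])
{
    NSLog(@"Failed to set audio session category: %@", error);
}

[session setActive:YES error:nil];
```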

You will also need to add a "Privacy - Microphone Usage Description" (NSMicrophoneUsageDescription) key to the built project's Info.plist file, with a string value explaining to the user how your application will use their recorded data.

Latency

The default latency introduced by FMOD on this platform is 4 blocks of 512 samples at a sample rate of 24 kHz, which equates to approximately 85 ms. You are free to change this using two APIs, System::setDSPBufferSize and System::setSoftwareFormat, but there are some important considerations.

If you have configured background or lock screen audio, the OS will conserve power when the device is locked by requesting audio from FMOD less frequently. If you desire this functionality, ensure your DSP buffer size is sufficiently large to cover the request. The iOS operating system will expect 4096 samples to be available, so configure FMOD as 8 blocks of 512 samples or 4 blocks of 1024 samples to satisfy the request (otherwise silence will be produced and a warning issued on the TTY).

If you are concerned about latency and do not want the automatic low-power mode, you can configure the Audio Session buffer and sample rate to match FMOD for best results. Assuming an FMOD block size of 512 samples and a 24 kHz sample rate, configure the OS as follows:

AVAudioSession *session = [AVAudioSession sharedInstance];
double rate = 24000.0; // This should match System::setSoftwareFormat 'samplerate' which defaults to 24000
int blockSize = 512; // This should match System::setDSPBufferSize 'bufferlength' which defaults to 512

BOOL success = [session setPreferredSampleRate:rate error:nil];
assert(success);

success = [session setPreferredIOBufferDuration:blockSize / rate error:nil];
assert(success);

success = [session setActive:TRUE error:nil];
assert(success);

Multi-channel Output

For hardware that supports greater than stereo output, you can configure the device to operate with that channel count using the AudioSession API.

Here is a code snippet that demonstrates using as many channels as available:

AVAudioSession *session = [AVAudioSession sharedInstance];
long maxChannels = [session maximumOutputNumberOfChannels];

BOOL success = [session setPreferredOutputNumberOfChannels:maxChannels error:nil];
assert(success);

success = [session setActive:TRUE error:nil];
assert(success);

For FMOD to detect the channel count you must use setPreferredOutputNumberOfChannels and activate your AudioSession before calling System::init.

Suspend in Background

FMOD's native threads will continue running when your application transitions to the background, and will continue to use resources. To completely stop FMOD without losing your current setup, call System::mixerSuspend as part of your backgrounding process. When you return to the foreground, use System::mixerResume to reactivate FMOD. It is extremely important that no FMOD APIs are called between suspend and resume, as doing so risks a deadlock. You must also call each suspend and resume pair from the same thread.
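A minimal sketch of that pairing, using the same notification-observer style as the interruption handling above (`coreSystem` is again assumed to be your FMOD::System pointer):

```objc
// Hypothetical sketch: suspend FMOD when backgrounded, resume when foregrounded.
// Both observers run on the main queue, satisfying the same-thread requirement.
[[NSNotificationCenter defaultCenter] addObserverForName:UIApplicationDidEnterBackgroundNotification object:nil queue:[NSOperationQueue mainQueue] usingBlock:^(NSNotification *notification)
{
    coreSystem->mixerSuspend();
}];

[[NSNotificationCenter defaultCenter] addObserverForName:UIApplicationWillEnterForegroundNotification object:nil queue:[NSOperationQueue mainQueue] usingBlock:^(NSNotification *notification)
{
    coreSystem->mixerResume();
}];
```

If you also handle audio session interruptions as shown earlier, take care that the two code paths do not double-suspend or double-resume the mixer.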

Thread Affinity

All threads will default to FMOD_THREAD_AFFINITY_CORE_ALL, it is not currently possible to override this with Thread_SetAttributes.

Thread Priority

The relationship between FMOD platform agnostic thread priority and the platform specific values is as follows:


Performance Reference

This section is a companion for the CPU Performance white paper and serves as a quick reference of facts targeting this platform.

Format Choice

Each compression format provided in FMOD has a reason for being included; the list below details our recommendations for this platform. Formats listed as primary are considered the best choice; secondary formats should only be considered if the primary doesn't satisfy your requirements.

Channel Count

To give developers an idea about the costs of a particular format we provide synthetic benchmark results. These results are based on simple usage of the Studio API using recommended configuration settings.

Settings

Test Device: A

Results: A

Test Device: B

Results: B