<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Finn Voorhees&apos; Blog</title><description/><link>https://finnvoorhees.com/</link><item><title>Platform-Specific Release Notes with Xcode Cloud</title><link>https://finnvoorhees.com/words/platform-specific-release-notes-with-xcode-cloud/</link><guid isPermaLink="true">https://finnvoorhees.com/words/platform-specific-release-notes-with-xcode-cloud/</guid><pubDate>Tue, 09 Jan 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ve been using &lt;a href=&quot;https://developer.apple.com/xcode-cloud/&quot;&gt;Xcode Cloud&lt;/a&gt; as a CI/CD for all my apps since it came out. It has streamlined the release cycle, and since the addition of support for &quot;What to Test&quot; notes in June 2023, you can build and distribute an app to external testers with the click of a button, a git commit, or a &lt;a href=&quot;https://github.com/finnvoor/xcc&quot;&gt;terminal command&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When adding Xcode Cloud release workflows to a cross-platform iOS and macOS app, however, I ran into an issue. While you can specify release notes for a build by editing &lt;code&gt;TestFlight/WhatToTest.&amp;lt;LOCALE&amp;gt;.txt&lt;/code&gt;, those notes are used by every workflow in your project. Because the iOS app was built with SwiftUI and the macOS app with AppKit, each platform had different features to test and different bugs being fixed, so writing the same release notes for both didn&apos;t make sense. I didn&apos;t want to make separate commits just to produce builds with unique release notes, so I submitted an enhancement request (FB13501281), prefixed each line of my release notes with (iOS) or (macOS), and moved on.&lt;/p&gt;
&lt;p&gt;To my surprise, a few days later I received a response in the Feedback app letting me know my suggestion had been passed along to the responsible team, along with a workaround to use in the meantime:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# ci_pre_xcodebuild.sh
if [[ &quot;$CI_XCODEBUILD_ACTION&quot; == &quot;archive&quot; ]]; then
    if [[ &quot;$CI_PRODUCT_PLATFORM&quot; == &quot;iOS&quot; ]]; then
        echo &quot;Set WhatToTest notes for iOS&quot;
        mv &quot;$CI_PROJECT_FILE_PATH/../TestFlight/iOS-WhatToTest.en-US.txt&quot; &quot;$CI_PROJECT_FILE_PATH/../TestFlight/WhatToTest.en-US.txt&quot;
    elif [[ &quot;$CI_PRODUCT_PLATFORM&quot; == &quot;macOS&quot; ]]; then
        echo &quot;Set WhatToTest notes for macOS&quot;
        mv &quot;$CI_PROJECT_FILE_PATH/../TestFlight/macOS-WhatToTest.en-US.txt&quot; &quot;$CI_PROJECT_FILE_PATH/../TestFlight/WhatToTest.en-US.txt&quot;
    fi
fi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The solution is a &lt;a href=&quot;https://developer.apple.com/documentation/xcode/writing-custom-build-scripts&quot;&gt;custom build script&lt;/a&gt; that automatically replaces the contents of &lt;code&gt;WhatToTest.&amp;lt;LOCALE&amp;gt;.txt&lt;/code&gt; with the contents of a platform-specific release notes file depending on which platform the running workflow is targeting.&lt;/p&gt;
&lt;p&gt;While it would be great to specify a custom path to release notes in each workflow, this script has been working nicely for me and isn&apos;t too much additional setup.&lt;/p&gt;
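&lt;p&gt;If you localize your &quot;What to Test&quot; notes, the same idea extends to every locale with a glob instead of hardcoding &lt;code&gt;en-US&lt;/code&gt;. This is a hypothetical sketch of that extension, not part of Apple&apos;s suggested workaround:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Replace the per-platform mv with a loop over every locale&apos;s notes file
for notes in &quot;$CI_PROJECT_FILE_PATH&quot;/../TestFlight/&quot;$CI_PRODUCT_PLATFORM&quot;-WhatToTest.*.txt; do
    [ -e &quot;$notes&quot; ] || continue
    # Strip the platform prefix to recover WhatToTest.&amp;lt;LOCALE&amp;gt;.txt
    target=$(basename &quot;$notes&quot; | sed &quot;s/^$CI_PRODUCT_PLATFORM-//&quot;)
    mv &quot;$notes&quot; &quot;$CI_PROJECT_FILE_PATH/../TestFlight/$target&quot;
done
&lt;/code&gt;&lt;/pre&gt;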
</content:encoded></item><item><title>Subscribing to SwiftData changes outside SwiftUI</title><link>https://finnvoorhees.com/words/subscribing-to-swiftdata-changes-outside-swiftui/</link><guid isPermaLink="true">https://finnvoorhees.com/words/subscribing-to-swiftdata-changes-outside-swiftui/</guid><pubDate>Sun, 28 Apr 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;While there are plenty of articles about &lt;a href=&quot;https://jacobbartlett.substack.com/p/swiftdata-outside-swiftui&quot;&gt;using SwiftData outside of SwiftUI&lt;/a&gt;, many fail to mention how you might observe insertions and deletions similarly to how SwiftUI’s &lt;a href=&quot;https://developer.apple.com/documentation/swiftdata/query&quot;&gt;Query&lt;/a&gt; macro works.&lt;/p&gt;
&lt;p&gt;Because SwiftData is backed by Core Data, changes can be detected using the &lt;code&gt;.NSPersistentStoreRemoteChange&lt;/code&gt; notification. A simple &lt;code&gt;Database&lt;/code&gt; wrapper class can vend &lt;code&gt;AsyncStream&lt;/code&gt;s of SwiftData models given a &lt;code&gt;FetchDescriptor&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This AsyncStream can then be used to, for example, update a diffable data source to display your data in a &lt;code&gt;UICollectionView&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;Database.swift&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;import Foundation
import SwiftData

@MainActor final class Database {
    // MARK: Lifecycle

    init(isStoredInMemoryOnly: Bool = false) {
        do {
            let configuration = ModelConfiguration(isStoredInMemoryOnly: isStoredInMemoryOnly)
            container = try ModelContainer(for: TodoItem.self, configurations: configuration)
        } catch {
            fatalError(&quot;\(error)&quot;)
        }
    }

    // MARK: Public

    public var context: ModelContext { container.mainContext }

    // MARK: Internal

    func models&amp;lt;T: PersistentModel&amp;gt;(
        filter: Predicate&amp;lt;T&amp;gt;? = nil,
        sort keyPath: KeyPath&amp;lt;T, some Comparable&amp;gt;,
        order: SortOrder = .forward
    ) -&amp;gt; AsyncStream&amp;lt;[T]&amp;gt; {
        let fetchDescriptor = FetchDescriptor(
            predicate: filter,
            sortBy: [SortDescriptor(keyPath, order: order)]
        )
        return models(matching: fetchDescriptor)
    }

    func models&amp;lt;T: PersistentModel&amp;gt;(matching fetchDescriptor: FetchDescriptor&amp;lt;T&amp;gt;) -&amp;gt; AsyncStream&amp;lt;[T]&amp;gt; {
        AsyncStream { continuation in
            let task = Task {
                for await _ in NotificationCenter.default.notifications(
                    named: .NSPersistentStoreRemoteChange
                ).map({ _ in () }) {
                    do {
                        let models = try container.mainContext.fetch(fetchDescriptor)
                        continuation.yield(models)
                    } catch {
                        // log/ignore the error, or return an AsyncThrowingStream
                    }
                }
            }
            continuation.onTermination = { _ in
                task.cancel()
            }
            do {
                let models = try container.mainContext.fetch(fetchDescriptor)
                continuation.yield(models)
            } catch {
                // log/ignore the error, or return an AsyncThrowingStream
            }
        }
    }

    // MARK: Private

    private let container: ModelContainer
}
&lt;/code&gt;&lt;/pre&gt;
&lt;hr /&gt;
&lt;h3&gt;Usage&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;for await items in database.models(sort: \TodoItem.creationTime, order: .reverse) {
    var snapshot = NSDiffableDataSourceSnapshot&amp;lt;Int, TodoItem&amp;gt;()
    snapshot.appendSections([0])
    snapshot.appendItems(items, toSection: 0)
    dataSource.apply(snapshot)
}
&lt;/code&gt;&lt;/pre&gt;
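&lt;p&gt;For completeness, that loop needs an async context, and the task driving it should be cancelled when the UI goes away (cancelling the task terminates the stream, which in turn stops the notification observation). A sketch, assuming a &lt;code&gt;TodoItem&lt;/code&gt; model and an already-configured &lt;code&gt;dataSource&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;let database = Database()

// e.g. stored as a property so it can be cancelled later
let observationTask = Task { @MainActor in
    for await items in database.models(sort: \TodoItem.creationTime, order: .reverse) {
        // build and apply the diffable snapshot as above
    }
}

// when the screen goes away:
observationTask.cancel()
&lt;/code&gt;&lt;/pre&gt;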
&lt;blockquote&gt;
&lt;p&gt;Note: You might want to throw a &lt;code&gt;.removeDuplicates()&lt;/code&gt; from &lt;a href=&quot;https://github.com/apple/swift-async-algorithms&quot;&gt;swift-async-algorithms&lt;/a&gt; on the stream to avoid unnecessary updates.&lt;/p&gt;
&lt;/blockquote&gt;
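&lt;p&gt;The usage above assumes a model along these lines (the &lt;code&gt;TodoItem&lt;/code&gt; name and fields are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import Foundation
import SwiftData

@Model final class TodoItem {
    var title: String
    var creationTime: Date

    init(title: String, creationTime: Date = .now) {
        self.title = title
        self.creationTime = creationTime
    }
}
&lt;/code&gt;&lt;/pre&gt;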
</content:encoded></item><item><title>Reading and Writing Spatial Photos with Image I/O</title><link>https://finnvoorhees.com/words/reading-and-writing-spatial-photos-with-image-io/</link><guid isPermaLink="true">https://finnvoorhees.com/words/reading-and-writing-spatial-photos-with-image-io/</guid><pubDate>Sun, 21 Jan 2024 00:00:00 GMT</pubDate><content:encoded>&lt;blockquote&gt;
&lt;p&gt;This is a follow up to my previous post on &lt;a href=&quot;/words/reading-and-writing-spatial-video-with-avfoundation&quot;&gt;Reading and Writing Spatial Video with AVFoundation&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;#writing-spatial-photos&quot;&gt;Writing Spatial Photos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#reading-spatial-photos&quot;&gt;Reading Spatial Photos&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;p&gt;Spatial photos are even less documented than spatial video, but it&apos;s not too difficult to create your own once you know the basics. While spatial videos are contained in MV-HEVC video files, spatial photos are stored in HEIC files. A HEIC file can contain multiple images and metadata; in the case of spatial photos, each file contains two images: one for the left eye and one for the right. Once you know what metadata you need to add, spatial photos become quite easy to create using the &lt;a href=&quot;https://developer.apple.com/documentation/imageio&quot;&gt;Image I/O&lt;/a&gt; framework.&lt;/p&gt;
&lt;h2&gt;Writing Spatial Photos&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;To start out, you will need two &lt;code&gt;CGImage&lt;/code&gt;s, one representing each eye. In this case we are using a solid red image for the left eye and a solid blue image for the right eye, but in practice you will most likely render a scene with a camera offset or capture two images using a stereo camera system:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;let imageBounds = CGRect(x: 0, y: 0, width: 3072, height: 3072)
let context = CIContext()
let leftImage = context.createCGImage(.red, from: imageBounds)!
let rightImage = context.createCGImage(.blue, from: imageBounds)!
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Create a new image destination pointing towards an output URL, specifying &quot;public.heic&quot; for the type identifier and 2 for the image count:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;let destination = CGImageDestinationCreateWithURL(url as CFURL, UTType.heic.identifier as CFString, 2, nil)!
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Create a properties dictionary. This will contain information on the left and right image index as well as a matrix that relates a camera’s internal properties to an ideal pinhole-camera model. The camera intrinsics may be specific to the technique you are using to generate stereo images:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;let properties = [
    kCGImagePropertyGroups: [
        kCGImagePropertyGroupIndex: 0,
        kCGImagePropertyGroupType: kCGImagePropertyGroupTypeStereoPair,
        kCGImagePropertyGroupImageIndexLeft: 0,
        kCGImagePropertyGroupImageIndexRight: 1,
    ],
    kCGImagePropertyHEIFDictionary: [
        kIIOMetadata_CameraModelKey: [
            kIIOCameraModel_Intrinsics: [
                1676.249856948853, 0, 1536,
                0, 1676.249856948853, 1536,
                0, 0, 1
            ] as CFArray
        ]
    ]
]
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Add the left and right images to your image destination, passing the properties to each call:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;CGImageDestinationAddImage(destination, leftImage, properties as CFDictionary)
CGImageDestinationAddImage(destination, rightImage, properties as CFDictionary)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Finalize the image destination to write the file to disk:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;CGImageDestinationFinalize(destination)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should now have a spatial photo stored at &lt;code&gt;url&lt;/code&gt; that you can view in the Apple Vision Pro simulator. It is worth noting that there are other property values you may specify; however, the ones above are the bare minimum needed for a spatial photo to display in the simulator. The example spatial photo from Apple also contains values for camera extrinsics (with a position offset indicating it was taken with a two-camera system) as well as a horizontal disparity adjustment.&lt;/p&gt;
&lt;h2&gt;Reading Spatial Photos&lt;/h2&gt;
&lt;p&gt;Reading a spatial photo is as easy as creating a &lt;code&gt;CGImageSource&lt;/code&gt; pointing to the spatial photo URL and copying the left and right image from it:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;let source = CGImageSourceCreateWithURL(url as CFURL, nil)!
let leftImage: CGImage = CGImageSourceCreateImageAtIndex(source, 0, nil)!
let rightImage: CGImage = CGImageSourceCreateImageAtIndex(source, 1, nil)!
&lt;/code&gt;&lt;/pre&gt;
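&lt;p&gt;Before assuming indices 0 and 1 form a stereo pair, you can check the container-level properties for a stereo-pair group. This is a sketch based on the same group keys used when writing; where exactly the group dictionary appears can vary between files:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;let fileProperties = CGImageSourceCopyProperties(source, nil) as? [String: Any]
let groups = fileProperties?[kCGImagePropertyGroups as String] as? [[String: Any]]
let isSpatial = groups?.contains { group in
    group[kCGImagePropertyGroupType as String] as? String == kCGImagePropertyGroupTypeStereoPair as String
} ?? false
&lt;/code&gt;&lt;/pre&gt;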
</content:encoded></item><item><title>Reading and Writing Spatial Video with AVFoundation</title><link>https://finnvoorhees.com/words/reading-and-writing-spatial-video-with-avfoundation/</link><guid isPermaLink="true">https://finnvoorhees.com/words/reading-and-writing-spatial-video-with-avfoundation/</guid><pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate><content:encoded>&lt;hr /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;#reading-spatial-video-using-avassetreader&quot;&gt;Reading Spatial Video using AVAssetReader&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#reading-spatial-video-using-avplayer&quot;&gt;Reading Spatial Video using AVPlayer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#writing-spatial-video-using-avassetwriter&quot;&gt;Writing Spatial Video using AVAssetWriter&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;p&gt;If you&apos;ve worked with AVFoundation&apos;s APIs, you&apos;ll be familiar with &lt;code&gt;CVPixelBuffer&lt;/code&gt;, an object which represents a single video frame. AVFoundation manages the tasks of reading, writing, and playing video frames, but the process changes when dealing with &lt;a href=&quot;https://www.apple.com/newsroom/2023/12/apple-introduces-spatial-video-capture-on-iphone-15-pro/&quot;&gt;spatial video&lt;/a&gt; (aka MV-HEVC), which features video from two separate angles.&lt;/p&gt;
&lt;p&gt;Loading a spatial video into an AVPlayer or AVAssetReader on iOS appears similar to loading a standard video. By default, however, the frames you receive only show one perspective (the &quot;hero&quot; eye view), while the alternate angle stored in the MV-HEVC file is simply never decoded.&lt;/p&gt;
&lt;p&gt;With iOS 17.2 and macOS 14.2, new AVFoundation APIs were introduced for handling MV-HEVC files. They make it easy to get both angles of a spatial video, but are lacking in documentation&lt;sup&gt;1&lt;/sup&gt;. Here are a few tips for working with them:&lt;/p&gt;
&lt;h2&gt;Reading Spatial Video using AVAssetReader&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;AVAssetReader&lt;/code&gt; can read media data faster than real time, making it a good fit for reading a video file, applying some transform, and writing the result back to a file using &lt;code&gt;AVAssetWriter&lt;/code&gt; (for example, converting a spatial video to an SBS video playable on a Meta Quest / XREAL Air). Reading spatial video requires informing VideoToolbox that we want both angles decoded from the video, instead of just the &quot;hero&quot; eye.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create an &lt;code&gt;AVAssetReader&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;let asset = AVAsset(url: &amp;lt;path-to-spatial-video&amp;gt;)
let assetReader = try AVAssetReader(asset: asset)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Create an &lt;code&gt;AVAssetReaderTrackOutput&lt;/code&gt;, specifying that we want both MV-HEVC video layers decompressed:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;let output = try await AVAssetReaderTrackOutput(
    track: asset.loadTracks(withMediaType: .video).first!,
    outputSettings: [
        AVVideoDecompressionPropertiesKey: [
            kVTDecompressionPropertyKey_RequestedMVHEVCVideoLayerIDs: [0, 1] as CFArray,
        ],
    ]
)
assetReader.add(output)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Start copying sample buffers containing both angles:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;assetReader.startReading()

while let nextSampleBuffer = output.copyNextSampleBuffer() {
    guard let taggedBuffers = nextSampleBuffer.taggedBuffers else { return }

    let leftEyeBuffer = taggedBuffers.first(where: {
        $0.tags.first(matchingCategory: .stereoView) == .stereoView(.leftEye)
    })?.buffer
    let rightEyeBuffer = taggedBuffers.first(where: {
        $0.tags.first(matchingCategory: .stereoView) == .stereoView(.rightEye)
    })?.buffer

    if let leftEyeBuffer,
       let rightEyeBuffer,
       case let .pixelBuffer(leftEyePixelBuffer) = leftEyeBuffer,
       case let .pixelBuffer(rightEyePixelBuffer) = rightEyeBuffer {
        // do something cool
    }
}
&lt;/code&gt;&lt;/pre&gt;
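&lt;p&gt;As one sketch of the &quot;do something cool&quot; step, the two eye buffers can be composited into a single side-by-side frame with Core Image. &lt;code&gt;ciContext&lt;/code&gt; and &lt;code&gt;sbsPixelBuffer&lt;/code&gt; (a destination buffer twice the width of one eye) are assumed to be created elsewhere:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;let leftImage = CIImage(cvPixelBuffer: leftEyePixelBuffer)
// Shift the right eye over by one frame width
let rightImage = CIImage(cvPixelBuffer: rightEyePixelBuffer)
    .transformed(by: CGAffineTransform(
        translationX: CGFloat(CVPixelBufferGetWidth(leftEyePixelBuffer)),
        y: 0
    ))
// The union of the two extents is one double-width SBS frame
ciContext.render(rightImage.composited(over: leftImage), to: sbsPixelBuffer)
&lt;/code&gt;&lt;/pre&gt;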
&lt;h2&gt;Reading Spatial Video using AVPlayer&lt;/h2&gt;
&lt;p&gt;When dealing with real-time playback, you&apos;ll often want to use &lt;code&gt;AVPlayer&lt;/code&gt;, which manages the playback and timing of a video automatically. &lt;code&gt;AVPlayerVideoOutput&lt;/code&gt; is the new API added for reading spatial video in real time, and is straightforward to set up.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create an &lt;code&gt;AVPlayer&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;let asset = AVAsset(url: &amp;lt;path-to-spatial-video&amp;gt;)
let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Create an &lt;code&gt;AVPlayerVideoOutput&lt;/code&gt; for outputting stereoscopic video:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;let outputSpecification = AVVideoOutputSpecification(
    tagCollections: [.stereoscopicForVideoOutput()]
)
let videoOutput = AVPlayerVideoOutput(specification: outputSpecification)
player.videoOutput = videoOutput
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Add a periodic time observer for reading frames at a specified interval:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;player.addPeriodicTimeObserver(
    forInterval: CMTime(value: 1, timescale: 30),
    queue: .main
) { _ in
    guard let taggedBuffers = videoOutput.taggedBuffers(
        forHostTime: CMClockGetTime(.hostTimeClock)
    )?.taggedBufferGroup else { return }

    let leftEyeBuffer = taggedBuffers.first(where: {
        $0.tags.first(matchingCategory: .stereoView) == .stereoView(.leftEye)
    })?.buffer
    let rightEyeBuffer = taggedBuffers.first(where: {
        $0.tags.first(matchingCategory: .stereoView) == .stereoView(.rightEye)
    })?.buffer

    if let leftEyeBuffer,
       let rightEyeBuffer,
       case let .pixelBuffer(leftEyePixelBuffer) = leftEyeBuffer,
       case let .pixelBuffer(rightEyePixelBuffer) = rightEyeBuffer {
        // do something cool
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Play!&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;player.play()
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Writing Spatial Video using AVAssetWriter&lt;/h2&gt;
&lt;p&gt;Following the advice in &lt;a href=&quot;https://developer.apple.com/news/?id=prl6dp5r&quot;&gt;Q&amp;amp;A: Building apps for visionOS&lt;/a&gt;, the steps for creating a spatial video from two stereo videos are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create an &lt;code&gt;AVAssetWriter&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;let assetWriter = try! AVAssetWriter(
    outputURL: &amp;lt;path-to-spatial-video&amp;gt;,
    fileType: .mov
)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Create and add a video input for the spatial video. It is important to specify &lt;code&gt;kVTCompressionPropertyKey_MVHEVCVideoLayerIDs&lt;/code&gt;, &lt;code&gt;kCMFormatDescriptionExtension_HorizontalFieldOfView&lt;/code&gt;, and &lt;code&gt;kVTCompressionPropertyKey_HorizontalDisparityAdjustment&lt;/code&gt; in the compression properties. Without these, your video will not be read as a spatial video on visionOS:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;let input = AVAssetWriterInput(
    mediaType: .video,
    outputSettings: [
        AVVideoWidthKey: 1920,
        AVVideoHeightKey: 1080,
        AVVideoCompressionPropertiesKey: [
            kVTCompressionPropertyKey_MVHEVCVideoLayerIDs: [0, 1] as CFArray,
            kCMFormatDescriptionExtension_HorizontalFieldOfView: 90_000, // asset-specific, in thousandths of a degree
            kVTCompressionPropertyKey_HorizontalDisparityAdjustment: 200, // asset-specific
        ],
        AVVideoCodecKey: AVVideoCodecType.hevc,
    ]
)
assetWriter.add(input)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Create an &lt;code&gt;AVAssetWriterInputTaggedPixelBufferGroupAdaptor&lt;/code&gt; for the video input:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;let adaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(assetWriterInput: input)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Start writing:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;assetWriter.startWriting()
assetWriter.startSession(atSourceTime: .zero)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Start appending frames. Each frame consists of two &lt;code&gt;CMTaggedBuffer&lt;/code&gt;s:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;let left = CMTaggedBuffer(tags: [.stereoView(.leftEye), .videoLayerID(0)], pixelBuffer: leftPixelBuffer)
let right = CMTaggedBuffer(tags: [.stereoView(.rightEye), .videoLayerID(1)], pixelBuffer: rightPixelBuffer)
adaptor.appendTaggedBuffers([left, right], withPresentationTime: &amp;lt;presentation-timestamp&amp;gt;)
&lt;/code&gt;&lt;/pre&gt;
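&lt;p&gt;For a fixed-frame-rate source, the presentation timestamp placeholder above can be derived from a frame index. A sketch at 30 fps, assuming &lt;code&gt;framePairs&lt;/code&gt; is your array of left/right pixel buffer tuples:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;let frameRate: CMTimeScale = 30
for (index, pair) in framePairs.enumerated() {
    // Wait until the input can accept more data (or use requestMediaDataWhenReady)
    while !input.isReadyForMoreMediaData {}
    let presentationTime = CMTime(value: CMTimeValue(index), timescale: frameRate)
    let left = CMTaggedBuffer(tags: [.stereoView(.leftEye), .videoLayerID(0)], pixelBuffer: pair.left)
    let right = CMTaggedBuffer(tags: [.stereoView(.rightEye), .videoLayerID(1)], pixelBuffer: pair.right)
    adaptor.appendTaggedBuffers([left, right], withPresentationTime: presentationTime)
}
&lt;/code&gt;&lt;/pre&gt;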
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Finish writing:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;input.markAsFinished()
assetWriter.endSession(atSourceTime: &amp;lt;end-time&amp;gt;)
assetWriter.finishWriting {
    // share assetWriter.outputURL
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;sup&gt;1&lt;/sup&gt; Since publishing this article, Apple has added sample code for both &lt;a href=&quot;https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/reading_multiview_3d_video_files&quot;&gt;reading multiview 3D video files&lt;/a&gt; and &lt;a href=&quot;https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc&quot;&gt;writing multiview HEVC&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item></channel></rss>