Bite-sized videos on iOS development.
The iOS landscape is large and changes often. With short, bite-sized videos released on a steady schedule, NSScreencast helps keep you continually up to date.
Up to date with Xcode 15 and iOS 17
UIKit, SwiftUI, SwiftData, and macOS
Swift Language
High Quality Videos
Short and Focused
Any Device
Team Plans
Have I mentioned lately how awesome NSScreencast is? No? Worth the subscription. Check it out if you’re an iOS developer. Or even if you’re not and you want an example of how to do coding screencasts well.
Got tired of dead-end googling so I checked to see if @NSScreencast had covered what I was looking for. Of course he had, 4 years ago. Should have checked there first.
One 13-minute episode of @NSScreencast just paid for the yearly subscription fee in the amount of time saved. Do it.
Seriously great stuff even for seasoned developers. I’ve learned a good amount from Ben’s videos.
You can really expand your development horizons in just a few minutes a week with NSScreencast.
Random PSA for iOS developers: @NSScreencast is a great resource, and worth every penny. It’s high quality, practical, and honest.
Can’t say enough good things about @NSScreencast. There is gold in the Road Trip DJ Series.
I just re-upped my subscription to @NSScreencast. [An] indispensable resource if you’re into iOS or Mac Development.
Just finished @NSScreencast series on Modern CollectionViews. Strongly recommended. Programmatic UI, nicely structured code, easily approachable explanation style. 👌
#405
In this episode we implement one of the core functions of a podcast player: playing audio! Using AVPlayer we load up the track and observe its progress so we can update the UI with the time elapsed and time remaining, and we let the user scrub to any position in the track.
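In outline, the AVPlayer approach looks something like this. A minimal sketch, not the episode's actual code; the class and method names are illustrative:

```swift
import AVFoundation

final class EpisodePlayer {
    private let player: AVPlayer
    private var timeObserver: Any?

    init(url: URL) {
        player = AVPlayer(url: url)
    }

    // Observe playback progress every half second on the main queue.
    func startObserving(onProgress: @escaping (_ elapsed: TimeInterval, _ remaining: TimeInterval) -> Void) {
        let interval = CMTime(seconds: 0.5, preferredTimescale: 600)
        timeObserver = player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { [weak self] time in
            guard let duration = self?.player.currentItem?.duration,
                  duration.isNumeric else { return }
            let elapsed = time.seconds
            onProgress(elapsed, duration.seconds - elapsed)
        }
    }

    // Scrub to a position expressed as a fraction (0...1) of the track,
    // e.g. from a slider's value.
    func scrub(to fraction: Double) {
        guard let duration = player.currentItem?.duration, duration.isNumeric else { return }
        let target = CMTime(seconds: duration.seconds * fraction, preferredTimescale: 600)
        player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
    }

    func play() { player.play() }
}
```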
#353
With special guest Yono, we dive into the system for text-to-speech and speech recognition on iOS. Yono builds an app for language practice. Along the way we become familiar with AVAudioEngine, AVSpeechSynthesizer, and SFSpeechRecognizer from the Speech Framework.
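A rough sketch of the two APIs involved; the function names and the Spanish locale here are placeholder choices, not from the episode:

```swift
import AVFoundation
import Speech

let synthesizer = AVSpeechSynthesizer()

// Speak a practice phrase aloud using text-to-speech.
func speak(_ phrase: String, language: String = "es-ES") {
    let utterance = AVSpeechUtterance(string: phrase)
    utterance.voice = AVSpeechSynthesisVoice(language: language)
    synthesizer.speak(utterance)
}

// Transcribe a recorded attempt with the Speech framework.
func transcribe(fileAt url: URL, language: String = "es-ES") {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized else { return }
        let recognizer = SFSpeechRecognizer(locale: Locale(identifier: language))
        let request = SFSpeechURLRecognitionRequest(url: url)
        recognizer?.recognitionTask(with: request) { result, _ in
            if let result, result.isFinal {
                print(result.bestTranscription.formattedString)
            }
        }
    }
}
```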
#304
In this episode we extract our camera preview layer into its own view so we can add subviews. We’ll use this to add a flash button control to the UI, which will require us to learn about locking the device and controlling the camera’s "torch".
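Controlling the torch requires locking the device for configuration first, along these lines (a sketch, not the episode's exact code):

```swift
import AVFoundation

// Toggle the camera's torch. The device must be locked for
// configuration before any capture setting can be changed.
func setTorch(on: Bool, for device: AVCaptureDevice) {
    guard device.hasTorch else { return }
    do {
        try device.lockForConfiguration()
        device.torchMode = on ? .on : .off
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}
```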
#303
I got a tip from a subscriber about the method I used to capture the photo in the last episode. Since we're capturing the data using the preset we chose for processing the rectangles, we are bound to that preset when we export to an actual photo. By using AVCapturePhotoOutput, we can have much more control over the format, size, and quality of the resulting image. In addition, since we are leveraging the SDK for capturing the photo, we benefit from things like auto-flash, HDR, auto focus, and the built-in camera shutter sound. (Yay for deleting code!) The end result might not look very different on device, but if you are taking that image somewhere else to do OCR or other processing on it, a higher quality image is important.
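In outline, capturing with AVCapturePhotoOutput looks something like this (a sketch assuming the output has already been added to a running capture session; names are illustrative):

```swift
import AVFoundation

final class PhotoCapturer: NSObject, AVCapturePhotoCaptureDelegate {
    let photoOutput = AVCapturePhotoOutput()

    func capturePhoto() {
        let settings = AVCapturePhotoSettings()
        // Auto-flash comes for free when the output supports it.
        if photoOutput.supportedFlashModes.contains(.auto) {
            settings.flashMode = .auto
        }
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard let data = photo.fileDataRepresentation() else { return }
        // Full-quality image data, independent of the video preset.
        print("Captured \(data.count) bytes")
    }
}
```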
#301
In this episode we take the captured image and run the perspective correction filter on it in order to turn a skewed rect back into a flat rectangle. We then display the image on the screen for a few seconds as a preview mechanism.
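Core Image's CIPerspectiveCorrection filter does the flattening; a sketch, assuming a CIRectangleFeature from the detector:

```swift
import CoreImage

// Flatten a detected rectangle back into a straight-on image.
func correctPerspective(of image: CIImage, feature: CIRectangleFeature) -> CIImage {
    image.applyingFilter("CIPerspectiveCorrection", parameters: [
        "inputTopLeft": CIVector(cgPoint: feature.topLeft),
        "inputTopRight": CIVector(cgPoint: feature.topRight),
        "inputBottomLeft": CIVector(cgPoint: feature.bottomLeft),
        "inputBottomRight": CIVector(cgPoint: feature.bottomRight)
    ])
}
```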
#300
In this episode we implement features that make the app feel more like a camera. When tapping the screen, we play the system camera shutter sound using AudioServices, then we add a small but useful flash effect to reinforce the fact that the user took a photo. We'll also talk about a strategy for capturing the image by using a flag.
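Both effects are only a few lines. A sketch: sound ID 1108 is the system camera shutter sound, and a fading white overlay is one common way to fake a flash:

```swift
import AudioToolbox
import UIKit

// Play the system camera shutter sound and flash a white overlay.
func playShutterEffect(in view: UIView) {
    AudioServicesPlaySystemSound(1108) // system camera shutter sound

    let overlay = UIView(frame: view.bounds)
    overlay.backgroundColor = .white
    view.addSubview(overlay)
    UIView.animate(withDuration: 0.25, animations: {
        overlay.alpha = 0
    }, completion: { _ in
        overlay.removeFromSuperview()
    })
}
```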
#298
In the last episode it was clear our coordinate math was wrong. But why? We answer that question and solve the problem. We also add code to hide the box if it hasn’t detected a rectangle for a few seconds.
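The hide-after-a-delay behavior can be built with a one-shot timer that resets on every detection; a sketch with hypothetical names:

```swift
import UIKit

// Hide the highlight box if no rectangle has been seen for a while.
final class RectangleHighlighter {
    let boxView = UIView()
    private var hideTimer: Timer?

    // Call this each time the detector reports a rectangle.
    func rectangleDetected() {
        boxView.isHidden = false
        hideTimer?.invalidate()
        hideTimer = Timer.scheduledTimer(withTimeInterval: 2.0, repeats: false) { [weak self] _ in
            self?.boxView.isHidden = true
        }
    }
}
```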
#297
We learn how to convert a CMSampleBuffer into a CIImage and how to set up our CIDetector to detect rectangles. We then draw a box around the detected rectangle and discuss why adding simple subviews won't work with full-screen sublayers.
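The conversion and detection steps condense to a few lines; a sketch (the detector options and function name are illustrative):

```swift
import AVFoundation
import CoreImage

// A rectangle detector configured for high accuracy.
let detector = CIDetector(ofType: CIDetectorTypeRectangle,
                          context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

// Convert a video frame into a CIImage and look for a rectangle.
func detectRectangle(in sampleBuffer: CMSampleBuffer) -> CIRectangleFeature? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let image = CIImage(cvPixelBuffer: pixelBuffer)
    return detector?.features(in: image).first as? CIRectangleFeature
}
```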
#296
We set up the preview layer so we can see the video, then we add a sample buffer queue so we can get access to the individual frames of the video coming through the capture session.
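Both pieces look roughly like this (function names are placeholders; the delegate then receives each frame in captureOutput(_:didOutput:from:)):

```swift
import AVFoundation
import UIKit

// Show the camera feed on screen.
func addPreviewLayer(for session: AVCaptureSession, to view: UIView) {
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.frame = view.bounds
    previewLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(previewLayer)
}

// Deliver individual frames to a delegate on a dedicated serial queue.
func addVideoOutput(to session: AVCaptureSession,
                    delegate: AVCaptureVideoDataOutputSampleBufferDelegate) {
    let output = AVCaptureVideoDataOutput()
    output.setSampleBufferDelegate(delegate, queue: DispatchQueue(label: "camera.frames"))
    if session.canAddOutput(output) {
        session.addOutput(output)
    }
}
```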
#295
We start by setting up the project to take photos: we disable rotation, which is typical of a camera app, and add the appropriate Info.plist entry so that we can inform the user why we need camera access. We organize the view controller into sections to keep things tidy as we add code to the project. Finally we set up the camera capture session, which is the first step to capturing video from the camera.
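The session setup is roughly this shape (a sketch; NSCameraUsageDescription is the Info.plist entry mentioned above):

```swift
import AVFoundation

enum CameraError: Error { case noCamera }

// Build a capture session with the back camera as input.
// Requires an NSCameraUsageDescription entry in Info.plist.
func makeCaptureSession() throws -> AVCaptureSession {
    let session = AVCaptureSession()
    session.sessionPreset = .high

    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back) else {
        throw CameraError.noCamera
    }
    let input = try AVCaptureDeviceInput(device: camera)
    if session.canAddInput(input) {
        session.addInput(input)
    }
    return session
}
```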
#294
In this introductory episode I show the finished version of the application we will be building in this series.