Learn slowly to develop quickly.


I’ve seen something happen many times during my longish tenure as a software developer, working alongside others who are new to it. They seem to have no desire to understand something before tearing into it.

When someone comes to mobile development, and all they have under their belt is some solid web development using JavaScript/CSS/HTML, and maybe some PHP – they are used to manipulating a page’s DOM. And that’s cool. You can do a whole host of interesting things with that understanding and experience. However, they lack many basic concepts that are all but required for mobile development. These aren’t trivial things you can get around knowing – they form the basis… the solid foundation of your endeavors moving forward.

I have experienced this in the past and I see it at times at work. Instead of owning up to a lack of understanding, they want to pump out some code and show everyone they are wizards. That they’ve got it.

Only wanting to break the finish line tape…

Then they ask for some working source code without worrying about how it actually works. They try to plug it into what they already cobbled together and have a hard time getting it to work. They don’t know about scope. They don’t know about classes, extensions, or how to best wire up various aspects of their UI.

They ask for completed code – but don’t typically search StackOverflow or even Google things. If they do, they don’t worry about what makes it work. In the event they do get something working, if they are asked to make tweaks or changes, they are screwed. Why? They don’t know how it works – or don’t know enough to make the change easily.

They fight with code and concepts. They want to get to the finish line in whatever they are doing – by skipping the race. In the end, they may eventually get to what they are after. But they aren’t putting in the work – the investigation, the basics of design and development – and in the end, they are only slowing their own development.

Being a developer shouldn’t be about ego. If you don’t know something, don’t be embarrassed to ask teammates. Don’t downplay your struggles.

If you don’t understand something, take the time to learn about it. Create test projects or Swift Playgrounds and bang some code. Read articles and tutorials about making things. Try them for yourself. Once you have a solid foundation, you’ll be able to more easily approach challenges and deliver projects that you understand.

Use comments in your code – and place questions, TODOs or FIXMEs, and MARKs there. This is especially handy if you’re working on something in source control with other team members. They have a chance to see your questions and offer up help. Don’t rely on this though.

// I don't understand this, can someone elaborate?
// TODO: Why is this not working? Need some help.
// FIXME: This isn't working, how come?
// MARK: - Section Marker

When you do this, you’ll see the TODO, FIXME, and MARK items called out in Xcode’s jump bar for the source file you’re currently working in (plain comments aren’t, but they’re still useful).


Development shouldn’t be about wowing your co-workers. It’s about delivering great experiences in a timely manner, with the ability to iterate quickly along the way. Communicate your ideas and approaches. If you’re calling on others to help you bang things out – that’s good. But not in lieu of understanding what they are doing and why they are doing it.

If you are relying on others to do some pretty conceptually basic things over and over, they will become less likely to assist you in the future.

There is no escaping time and effort. There are no substitutes for trying things, learning things, and failing at them. If you learn slowly, you’ll forget slowly. Spend time outside of projects to learn core concepts. Code, code, and code some more. Look at applications you like and think about how they might be composed (backend and frontend) – and then try to build that part yourself as a prototype.

Start simple… tables, collection views, loading assets, playing sounds, using gestural input, etc. Get those things nailed down. Then subclass some UI to do something special. Learn about delegation, classes, extensions, etc. Grow slowly. Build upon past prototypes. You’ll have something to lean on moving forward. Every day will get better and better because of your efforts and the time you take to learn how to develop for the platform.

Ask questions.

Read.

You’ll be much happier in the end. You’ll know more. You’ll be able to produce more. You’ll be able to iterate quicker. You’ll be a better teammate.

Verifying assets with the iTunes Store… solved


Your mileage may vary with this post, but in my case, I was chasing my tail for some time before finally coming to a solution.

At work, I generally author all kinds of prototypes. Sometimes I use Xcode and author iOS applications with interesting levels of interaction and communication. In these cases, I often need to share my work with a designer who lives several time zones away from me. Instead of him installing Xcode and building to his own devices while utilizing source control, I can swing a binary his way using TestFlight. This works pretty well. Until recently.

I would archive my project and attempt to upload it to App Store Connect so I could assign it to the designer and they would be able to install directly from TestFlight on their device.

This time the upload stuck at “Verifying assets with the iTunes Store…”

Googling for answers, I came across a lot of chatter about getting off a corporate network, since potential port blockage could keep the upload from completing. That did not work for me. I tethered to my iPhone X, tried public Wi-Fi outside my work LAN, etc. Those did not work. My upload remained stalled at the same place each and every time. I restarted my laptop. No difference.

I then did the following which actually told me what was going on:

  1. Created an archive of my project in Xcode.
  2. Exported the archive to my desktop instead of uploading to the iTunes Store.
  3. Launched Application Loader from Xcode and selected the .ipa generated in step 2.
  4. After getting stuck during the same activity, Application Loader (AL) actually gave me a bunch of error feedback: I had forgotten to supply artwork for all of my application’s icons. Oops. Without AL, I never would have known this.
  5. Assigned all the icon images in the project – then tried to upload a new archive from Xcode.
  6. It stuck yet again. No idea why.
  7. Took that new archive and exported it again to my desktop. Used AL to upload the newly created .ipa to the iTunes Store.
  8. This worked quickly and just fine!

I had become so used to authoring my own prototypes (apps) without worrying about icons for TestFlight that I simply didn’t include them. I spent a few hours trying to solve the problem. It seems there is still some sort of issue preventing me from uploading an archive directly through Xcode – I still needed to use Application Loader. However, I was able to get the binary uploaded, and after about 10 minutes it was processed and ready for TestFlight activity.

Monitoring the closest 20 regions – how?

iOS allows each application to monitor up to 20 regions. That seems like a lot, especially when you figure that the regions are registered and can be triggered (entry and exit) even while your app is backgrounded or your phone is on standby. So what do you do when you have more than 20? iOS will simply ignore any regions beyond the limit that you add to your location manager.

You can monitor the 20 regions closest to the user – the ones most likely to be encountered. Why bother registering for regions that are far away? So, in theory, upon a significant user location change, a region trigger, etc., you:

  • stop monitoring for all regions that may exist for your location manager
  • find the 20 closest regions from the user’s location
  • register for those 20

Rinse and repeat based on your own logic (as mentioned before: a significant user location update, the addition or removal of a region, etc.).

For me personally, I wanted to figure out how to do this myself. I knew I needed to perform a sort based on distances.

I had an array of CLCircularRegions.

var monitoredRegions:[CLCircularRegion] = []

I appended to that when placing my pin annotations on the map. You can’t sort regions by distance directly, though. Hmm. I needed something with a property I could sort on. So I created a struct holding a region and its distance.

struct SortableRegion {
    var distance: Double
    var region: CLCircularRegion
}

Cool. Getting closer. We can sort on that distance property. Here is a function that basically does what I need it to do. I’ll walk through it after showing it to you.

func checkAndMonitorTwentyClosestRegions()
{
    stopMonitoringAllRegions()

    if monitoredRegions.count > 20
    {
        print("We only monitor the 20 closest regions. Call this any time regions are added or removed, or the user moves significantly.")

        var sortableRegions: [SortableRegion] = []
        let location = CLLocation(latitude: myMapView.userLocation.coordinate.latitude, longitude: myMapView.userLocation.coordinate.longitude)

        // Compute the distance from the user to the center of every region.
        for region in self.monitoredRegions
        {
            let fenceLocation = CLLocation(latitude: region.center.latitude, longitude: region.center.longitude)
            let distance = location.distance(from: fenceLocation)

            let sortRegion = SortableRegion(distance: distance, region: region)
            sortableRegions.append(sortRegion)
        }

        // Sort by distance, lowest to highest.
        let sortedArray = sortableRegions.sorted {
            $0.distance < $1.distance
        }

        // Grab the first 20 closest and monitor those.
        let firstTwentyRegions = sortedArray[0..<20]
        for item in firstTwentyRegions {
            locationManager.startMonitoring(for: item.region)
        }
    } else {
        print("20 or fewer regions, monitor them all.")
        for region in monitoredRegions {
            locationManager.startMonitoring(for: region)
        }
    }
}

Right off the bat, I call a method that stops monitoring any regions the location manager might already have. It’s basically just:

func stopMonitoringAllRegions() {
    for monitored in locationManager.monitoredRegions {
        locationManager.stopMonitoring(for: monitored)
    }
}

Now I check how many regions there are. If there are 20 or fewer, we register them all. Easy. If there are more than 20, we need to find the nearest 20 and start monitoring those.

I created an empty array typed to SortableRegion (that struct I made). I then grabbed a reference to the user’s location; we use that to determine the distance to the center of every region. For each region, I create an instance of SortableRegion and append it to the array, so each instance holds a reference to the CLCircularRegion as well as its distance. Perfect. When we’re done, we sort the whole thing based on distance, lowest to highest.

Then we slice the sorted array (using a range), grabbing just the first 20 items. Pretty nifty. We loop through that slice and, using each item’s CLCircularRegion reference, start monitoring. When you want to check again, make sure monitoredRegions is properly updated first, then call the checkAndMonitorTwentyClosestRegions function and you should be all set.
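As a usage sketch (my own wiring, not from the original post – it assumes your class conforms to CLLocationManagerDelegate and that monitoredRegions already reflects any added or removed fences):

// Hypothetical delegate wiring: refresh the monitored set when the user moves.
func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
    checkAndMonitorTwentyClosestRegions()
}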

WWDC 17 Packing List


It’s almost time for WWDC17, this year in San Jose, CA instead of San Francisco. I haven’t been in several years because my lottery skills are obviously lacking. I managed to score a ticket this year and I’m pretty excited. What will I take?

Well, to be honest, taking a laptop to sessions in the past, for me, has been serious overkill. I might jot some notes down in a Moleskine or other notebook – for transcription later. Taking notes on a laptop brings distraction to the game for me. Am I plugged in and charging? I need to get close to a plug. Oh, I can check Facebook during this session. Email. Pretend to be coding something amazing in Xcode. Taking notes on a laptop makes me feel like a court stenographer and I want to relax a bit and take the information in. I can always watch the session again later in its stored video format (to catch some details that might have gone by too quickly).

I will still take my trusty MacBook Pro R, but that will be for when I’m back in my room at night – coding, emailing, sorting photos, watching session videos again, etc. It will probably stay in my room locked up. Charged and ready for my welcome return at the close of each day. That will save me a bunch of weight to lug around too. I can use my laptop bag for other things, including hoofing multiple Odwalla juices around (if they supply those big cans of it around the venue).

For travel:

  • Bose QuietControl 30
  • Bose QuietComfort 35
  • Casio ProTrek PRW 3100y-1B (all black) watch

For the sessions:

  • iPhone 6 Plus. The most important bit of gear I have.
  • At least two portable battery chargers: an Anker Astro Pro Series (20000 mAh) and a RavPower (16750 mAh) one I had before the Anker. Both charged up and ready to fly. The Anker needs to be plugged into an outlet to charge; the RavPower charges over micro USB.
  • Two point-and-shoot cameras – both Sony. An older one which is my favorite, and a newer model which I don’t like as much.
  • Timbuk2 messenger laptop bag.
  • Moleskine, fountain pen, EDC pens, etc.
  • Zojirushi Coffee Thermos 20oz. Über awesome and über important.
  • Various cables, USB wall plugs, etc.
  • Takeya ThermoFlask Insulated Stainless Steel Water Bottle, 40 oz, Asphalt. Huge. Perfect. Might leave at the hotel as it could be overkill.
  • Apple Watch series 2. I’m hoping Apple updates the WWDC iOS app and it gets watch support – for keeping us on our selected schedules, getting important updates, map support, etc.
  • Sunglasses. For me, very important.

Hotel:

  • Takeya ThermoFlask Insulated Stainless Steel Water Bottle, 40 oz, Asphalt. Huge. Perfect. No water bottles.
  • Couple of polos/shirts (a few older WWDC ones, a Swift logo one, etc.)
  • Shorts/jeans/Scarpa shoes. Scarpa Margherita rock.
  • A spring jacket in case it gets cool at night (Patagonia fold up for laptop bag) – not wearing a WWDC17 one if Apple swags those out.
  • Livionex tooth gel. It’s amazing.
  • Wet shaving and beard gear. Very important to clean up daily.
  • Packing cubes. Although I’m not bringing a ton of clothes, they really, really, really help keep things tidy. I know I’ll be bringing back a few additional shirts and a sweatshirt at least.
  • Snooz white noise generator (I got from Kickstarter)

Getting hour offsets for local time versus a target time zone in Swift


I am currently working on something that displays the time in different time zones. To be accurate, you need to compare the user’s current time against the time in each target zone. This is what I figured out.

It’s easy to get the user’s local time.

let cal = Calendar(identifier: .gregorian)
let date = Date()
let hour = cal.component(.hour, from: date)
let minute = cal.component(.minute, from: date)
let second = cal.component(.second, from: date)
print("\nLOCAL TIME: \(hour):\(minute):\(second)\n")

Now, you need to get the time at your target time zones. Here is an example for London, England, continuing from the code above.

let london = TimeZone(identifier: "Europe/London")
let comp = cal.dateComponents(in: london!, from: date)
print("LONDON: \(comp.hour!):\(comp.minute!):\(comp.second!), day: \(comp.day!)")

Excellent. Now, compute the difference between the user’s local time and the time in London.

// I am in Boston currently.
print("London offset (modifier): \(comp.hour! - hour)") // 5

And there you have it. The difference is 5 hours ahead (it’s a positive value). If I were in London or that time zone, the difference would be 0. For someone like me who rarely dabbles in date work, this was a bit of discovered magic. For others, this post must be quite pedestrian and a waste of time. I apologize. But if I had found a post like this earlier, I could have saved myself some time.
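One caveat I’ll add (my own note, not part of the original post): subtracting hour components goes wrong around midnight, when the target zone rolls into the next day. A sketch that avoids this by comparing each zone’s offset from GMT:

// Compare zone offsets directly; this sidesteps the midnight rollover problem.
let now = Date()
let localZone = TimeZone.current
let londonZone = TimeZone(identifier: "Europe/London")!

let offsetSeconds = londonZone.secondsFromGMT(for: now) - localZone.secondsFromGMT(for: now)
print("London offset: \(offsetSeconds / 3600) hours")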


tvOS UIFocusGuide demystified

A post about tvOS UIFocus enlightenment and a helper Class that I use to help debug my user interface work.

[Screenshot: four buttons on an Apple TV, overlaid with eight labeled, dashed-outline focus guide representations]

Above you’ll notice four buttons (a screenshot from my actual Apple TV). You’ll also notice a total of eight purple boxes with dashed outlines. Each is labeled in all capitals. Those are instances of my helper Class (“FocusGuideRepresentation”). You see, UIFocusGuides do not have any visual interface. So when you are deploying them, you’re essentially working in the dark. This helper Class shows you right where the guides are to help you visually lay everything out. Of course, when you get into dynamic situations where buttons are shuffling around, this can help you even more.

The focus management for tvOS works incredibly well when your buttons line up vertically or horizontally. It just works; you don’t need to do anything for that functionality. When they aren’t aligned, you can get into situations where buttons aren’t reachable through normal navigation. Above, without any UIFocusGuides, a user could move from button One to Two, and back up. That would be it. Buttons Three and Four would be hung out to dry, with no way for the user to navigate to them. That’s why UIFocusGuides exist.

You can think of them as invisible buttons that pass focus to an actual button – based on rules that you provide.

I decided that in addition to being able to move up and down with the Siri remote to access the buttons, left and right should also work at the top. A user swiping right from One to get to Three should work. That means 8 guides, as you can see in the diagram. The dashed rule lines show how each guide passes focus. That is a lot of guides, but in the end, the navigation ends up being buttery and simple to use. An application should be a joy to use.

Below is the code for the helper Class, followed by the full code for what you see in the image. Try it out on an Apple TV to see what’s going on and experience how nice it feels getting around.

import UIKit

class FocusGuideRepresentation: UIView {

    init(frameSize: CGRect, label: String)
    {
        super.init(frame: frameSize)
        self.backgroundColor = UIColor.blue.withAlphaComponent(0.1)
        let myLabel = UILabel(frame: CGRect(x: 0, y: 0, width: self.frame.width, height: self.frame.height))
        myLabel.font = UIFont.systemFont(ofSize: 20)
        myLabel.textColor = UIColor.white.withAlphaComponent(0.5)
        myLabel.textAlignment = .center
        myLabel.text = label.uppercased()
        self.addSubview(myLabel)
        
        // Add a dashed rule around myself.
        
        let border = CAShapeLayer()
        border.strokeColor = UIColor.white.withAlphaComponent(0.4).cgColor
        border.fillColor = nil
        border.lineWidth = 1
        border.lineDashPattern = [4, 4]
        border.path = UIBezierPath(rect: self.bounds).cgPath
        border.frame = self.bounds
        self.layer.addSublayer(border)
    }
    
    required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}

And, now, the main code for the ViewController. Note: the 4 UIButtons are in the Storyboard and hooked up (as you see the @IBOutlets).

import UIKit

class ViewController: UIViewController {

    @IBOutlet var one:   UIButton!
    @IBOutlet var two:   UIButton!
    @IBOutlet var three: UIButton!
    @IBOutlet var four:  UIButton!

    var fg1: FocusGuideRepresentation!
    var fg2: FocusGuideRepresentation!
    var fg3: FocusGuideRepresentation!
    var fg4: FocusGuideRepresentation!
    var fg5: FocusGuideRepresentation!
    var fg6: FocusGuideRepresentation!
    var fg7: FocusGuideRepresentation!
    var fg8: FocusGuideRepresentation!
    
    override func viewDidLoad() {
        super.viewDidLoad()
        setUpFocusGuides()
        
        // Pop these back up above the guide representations.
        
        self.view.addSubview(one)
        self.view.addSubview(two)
        self.view.addSubview(three)
        self.view.addSubview(four)
    }

    func setUpFocusGuides()
    {
        let firstFocusGuide = UIFocusGuide()
        view.addLayoutGuide(firstFocusGuide)
        firstFocusGuide.leftAnchor.constraint(equalTo:   one.leftAnchor).isActive =    true
        firstFocusGuide.topAnchor.constraint(equalTo:    one.bottomAnchor).isActive =  true
        firstFocusGuide.heightAnchor.constraint(equalTo: one.heightAnchor).isActive =  true
        firstFocusGuide.widthAnchor.constraint(equalTo:  three.widthAnchor).isActive = true
        firstFocusGuide.preferredFocusEnvironments = [three]
        
        let secondFocusGuide = UIFocusGuide()
        view.addLayoutGuide(secondFocusGuide)
        secondFocusGuide.rightAnchor.constraint(equalTo:  three.rightAnchor).isActive =  true
        secondFocusGuide.bottomAnchor.constraint(equalTo: three.topAnchor).isActive =    true
        secondFocusGuide.heightAnchor.constraint(equalTo: three.heightAnchor).isActive = true
        secondFocusGuide.widthAnchor.constraint(equalTo:  three.widthAnchor).isActive =  true
        secondFocusGuide.preferredFocusEnvironments = [one]
        
        let thirdFocusGuide = UIFocusGuide()
        view.addLayoutGuide(thirdFocusGuide)
        thirdFocusGuide.leftAnchor.constraint(equalTo:   two.leftAnchor).isActive =   true
        thirdFocusGuide.bottomAnchor.constraint(equalTo: two.topAnchor).isActive =    true
        thirdFocusGuide.heightAnchor.constraint(equalTo: two.heightAnchor).isActive = true
        thirdFocusGuide.widthAnchor.constraint(equalTo:  four.widthAnchor).isActive = true
        thirdFocusGuide.preferredFocusEnvironments = [four]

        let fourthFocusGuide = UIFocusGuide()
        view.addLayoutGuide(fourthFocusGuide)
        fourthFocusGuide.leftAnchor.constraint(equalTo:   four.leftAnchor).isActive =   true
        fourthFocusGuide.topAnchor.constraint(equalTo:    four.bottomAnchor).isActive = true
        //fourthFocusGuide.bottomAnchor.constraint(equalTo: two.bottomAnchor).isActive =  true
        fourthFocusGuide.heightAnchor.constraint(equalTo: four.heightAnchor).isActive = true
        fourthFocusGuide.widthAnchor.constraint(equalTo:  four.widthAnchor).isActive =  true
        fourthFocusGuide.preferredFocusEnvironments = [two]
        
        let fifthFocusGuide = UIFocusGuide()
        view.addLayoutGuide(fifthFocusGuide)
        fifthFocusGuide.leftAnchor.constraint(equalTo:   three.leftAnchor).isActive =   true
        fifthFocusGuide.bottomAnchor.constraint(equalTo: one.bottomAnchor).isActive =  true
        fifthFocusGuide.heightAnchor.constraint(equalTo: three.heightAnchor).isActive = true
        fifthFocusGuide.widthAnchor.constraint(equalTo:  three.widthAnchor).isActive =  true
        fifthFocusGuide.preferredFocusEnvironments = [three]
        
        let sixthFocusGuide = UIFocusGuide()
        view.addLayoutGuide(sixthFocusGuide)
        sixthFocusGuide.leftAnchor.constraint(equalTo:   one.leftAnchor).isActive =   true
        sixthFocusGuide.bottomAnchor.constraint(equalTo: three.bottomAnchor).isActive =  true
        sixthFocusGuide.heightAnchor.constraint(equalTo: three.heightAnchor).isActive = true
        sixthFocusGuide.widthAnchor.constraint(equalTo:  three.widthAnchor).isActive =  true
        sixthFocusGuide.preferredFocusEnvironments = [one]
        
        let seventhFocusGuide = UIFocusGuide()
        view.addLayoutGuide(seventhFocusGuide)
        seventhFocusGuide.leftAnchor.constraint(equalTo:   four.leftAnchor).isActive =   true
        seventhFocusGuide.bottomAnchor.constraint(equalTo: two.bottomAnchor).isActive =  true
        seventhFocusGuide.heightAnchor.constraint(equalTo: four.heightAnchor).isActive = true
        seventhFocusGuide.widthAnchor.constraint(equalTo:  four.widthAnchor).isActive =  true
        seventhFocusGuide.preferredFocusEnvironments = [four]
        
        let eighthFocusGuide = UIFocusGuide()
        view.addLayoutGuide(eighthFocusGuide)
        eighthFocusGuide.leftAnchor.constraint(equalTo:   two.leftAnchor).isActive =   true
        eighthFocusGuide.bottomAnchor.constraint(equalTo: four.bottomAnchor).isActive =  true
        eighthFocusGuide.heightAnchor.constraint(equalTo: four.heightAnchor).isActive = true
        eighthFocusGuide.widthAnchor.constraint(equalTo:  four.widthAnchor).isActive =  true
        eighthFocusGuide.preferredFocusEnvironments = [two]
        
        // To aid in debug placement.
        
        fg1 = FocusGuideRepresentation(frameSize: firstFocusGuide.layoutFrame, label:  "first")
        fg2 = FocusGuideRepresentation(frameSize: secondFocusGuide.layoutFrame, label: "second")
        fg3 = FocusGuideRepresentation(frameSize: thirdFocusGuide.layoutFrame, label:  "third")
        fg4 = FocusGuideRepresentation(frameSize: fourthFocusGuide.layoutFrame, label: "fourth")
        fg5 = FocusGuideRepresentation(frameSize: fifthFocusGuide.layoutFrame, label: "fifth")
        fg6 = FocusGuideRepresentation(frameSize: sixthFocusGuide.layoutFrame, label: "sixth")
        fg7 = FocusGuideRepresentation(frameSize: seventhFocusGuide.layoutFrame, label: "seventh")
        fg8 = FocusGuideRepresentation(frameSize: eighthFocusGuide.layoutFrame, label: "eighth")
        
        self.view.addSubview(fg1)
        self.view.addSubview(fg2)
        self.view.addSubview(fg3)
        self.view.addSubview(fg4)
        self.view.addSubview(fg5)
        self.view.addSubview(fg6)
        self.view.addSubview(fg7)
        self.view.addSubview(fg8)
    }
    
    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }
}


Creating a tvOS parallax UIButton

[Image: the three images used in the stack – background, shadow, and product]

If you’ve been involved in tvOS application development, or you’re new to the whole process, you might find this post interesting in regards to user interface.

I have recently been involved in tvOS dabbling. I’ve been creating a custom application (no TVML) and I wanted one of those nifty parallax-style buttons that Apple uses in its media browsing (notably the TV application). You can focus an item and it will shift neatly as you move around on the Siri remote control. Perfect if you don’t need to include explanatory text. However, creating one eluded me for a short while. There may be a better way to handle this, but I present to you my working solution.


Create an Apple TV Image Stack

You need to create a new Apple TV Image Stack in Xcode (when you have your Assets.xcassets folder selected).

The stack takes 3 files by default. I used a white background PNG, a shadow PNG, and a product PNG. If you don’t supply a solid background image, you’ll see the shadow behind your UIButton – which doesn’t look good. Drag your images into each layer (Front, Middle, Back). You’ll get a nice preview of the parallax at the top so you can see how your images work together. This lets you determine if you need to make changes to get the look that you want. I’ve shown each image at the top of this post, each box representing one of them. I named mine “qc35Stack” – which you’ll need to refer to later as an image name.


Create your custom UIButton with parallax

Now that you have your image stack, you can use it to supply images for a UIButton’s various states. When you supply a stack in this way, the button will know that it should perform a parallax presentation. I created the 3 images at the same size that I want the buttons to be (i.e. the image sizes match the UIButton’s declared frame).

Here is the code I settled on.

// Use the same image stack for the normal, highlighted, and focused states.
let buttonUnpressedTexture = UIImage(named: "qc35Stack")
let buttonPressedTexture = UIImage(named: "qc35Stack")
let newButton = UIButton(type: .custom)
newButton.frame = CGRect(x: 800, y: 200, width: 180, height: 180)
newButton.setImage(buttonUnpressedTexture, for: UIControlState.normal)
newButton.setImage(buttonPressedTexture, for: UIControlState.highlighted)
newButton.setImage(UIImage(named: "qc35Stack"), for: UIControlState.focused)
// Let the focused image scale beyond its bounds and pick up the parallax effect.
newButton.imageView?.clipsToBounds = false
newButton.imageView?.adjustsImageWhenAncestorFocused = true
newButton.adjustsImageWhenHighlighted = true


Finished.

A note: I first attempted to use static PNG images for the highlighted and normal states of the UIButton, but the transitions between them and the focused state were glitchy and didn’t look good. When I used the same stack for each of those states, things looked good. I don’t know if this is the correct way to do it or not, but it works.

Go ahead and try it out for yourself. It seems to work a treat, and you didn’t need to subclass a UIButton in your own Class to get it working either.

A short post because the solution is pretty simple.

Richer text for UILabels


This post is silly simple, but in the past, I remember doing things like this using Ranges. You have a 2-line UILabel and you want a bold font for the first line, and then regular for the second. In Swift, this is quite simple and straightforward. This is all the code you’d need to pull it off easily.

let style = NSMutableParagraphStyle()
style.lineSpacing = 5
// Gotham is a custom font; substitute any font available in your project.
let attString = NSMutableAttributedString(string: "I am the first line.\n",
    attributes: [NSFontAttributeName: UIFont(name:"Gotham-Bold", size:18.0)!,
    NSParagraphStyleAttributeName: style])
attString.append(NSMutableAttributedString(string: "I am the second line.",
    attributes: [NSFontAttributeName: UIFont(name:"Gotham-Book", size:18.0)!,
    NSParagraphStyleAttributeName: style]))
label.attributedText = attString


Returning data from an async network operation in Swift


If you’re ever using asynchronous network operations (say GET or POST) and want to return data when calling a method, you’ll quickly understand that it’s not so easy. But you’ll see below how you can do this fairly easily.

Let’s say you have a Class that handles all kinds of network communication. I call mine an “adapter” for lack of a better name. I can call methods on the Class and get data returned from it. You’re never going to be sure when the data returned is available, so you need to set things up with a block. Here is an example method in an adapter Class that I have. I included the Struct so you know how it’s set up. I’m also using AEXML to parse the returned XML to make things easier for me as a developer.

struct SpeakerInformation {
    var deviceID: String
    var name: String
    var type: String
}

func getSpeakerInformation(callback: @escaping (SpeakerInformation) -> ())
{
    var request = URLRequest(url: URL(string: "http://\(ipAddress):\(port)/info")!)
    request.httpMethod = "GET"
    let session = URLSession.shared
    session.dataTask(with: request) { data, response, err in

        var options = AEXMLOptions()
        options.parserSettings.shouldProcessNamespaces = false
        options.parserSettings.shouldReportNamespacePrefixes = false
        options.parserSettings.shouldResolveExternalEntities = false

        do {
            let xmlDoc = try AEXMLDocument(xml: data!, options: options)

            if let name = xmlDoc.root["name"].value {
                let thisDeviceID = xmlDoc.root.attributes["deviceID"]
                let thisType = xmlDoc.root["type"].value!
                let thisInfoPacket = SpeakerInformation(deviceID: thisDeviceID!, name: name, type: thisType)
                callback(thisInfoPacket)
            }
        } catch {
            print(error)
        }
    }.resume()
}

See that callback with the typed object? This is how you access the data from this sample GET call. I have the block wrapped up in a method.

func whatAreMyDetails()
{
    // The response object arrives in the callback block.
    stAdapter.getSpeakerInformation { response in
        DispatchQueue.main.async {
            self.headerLabel.text = response.name
            self.deviceTypeLabel.text = response.type.uppercased()
        }
    }
}

So you can’t set something up that would allow for this kind of syntax:

let foo = stAdapter.getSpeakerInformation()

I hadn’t done a lot of networking calls in quite some time, and I stumbled a bit trying to force the direct approach before I thought about it some more and did some research. Through the callback you can return an object, an array of objects, a dictionary – all kinds of stuff. Super handy, and the block keeps things nice and tidy.
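For instance, here is a hypothetical sketch following the same pattern, handing back a whole array (the names and dispatch are mine, standing in for a real network call):

// Hypothetical variant: the same callback pattern returning an array.
func getSpeakerNames(callback: @escaping ([String]) -> ()) {
    DispatchQueue.global().async {
        // Imagine the network request and parsing happening here.
        callback(["Kitchen", "Living Room", "Office"])
    }
}

getSpeakerNames { names in
    print(names)
}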


UILabel centered text – getting the text’s rect


I recently had a section of user interface with a UILabel that took up an iPhone’s full width. It contained text that would update over time. I wanted to place an image icon beside the first character of the label, but I needed to know where to place it. I had done it in the past but honestly forgot exactly how to do it.

Kyle Sluder from the Apple Cocoa-Dev list reminded me what I should be looking for.

UILabel.textRect(forBounds:limitedToNumberOfLines:)

Doh! I should have RTFMed and read the documentation. You can get a rect for the label’s text, but keep in mind it’s expressed in the label’s own coordinate space, so you can’t use it for positioning in the superview as-is. Here is sample code showing how to get the job done.

let label = UILabel(frame: CGRect(x: 0, y: 100, width: self.view.frame.size.width, height: 30))
label.textAlignment = .center
label.font = UIFont.systemFont(ofSize: 16)
label.numberOfLines = 1
label.text = "This show is crazy."
        
let rect:CGRect = label.textRect(forBounds: label.bounds, limitedToNumberOfLines: 1)
let v = UIView(frame: CGRect(x: rect.origin.x, y: label.frame.origin.y, width: rect.width, height: label.frame.height))
v.backgroundColor = UIColor.red.withAlphaComponent(0.3)
        
self.view.addSubview(label)
self.view.addSubview(v)

Pretty easy. I just forgot how to do it. This is for the whole text, not substrings within it.
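For my original goal – an icon beside the first character – here is a minimal sketch (the icon size and asset name are my own, hypothetical choices):

// Convert the text rect from the label's coordinate space to the superview's,
// then park a hypothetical 20pt icon just left of the first character.
let textRectInView = label.convert(rect, to: self.view)
let icon = UIImageView(frame: CGRect(x: textRectInView.minX - 24,
                                     y: textRectInView.midY - 10,
                                     width: 20, height: 20))
icon.image = UIImage(named: "someIcon") // hypothetical asset
self.view.addSubview(icon)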


Getting the rotation angle after CABasicAnimation?


I had a view that I rotate a lot, often more than 360 degrees (it spins around a few times). Each time it stops, I want to determine the resulting “visual” angle. How does one go about doing that?

rotateView is a configured CABasicAnimation:
let rotateView = CABasicAnimation(keyPath: "transform.rotation.z") // the keyPath belongs in the initializer
let randomAngle = arc4random_uniform(361) + 1440
rotateView.fromValue = 0
rotateView.toValue = Float(randomAngle) * Float(M_PI) / Float(randomBetweenNumbers(firstNum: 90.0, secondNum: 180.0))
let randomSpeed = randomBetweenNumbers(firstNum: 1.5, secondNum: 2.1)
rotateView.duration = CFTimeInterval(randomSpeed)
rotateView.repeatCount = 0
rotateView.isRemovedOnCompletion = false
rotateView.fillMode = kCAFillModeForwards
rotateView.timingFunction = CAMediaTimingFunction(name: kCAMediaTimingFunctionEaseOut)
innerRing.layer.add(rotateView, forKey: "spin") // the key here is just a name, not a keyPath

Once done, I dispatch after the randomSpeed duration so I know when the animation has completed. I then wanted the resulting angle. I searched all over the place and found a lot of old, broken approaches that didn’t work for me.
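That dispatch is roughly this (my reconstruction – the original snippet isn’t in the post):

// Wait out the animation's duration, then read the final angle on the main queue.
DispatchQueue.main.asyncAfter(deadline: .now() + Double(randomSpeed)) {
    // Query the presentation layer here (see the code below).
}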

This, however, does work (I spent an hour at the late night kitchen table trying things out)…

let transform: CATransform3D = innerRing.layer.presentation()!.transform
let angle: CGFloat = atan2(transform.m12, transform.m11)
var testAngle = radiansToDegrees(radians: angle)
if testAngle < 0 {
    testAngle = 360 + testAngle
}
print("final: \(testAngle)º")

// Here is the helper function

func radiansToDegrees(radians: CGFloat) -> CGFloat {
    return radians * 180 / CGFloat(M_PI)
}

It rocks. Spin the hell out of your view and get the result using the code above.


keep it simple. Frameworks?


Recently I had a project in a workspace to produce a framework of Objective-C and Swift code – supplied by someone else. I had a devil of a time getting it to build for me. And when I did, I tried to copy it into my own project to use it – only to find the project somehow thought I already had it. So I had to link to it instead. That meant building an archive didn’t work, so I added a path for embedded frameworks, and then found out there was a problem with how the framework was built in the first place. I was jumping through hoops. I’m no Xcode guru. It’s great until it doesn’t do what you want.

I spent hours trying to get things to work. Hours. The dry-heaving Xcode project red sweats. The kind you want to roll a d20 saving throw to see if it will make it to the end of the day and live on.

I took the framework project code and copied it into my Swift project, added a bridging header for an Objective-C header, and I was done. In the span of (really) about 10 minutes, I did what had eaten hours. Now… if that framework changes, I need to stay on top of it. If I used a linked framework under source control, I’d be better off. But that framework won’t really change much at all.

Sometimes it really is better to keep it simple.

Sending and playing an audio file from Apple Watch Extension to iOS


I recently played around recording audio on my Apple Watch. Doing that was easy enough, and I wanted to send the recorded file from the watch extension to the iOS application – and play it. I started messing around with WCSession’s transferFile(_:metadata:) as the transport, since it can work in the background (without the iOS app needing to be open). I banged on it a ton and it worked – but I couldn’t figure out a way to actually play the result. The received file’s fileURL.path had some dynamic stuff in it which prevented me from successfully creating a valid AVAudioPlayer.

Here is my solution for sending a file (wav) to the iPhone and playing it upon receipt.

//Watch Extension code

override func awake(withContext context: Any?) {
    super.awake(withContext: context)
        
    if WCSession.isSupported() {
        WCSession.default().delegate = self
        WCSession.default().activate()
    }
        
    let fileManager = FileManager.default
    let container = fileManager.containerURL(forSecurityApplicationGroupIdentifier: "group.net.ericd.WatchRecord")
    let fileName = "audioFile.wav"
    // class variable
    saveURL = container?.appendingPathComponent(fileName) as NSURL?    
}

@IBAction func sendAudio() {
    let data = NSData(contentsOf: saveURL as! URL)
    sendAudioFile(file: data!) // Quicker.
}

func sendAudioFile(file: NSData) {
    WCSession.default().sendMessageData(file as Data, replyHandler: { (data) -> Void in
        // handle the response from the device
    }) { (error) -> Void in
        print("error: \(error.localizedDescription)")
    }
}

When presentAudioRecorderController successfully saves the recorded audio file, it enables a button that calls the sendAudio function. That code is here.

@IBAction func recordAudio() {
    let duration = TimeInterval(10)
    let recordingOptions = [WKAudioRecorderControllerOptionsMaximumDurationKey: duration]
    print("record:", saveURL as! URL)
    presentAudioRecorderController(withOutputURL: saveURL as! URL,
                                                    preset: .narrowBandSpeech,
                                                    options: recordingOptions,
                                                    completion: { saved, error in
                                                        
                                                    if let err = error {
                                                        print(err.localizedDescription)
                                                    }
                                                        
                                                    if saved {
                                                        print("saved file.")
                                                        self.playButton.setAlpha(1.0)
                                                        self.sendButton.setAlpha(1.0)
                                                        self.playButton.setEnabled(true)
                                                        self.sendButton.setEnabled(true)
                                                    }
    })
}

Now, in the iOS application, I handle the receipt of the sendMessageData method.

func session(_ session: WCSession, didReceiveMessageData messageData: Data, replyHandler: @escaping (Data) -> Void)
{
    DispatchQueue.main.async
    {
        self.someLabel.text = "We got an audio file: \(messageData)" //Show bytes
        self.versionLabel.textColor = UIColor.blue
            
        do {
            self.player = try AVAudioPlayer(data: messageData)
            guard self.player != nil else { return }
            self.player?.prepareToPlay()
            self.player?.play()
                
        } catch let error as NSError {
            print(error.localizedDescription)
        }
    }
}

And there the audio file is played upon receipt. Note this doesn’t use a file transfer – which I would otherwise prefer, since a transfer is a background task that doesn’t require the iOS application to be running in the foreground.
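For the background-friendly route mentioned above, here is a minimal sketch (my own, assuming the same saveURL and an activated session):

// Watch Extension: queue the file for a background transfer.
WCSession.default().transferFile(saveURL as! URL, metadata: nil)

// iOS side: WCSessionDelegate callback when the transfer arrives.
func session(_ session: WCSession, didReceive file: WCSessionFile) {
    // file.fileURL is temporary - move the file somewhere permanent before playing it.
}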


watchOS3 communication to iOS (speed)


Recently I was working with watchOS 3 and iOS – transferring data back and forth (not using reply callbacks). I was using .sendMessage, and messages from the iPhone to the watch were very quick. However, messages from the watch to the phone seemed very slow, or so I thought.

This is what my receive delegate looked like on the iPhone side of the BLE fence.

// Message received from the Apple Watch Extension.
func session(_ session: WCSession, didReceiveMessage message: [String : Any])
{
    let val = message["ancValue"] as! String
    // Problem: this delegate fires on a background queue, so the UI update lags.
    messageFromWatchLabel.text = "ANC: \(val)"
}

That took anywhere from 1 to 25 seconds to populate the UILabel. Now, I miraculously remembered reading somewhere that on the iPhone, the delegate call comes back on a background thread. My eyes lit up, and I tried updating my UI on the main thread (where it needs to happen).

// Message received from the Apple Watch Extension.
func session(_ session: WCSession, didReceiveMessage message: [String : Any])
{
    let val = message["ancValue"] as! String
    // Main thread - super snappy.
    DispatchQueue.main.async {
        self.messageFromWatchLabel.text = "ANC: \(val)"
        self.slider.value = Float(val)!
        self.sliderValue.text = "\(Int(val)!)"
    }
}

Shazaam. Snappy as can be. So try not to forget about this. I didn’t need to do this on the watch side because I assume there is no threading on the watch (that we have access to)? It’s very snappy in response to the same delegate on the watch side, so I’m not sure what’s going on there. But that main thread for iOS saved my bacon.


iOS Bug: Occluded UIView with CABasicAnimation not redrawn

I have a UIImageView that holds album artwork in a project. When I get a new now-playing notification, instead of simply setting the image on the image view, I perform a crossfade. It’s a lot prettier that way. No problems so far.

Here is a snippet of the crossfading code.

myfade = CABasicAnimation(keyPath: "contents")
myfade.duration = 0.4
myfade.fromValue = largeAlbumCover.image!.cgImage
myfade.toValue = artImage!.cgImage
myfade.delegate = self
largeAlbumCover.layer.add(myfade, forKey: "animateContents")
largeAlbumCover.image = artImage!

Nothing remarkable there. Now… if I add a sliding panel above the UIImageView, and a crossfade is initiated while the panel covers the image view, then when the panel slides away later, the image won’t have updated to the new one. It shows the old artwork associated with the previous song. A stale image state. I thought I was going mad for a bit.

I had a suspicion that the image view needed to be visible to perform the redraw update, so I added a small delay to crossfade after the panel responded to my close event. Enough for a tiny sliver of the image view to be visible in the UI before the crossfade was called. It worked.

My final solution: set the panel on top to an alpha value of 0.99 instead of 1.0 – you can’t visually tell that the panel is a tiny bit transparent. This works – I removed the delay thing because that was even more hackish in my opinion.
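In code, the workaround is a single line (the panel name here is hypothetical):

// A not-quite-opaque overlay keeps the occluded layer rendering its
// "contents" animation, so the artwork is fresh when the panel slides away.
slidingPanel.alpha = 0.99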

I have filed this iOS UIKit bug with Apple. My device is running 10.1 and not 10.1.1 – because I didn’t want to potentially change the core OS before my application delivery (just in case it caused a bug or two someplace).

Swift 3: Always display the UITableView scrollbar

I recently had a strange request in regards to a UITableView. You see, the table in a UI could only display 3 rows before scrolling was enabled. Given the minute stature of this table, showing a user that there were more items in it was important. So it was decided that if this table had more than 3 rows in it, we would always display the vertical scrollbar. The trouble with this was that the Apple Human Interface Guidelines and the iOS SDK don’t allow for this behavior. One is able to flash the scrolling indicator, but that’s about it.

So the problem here is twofold.

  1. Display the scrollbar upon table reload.
  2. Maintain the visible scrollbar when the table has been scrolled.

These actually aren’t too bad to nail down. You’ll want to set showsHorizontalScrollIndicator to false. The solution below relies on only the vertical indicator being available, as we’re going to loop through the table’s subviews to find a UIImageView – the stretching image that the table (a subclass of UIScrollView) uses as its scroll indicator. Set your table up normally beyond this. Make sure you’re using delegates for the table – of course, right?

How do you loop through a table to find its scrollbar now?

for view in self.upNextTableView.subviews
{
    if view is UIImageView
    {
        let imageView = view as? UIImageView
        imageView?.isHidden = false
        imageView?.layer.removeAllAnimations()
        imageView?.alpha = 1.0
    }
}

There you go. This is brittle as HELL. If you don’t turn off the horizontal scroll indicator, you’ll affect it too – and make it display in addition to the vertical one. Didn’t want that. And it’s brittle because Apple could update the underpinnings of how these things are constructed, and this method would no longer work accurately.

So this turns the scrollbar on. If a user scrolls it at all, though, it’s going to fade away again. This part is a tiny bit tricky. Since you’ve set up delegates for the table, you can implement a scroll view delegate in regards to scrolling… or namely dragging.

func scrollViewWillEndDragging(_ scrollView: UIScrollView, withVelocity velocity: CGPoint, targetContentOffset: UnsafeMutablePointer<CGPoint>) {
    if scrollView == upNextTableView {
        if upNextTableView.numberOfRows(inSection: 0) > 3 {
            for view in self.upNextTableView.subviews
            {
                if view is UIImageView
                {
                    let imageView = view as? UIImageView
                    delay(bySeconds: 0.7) {
                        imageView?.isHidden = false
                        imageView?.layer.removeAllAnimations()
                        imageView?.alpha = 1.0
                    }
                }
            }
        }
    }
}

There is the delegate method – and because I had more than one table, I perform a check first. You probably don’t need that. Anyway, we wait 0.7 seconds before turning the scrollbar back on after a drag. If we attempt it sooner, the fade animation will prevent the scrollbar alpha change from taking. Here is the helper stuff for the delay call:

public func delay(bySeconds seconds: Double, dispatchLevel: DispatchLevel = .main, closure: @escaping () -> Void) {
    let dispatchTime = DispatchTime.now() + seconds
    dispatchLevel.dispatchQueue.asyncAfter(deadline: dispatchTime, execute: closure)
}
    
public enum DispatchLevel {
    case main, userInteractive, userInitiated, utility, background
    var dispatchQueue: DispatchQueue {
        switch self {
        case .main: return DispatchQueue.main
        case .userInteractive: return DispatchQueue.global(qos: .userInteractive)
        case .userInitiated: return DispatchQueue.global(qos: .userInitiated)
        case .utility: return DispatchQueue.global(qos: .utility)
        case .background: return DispatchQueue.global(qos: .background)
        }
    }  
}

So without using a custom scrollbar or a subclassed table, you’re able to get what you need done. This is most of the code, and the gist is to show that it can be done, albeit in a very brittle way that makes me cringe.


Swift 3: UITableView snap to cells

If you have your UITableViews snap to their cells, you’ll end up with a more refined user experience. This solution I have works with cells that are consistent in their height (static). You could always work with dynamic heights if you wanted to.

func scrollViewWillEndDragging(_ scrollView: UIScrollView, withVelocity velocity: CGPoint, targetContentOffset: UnsafeMutablePointer<CGPoint>) {
    if scrollView == suggestionsTableView {
        let cellHeight = CGFloat(60.0)
        // Offset the target point by the inset and half a cell, then round
        // down to find the nearest cell index.
        let y         = targetContentOffset.pointee.y + scrollView.contentInset.top + (cellHeight / 2)
        let cellIndex = floor(y / cellHeight)
        // Rewrite the deceleration target so the scroll lands on a cell boundary.
        targetContentOffset.pointee.y = cellIndex * cellHeight - scrollView.contentInset.top
    }
}

As you can see, cellHeight is a fixed float value; that’s where you would compute a dynamic height instead. This little bit of code makes a UITableView feel a lot more refined, as it doesn’t clip cell contents when a user lifts their finger from a drag.


Swift 3: Distance between two CGPoints

Here it is in Swift 3; it’s a little different. I tried to use hypotf, but it didn’t like CGFloat arguments.

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if touches.first != nil {
        let pointCenter = CGPoint(x: self.view.frame.width / 2, y: self.view.frame.height / 2)
        let touchPoint = touches.first?.location(in: self.view)
        let distance = CGPointDistance(from: pointCenter, to: touchPoint! )
        print("distance: \(distance)")
    }
    super.touchesBegan(touches, with: event)
}
    
func CGPointDistanceSquared(from: CGPoint, to: CGPoint) -> CGFloat {
    return (from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y)
}
    
func CGPointDistance(from: CGPoint, to: CGPoint) -> CGFloat {
    return sqrt(CGPointDistanceSquared(from: from, to: to))
}
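As an aside – an assumption on my part worth verifying against your SDK – the plain hypot function has a CGFloat overload in the CoreGraphics overlay, so the distance function can collapse to:

func CGPointDistance(from: CGPoint, to: CGPoint) -> CGFloat {
    // hypot(dx, dy) computes sqrt(dx*dx + dy*dy) in one call.
    return hypot(from.x - to.x, from.y - to.y)
}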


iOS to macOS: Reading keyboard input

Coming from iOS to macOS, I wanted to read keyboard input in my macOS application – not from a text field, but rather from the window itself. I thought this would be an easy task. After some trial and error, Googling, and then some StackOverflow fishing, I managed to get it to work. It wasn’t exactly straightforward for me to get working, but now that I have, I’d like to share the code that makes it happen.

In my ViewController.swift file, I set up a few first-responder overrides:

override var acceptsFirstResponder: Bool { return true }
override func becomeFirstResponder() -> Bool { return true }
override func resignFirstResponder() -> Bool { return true }

In my viewDidLoad, I added event monitoring (which eluded me for some time).

NSEvent.addLocalMonitorForEvents(matching: .keyUp) { (aEvent) -> NSEvent? in
    self.keyUp(with: aEvent)
    return aEvent
}

NSEvent.addLocalMonitorForEvents(matching: .keyDown) { (aEvent) -> NSEvent? in
    self.keyDown(with: aEvent)
    return aEvent
}

I then added keyUp and keyDown functions for the NSViewController itself.

override func keyDown(with event: NSEvent) {
    if keyIsDown == true {
        return
    }
    keyIsDown = true
    if event.keyCode == 1 {
        print("s key pressed")
    } else if event.keyCode == 49 {
        print("spacebar pressed")
    }
}

override func keyUp(with event: NSEvent) {
    keyIsDown = false
    if event.keyCode == 1 {
        print("s key released")
    } else if event.keyCode == 49 {
        print("spacebar released")
    }
}

I have a class variable (keyIsDown) which prevents keyboard repeats from flooding the methods. Set it to false at the start.
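(For completeness, that flag is just a stored property on the view controller:)

var keyIsDown = false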

This works a treat – and I didn’t have to subclass NSView, do anything in AppDelegate, etc. It’s clean, legible, and it does exactly what you’d expect it to do. This is for a single-window application.

Protocol/delegates in Swift 3.0

I declare this day a fine day having discovered non-Objective-C protocols! No more @objc protocol usage. Now it’s as simple as something like this:

protocol MyDelegate {
    func report(info: String)
}

class MyClass: NSObject {
    var delegate: MyDelegate?
    var serviceType = "hephaestus"

    init(withServiceType: String) {
        self.serviceType = withServiceType
        // Small delay so the caller has a chance to assign the delegate
        // before the first report fires.
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) {
            self.delegate?.report(info: UIDevice.current.name)
        }
    }

    convenience override init() {
        self.init(withServiceType: "hephaestus")
    }
}

And then in the implementation:

import UIKit

class ViewController: UIViewController, MyDelegate {
    
    // Delegate method
    func report(info: String) {
        print("delegate: \(info)")
    }
    
    override func viewDidLoad() {
        super.viewDidLoad()
        let mc = MyClass()
        mc.delegate = self
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

This is a really simple example, but it works and seems cleaner to me in Swift.
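One caution I’ll add (my addition, not from the original post): when the delegate is a class instance like a view controller, you’d normally make the protocol class-bound and hold the delegate weakly, so the two objects don’t retain each other:

protocol MyDelegate: class {    // Swift 3 spelling of a class-bound protocol
    func report(info: String)
}

class MyClass: NSObject {
    weak var delegate: MyDelegate?    // weak reference avoids a retain cycle
}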
