wingu at WWDC - day 2

Our iOS Developer Jakub Mazur is representing wingu at this year's WWDC. He will share his first impressions and thoughts and give some insights into the future development of location-based services in iOS 12:

05/06/2018 - WWDC 2018, day 2


The day before was all about new frameworks and enhancements to existing ones. Today started with "What's New in Swift" - and it turns out there's a lot!

Swift & Xcode

For me, one important change in Swift 4.2 is SE-0185: we will finally be able to conform to the Codable protocol in an extension, which means we can write `extension YourClass: Codable { }`. This will allow us to move a lot of boilerplate code to Sourcery.
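A minimal sketch of what that looks like, using a hypothetical `User` model (the type and field names are made up for illustration):

```swift
import Foundation

// A plain model type; note it does not declare Codable itself.
struct User {
    let name: String
    let age: Int
}

// The Codable conformance is declared in an extension (same file),
// and the compiler still synthesizes the encoding/decoding for us.
extension User: Codable { }

// Round-trip through JSON to show the synthesized conformance works.
let user = User(name: "Jakub", age: 30)
let data = try! JSONEncoder().encode(user)
let decoded = try! JSONDecoder().decode(User.self, from: data)
```

Because the conformance lives in an extension, a code generator like Sourcery can emit that extension into a separate generated file instead of cluttering the model definition.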

There are also a lot of small improvements, like the derived collection of enum cases or the random-number unification, which bring order to the chaos of code copy-pasted over and over again.
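A quick sketch of both features, using a made-up enum. `CaseIterable` is SE-0194 and the unified random API is SE-0202, both shipping in Swift 4.2:

```swift
// CaseIterable derives an `allCases` collection automatically,
// replacing hand-maintained `static let all = [...]` arrays.
enum Beacon: CaseIterable {
    case entry, exhibit, exit
}

let caseCount = Beacon.allCases.count  // 3

// The unified random API replaces platform-specific calls
// like arc4random_uniform with one consistent interface.
let roll = Int.random(in: 1...6)
let randomBeacon = Beacon.allCases.randomElement()!
```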

What's more, Xcode has new build settings that allow you to configure your build to Optimize for Speed or Optimize for Size. You cannot have both, of course, but it's great to have the option to choose.

It was also a nice touch from Apple to highlight community-driven changes in Swift - that's really encouraging. Looking forward to Swift 5.0, which will be introduced in 2019!

ARKit 2 

This is a major improvement to ARKit. The sample app with object detection works really well, even in poor lighting conditions and with objects that reflect light.


It's now really easy to set up, and with the extra reflections and textures, objects placed on the scene look surprisingly realistic.

Siri Shortcuts

It's a natural extension of NSUserActivity. It looks great, but you get the feeling that this is just the beginning. There is a limited set of things you can do with it, and a lot you cannot, such as performing actions in the background (similar to Background Fetch) or reacting to specific behaviour in a way that is invisible to the end user.
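A rough sketch of how donating a shortcut via NSUserActivity could look, assuming a hypothetical "show nearest beacon" feature (the activity type and identifiers here are made up):

```swift
import Intents  // iOS 12+

// Hypothetical activity representing something the user does often.
let activity = NSUserActivity(activityType: "com.example.showNearestBeacon")
activity.title = "Show nearest beacon"
activity.isEligibleForSearch = true
// New in iOS 12: marks the activity as a candidate for Siri Shortcuts.
activity.isEligibleForPrediction = true
activity.persistentIdentifier = NSUserActivityPersistentIdentifier("showNearestBeacon")

// Assigning it to the current view controller donates it to the system:
// viewController.userActivity = activity
```

Each donation like this only surfaces when the user (or Siri's prediction engine) invokes it in the foreground, which is exactly the limitation mentioned above.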

In the documentation you can find this note: "SiriKit encompasses the Intents and Intents UI frameworks, which you use to implement app extensions that integrate your services with Siri and Maps." Why should this be limited to Siri and Maps? It could be so much more. It's clearly a step in the right direction, one that needs to be explored further.

CoreML & Create ML

Create ML is a tremendous step towards adopting ML in apps. You can create a fairly accurate ML model in just a few steps and use it in your application. In my opinion, it really helps bring ML into every app, even with very limited knowledge of ML. Now everyone can try it. How? Just watch this WWDC session: Introducing Create ML (WWDC 2018).
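To give a feel for how few steps that is, here is a sketch of training an image classifier in a macOS playground. The directory layout (one subfolder per label) and all paths are hypothetical:

```swift
import CreateML  // available in macOS playgrounds
import Foundation

// Hypothetical training data: a folder whose subfolders are the labels,
// e.g. TrainingImages/entry/, TrainingImages/exhibit/, TrainingImages/exit/.
let trainingDir = URL(fileURLWithPath: "/Users/me/TrainingImages")

// Create ML trains the model from the labeled directories.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

// Export the trained model for use with Core ML in an iOS app.
try classifier.write(to: URL(fileURLWithPath: "/Users/me/BeaconClassifier.mlmodel"))
```

The exported `.mlmodel` file can then be dragged into an Xcode project and used through Core ML like any other model.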


Playgrounds

Another time-saving feature from the Xcode Core team: in a playground you can now execute Swift code line by line and make changes to code that has not yet been executed! That's huge. It's great for exploring new APIs and testing existing behaviour against different scenarios. Waiting for tomorrow to get more information about this at the labs!

... But for now beer & live music...