Tuesday 30 December 2014

Shooting the breeze visually (Aeolian Light)

I'd love to see this! After reading the Wired article about the Italian duo making sound from light, another Wired article caught my eye about an installation at Salford Quays that takes unseen patterns in the environment and visualises them.

Aeolian Light is the work of the art collective Squidsoup, who have placed tendrils of LED lights that illuminate following the breeze. I particularly liked the observation that it also shows the passage of non-breeze, as people walking past obstruct the wind.




Electric Light Orchestra of the 90s - Quiet Ensemble

I came across this from Wired in my Christmas catch-up reading and really liked it. As with my previous post, I'm going to make a quick comment and get this up online, maybe adding to it later, rather than taking the time to follow up everything that is interesting about it before posting.


This has nothing to do with the ELO of the 70s from Brum, but Quiet Ensemble, a duo of Italian artists who have taken electronic E-lightenment as an opportunity to produce a unique sound from the barely audible and inaudible electromagnetic effects of different light appliances. It's really hard to describe this without seeing and hearing the concept. Thankfully you can see the conceptual build-up on Vimeo.




Been a while ... New Year's resolutions

Well, sorry (to myself as much as any readers) that I've not been posting regularly over the past couple of months. I got a bit distracted in October with the day-job and didn't really get to properly follow up any of the interesting stuff I was tracking through Oct-Dec.

It's been an interesting period, which I'd love to blog more about, but can't easily do without tripping over the aforementioned day-job. Still, in that time, I've had a good technical grilling, returned to China after nearly 20 years (and been astonished by the scale of change) and been able to give my annual guest lecture at one of the local universities.

The run-up to Christmas is usually a manic period, for the usual festive and other family reasons, so I haven't had a quiet spell until now.

So, my New Year's resolution last year was to start a blog, which I've done, and I've been quite pleased with how it has worked out. It has been a great experience and a chance to get down some of the things I have been noodling about with, plus great ideas from others that I've noticed in the press or elsewhere.

What have I learnt? Well, writing good technical posts takes a lot longer than I'd thought. However, the discipline of getting those thoughts down is really worthwhile. I have also found myself half-writing posts, leaving them in draft almost as thought placeholders to myself and never quite finishing them. Useful, but less useful in terms of sharing or getting comments. So, next year, I'm going to try to post shorter stuff more regularly.

I also thought I'd blog more about completed projects, but have found myself wanting to write (if only for myself) about whatever I'm finding interesting at the time. I can see this as a series of flitting interests that are connected in my own mind, but not so easy to follow externally without context.

Anyway, my thought for the lull period between Christmas and New Year was to try to get some of those drafts into finished posts, so if you've been looking, thank you! A bunch of posts from over the summer and odd other points may well just start appearing!

Happy New Year!

Tuesday 16 December 2014

Sony (FES) eInk Watch Concept

I've long been a fan of eInk technology and, like everyone else, have been following the various offerings in the new eWatch space. The problem is that all of the Android Wear and Apple Watch offerings seem to be trying too hard to be clever and miss the main point of a watch, which is to reliably tell the time. And when I say reliably, I mean it has power. The problem with most of the offerings is that they have minimal battery life, which means they can't meet that basic purpose without a lot of hassle.

Now, here is the FES Watch prototype that has come out of Sony. There are a couple of things I really like about this, and it marks a return to form for Sony: something clearly innovative, built on a good understanding of what the user might want. Kind of like the first inkling that made them famous: understanding the need for personal audio and meeting it.


Firstly, I love the clean design. Simple, minimalist, I could see myself wearing this. I particularly like that they have looked at what eInk and an eWatch could do that is different from a normal watch. Yes, you can choose a customised design for the face and the strap. Neat!

The battery life is purported to be 60 hours. Again, I like that. No chance of starting a 13-hour flight to Tokyo and then finding out, just as the watch is about to adjust timezones, that the thing is dead due to the short battery life of the 'smart' watches. Sure, I'm certain they will solve some of these shortcomings, but I really hope Sony get this out and give a sense of some alternatives at the end of the market where 'low-tech' meets the purpose of the device. Kind of like phones that can reliably make phone calls.

The other thing I liked about this, and some might baulk that this was a device by Sony, is that they tried it out on the world through a separate subsidiary and looked at getting some test input via a Japanese crowd-funding site. Is it disingenuous for a large multinational to do this? I'm not so sure. Does it meet the purpose of trying out something new and innovative on the marketplace? Yes. Does it allow steerage and feedback on the concepts much more directly? Yes. And finally, does it allow companies to take a chance on such things? Yes. Back in the day, some will remember that Sony only cautiously broke into the video-games market with the original PlayStation. Sony was known as an AV manufacturer and there was concern that this would potentially damage the brand, so the original PlayStation launched without a Sony logo, and the same for the original PS2.

Would I like to see more technology chances? Yes! Is it fair for large companies to try this? I think so, otherwise we're going to see concepts stifled as marketing and brand-police prevent companies taking chances..... good on you Sony..... let's hope we see more. Not so sure about the eInk bow-tie designs though.

Saturday 6 December 2014

Spiro @ South Street and The Vapourer Moog

Had a fantastic evening seeing Spiro just before we get into the Christmas rush. I first saw them at WOMAD about 6 years ago and have long enjoyed their music. And what a treat it was to listen to them in more intimate surroundings at South Street with some cabaret-style tables. Each of the members took a turn in explaining the songs, bringing about a wonderfully dream-like state of enjoyment from the musical patterns and melodies.


In the interval I purchased the one EP we don't have at home, The Vapourer, which includes an interesting version of some songs using a Moog to complement their traditional instruments. I asked Alex Vann, the mandolinist, how this had come about. He said that Adrian Utley (of Portishead), a fellow Bristolian, had suggested it as the instrument would complement their sound. And indeed it does. As the quote on the RealWorld records site says:

A new six-track mini album featuring Moog synthesizer mixes by Adrian Utley (Portishead), which transport Spiro's music to an ethereal place somewhere between Sci-Fi, Bach and Kraftwerk - plus a new recording and three dazzling live tracks from WOMAD Charlton Park 2012. 




Tuesday 4 November 2014

Homemade Kora from a Drum

I had a fantastic weekend away in the cathedral city of Wells, where I saw this wonderful kora made from a drum. Its maker was more than happy to show off the cracking instrument and belt out a few 'Irish' folk tunes on it. Tricks of the make were using a soldering iron to make the holes in the drum skin and using zither pins on the neck, as they're cheaper than buying 20 guitar tuners.

The sound was fantastic and came from a similarly great retro looking amp in a radio chassis. I'm not going to give any secrets away there...




Wednesday 29 October 2014

Simpler Silverlight/C# syntax for async web service calls using delegates

So, I've been using web-service calls in Silverlight for a number of years now, in a bunch of different applications, with the cumbersome async BeginGetResponse, callback, EndGetResponse syntax. It's all been working great: I happily have a template for this and can bash out any new service integration pretty quickly, so it has not been getting in the way.

For a simple Get request they look something like this:


using System.IO;
using System.Net;

public string server = "123.123.123.123";


public void GetExample()
{
    String url = "http://" + server + "/getexample";

    try
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);

        // kick off the async request; the callback fires on a worker thread
        IAsyncResult result = request.BeginGetResponse(GetExampleCallback, request);
    }
    catch (Exception)
    {
        // errors swallowed for brevity
    }
}

void GetExampleCallback(IAsyncResult ar)
{
    var request = ar.AsyncState as HttpWebRequest;

    using (var response = request.EndGetResponse(ar) as HttpWebResponse)
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        string result = reader.ReadToEnd();

        // now do something with this
    }
}

Which is all good, and I usually have some additional code here to call back to a delegate which can then do something, such as display the result asynchronously, with the usual fun and games of getting this back onto the GUI thread using a Dispatcher.BeginInvoke.

Great. It then gets a little more complicated when you want to make a POST request and have to push the XML parameters in another async BeginGetRequestStream function, which means you're getting callback after callback. Easy enough: these can be bundled into a class for each web-service function and some templating can reduce the effort, but it's still pretty tedious. I've stuck with it because it works, I have a pattern, and usually it's not too much trouble; once done it means I can focus on the other interesting bits of the application.

Just this week though I needed to make a new little application and, having some brain-space to look at this again and thinking of Swift closures, I thought I'd explore how to get rid of the explicit callbacks in the calls and see if the whole thing could be made into a single function. Sort of, and there are pros and cons.

What I came up with looks like this:

using System.IO;
using System.Net;
using System.Threading;

public string server = "123.123.123.123";


public void GetExample2()
{
    String url = "http://" + server + "/getexample2";

    try
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);

        ManualResetEvent mre = new ManualResetEvent(false);

        IAsyncResult result = request.BeginGetResponse((cb) =>
        {
            // callback, written inline as an anonymous delegate
            using (var response = request.EndGetResponse(cb) as HttpWebResponse)
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                // read the response here, e.g. reader.ReadToEnd()
            }

            // signal the waiting thread that the callback has finished
            mre.Set();

        }, request);

        // block until the callback signals completion
        mre.WaitOne();
    }
    catch (Exception)
    {
    }
}


In this case the callback has been put in as an anonymous delegate so the code can be written in the same function. Now, what is all the ManualResetEvent stuff about? Basically this is to handle the asynchronous nature. If you run this in the debugger you can see the BeginGetResponse call being made and execution then jumping down to mre.WaitOne(), which is the normal thread flow of the operation. You can then see the debugger jump back up to the callback. The mre.Set() then allows the flow to continue in the main thread once the callback has finished.

So, pros and cons.

The big pro is that the whole operation is now contained in a single function and local variables can be used to return the result. You can either make this synchronous (as the synchronisation has been put into the call with the ManualResetEvent) or call back via a delegate (recommended) with the result.

The con is that there are seemingly multiple code-execution entry points in a single statement. Remember all those gotos and the horrible code of years ago? Much coding best practice is about making code more readable and the execution paths clearer and more understandable. [See also the later note on calling from the GUI thread.]

The nub of the question is whether this is easier to read and understand and means there will be fewer problems. I think so: once the template/pattern is established it's much easier to put together, and therefore for me less prone to errors.

The big advantage now is if you need to do a Post, it looks sort of like this:


public void PostExample()
{
    String url = "http://" + server + "/postexample";

    try
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.AllowReadStreamBuffering = false;
        request.Method = "POST";
        request.ContentType = "text/xml";

        ManualResetEvent mre = new ManualResetEvent(false);

        IAsyncResult result = request.BeginGetRequestStream((ac) =>
        {
            // request-stream callback: write the XML body of the POST
            using (Stream stream = request.EndGetRequestStream(ac))
            using (StreamWriter writer = new StreamWriter(stream))
            {
                string post = "";

                post += "<?xml version='1.0' encoding='UTF-8'?>";
                post += "<somexml/>";

                writer.Write(post);
                writer.Flush();
            }

            mre.Set();

        }, null);

        // wait until the request body has been written, then re-arm the event
        mre.WaitOne();
        mre.Reset();

        string reply = "";

        result = request.BeginGetResponse((cb) =>
        {
            // response callback: read the reply
            using (var response = request.EndGetResponse(cb) as HttpWebResponse)
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                reply = reader.ReadToEnd();
            }

            mre.Set();

        }, request);

        // this needs to stay in for some strange reason (see note below)
        Thread.Sleep(100);

        mre.WaitOne();
    }
    catch (Exception)
    {
    }
}

Which is pretty great.

Now, the additional oddity. I don't like the Sleep at the end but, however I tried to work this out, I could not get things to work without it and the last wait just blocked forever. So pragmatically it is working for me, but it is an ugly little hack.

The second thing to note with this method is that these functions cannot be called from the GUI thread directly, as the callback in BeginGetResponse never comes back. That caused me a lot of headaches until I found the cause. In the case of my application this is not a problem, as all the service calls run on a separate thread and update the GUI. Still, it's messy. There are ways around this, similar to the Dispatch back the other way, but it's not completely clean.

I did mess around with the new C# async keyword and using the task framework but the code really looked ugly to my eyes. Maybe a little more work on that and another post in the future.

Friday 17 October 2014

Silverlight Multi-Select User Control

So, back to some Silverlight fun. I had a requirement in some applications I was putting together for a multi-select drop-down that allowed typing in and selection from a list of available values. I'm sure there was some code around, but a quick look didn't turn anything up immediately, so I had a bit of an evening coding. Here are the results. Please do take it and improve it, and please drop me a comment on this blog if you find it useful or use it for anything.

There's a demo below... try typing into the box. The version used below does not allow duplicates. This is an option in the user control.


The code files are available below:
The code is a little clunky in places and could certainly be improved. I'll try to get round to that when I have some spare time. However, it met the need that I had at the time.

There are a number of areas for improvement, such as putting in disable functionality and allowing different colours for the selected items.

There are a couple of little tricks that are worth calling out...

In a couple of places I needed to send a message to a control to get it to have a visual interaction. The easiest way I found for doing this was to use a lambda to drop it onto the dispatcher:

Dispatcher.BeginInvoke(() => options.Visibility = Visibility.Collapsed);

These are the paths for the right and down arrows:

<Path x:Name="arrowright" VerticalAlignment="Center" Margin="0" Stroke="Gray" 
      Data="M2,8 L6,4 L2,0"  StrokeThickness="2"/>
<Path x:Name="arrowdown" VerticalAlignment="Center" Margin="0" Stroke="Gray" 
      Data="M8,0 L4,4 L0,0"  StrokeThickness="2" Visibility="Collapsed"/>

And this is the path for the cross symbol used in the buttons:

<Path VerticalAlignment="Center" Margin="3" Stroke="Gray" Data="M0,0 L8,8 M8,0 L0,8"  StrokeThickness="2"/>

Thursday 16 October 2014

Polifiller and Buzzword Bingo

Caught this on Radio 4's Today programme driving in this morning (quote from the Beeb)...

"A new online tool, Polifiller.com, is being launched, which will supposedly automatically strip jargon and clichés out of politicians' speeches and statements - to help politicians rid their vocabulary of hackneyed phrases and give the electorate the clarity they deserve. Hamish Thompson, managing director of Houston PR developing the online tool and Robert Hutton is the UK Political Correspondent for Bloomberg News and author of 'Would They Lie To You? How To Spin Friends and Manipulate People'."

Love the idea!!

At work, to keep management on their toes, we always have a good round of buzzword bingo at the annual comms sessions. Which made me think: there's a ripe opportunity here for a PPT parser that takes in techno/biz-speak babble and either strikes it out or replaces it randomly with something else....  now if I have a spare evening, that might just appeal.

Tuesday 30 September 2014

AVAudioPlayerNode Audio Loop

Well, I think I've finally concluded that AVAudioPlayerNode can't really be used for any sensible audio synthesis or effects in Swift. It's a shame, as Swift certainly seems to be up to the job and the processing isn't even touching the CPU at 3%. But despite the seeming promise of the application descriptions given in the WWDC notes, playback in all the modes I've tried seems glitchy even when using extremely long buffers, so it seems that the scheduleBuffer function is really not useful for anything more than trivial sound generation.

So, following the previous posts, my last little test to ensure that my generation code itself was not the problem was to try a loop-back and see how that worked. To do this I created a simple test case which has the default input going into a mixer with its volume set to zero, and a player also going into the main mixer that connects to the output. I have then installed a tap on the input and used that buffer to pump straight into scheduleBuffer on the player.

Here's the example code:

import Cocoa
import AVFoundation

// Setup engine and node instances
var engine = AVAudioEngine()
var mixer = engine.mainMixerNode

var inputstub = AVAudioMixerNode()
var player = AVAudioPlayerNode()
var input = engine.inputNode
var output = engine.outputNode
var format = input.inputFormatForBus(0)
var error:NSError?

engine.attachNode(inputstub)
engine.attachNode(player)

// Connect nodes
engine.connect(player,to:mixer, format:format)

// Start engine
engine.startAndReturnError(&error)

player.play()

engine.connect(input, to: inputstub, format: format)
engine.connect(inputstub, to: mixer, format: format)
engine.connect(mixer, to: output, format: format)

inputstub.outputVolume = 1.0
mixer.outputVolume = 1.0

let length = 24256

var audioBuffer = AVAudioPCMBuffer(PCMFormat: player.outputFormatForBus(0), 
                                   frameCapacity: UInt32(length))
audioBuffer.frameLength = UInt32(length)

input.installTapOnBus(0, bufferSize:UInt32(length), format: input.outputFormatForBus(0))
{
    (buffer, time) in

    // copy the tapped input samples into our own buffer
    for (var i=0; i<length; i++)
    {
        audioBuffer.floatChannelData.memory[i] = 1.0 * buffer.floatChannelData.memory[i]
    }

    // and schedule that buffer straight back out through the player
    player.scheduleBuffer(audioBuffer, atTime:nil, options:.InterruptsAtLoop,
                          completionHandler:nil)
}


// keep the process alive so the audio engine keeps running
while (true)
{
    NSThread.sleepForTimeInterval(1)
}

You'll want to put headphones on to test this, and have the input take an external feed which is then heard in the headphones, so that it does not create an oscillating feedback loop.

Testing this, there is a sense that the loop must be working OK: it fills the buffer without too much trouble and the CPU is barely hitting 3%. But the audio playback is glitchy, and this becomes more pronounced as the buffer size is reduced (try playing around with length), until values below 1000 or so become too choppy for anything sensible.

I tried scheduleBuffer with various options, nil and .InterruptsAtLoop; neither made much difference. This is a shame, as it means the scheduleBuffer function is just a bit too simplistic in implementation and is not doing the fairly basic double buffering needed to transition between two buffers. Hmmm.... will have to look at another way of doing this then and get back to basics.

For information, I'm using two separate buffers because if the buffer passed into the tap closure is scheduled directly, its memory is not freed and usage just keeps increasing.


AVFoundation audio monitoring - Installing a Tap on a Bus in Swift

Having had disappointing results with AVAudioPlayerNode so far in Swift, I haven't had a chance to return and continue bashing my head against the wall to get an answer. But today I had a quick chance to try something else I'd been wanting to do, and which should certainly be within the scope of what Apple communicated in the WWDC info on the new AVFoundation classes.

If I can't quite work out how to synthesise audio just yet, it will be interesting to see if Swift has the real-timeness to do some audio monitoring and processing. So I wanted to quickly knock up a few lines of code to see if I could read the audio input and report a rough, averaged 'energy'/volume as a quick test.

It seemed pretty simple and the code worked without too much trouble, so here it is....

WARNING - turn your speaker off before doing this!!

I've just linked the input through a delay to the output to generate a signal to monitor.

import AVFoundation

// Setup engine and node instances
var engine = AVAudioEngine()
var delay = AVAudioUnitDelay()
var input = engine.inputNode
var output = engine.outputNode
var format = input.inputFormatForBus(0)
var error:NSError?

// Attach FX nodes to engine
engine.attachNode(delay)

// Connect nodes
engine.connect(input, to: delay, format: format)
engine.connect(delay, to: output, format: format)

// Start engine
engine.startAndReturnError(&error)

let length = 256

input.installTapOnBus(0, bufferSize:UInt32(length), format: input.outputFormatForBus(0), block:
    {
        (buffer, time) in
        
        var sum:Float = 0
        
        // do a quick calc from the buffer values
        for (var i=0; i<length; i++)
        {
            sum += Float(buffer.floatChannelData.memory[i]) * 10_000
        }
        
        println(NSString(format:"%.0f",sum/Float(length)))
})


while (true)
{
    NSThread.sleepForTimeInterval(1)
}

Remember, if you do this in a playground, the looping in the playground is not going to work in real time.

hmmm, maybe there is just something really simple that is not working with the generation and I need to go back to it....


Update.... as I just bashed this together this morning it got the needed result, but I didn't really like that I needed to link the input through a delay to the output to get it working. With a bit more time looking at this, it's easy to do this with just the input connected to a mixer, which is enough:

var mixer = engine.mainMixerNode
var input = engine.inputNode
var format = input.inputFormatForBus(0)
var error:NSError?

// Connect nodes

engine.connect(input, to: mixer, format: format)

mixer.outputVolume = 0

And the rest of the code is the same. The important part here is that the mixer output needs to be zero, otherwise it creates another feedback loop like before.

Friday 26 September 2014

Thom Yorke Bittorrent album (and Stanley Donwood artwork)

Well, not to be outdone, and hot on the heels of U2's forced installation into everyone's iTunes library, Thom Yorke has released his latest work on BitTorrent to bypass the 'gatekeepers'. Expect this to generate further blogosphere discussion (isn't he one of the gatekeepers?) following on from the In Rainbows 'choose-your-own-pricing'. Looks like we're being treated to a range of new economic approaches, or should I really say 'marketing', by major brand stars as they transition from their 40s into their 50s.....



Here's the accompanying propaganda, erm, 'marketing' statement:

"As an experiment we are using a new version of BitTorrent to distribute a new Thom Yorke record. The new Torrent files have a pay gate to access a bundle of files.. The files can be anything, but in this case is an ‘album’.
"It’s an experiment to see if the mechanics of the system are something that the general public can get its head around … If it works well it could be an effective way of handing some control of internet commerce back to people who are creating the work.
"Enabling those people who make either music, video or any other kind of digital content to sell it themselves. Bypassing the self elected gate-keepers. If it works anyone can do this exactly as we have done. The torrent mechanism does not require any server uploading or hosting costs or ‘cloud’ malarkey. It’s a self-contained embeddable shop front… The network not only carries the traffic, it also hosts the file. The file is in the network. Oh yes and it’s called Tomorrow’s Modern Boxes. Thom Yorke & Nigel Godrich"
Anyway, hype and marketing aside, it's about the music and I'm looking forward to getting my ears onto this.

It's also good to see that Thom is keeping his collaboration with Stanley Donwood going, so this is a music/art project as well. I loved the work on The Eraser, which was classic linocut, one of my favourite formats to work with. They've been enigmatically twittering related artwork for some months now...



Wednesday 24 September 2014

Swift ?? ! ? (an exclamation explanation question)

Yesterday's exploration using Swift to access the Pinterest API was interesting and instructive. I glossed over some interesting details to focus on the functional objective of getting an end result rather than on the Swift constructs themselves. Today I thought it would be good to walk through some of the syntax and concepts that were used, which I haven't had a chance to note down cohesively.

This is really an opportunity to make a little reference review for myself of some Swift syntax features around Optionals.

So, kicking off, writing anything in Swift using the Cocoa APIs seems to require quite a bit of using question and exclamation marks. What's that all about? These are called Optionals and provide a language facility for describing types and handling variables that either have a value or are 'unset'. This idea has been around for a long time as 'tri-state' logic in digital circuits. C-sharp has the concept of nullable types built into the language, which allow variables to hold a value or null and are represented by a type definition T?. The variable can then be checked against null before use. Swift brings this into the language along with a bunch of other useful features.

One of the usual ways of coming across this concept is with pointers in C++, where an object pointer is typically assumed (or set) to be null until the class is instantiated and its address assigned. In this case the pointer can be considered to point to null or to an object, and code then checks for null before using the object. Obviously this can only be used with objects in C++; Swift and C-sharp extend the concept of having a null/nil value to any type. In Swift, these are called optionals and are written as follows:

var myOptional:Int?

This will be implicitly assumed to have the value nil until it is assigned to. Also, when using the variable, it must be checked to ensure it is non-nil before use. In Swift-speak, extracting the value is known as unwrapping. There are a few ways of doing this; we'll step through them. Firstly, the obvious way is to check whether the value is nil before using it:

if myOptional != nil
{
   // use myOptional
}

Now that we're assured that the optional does indeed contain a value (i.e. it's not nil), the value can be 'unwrapped' using an exclamation mark as follows:

// use myOptional
println("the value = \(myOptional!)")

In this case we're using the Swift string interpolation feature to put the variable into the string. This syntax of 'force unwrapping' an optional by adding an exclamation mark can be used wherever we're certain that the variable is non-nil and can safely be used, e.g.

var myValue:Int = myOptional!

Another way of doing this is to define a type that, once it has been set, will never be given a nil value, known as an implicitly unwrapped optional; it's sort of like defining a const. In this case the exclamation mark is used in the type definition:

let anotherValue:Int! = 2_000

Swift also provides a short-hand way of doing this, called optional binding, which avoids explicitly putting an exclamation mark in if and while clauses, as follows:

if let myValue = myOptional
{
}

Another way of unwrapping an optional (that I didn't use yesterday) is the 'Nil Coalescing Operator', which gives either the unwrapped value or an alternative value in the case that the optional is nil. The Nil Coalescing Operator is written as a double question mark and can be used as follows:

var newValue:Int = myOptional ?? 404

In this case if myOptional is nil, then newValue will be set to 404. The Nil Coalescing Operator can be written in terms of the ternary conditional operator as follows:

a ?? b         is equivalent to        a != nil ? a! : b 

So, that explains the !s and ?s, but I think they were used a bit more than that, right? Well spotted, yes! Swift puts this concept in pretty deep. Most of the examples in the previous JSON code were using the concept of Optional Chaining. This is the idea of being able to call and query properties, methods and subscripts on optionals that might be nil. Optional Chaining is achieved by placing a question mark after the optional value whose property, method or subscript is being accessed, similar to how the exclamation mark is used to force unwrap a value.
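As a minimal sketch of the idea (using a made-up Board class rather than yesterday's JSON code):

class Board
{
    var name: String?
}

var board: Board? = Board()
board?.name = "hondrou-board"

// the whole chain evaluates to nil if board or name is nil,
// so the if-let body only runs when every link has a value
if let name = board?.name
{
    println("board is called \(name)")
}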

I haven't got time just now to go through this, but wanted to make sure that I'd referenced it to explain some of the usage of ? in the examples yesterday.

Lastly, the other concept used extensively yesterday was the optional form of the downcast operator, as?. In this case what is happening is that we know the type that the object needs to be downcast to, but we're not sure whether the variable is nil or not and want to preserve that optionality.
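A quick sketch of as? (a throwaway example, in the same playground as before so Cocoa is already imported):

// something typed only as AnyObject, as you get back from JSON parsing
let anything: AnyObject = "hondrou"

// as? returns an optional of the target type (String? here),
// which is nil if the downcast fails rather than crashing
if let text = anything as? String
{
    println("downcast worked: \(text)")
}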





Tuesday 23 September 2014

Pinteresting playing around with Swift and JSON

So, after the rather disappointing dead-end using the Pinterest API to download my boards and pinned images, I decided not to waste the experience and, as the code was relatively simple, to use it as a springboard to get into using Swift for HTTP requests and see how easy parsing JSON would be.

I'd had a play around with making some basic GET requests previously and tried out NSURLConnection (Shephertz has a good intro), then noticed the general advice that Apple has switched from this in Mavericks to the newer NSURLSession API. There are some good online examples of this as well (see Stackoverflow). The Playgrounds feature is yet again proving its worth in prototyping: these new ideas can easily be explored and the various trial workings put into Playgrounds to reference, explore and come back to.

I did a bit of googling the other day around JSON processing in Swift and was surprised at the number of posts dealing with this and describing it in various shades of 'chewing glass'. There were a lot of 'simple-support' classes, a lot of really clever ideas overloading various operators and using the new Swift features for roll-your-own operators. Given that one of the ideas of Swift was readability to aid understanding, I was less than happy with these approaches, as they looked like adding a whole new set of custom semantics. So, in my book, maybe a few more lines of code, but if it is easier to understand then all the better. Writing code is easy; it's the reading and understanding bit that takes time.

So, I plumped for the boring, long-winded route just to see how easy it would be to walk through the JSON response when asking for the pins from a particular board. See my previous post for a description of this.

After navigating through the !s and ?s this turned out to not be too much trouble.

First of all let's get the HTTP request in the bag and take it onwards from there:

import Cocoa

//  allow the asynchronous task to continue, set timeout in console
import XCPlayground
XCPSetExecutionShouldContinueIndefinitely(continueIndefinitely:true)

//  this isn't my pinterest board, you'll need to change [hondrou] and [board]
let url = NSURL(string:"https://api.pinterest.com/v3/pidgets/boards/hondrou/board/pins/")
var session = NSURLSession.sharedSession()



var task:NSURLSessionDataTask = session.dataTaskWithURL(url)
{
    (data, response, error) in
    
    if let err = error
    {
        println("Error: \(err)")
    }
    else
    {
        // let's print out what we've got to check the look of the JSON response
        println(NSString(data:data,encoding: NSUTF8StringEncoding))


    }

    // do all the JSON response processing here
}



task.resume()

Nice! This is pretty short and succinct. If you want to see the console output easily, just open up the Assistant Editor.

Things that are worth noting here are that the dataTaskWithURL function has the following prototype:

func dataTaskWithURL(_ url: NSURL,
   completionHandler completionHandler: ((NSData!,
                              NSURLResponse!,
                              NSError!) -> Void)?) -> NSURLSessionDataTask


You'll notice that the second parameter of the function is a completionHandler. As I explored previously (and I'm very thankful to have got this now in my back-pocket) this can either be implemented as a more classic callback function or can be written as a Closure (as it is above). The way it is written above takes advantage of the Swift feature called Trailing Closures.

This is a bit of Swift syntactic simplification: where the last parameter can be expressed as a closure, it's possible to write it after the call without putting the expression inside the function's parameter brackets. So, in the above, the completion handler is the part written as a closure like this:

{
    (data, response, error) in

    // this is the closure code
}
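
For comparison, the same call can be written long-hand, without the trailing-closure shorthand, by passing the closure inside the brackets as the completionHandler parameter:

var task2 = session.dataTaskWithURL(url, completionHandler:
{
    (data, response, error) in

    // same closure body as before
})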

As the task created by dataTaskWithURL starts off suspended, please do not forget to include the last task.resume(), and remember to keep the playground running indefinitely.

Now for the JSON processing, which looks like this:

    
    var jsonError:NSError?
    
    var json:NSDictionary = NSJSONSerialization.JSONObjectWithData(data, options: NSJSONReadingOptions.MutableContainers, error: &jsonError) as NSDictionary
        

    if let err = jsonError
    {
        println("Error parsing json: \(err)")
    }
    



    let status:String? = json["status"] as? String

Wow! Where did all the question marks come from? Just roll with me and we'll tackle this in another post another time. This picks out the status part of the first bit of the JSON response. We can now take a look at parsing through the structure in luddite fashion for the moment.

What I want to do is walk through the structure, collect the description, link url, image url and id for the pin. First of all we can work through the structures like this:

    let data:NSDictionary! = json["data"] as? NSDictionary!
    
    let board:NSDictionary! = data["board"] as? NSDictionary!
    
    let url:String? = board["image_thumbnail_url"] as? String
    
    let pins:NSArray! = data["pins"] as? NSArray!

Now, I want to iterate through the pins. Before doing that, since an iteration loop is a bit more difficult to follow in the playground, it's useful to see what the structure of the first element of the array (i.e. the first pin) looks like, so we can explore it in the playground. This line does the trick:

    let p = (data["pins"] as? NSArray)![0]

It also shows a little yellow warning triangle to tell you that the type is inferred to be AnyObject. We're ok with that as this is just exploratory.

So, using this I dabbled around a bit to make sure I could get the syntax right and then put the loop in. There is a very nice post here that gave me the neat syntax for describing this cleanly:

    for (index,pin) in enumerate(pins)
    {
        // iteration
    }

I really like this. Coming from the C-sharp 'foreach' concept, it builds nicely on the use of Tuples in Swift (it's like the clunkier KeyValuePair iteration for Dictionary used in C-sharp).
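
For instance, iterating a Swift Dictionary hands you a (key, value) tuple directly; a throwaway example (nothing to do with the Pinterest code):

let counts = ["boards": 3, "pins": 42]

for (name, count) in counts
{
    println("\(name): \(count)")
}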

Which I then finished off like this:

    for (index,pin) in enumerate(pins)
    {
        let description:String? = pin["description"] as? String
        let link:String? = pin["link"] as? String
        
        let images:NSDictionary! = pin["images"] as? NSDictionary
        
        let id:String? = pin["id"] as? String
        
        let image237x:String = (images["237x"] as? NSDictionary)!["url"] as String
        
        println("\(index): \(description!), \(id!) \(image237x)")



    }

Nice. I can now list each of my pins, give a description, get the id and show the url to the image (although only the 237x sized image as I explained previously).

I'm quite pleased with this. It was pretty straightforward and easy to explore in a Playground and allowed me to tinker with some more Swift syntax and get a meaningful result. I expect I'll be building on this quite a bit.

Before I hit the sack.... last but not least..... I wanted to actually download those images.

This took a bit longer than I thought, as I was fighting through getting the correct path to write to; in the end I just let it put the files in the easiest place. I think this might be a Playground security thing that I need to work through another way, or just my lack of understanding of Mac file-system issues at the moment. Anyway, here it is:

This goes into the loop, after the other statements:

        let imgURL: NSURL = NSURL(string: image237x)
        // Download an NSData representation of the image at the URL
        let imgData: NSData = NSData(contentsOfURL: imgURL)
        
        
        let documentsPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as String
        
        var error:NSError?
        imgData.writeToFile("\(documentsPath)/\(id!).jpg", options: .AtomicWrite , error: &error)

        
I'm downloading each of the images into a file that is written as a jpg, using the pin id as the file name. Now, it just would have been good to get them at a useful resolution :-(


Update!
I had a note from Reto Kuepfer on my previous post doing this with C-sharp and he found the answer to getting hold of the original resolution images.

Using the code above, what needs to be done is to swap out the 237x image url with 'originals' instead like this:

        let image237x:String = (images["237x"] as? NSDictionary)!["url"] as String
        let imageO = image237x.stringByReplacingOccurrencesOfString("237x", withString: "originals", options: NSStringCompareOptions.LiteralSearch, range: nil)

Then use imageO in the creation of NSURL.

In revisiting this code in the intervening time, I have updated to the latest version of Xcode, which brings a couple of little changes as Apple harmonise the API. The following adjustments need to be made:



        let imgURL: NSURL! = NSURL(string: image237x)

and

        let imgData: NSData! = NSData(contentsOfURL: imgURL)

and in the earlier code, the following change:

        let p: AnyObject = (data["pins"] as? NSArray)![0]