Thursday, 18 September 2014
I mentioned in a previous post that it really helped reading about Swift Closures to understand some of the concepts and approaches that are programming patterns in the language. This post isn't about closures, but rather some notes on functions and tuples that I cleared up in my mind while reading about them.
The first thing to grapple with (for a C-sharp/Java programmer) is the Swift style of function definition. We're used to the C heritage of showing the return type first and then the function name and parameters. Initially the syntax just seemed odd, but after reading around this I gradually got an appreciation for the style; although it still doesn't feel natural, it has a good logic to it and makes sense. The definition is a more traditional maths way of looking at things: you have a set of inputs that some function operates on to generate some output:
(inputs) -> (output)
Which is a pretty neat and understandable function definition. Now, if you've been programming for a while you'll soon fall into a familiar dilemma: you've written your fantastic function, with a set of input parameters and the usual return value to indicate success/error conditions, or you've been extra clever, treated all the error conditions as exceptions and returned some important result value (note that this can be an abuse of exceptions, as they are truly intended for exceptional cases rather than error conditions which may be legitimate returns), and then a change or extension of needs means you need to return multiple values.
Usually what happens is you either end up changing the function to have some return parameter - in C-sharp this is an 'out' parameter of the function - or bundling the outputs into a class, creating an object, setting the attributes, returning it and then checking the values. Hmmm, either lots more code or a messy function parameter. Neither is ideal.
Tuples
Now, in Swift, there is a relatively simple conceptual extension: given the above syntax definition for a function, why not just allow multiple parameters in both the input and output parts? Neat!! This is where tuples come in. It took me a while to understand the linkage of the two at first, but in C-sharp speak the tuple you're probably most used to is the KeyValuePair you get when enumerating a Dictionary, which is sort of snuck in to resolve this dilemma. In Swift it seems to be a commonplace approach.
So, back to our Swift function example, we can define something like this:
func modify(freq:Float, amp:Float) -> (freq:Float, amp:Float)
In this case (I know it's not a good example) we're putting in a freq and amp and getting out a modified freq and amp. The function has a type definition like this:
(Float, Float) -> (Float, Float)
Using the inferred syntax for the result (where the type is not explicitly given), it can then be easily called as follows:
let result = modify(freq:2.0, amp:1.0)
Nice! Now the values in result can each be accessed by qualifying the variable: result.amp or result.freq. It's also possible to create inferred tuples (unnamed tuples in Swift-speak) as general variables as follows:
let freqandamp = (2.0, 1.2)
which does not qualify the two values explicitly, but just orders them. The qualification can easily be specified, as below, which is called a 'named tuple':
let anotherfreqandamp = (freq:1.0, amp:1.2)
or using an explicit syntax:
let yetanotherfreqandamp:(freq:Double, amp:Double) = (2.0, 1.2)
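A couple of extra lines (my own sketch, not from the original example, with names of my choosing) show how a returned or unnamed tuple can also be pulled apart, either by decomposition or by position:
let (f, a) = modify(freq:2.0, amp:1.0)  // decompose the returned tuple into two constants
let firstValue = freqandamp.0           // positional access on the unnamed tuple above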
Monday, 15 September 2014
More AVAudioPlayerNode with Swift and CompletionHandlers
Continuing on the subject of my previous posts looking at the new AVFoundation audio classes in OS X 10.10/iOS 8 with Swift, I finally found the error, and a relatively obvious one at that. It wasn't closures specifically, but reading about them extended my knowledge and helped me find the cause of the problem.
I had the type-alias for the AVAudioNodeCompletionHandler all wrong. Not sure where I got that definition from, but my newness to the terminology of the error report put me off the scent. Xcode was quite clearly saying what the problem was:
Taking out atTime and options brings the function down to the simpler case of:
func scheduleBuffer(_ buffer: AVAudioPCMBuffer!, completionHandler completionHandler: AVAudioNodeCompletionHandler!)
What the error message is saying (when I finally understood it) was that the completion handler tuple (i.e. the parameter types) was not correct - the correct one is () and I had used (AVAudioPCMBuffer!, AVAudioTime!). It helped reading about closures to understand this, although that wasn't the cause of the problem. It does help understanding the syntax and concepts of Swift in a good deal more detail though.
The type alias of AVAudioNodeCompletionHandler is far simpler, and for completeness is shown below:
typealias AVAudioNodeCompletionHandler = @objc_block () -> Void
Putting this into the code (again, in a Playground this is too slow), you get something like this:
func handler() -> Void
{
// do some audio work
}
player.scheduleBuffer(buffer,atTime:nil, options:nil, completionHandler: handler)
Or, now with my new found understanding of closures, like this:
player.scheduleBuffer(buffer,atTime:nil, options:nil,
completionHandler: { () -> Void in
// do some audio work
})
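As an aside (my own sketch rather than anything from the original post), because the completion handler is the last parameter it can also be written with Swift's trailing closure syntax:
player.scheduleBuffer(buffer, atTime:nil, options:nil) {
    // do some audio work once the buffer has been consumed by the player
}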
Trying this again with the completion handler trick works nicely this time, but still annoyingly beats, so there is some other effect here that isn't working. The full console app looks like this:
//
// main.swift
// Audio
//
// Created by hondrou on 11/09/2014.
// Copyright (c) 2014 hondrou. All rights reserved.
//
import Foundation
import AVFoundation
let twopi:Float = 2.0 * 3.14159
var freq:Float = 440.00
var sampleRate:Float = 44100.00
var engine = AVAudioEngine()
var player:AVAudioPlayerNode = AVAudioPlayerNode()
var mixer = engine.mainMixerNode
var length = 4000
var buffer = AVAudioPCMBuffer(PCMFormat: player.outputFormatForBus(0),frameCapacity:AVAudioFrameCount(length))
buffer.frameLength = AVAudioFrameCount(length)
engine.attachNode(player)
engine.connect(player,to:mixer,format:mixer.outputFormatForBus(0))
var error:NSErrorPointer = nil
engine.startAndReturnError(error)
var j:Int=0;
// completion handler: refill the buffer with the next chunk of the sine wave and schedule it again
func handler() -> Void
{
for (var i=0; i<length; i++)
{
var val:Float = 5.0 * sin(Float(j)*twopi*freq/sampleRate)
buffer.floatChannelData.memory[i] = val
j++
}
player.scheduleBuffer(buffer,atTime:nil,options:.InterruptsAtLoop,completionHandler:handler)
}
handler()
player.play()
while (true)
{
NSThread.sleepForTimeInterval(2)
freq += 10
}
Hmph.... more still to get sorted
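One guess at the remaining beating (just a sketch, not verified) is that the single buffer is being refilled while the player is still reading from it. The double-buffering idea mentioned in the previous post could be tried by alternating two buffers; the fragment below is mine and reuses the variables from the listing above:
var bufferA = AVAudioPCMBuffer(PCMFormat: player.outputFormatForBus(0), frameCapacity: AVAudioFrameCount(length))
var bufferB = AVAudioPCMBuffer(PCMFormat: player.outputFormatForBus(0), frameCapacity: AVAudioFrameCount(length))
bufferA.frameLength = AVAudioFrameCount(length)
bufferB.frameLength = AVAudioFrameCount(length)
var useA = true
func doubleBufferedHandler() -> Void
{
    // fill the buffer the player is not currently reading from
    let next = useA ? bufferA : bufferB
    useA = !useA
    for (var i=0; i<length; i++)
    {
        next.floatChannelData.memory[i] = 5.0 * sin(Float(j)*twopi*freq/sampleRate)
        j++
    }
    // re-schedule from the completion handler so playback keeps going
    player.scheduleBuffer(next, atTime:nil, options:nil, completionHandler:doubleBufferedHandler)
}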
Now, just in case you wondered (I did) where I cooked up the handler function parameters, it was all down to mixing up two function type aliases that I'd been looking at. The previous incorrect handler function for the completionHandler below:
func handler(buffer:AVAudioPCMBuffer!,time:AVAudioTime!) -> Void
{
}
This is completely the proper type of function if you are installing an audio node tap block:
typealias AVAudioNodeTapBlock = (AVAudioPCMBuffer!, AVAudioTime!) -> Void
Which I'd also been thinking about at the time (and we will most likely be coming to next in our investigations).
For completeness, this is used in the AVAudioNode function installTapOnBus below:
func installTapOnBus(_ bus: AVAudioNodeBus, bufferSize bufferSize: AVAudioFrameCount, format format: AVAudioFormat!, block tapBlock: AVAudioNodeTapBlock!)
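As a rough, untested sketch of what installing a tap might look like in the same 2014-era Swift (using the mixer node from the listings above):
mixer.installTapOnBus(0, bufferSize: 4096, format: mixer.outputFormatForBus(0)) { (buffer, time) in
    // inspect the samples flowing through the mixer here
}
// and when finished observing:
// mixer.removeTapOnBus(0)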
Thursday, 11 September 2014
Swift achieves GM (Gold Master) status and can ship iOS apps / XCode 6.1 beta
Finally good to go and ship those apps for iOS. We still have to wait until later in the autumn (fall), when Yosemite is released, for Mac OSX apps.
Swift achieves GM.
I'd better download XCode 6.1 and check out the changes.....
Looks like there are no changes from beta7 to 6.1 for the standard library, but the XCode 6.1 beta 1 does look like it has some changes for Yosemite.
AVFoundation Audio with Swift using AVAudioPlayerNode
Having been blocked using AudioUnit callbacks in Swift in my previous exploration, I decided to take a different direction and had another look at the WWDC video, presentation and transcript given for the new AVFoundation changes for audio. Unfortunately it's described in terms of Objective-C, but one of the interesting points is the description of using AVAudioPlayerNode and the scheduleBuffer function:
func scheduleBuffer(_ buffer: AVAudioPCMBuffer!, atTime when: AVAudioTime!, options options: AVAudioPlayerNodeBufferOptions, completionHandler completionHandler: AVAudioNodeCompletionHandler!)
My first thoughts were, great, look there's a callback to indicate that the buffer has played out which can then be called and re-filled, it's in Swift, which means we can workaround the previous callback problems. So, I knocked up another playground to test this.
The scheduleBuffer call allows for options to be set to Loops, Interrupts, InterruptsAtLoop or nil. Check out the WWDC material which explains this with some diagrams and an ADSR sort of example.
Taking baby steps, I thought I'd basically fill-up a buffer with a simple Sine wave and then play that out as a continuous loop to get started. The buffer needs to be an AVAudioPCMBuffer. If you take a look at Bob Burns' post on Gene de Lisa's blog, he's trying something similar. My code looks like this:
import Cocoa
import AVFoundation
let twopi:Float = 2.0 * 3.14159
var freq:Float = 440.00
var sampleRate:Float = 44100.00
var engine = AVAudioEngine()
var player:AVAudioPlayerNode = AVAudioPlayerNode()
var mixer = engine.mainMixerNode
var buffer = AVAudioPCMBuffer(PCMFormat: player.outputFormatForBus(0),frameCapacity:100)
var length = 100
buffer.frameLength = AVAudioFrameCount(length)
// fill up the buffer with some samples
for (var i=0; i<length; i++)
{
var val:Float = 10.0 * sin(Float(i)*twopi*freq/sampleRate)
buffer.floatChannelData.memory[i] = val
}
engine.attachNode(player)
engine.connect(player,to:mixer,format:mixer.outputFormatForBus(0))
var error:NSErrorPointer = nil
engine.startAndReturnError(error)
player.scheduleBuffer(buffer,atTime:nil,options:.Loops,completionHandler:nil)
player.play()
// keep playground running
import XCPlayground
XCPSetExecutionShouldContinueIndefinitely(continueIndefinitely:true)
Fantastic, I got some audio out, seemed like a tone, but was getting some glitchy audio effects I expect due to the buffer not smoothly containing a single cycle. After trying this and googling a bit I found Thomas Royal had also tried something similar. At least I'm getting some sound out now.
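A quick sketch of the 'make the sine fit a whole number of cycles' idea (my own variation on the listing above, reusing its player, freq, twopi and sampleRate): 4410 samples is 0.1s at 44.1kHz, which holds exactly 44 cycles of a 440Hz tone, so the loop point joins up smoothly:
let loopLength = 4410
var loopBuffer = AVAudioPCMBuffer(PCMFormat: player.outputFormatForBus(0), frameCapacity: AVAudioFrameCount(loopLength))
loopBuffer.frameLength = AVAudioFrameCount(loopLength)
for (var i=0; i<loopLength; i++)
{
    // 44 whole cycles fit the buffer, so sample loopLength wraps back to the phase of sample 0
    loopBuffer.floatChannelData.memory[i] = sin(Float(i)*twopi*freq/sampleRate)
}
player.scheduleBuffer(loopBuffer, atTime:nil, options:.Loops, completionHandler:nil)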
So, taking this further I thought, rather than making the Sine fit a cycle I could simply set the completionHandler callback and get an indication of when to play the next buffer chunk and I'd be away generating what I liked. [Just as a note, my assumption was that options could be set to nil or InterruptsAtLoop and effectively we'd be creating audio double-buffering so that samples could be created during the buffer playout and there would be no wait from getting the completion handler to setting the next buffer].
The empty completion handler looks like this:
func handler(buffer:AVAudioPCMBuffer!,time:AVAudioTime!) -> Void
{
}
I then tried setting as follows:
player.scheduleBuffer(buffer,atTime:nil,options:.InterruptsAtLoop,completionHandler:handler)
And got this 'helpful' error:
[Xcode screenshot of the type-mismatch error - not preserved here]
Hmmm. I tried taking this out of the Playground, I tried a number of different ideas. None worked. Damn! I googled a lot on this and completion handlers generally and didn't get any results. Shame.
That avenue blocked (hopefully for now), undeterred I thought I'd give this another go. Changing approach again, I thought, well, if I'm not getting a callback, maybe I just create a thread and stuff buffers into the player, I could get cleverer later on and use the atTime parameters (assuming that would work) and put the buffers in given some consideration for timing. Indeed doing this might be a nice way to ensure that the timing alignment of various players were synched. But I'm getting ahead of myself now.
The revised fragment looks like this:
let queue = NSOperationQueue()
queue.addOperationWithBlock({
var j:Int=0;
while(true)
{
for (var i=0; i<length; i++)
{
var val:Float = 5.0 * sin(Float(j)*twopi*freq/sampleRate)
buffer.floatChannelData.memory[i] = val
j++
}
player.scheduleBuffer(buffer,atTime:nil,options:.InterruptsAtLoop,completionHandler:nil)
let thread = NSThread.currentThread()
NSThread.sleepForTimeInterval(0.1)
}
})
This proved to be problematic in the playground as it tried to show filling the loop each cycle, which took longer than the playback, so I first tried to move this part of the code to a Framework to import (unsuccessfully, something I'll come back to later as it's going to be key to being able to use Playgrounds effectively) and then just into a normal Console application:
//
// main.swift
// Audio
//
// Created by hondrou on 11/09/2014.
// Copyright (c) 2014 hondrou. All rights reserved.
//
import Foundation
import AVFoundation
let twopi:Float = 2.0 * 3.14159
var freq:Float = 440.00
var sampleRate:Float = 44100.00
var engine = AVAudioEngine()
var player:AVAudioPlayerNode = AVAudioPlayerNode()
var mixer = engine.mainMixerNode
var length = 4000
var buffer = AVAudioPCMBuffer(PCMFormat: player.outputFormatForBus(0),frameCapacity:AVAudioFrameCount(length))
buffer.frameLength = AVAudioFrameCount(length)
engine.attachNode(player)
engine.connect(player,to:mixer,format:mixer.outputFormatForBus(0))
var error:NSErrorPointer = nil
engine.startAndReturnError(error)
let queue = NSOperationQueue()
queue.addOperationWithBlock({
var j:Int=0;
while(true)
{
for (var i=0; i<length; i++)
{
var val:Float = 5.0 * sin(Float(j)*twopi*freq/sampleRate)
buffer.floatChannelData.memory[i] = val
j++
}
player.scheduleBuffer(buffer,atTime:nil,options:.InterruptsAtLoop,completionHandler:nil)
let thread = NSThread.currentThread()
NSThread.sleepForTimeInterval(0.1)
}
})
player.play()
while (true)
{
NSThread.sleepForTimeInterval(1)
//freq += 10
}
Rather than the keep-alive for the playground at the end I'm keeping the main thread alive with a simple loop (which I'll use later to adjust the frequency to check that this is not just playing a single tone).
This played back and, OK, I got audio, but those funny glitches were still there. So I played around with the sleep loop interval and the size of the buffer with varying results, but none of them nice, then decided to go to bed! Stumped and not too happy about it.
Hmmmm, not all lost yet as I have some other ideas, but I'm away for the next few days on a biz trip so will have to try this later on. If anyone has any good comments/suggestions before then I'd be most grateful. I'm hoping that Swift should be man-enough for the job. C# certainly can cope with this kind of relatively simple synthesis and it's running in the CLR.
Update
doh! that'll teach me for late night coding. I finally found the problem with the completion handler and have just posted another blog entry
Wednesday, 10 September 2014
Lunchtime thoughts - LiveCoding, Ephemeral Code and Engineering
I've recently been swimming around in the new pool of Swift like many others and am particularly enjoying the Playgrounds concept which is allowing me to explore as I learn and develop and keep a set of code scratchpads to one side. I was mulling this over in the car this morning and recollected the article by Alex McLean on Transient and Ephemeral code that I read with interest over the summer which resonated.
In the modern way that new thinking spreads in the internet age, I'd come to Alex's post by way of getting a FB update from the band 65 days of static, who I'd first heard of by way of the Comfort Conspiracy podcast which referenced the LiveCoding tool Gibber, from which I'd explored live-coding languages and found Tidal and Alex's fantastic blog on his various explorations making music with text.
Seeing coding as a performance art was a bit of a mind-blowing revelation to me. Having an engineering background and being used to pair-programming and code demonstrations, this was quite a different way of seeing code being used as an artistic tool, just for the joy of it, its purpose being the 'shape' or effect it generated at that time rather than to craft something that had a longer-term goal and necessarily repeated use.
Which had me thinking this morning about the differences between engineering and arts approaches. I'll put my cards on the table as an artist now and say that I'm a printmaker. However, monoprinting never really resonated with me and I always gravitated towards wood/lino-cut and etching, which have a strong process element and sit on the engineering side of fine art. This, again, is about creating something with longevity, practising/honing skills and turning the end result into something tangible and permanent.
Where my mind drifted was towards performance art - video, animation, dance and obviously music, where the training was effectively directed to having a set of patterns and expertise that could be called upon at an instant in time to achieve a controlled or experimental result. Isn't this where LiveCoding really challenges and starts to come into its own?
Coding, due to the syntax and the amount of effort required over time to achieve a certain result, has always seemed (to me) too slow to make a real performance without it being pre-canned and choreographed like a keynote speech. Then it occurred to me: isn't that what all performance is, to a certain extent? We learn scales and chord patterns repeatedly to get good at being able to reproduce them, not so much to conjure them creatively live. Even jazz is building a skilled repertoire that allows the creativity to work within the trained/learnt/explored boundaries. And Tidal is inherently designed to explore patterns.
So, where does this leave us with code? My thinking returned to the experiments in the playground. I'd seen from Mike Hodnick (kindohm)'s explorations with Tidal in his 365 Tidal Patterns project how text can be learnt and used as a musical tool. Tidal and Gibber, which led me here, and many other custom languages are being driven by the relative ease with which it is now possible to make our own 'text-musical' instruments, aka compilers or interpreters, and the relative ease with which many concepts are being exposed as high-level APIs or frameworks which can be exploited (similar to the possibilities that mashups initially stimulated through the easy combination of many open, widely available APIs). I'll be interested to see if the open-source release of Roslyn by Microsoft creates any new musical languages, since like a traditional musician this is the syntax (or fingering) that I am most familiar with.
I've found that this has changed my behaviour and thinking more radically than I initially thought. Firstly, I'm sold now on text being an input device for a musical instrument, and on live-coding to generate music becoming an increasing phenomenon as a performance art form. Second, I've noticed that I'm much more likely to kick off some code to solve a problem and use it as a throw-away scratchpad. I still don't delete it, but it's more like one of my many scribbled moleskines and notebooks that contain half-captured and explored ideas for code, creations, art and ideas. I've got a littered directory building up with results that are going no further than having achieved the result at the time. It's similar to all those spreadsheets that I call up to do a set of complex calculations rather than using a calculator, and then close without saving. I'm finding myself much more used to just writing some code.
Which gets me on to the final point, which called to mind an article in the Guardian earlier this year on the need for a new revolution in teaching maths in schools and the computer-based approach to teaching maths now used in Estonia. Seeing how my own children approach education and problems, maybe we need to increasingly consider that some of these text input tools we're using have a far wider set of applications and we should be using them more ephemerally for achieving our daily tasks - coding television programs anyone, rather than using an expensive, custom video editor??? I'm certain it's being done already.... which is why it's all the more laudable to see the work of the BBC once again supporting education for coding in schools (again, like Estonia)....
Lunchtime over
Tuesday, 9 September 2014
LiveCoding with Swift Audio continued...
After getting stuck trying out Swift Playgrounds using the new AVFoundation classes because I was on Mavericks the temptation was too much at the weekend and I created a new partition, downloaded the latest Yosemite build and got started. Jamie Bullock's Live coding audio with Swift playgrounds worked like a dream and I was up and running with some playground code making sounds. Great.
Googling a bit further I came across Gene de Lisa's excellent post trailblazing Swift and Core Audio and his note on using Swift with AVFoundation to play audio and generate midi notes from a sound bank. Good, this all worked nicely.
So, as this was going so smoothly I decided to jump straight in at the deep end and see how far the new AVFoundation classes and Swift bindings could be pushed... undeterred by Thomas Royal's notes on the current limitations of Audio Unit implementations with Swift (as his blocker seemed to have been resolved in XCode6 Build7), I thought I'd see how far I could get following Matt Gallagher's wonderfully simple introduction to AudioUnits building an iOS tone generator. I thought I'd see how far this could go in a playground, and with the basis of generating a tone the whole of audio synthesis would be open to me.
It's always fun to have something real and a little challenging as a goal to learning something new like Swift so I stumbled through reading the unfamiliar syntax and how to make the various cross-language bindings. All reasonably straightforward and answers were pretty forthcoming with some googling. Then, I got stuck. It seems the final step of being able to give a Swift function as a callback from C/C++ code is currently blocked by Apple. I followed this up on a number of different APIs and it looks like everyone is running into the same problem and the Apple Dev forums are saying this is currently not available and the alternatives are to make a C++/Objective-C/Closure trampoline [maybe I'll tackle that later] or another bit of kludgy glue. Hmmm, not nice! It's a little bit frustrating as Apple have done a good job of getting the typealias of the AURenderCallback nicely imported into AVFoundation, so let's just hope this is a matter of time and it will be resolved soon.
I'll detail where I got to and the code so far so I can share this in case it gets unblocked soon (it's in a playground, so not written for beauty, but to kick the tyres of what is possible):
import Cocoa
import AVFoundation
func RenderTone(inRefCon:UnsafeMutablePointer<Void>,
ioActionFlags:UnsafeMutablePointer<AudioUnitRenderActionFlags>,
inTimeStamp:UnsafePointer<AudioTimeStamp>,
inBusNumber:UInt32,
inNumberFrames:UInt32,
ioData:UnsafeMutablePointer<AudioBufferList>) -> (OSStatus)
{
// no-code as could not get this far!
return 0;
}
// describe the default output audio unit that we want to find
var acd:AudioComponentDescription = AudioComponentDescription(
componentType: OSType(kAudioUnitType_Output),
componentSubType: OSType(kAudioUnitSubType_DefaultOutput),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: 0,
componentFlagsMask: 0)
var ao:AudioComponent = AudioComponentFindNext(nil, &acd)
var err:OSStatus
var aci:AudioComponentInstance = nil
err = AudioComponentInstanceNew(ao, &aci)
var aci2:UnsafeMutablePointer<AudioComponentInstance> = UnsafeMutablePointer<AudioComponentInstance>(aci)
// this is the line that will not compile: the Swift function cannot be passed where the C callback pointer is expected
var callback:AURenderCallbackStruct = AURenderCallbackStruct(RenderTone,nil)
It's the last line where things get stuck. What is happening is that the Swift function (RenderTone) cannot be matched to the typealias of AURenderCallback which is defined as a CFunctionPointer:
typealias AURenderCallback = CFunctionPointer<(
(UnsafeMutablePointer<Void>,
UnsafeMutablePointer<AudioUnitRenderActionFlags>,
UnsafePointer<AudioTimeStamp>,
UInt32,
UInt32,
UnsafeMutablePointer<AudioBufferList>)->OSStatus)
Having got all this way, I then found this post on the Apple Developer forums..... which kind of summarises that I'm not the only one to have got here only to bash my head against a wall :-(
However, as another post indicates, this might not be forever as it does seem that Apple may not be completely against putting this in eventually and the devs could just have a long list to get Swift, Yosemite and iOS8 out of the door... I know how that feels!
(Note, the reason for the slight differences in the type alias definition in the message and what I have stated above is that a number of these bindings changed in Xcode6 Build4 - ConstUnsafePointer to UnsafePointer and UnsafePointer to UnsafeMutablePointer and UnsafePointer<()> to UnsafeMutablePointer<Void>)
Saturday, 6 September 2014
Ripple - feeling music and cymatics
I saw this a while back in some tech/design magazine and was reminded again how cool it is when I saw it on Pinterest today. Just love the simplicity in the design and concept. The speaker is called the Ripple and was designed by Jackson McConnell for the hearing impaired.
These remind me of a sort of 3D interpretation of Chladni patterns that you can get by putting sand on vibrating surfaces and seeing how it piles up. These can also make some pretty cool looking patterns (there are some good examples here) like the ones made by Hans Jenny below:
These are all variations of applications of our friend the Bessel function, which can be used to represent deformations of a 2D surface.
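For the idealised case of a vibrating circular membrane (a drumhead rather than a stiff Chladni plate, so take this as a sketch of the maths, notation mine, rather than an exact model of the plates above), the mode shapes are Bessel functions of the radius:
z(r,\theta,t) = A \, J_n(k_{nm} r) \, \cos(n\theta) \, \cos(\omega_{nm} t)
The sand settles on the nodal lines where z stays zero, i.e. on circles at the zeros of J_n and along diameters at fixed angles.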
Taking this further in the field of cymatics there's a whole lot of interesting 2D and 3D patterns that can be made with different materials and vibrations. It gets even more funky for non-Newtonian materials like cornstarch (check this out on Make, Instructables or YouTube).