Tuesday, 30 August 2016

RegEx MiniGrammar

I was working on some code the other day and quickly needed to drop in a RegEx to match a particular pattern, and once again found myself Googling the syntax and checking some example web-wisdom to get what I needed.

To save myself next time, I'm blogging some notes to self cribbed from the easiest places so I can refer to them here rather than search around. This might start to become a behaviour pattern in itself!

In many cases my usual regexes are just simple globbing helpers [glob patterns are sets of filenames expressed with wildcard characters, such as *.txt] to filter out particular files to process. More recently I've increasingly been using them to pattern match portions of text in URLs and other data sets.

Simple MiniGrammar reminder:

Alternatives (or)
Using the pipe character
e.g. gray|grey matches "grey" and "gray".

Grouping
Parts of the pattern match can be isolated with parentheses
e.g. gr(e|a)y matches "gray" and "grey".

Occurrences
A number of characters can be used to express the number of occurrences of the pattern in the match:

Optional (zero or one) is indicated with ?
e.g. colou?r matches "color" or "colour"

None to many (zero or n) is indicated with *
e.g. tx*t matches "tt", "txt" and "txxxxxxt".

One or more (one to n) is indicated with +
e.g. go+al matches "goal", "goooal" and "goooooooal".

Exactly a certain number of times is given by {n}
e.g. gr{3} matches "grrr"

Bounded number of times is given by {min,max}
e.g. gr{1,3}y matches "gry", "grry" and "grrry".

Metacharacters

Match a single character from a set using square brackets:
e.g. gr[ae]y matches "grey" or "gray"

Character ranges are expressed using the dash/minus sign:
e.g. [a-z] matches a single character from a to z; [a-zA-Z] matches both lower case and capitals.

Negated ranges are expressed by putting a ^ inside the brackets. This indicates any character except those listed. A common case is matching everything apart from a particular character, such as a space:
e.g. [^ ] matches any character except a space.

Start of string is matched by ^
End of string is matched by $



Examples
And some of the examples I find helpful with a bit of commentary:

match whitespace at the start or end of a string: ^[ \t]+|[ \t]+$
Explanation: ^ anchors the match to the start of the string; the square brackets match either a space or a tab (\t) character, and this can be repeated one or more times (the + character). Alternatively (the | branch) it matches the whitespace found at the end of the string, as anchored by $.
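As a quick sanity check, the whole mini-grammar above can be exercised with Python's re module (which uses the same core syntax). This is just a scratch script of my own, not anything from the references:

```python
import re

# Exercising the mini-grammar above with Python's re module.
assert re.fullmatch(r"gray|grey", "grey")          # alternation
assert re.fullmatch(r"gr(e|a)y", "gray")           # grouping
assert re.fullmatch(r"colou?r", "color")           # optional
assert re.fullmatch(r"tx*t", "tt")                 # zero or more
assert re.fullmatch(r"go+al", "goooal")            # one or more
assert not re.fullmatch(r"go+al", "gal")           # + needs at least one 'o'
assert re.fullmatch(r"gr{3}", "grrr")              # exact count
assert re.fullmatch(r"gr[ae]y", "grey")            # character set
assert re.fullmatch(r"[a-zA-Z]", "Q")              # character range

# The trimming example: remove spaces/tabs from either end of a string.
trimmed = re.sub(r"^[ \t]+|[ \t]+$", "", "  hello world\t")
print(trimmed)
```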




Thursday, 25 August 2016

Song Exploder


I've been listening to this podcast for over a year now after seeing it listed as one of the best holiday podcasts in the Guardian before our long driving holiday in 2015. It was also showcased in more detail in the Observer.



I took a bunch of unlistened episodes again on holiday this year and have listened through some of my old favourites again. What is so interesting is how widely varied the creative processes are between musicians and bands and how ideas come from the smallest of inspirations and are then honed and crafted. It's particularly interesting to hear how the musical and lyrical elements are developed together.

Here are some of my favourite episodes:

78 Grimes - Kill v. Maim
Interesting insight into the creative process, defining songs as either a kick-drum song or a guitar song. This track gets as much from the drums as possible with 40 layers of drum tracks!

76 Chvrches - Clearest Blue
I just love this band and have been listening to them for a while now. Their Glasto 2016 set was one of my highlights this year. It is interesting to hear their process and the idea of starting the creative process with a set of rules for the band, like a two chord rule, and then breaking them.

66 KT Tunstall - Suddenly I see
KT Tunstall is amazing, from her first blow-away performance on Jools Holland using a looper pedal to the way that she plays acoustic guitar. The insight into the development of this track is how her producer identified that the strong rhythm she developed from busking was getting diluted by the drummer layering stock beats over it. This track is inspired by looking at artists you love.

43 Sylvan Esso - Coffee
I love the use of sound samples including the Little Tikes xylophone.

40 Ramin Djawadi - Game of Thrones theme
Everyone must have tried playing the catchy riff at some point! It's interesting hearing the development of the track and some of the derivatives including an 8bit version!

24 Tycho - Awake
Love the album, so it was interesting to hear the development process.

17 Anamanaguchi - Prom Night
I particularly liked hearing about the tech use of 8bit Nintendo sounds and the creative process around the use of synthetic vocals.

9 Polica - Smug
And another exploration of how particular kit can drive the development of a song.

7 Jeff Beal - House of Cards
How the darkness and gloom was worked into this track to evoke a particular atmosphere.

6 Daedelus - Experience
I can remember listening to this podcast at the end of a long drive last year. It only uses acoustic sounds, with the interesting accordion riff that eventually became part of a hip-hop classic!





Wednesday, 24 August 2016

FFMPEG for adding black, colourbars, tone and slates

There are quite a few scattered comments online about doing parts of this with FFMPEG, but nothing cohesive, and it seemed a bit hit and miss getting some of the bits to work at first. I've collected together the steps I worked through along with some of the useful references that helped along the way. It's not complete by any means, but it should be a good starter for building upon.

The first reference I turned up was to add black frames at the start and end of the video, giving a command line as follows:

ffmpeg -i XX.mp4 -vf "
    color=c=black:s=720x576:d=10 [pre] ;
    color=c=black:s=720x576:d=30 [post] ;
    [pre] [in] [post] concat=n=3" -an -vcodec mpeg2video -pix_fmt yuv422p -s 720x576 -aspect 16:9 -r 25 -minrate 30000k -maxrate 30000k -b 30000k output.mpg

This gives the starting point, using the -vf command to set up a concat video effect. The -vf option only allows a single input file into the filter graph but allows the definition of some processing options inside the function. The post goes on to look at some of the challenges that the author was facing, mostly related to differences between the input file resolution and aspect ratio. I played around with this in a much simpler manner using an MXF sample input file as follows:

ffmpeg -i XX.mxf -vf "
    color=c=black:s=1920x1080:d=10 [pre] ;
    color=c=black:s=1920x1080:d=30 [post] ;
    [pre] [in] [post] concat=n=3" -y output.mxf

In my case here I'm using the same output codec as the input, hence the simpler command line without additional parameters. The -y option means overwrite any existing files, which I've just used for ease. The key to understanding this is the concat filter, which took me quite a bit of work.

As a note, I’ve laid this out on multiple lines for readability, but it needs to be on a single command line to work.

Concat Video Filter
The concat video filter is the key to this operation which is stitching together the three components. There are a couple of concepts to explore, so I’ll take them bit by bit, some of the referenced links will use them earlier in examples so it might be worthwhile skipping the references first, reading through to the end and then dipping into the references as needed to explore more details once you have the full context.

Methods of concatenating files explains a range of ffmpeg options, including concat, which is described with this example:

ffmpeg -i opening.mkv -i episode.mkv -i ending.mkv \
  -filter_complex '[0:0] [0:1] [1:0] [1:1] [2:0] [2:1] concat=n=3:v=1:a=1 [v] [a]' \
  -map '[v]' -map '[a]' output.mkv

Notice in this example the \ is used to spread the command line over multiple lines as might be used in a linux script (this doesn’t work on windows). In this case there are multiple input files and the filter_complex command is used instead. As most of the examples use –filter_complex instead of –vf I’ll use that from now on. I had a number of problems getting this to work initially which I’ll describe as I go through.

In this case the concat command has a few more options:

concat=n=3:v=1:a=1 :

concat means use the media concatenate (joining) filter.
n gives the total number of input segments to join.
v gives the number of video streams in each segment (0 = no video, 1 = one video stream).
a gives the number of audio streams in each segment (0 = no audio, 1 = one audio stream).

Some clues to understanding how this works are given with this nice little diagram indicating how the inputs, streams and outputs are mapped together:

                   Video     Audio
                   Stream    Stream
                   
input_file_1 ----> [0:1]     [0:0]
                     |         |
input_file_2 ----> [1:1]     [1:0]
                     |         |
                   "concat" filter
                     |         |
                    [v]       [a]
                     |         |
                   "map"     "map"
                     |         |
Output_file <-------------------

Along with the following description:

ffmpeg -i input_1 -i input_2
 -filter_complex "[0:1] [0:0] [1:1] [1:0] concat=n=2:v=1:a=1 [v] [a]"
-map [v] -map [a] output_file
The above command uses:
  • Two input files are specified: "-i input_1" and "-i input_2".
  • The "concat" filter is used in the "-filter_complex" option to concatenate 2 segments of input streams.
  • "[0:1] [0:0] [1:1] [1:0]" provides a list of input streams to the "concat" filter. "[0:1]" refers to the first (index 0:) input file and the second (index :1) stream, and so on.
  • "concat=n=2:v=1:a=1" specifies the "concat" filter and its arguments: "n=2" specifies 2 segments of input streams; "v=1" specifies 1 video stream in each segment; "a=1" specifies 1 audio stream in each segment.
  • "[v] [a]" defines link labels for the 2 streams coming out of the "concat" filter.
  • "-map [v]" forces the stream labelled [v] to go to the output file.
  • "-map [a]" forces the stream labelled [a] to go to the output file.
  • "output_file" specifies the output file.
Filter_Complex input mapping
Before we get onto the output mapping, let's look at what this input syntax means. I can't quite remember where I found the information, but basically the definition of the concat command's n, v and a options gives the 'dimensions' of the input and output to the filter: there will be v+a outputs and n*(v+a) inputs.

The inputs are referenced as follows: [0:1] means input 0, track 1. The alternative form [0:v:0] means input 0, video track 0.

There need to be n*(v+a) of these, arranged in groups of (v+a) per input, in front of the concat command. For example:

Concat two input video sequences
"[0:0] [1:0] concat=n=2:v=1:a=0"

Concat two input audio sequences (assuming audio is on the second track)
"[0:1] [1:1] concat=n=2:v=0:a=1"

Concat two input AV sequences (assuming audio is on the second and third tracks)
"[0:0] [0:1] [0:2] [1:0] [1:1] [1:2] concat=n=2:v=1:a=2"

Getting this wrong kept producing this cryptic message:

"[AVFilterGraph @ 036e2fc0] No such filter: '
'
Error initializing complex filters.
Invalid argument"

Which seems to be the root of some other people's problems as well.
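To make the counting concrete, here's a tiny hypothetical Python helper (my own scratch code, nothing to do with ffmpeg itself) that generates the n*(v+a) input labels, assuming each input file has its video track(s) first, followed by its audio track(s):

```python
def concat_inputs(n, v, a):
    """Build the input stream labels the concat filter expects:
    n inputs, each contributing v video then a audio streams."""
    labels = []
    for i in range(n):            # one group of (v+a) labels per input
        for t in range(v + a):
            labels.append("[%d:%d]" % (i, t))
    return " ".join(labels)

# Two AV inputs with one video and one audio stream each:
print(concat_inputs(2, 1, 1) + " concat=n=2:v=1:a=1 [v] [a]")
# [0:0] [0:1] [1:0] [1:1] concat=n=2:v=1:a=1 [v] [a]
```

Getting that count wrong is exactly what kept triggering the cryptic 'No such filter' error above for me.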

Output mapping
Taking the rest of the filter, it is possible to put mappings after the concat command to identify the outputs:

concat=n=2:v=1:a=1 [v] [a]"
-map [v] -map [a]

These can then be mapped using the map command to the various output tracks which are created in order, so if you had four audio tracks it would look something like this:

concat=n=2:v=1:a=4 [v] [a1] [a2] [a3] [a4]"
-map [v] -map [a1] -map [a2] -map [a3] -map [a4]

This can be omitted and seems to work fine with a default mapping.

Notice in this case that the output tracks have all been named to something convenient to understand, likewise these could be written as follows:

concat=n=2:v=1:a=4 [vid] [engl] [engr] [frl] [frr]"
-map [vid] -map [engl] -map [engr] -map [frl] -map [frr]

This is also possible on the input as follows:

"[0:0] [vid1]; [1:0] [vid2]; [vid1] [vid2] concat=n=2:v=1:a=0"

Which is pretty neat and allows a rather clearer description.

Generating Black
Now we’ve got the groundwork in place we can create some black video. I found two ways of doing this, first in the concat filter:

ffmpeg -i XX.mxf -filter_complex "[0:0] [video]; color=c=black:s=1920x1080:d=30 [black]; [black] [video] concat=n=2:v=1:a=0"

This takes the video track from the single input file, creates 30 seconds of black video, and feeds both into the concat filter to produce a video stream.

Alternatively a number of samples show the input stream being created like this:

ffmpeg -i XX.mxf -f lavfi -i "color=c=black:s=1920x1080:d=10" -filter_complex "[0:0] [video]; [1:0] [black]; [black] [video] concat=n=2:v=1:a=0"

This all works fine, so let's now add some audio tracks. We'll need to generate a matching audio track for the black video: when I was first playing with this I found that the output video files were getting truncated, because the output duration only matched the duration of the input video. The way I'll generate the silence is with a sine wave generator with the frequency set to zero, which saves me explaining this again later.

ffmpeg -i XX.mxf -filter_complex "color=c=black:s=1920x1080:d=10 [black]; sine=frequency=0:sample_rate=48000:d=10 [silence]; [black] [silence] [0:0] [0:1] concat=n=2:v=1:a=1"

And similarly if we wanted to top and tail with black it works like this:

ffmpeg -i XX.mxf -filter_complex "color=c=black:s=1920x1080:d=10 [black]; sine=frequency=0:sample_rate=48000:d=10 [silence]; [black] [silence] [0:0] [0:1] [black] [silence] concat=n=3:v=1:a=1"

or it doesn’t! It seems that you can only use the streams once in the mapping… so it’s easy enough to modify to this:

ffmpeg -i hottubmxf.mxf -filter_complex "color=c=black:s=1920x1080:d=10 [preblack]; sine=frequency=0:sample_rate=48000:d=10 [presilence]; color=c=black:s=1920x1080:d=10 [postblack]; sine=frequency=0:sample_rate=48000:d=10 [postsilence]; [preblack] [presilence] [0:0] [0:1] [postblack] [postsilence] concat=n=3:v=1:a=1" -y output.mxf

Which is a bit more of a faff, but works.

ColourBars and Slates
Adding colour bars is simply a matter of using another generator, replacing the black at the front with bars and this time generating a tone as well:

ffmpeg -i hottubmxf.mxf -filter_complex "
testsrc=d=10:s=1920x1080 [prebars];
sine=frequency=1000:sample_rate=48000:d=10 [pretone];
color=c=black:s=1920x1080:d=10 [postblack];
sine=frequency=0:sample_rate=48000:d=10 [postsilence];
[prebars] [pretone] [0:0] [0:1] [postblack] [postsilence]
concat=n=3:v=1:a=1" -y output.mxf

Let’s add the black in back at the start as well:

ffmpeg -i hottubmxf.mxf -filter_complex "
testsrc=d=10:s=1920x1080 [prebars];
sine=frequency=1000:sample_rate=48000:d=10 [pretone];
color=c=black:s=1920x1080:d=10 [preblack];
sine=frequency=0:sample_rate=48000:d=10 [presilence];
color=c=black:s=1920x1080:d=10 [postblack];
sine=frequency=0:sample_rate=48000:d=10 [postsilence];
[prebars] [pretone] [preblack] [presilence] [0:0] [0:1] [postblack] [postsilence]
concat=n=4:v=1:a=1" -y output.mxf

Now let’s add the title to the black as a slate, which can be done with the following:

drawtext=fontfile=OpenSans-Regular.ttf:text='Title of this Video':fontcolor=white:fontsize=24:x=(w-tw)/2:y=(h/PHI)+th

Which I found along with some additional explanation for adding text boxes.

This can be achieved in the ffmpeg filtergraph syntax by adding the filter into the stream. In each of the inputs these filter options can be added as comma-separated items, so for [preblack], let's now call that [slate], which would look like this:

color=c=black:s=1920x1080:d=10,drawtext=fontfile='C\:\\Windows\\Fonts\\arial.ttf':text='Text to write':fontsize=30:fontcolor=white:x=(w-text_w)/2:y=(h-text_h-line_h)/2 [slate];

Note the syntax of how to refer to the Windows path for the font.

This puts the text in the middle of the screen. Multiple lines are then easy to add.
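A multi-line slate filter string quickly gets tedious to write by hand, so it could be generated with a small script instead. This Python sketch of mine only builds the string; the font path, the 30-point size and the 40-pixel line offset are illustrative values, and the centring expressions mirror the drawtext example above:

```python
FONT = r"C\:\\Windows\\Fonts\\arial.ttf"   # ffmpeg-escaped Windows font path

def slate_filter(lines, duration=10, size="1920x1080"):
    """Black background plus one drawtext entry per line of slate text."""
    parts = ["color=c=black:s=%s:d=%d" % (size, duration)]
    for i, text in enumerate(lines):
        parts.append(
            "drawtext=fontfile='%s':text='%s':fontsize=30:fontcolor=white:"
            "x=(w-text_w)/2:y=(h-text_h)/2+%d" % (FONT, text, i * 40)
        )
    return ",".join(parts) + " [slate];"

print(slate_filter(["Title of this Video", "Second line of the slate"]))
```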



Friday, 19 August 2016

dotnet core - Native Cross Platform

Following my initial foray into the world of dotnet core I was very excited at what this offers. I can now take my C# code and really easily build and run it on any of my Windows, Linux and Mac platforms. True, I've been able to do that for a while with Mono, but the tooling seems to make it a common operation on each platform, and something I can see working well with Docker and Cloud deployments.

Pumped up by this, and reading through the final post that I mentioned in the last article, I wanted to have a look at a native build of the application. There are a number of reasons for this that we'll dig into. First of all, the build completed in the last post is a dotnet portable application. Take a look in bin/Debug/netcoreapp1.0 and you can see this creates a dll that the dotnet core exe uses to run the application. It's a jumping off point. It also means that you need to have the dotnet core runtime and libraries already installed. This is possible with Linux deploys (there are a bunch of examples out there) and, with a bit more faff, with Docker, but I was interested in making this really simple by being able to copy over a single file.

What's more, I had a taste for wanting to be able to build on my Windows or Mac box and create the Linux cross-compiled version that I could then put on a Docker container and use in the Cloud.

In the post there is a very nice walk through of setting up for various native compile targets. This requires some modifications to the basic project.json file as follows:


{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {
  },
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "version": "1.0.0"
        }
      },
      "imports": "dnxcore50"
    }
  },
  "runtimes": {
    "win7-x64": {},
    "ubuntu.14.10-x64": {}
  }
}

A new runtimes section has been added with some targets. A list of the supported runtimes can be found in the RID catalog.

Restoring and building now gives us a new directory bin/Debug/netcoreapp1.0/win7-x64 with the following contents:

This time you can see clearly that it contains the dotnetcore.exe that kicks things off. However, this does not look like a native exe to me, and more disappointingly the ubuntu version seems to have been missed completely.

Forgetting the cross-compile for the moment, there should certainly at least be a native build, and then I could go through some hassle, set up a Linux machine and build on that. I took a look for some other examples. This post gives some very positive early looks at dotnet core and native compiling, this time using the dotnet compile --native option. It seems that this is from a very early pre-release version and the compile option has now been scrubbed in favour of dotnet build --native.

Digging a bit more, it seems that both cross-compiling and indeed native compiling have been disabled in RC2. It seems I'm not the only one to have struggled with this and also taken a look at using the --Native switch on the build option.

Not wanting to be deterred completely, I decided to uninstall RC2, as I had Preview2, and install an earlier version. Getting hold of Preview1 was next to impossible. I found out how to get dotnet core versions with PowerShell, but after faffing around a bit I was blocked by scripting being unhelpfully disabled on the machine I was using. Undeterred, I did finally find a Preview1 msi, installed and tested it, and once again the native build option was not available.

As a last stab, I did download the dotnet cli source code and start kicking off a compile which didn't work directly in Visual Studio and I'd pretty much lost the energy at that point to have another round with PowerShell on another machine.

This is most frustrating, as although I could probably fight this through to get it working on Docker, I'm also interested in whether some of the backend functions I have could easily be used as AWS Lambdas. This would allow me to reuse some existing C# code just when needed, without the costs of porting. It might not be the fastest at run-time, but it certainly would be beneficial in terms of reuse. This post got it to work, presumably with a build where the native option was still available.

Well, hopefully native and cross-compile will come back; I think for now I'll need to put this to one side until the final release is out the door.

dotnet core - Getting Started

With the rise of interest in Docker containerisation and the benefits that it provides, particularly in Cloud deployments, I've been keeping an eye on how Microsoft are evolving their strategy to join the party - from the early demonstration at the 2015 Keynote (video) to the more recent inclusion of Docker properly as a first class citizen on Windows shown at DockerCon.

The parallel path to this in Nadella's open-source drive after the acquisition of Xamarin earlier this year is the dotnet core initiative, which brings dotnet more easily onto Linux and macOS. This brings Mono into the fold and, by making it open-source, keeps it with its most active community.

As I have a bunch of existing C# code from various projects bringing some of that over to Linux and Docker has some interesting opportunities and potential to reuse some of the components easily in non-Azure Cloud deployments.

Getting started was no problem at all. As most of the C# has originated from Windows, I'm starting on that platform and will look later at how the tools work on a Mac. The target platform is Linux - Ubuntu or CentOS. The tools are easily downloaded for the platform of choice here.

Once installed it's a simple matter to make a basic Hello World example:


using System;

namespace HelloWorldSample 
{
 public static class Program 
 {
  public static void Main() 
  {
   Console.WriteLine("Hello World!");
  }
 }
}

This should be saved as program.cs in a project directory. The dotnet toolset then requires the basic project settings, such as the compile target and dependencies, to be configured in a simple json file such as below:


{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {},
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.0"
        }
      },
      "imports": "dnxcore50"
    }
  }
}

This can be saved in the same project directory as project.json for example.

It's now time to use the dotnet command line interface (CLI) to build the project. Run up a CMD prompt and within the directory you'll need to first pull down any dependency packages (like the referenced dlls in Visual Studio) that are used in the project. One of the key functions of the dotnet tooling is that it includes the NuGet package manager. Try the following:

dotnet restore

This will go away and download any packages you need and will also create a file project.lock.json, which records the resolved versions of your project's dependencies.

Now we can build and run the sample using dotnet run. This goes through a compile (build) stage and then runs the application straight after:

Neat! Using the run command again will recognise that the application has already been built and will just run it:

You'll also see that typical bin and obj directories have been created within the application.

This is great. The code is portable and the same kind of operation should run very nicely on my Mac as well.

There's a really good tutorial here that runs through much of the same example which we'll explore a bit further in the next post.




Monday, 15 August 2016

AWS Lambda - more than ImageMagick and configuring API Gateway

We've looked at setting up a basic Lambda function that returns a scaled version of a file held on S3. Not very exciting or dynamic. It's great that ImageMagick is available as a package for Lambda, and this means that some pretty basic Lambdas can be set up just using the inline code page on AWS to get things kicked off. For anything more, ImageMagick doesn't quite give us enough. This also pushes us on to the next step of using Lambdas for a bit more functionality.

We're going to take another baby step and look at providing a slightly improved function to take a cutout tile from the image used previously. Imagine we have an image that is composed of a grid of 5 rows and 10 columns of tiles, with a little border region between each. What we want to do is return one of those tiles in the function call.

ImageMagick no longer has enough functionality to help, so we're going to use GraphicsMagick to provide the additional functionality. The code example is shown below:


var im = require("imagemagick");
var fs = require("fs");
var gm = require('gm').subClass({
    imageMagick: true
});
var os = require('os');

var AWS = require('aws-sdk');
AWS.config.region = 'eu-west-1';

exports.handler = function(event, context) {
    
    var params = {Bucket: 'bucket', Key: 'image.png'};
    var s3 = new AWS.S3();
                
    console.log(params);


    s3.getObject(params, function(err, data) {
        if (err) {
            console.log("ERROR", err, err.stack); // an error occurred
            return context.fail(err);             // bail out rather than fall through
        }
        console.log("SUCCESS", data);             // successful response
        
            var resizedFile = os.tmpdir() + '/' + Math.round(Date.now() * Math.random() * 10000);
        
            console.log("filename", resizedFile); 
            
            
            gm(data.Body).size(function(err, size) {
                
                console.log("Size", size); 
                
                var num = 10;

                var x = size.width / num;
                var y = size.height / 5;
            
                var col = 0;
                var row = 0;
            
                
                gm(data.Body)
                    .crop(x-14, y-14,
                            (col * x)+7,(row * y)+7)
                    .write(resizedFile, function(err) {
                    
                        console.log("resized", resizedFile); 
                        
                        if (err) {
                            console.log("Error", err); 
                            context.fail(err);
                        }
                        else {
                            var resultImgBase64 = new Buffer(fs.readFileSync(resizedFile)).toString('base64');
                            context.succeed({ "img" : resultImgBase64 });
                        }
                        
                    });
                });
                
        });
    
};

In this example we're still not sending in any parameters and we're looking to get back the (0,0) referenced tile in the top-left corner. Using the below image, this would be the blue circle:


If you put this Lambda code directly into the inline editor and try to call it you'll get an error as it cannot find gm, GraphicsMagick. To add it in unfortunately we need to start packaging up our Lambda. So, first step, you need to download the gm package, which is easily done with npm as follows:


npm install gm

If you do this in the root of your project you'll get a folder called node_modules containing a bunch of files. Now, create a zip file containing your Node.js function above at the top level, alongside the node_modules folder.

Now in your Lambda you will need to select your Zip file, upload it (by clicking Save - yep, a bit unintuitive) and you'll need to configure the Configuration tab of your Lambda. The Handler will need to be set to call the Node.js filename and the function that you export, so in this case the filename is called 'getImage' and the handler exported has been called 'handler'.

Great. Now things should run smoothly and you'll get back a tile (hopefully the blue circle) from the image. To make this a bit more useful it would be good to pass in the row and column information to be able to choose the tile, so let's do that.

Adjusting the Lambda code we can get parameter information in via the 'event' information passed into the handler function. Let's add a line to log this to the console.


exports.handler = function(event, context) {
    
    console.log('Received event:', JSON.stringify(event, null, 2));

Now configure your test event from the Lambda console - Actions/Configure Test Event - to a simple JSON object carrying the row and col values, something like {"row": 2, "col": 2}.

Run a test execution and you should get something like this in the log (click on the Monitoring tab in Lambda and scroll down):

Great, we're calling this from within AWS as a test and getting back the row & column information. It's now simple enough to use these values to set the row & column variables. For the sake of simplicity I'll leave out error handling for missing events or incorrect values; that's standard code work.


     var col = event.col;
     var row = event.row;

Run again and take a look at the base64 string in the JSON return result 'img'.
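On the client side, whatever language you use, unpacking that result is just a JSON parse plus a base64 decode. Here's the idea sketched in Python (the payload bytes are made up for illustration):

```python
import base64
import json

# Pretend this is the body returned by the Lambda: the image bytes
# tucked into the JSON result as a base64 string under 'img'.
fake_image = b"\x89PNG\r\n\x1a\n...not a real image..."
body = json.dumps({"img": base64.b64encode(fake_image).decode("ascii")})

# Client side: parse the JSON and decode the 'img' field back to bytes.
img_bytes = base64.b64decode(json.loads(body)["img"])
assert img_bytes == fake_image
print("recovered", len(img_bytes), "bytes")
```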

So this is getting the parameter information from within AWS, but we want to specify it from our simple HTML page, so we need to configure API Gateway to pass the parameters in the HTTP GET method in the traditional 'getImage?row=1&col=1' formulation.

Navigate to the Lambda Triggers tab and click on the 'GET' method, which will lead you to the API Gateway configuration. Choose your API, click on Resources and choose your function to give you this view:

Now click on the Integration Request, scroll down and open out the Body Mapping Templates:


Add in a new Content-Type and paste in a mapping template which maps the input parameters to the event object.

This should now be ready to go, so a simple test in a browser first and then tweak your HTML code to have something like this:


    var response = $http.get('https://xxxxxx.execute-api.eu-west-1.amazonaws.com/prod/getImage2?row=2&col=2');

It's a simple exercise now to play around with your JavaScript in the HTML to select the tile you want to get.

AWS Lambda - Getting Started

Since seeing the applications of Lambda at AWS Media London and the huge momentum behind it at AWS Summit London 2016 I thought it was time to get some of the basics down here as a jump-off reference along with some helpful links.

Getting started with Lambda is pretty easy, and the benefits of 'serverless' infrastructure are quite self-evident (and if not, they are well described in numerous blogs and articles). The AWS documentation is the obvious reference for kicking off.
There are a number of other tutorials on how to link up with other AWS products like S3, Kinesis and DynamoDB. I'm going to focus on using Node.js with Lambda for now. There's also a great online course for getting started with Lambda offered by Udemy.

Beyond Hello World, most of the starting examples are derivatives of a thumbnail creation service using S3 from the AWS docs. The typical process is to drop a file onto S3, trigger a Lambda, process the image to scale it using ImageMagick and put the resulting file into a new S3 bucket. 


This is great to start with and there are a lot of good walk through tutorials explaining additional details and jumping off points. I wanted to collect together a couple of those references and take a look at some other ideas around this example and offer some suggestions for smoothing some of the details around getting started.

Following on pretty much directly from the AWS documentation, this tutorial steps smoothly through the same example. This article takes the same example and describes packaging and deployment in more detail using CloudFormation and the AWS CLI (command line interface). It's not quite time to delve into that detail yet, but it's a good reference to come back to later.

Well, this gets us started. Now what I wanted to explore is how to move beyond dropping files into S3 and creating new S3 files and to start using them with a simple web-page. 

AWS Lambda - Image Generator via API Gateway

Following on from the AWS Lambda - Getting Started post I wanted to continue exploring some of the starting blocks. In this article I'm going to look at continuing to use AWS Lambda for working with images, but stepping away from the well-trodden route of initiating the Lambda by dropping a file into an S3 bucket.

What I'd like to look at is calling a Lambda service via REST API and getting an image back. Now, you can think of any number of applications here, we're just going to look at some of the technology bits.

First step in moving away from initiating the Lambda from S3 is that we need some way to trigger the Lambda from a web-service call. AWS provides this via their API Gateway product and there is an accompanying tutorial in the reference Lambda documentation. API Gateway is pretty neat and relatively easy to configure if you want to send some JSON and return the result also in JSON. We're not going to explore that any further here. Now, say we want to create a service that just calls a Lambda and returns an image. Should be just the kind of thing for Lambda, right?

Now, this is where things get interesting. There are a number of blogs and articles discussing this, with varying approaches trying to get API Gateway to somehow return an image/xxx MIME type. None of them worked for me at the time of writing. This one is instructive on some failed approaches, and there's a stackoverflow discussion worth reading as well.

So, it depends how you might want to use this. In my simple example case, which we'll step onto next, I wanted to call from a basic web-page and get an image back from a Lambda rather than an S3-hosted file. It's probably fighting against what AWS is designed to do, but let's go with it for now.

For this example, the alternative seems to be to return the image (binary data) as Base64-encoded data in the JSON response. While a little less efficient, as we need to encode, decode and transport more bytes, it plays nicely with API Gateway and, as we'll see in the next step, also works pretty easily with simple HTML.
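As an aside, the transport overhead is easy to quantify: Base64 maps every 3 input bytes to 4 output characters, so the payload grows by roughly a third. A quick sketch in Node (the 3000-byte buffer is just a stand-in for an image):

```javascript
// Base64 expands binary data by a factor of 4/3 (plus padding).
const binary = Buffer.alloc(3000); // stand-in for a 3000-byte image
const encoded = binary.toString('base64');
console.log(encoded.length); // 4000
```
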

Getting started, this is how things turn out. I'll explain what's going on below:


var im = require("imagemagick");
var fs = require("fs");
var os = require('os');

var AWS = require('aws-sdk');
AWS.config.region = 'eu-west-1';

exports.handler = function(event, context) {
    
        var params = {Bucket: 'mybucket', Key: 'myimage.png'};
        var s3 = new AWS.S3();
        
        console.log("bucket", params);

        s3.getObject(params, function(err, data) {
            if (err) {
                console.log("ERROR", err, err.stack); // an error occurred
                context.fail(err);
                return;                               // don't fall through to the resize
            }
            console.log("SUCCESS", data);             // successful response

            var resizedFile = os.tmpdir() + '/' + Math.round(Date.now() * Math.random() * 10000);

            im.resize({
                srcData: data.Body,
                dstPath: resizedFile,
                width: 100,
                height: 100,
                progressive: false
            }, function(err, stdout, stderr){

                console.log("resized", resizedFile);

                if (err) {
                    context.fail(err);
                }
                else {
                    var resultImgBase64 = new Buffer(fs.readFileSync(resizedFile)).toString('base64');
                    context.succeed({ "img" : resultImgBase64 });
                }
            });

        });

        // allow callback to complete 
};

This simple Lambda uses the AWS API in Node and sets up the parameters to read a file in a bucket. First thing to remember is that this file needs to be visible to the Lambda for it to work, so either make it a publicly available file (you need to set this for each file on S3) or set the roles and permissions for S3 and your Lambda appropriately via IAM.

The function then uses ImageMagick, which is kindly provided by AWS as an available package, to do the resize. We're using the 'os' module to create a temporary local file within the Lambda context. This file is then read into a buffer, encoded to Base64 and returned. None of this is hugely efficient, but it does the job.
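It's worth convincing yourself that the encoding round-trips cleanly: whatever bytes go into context.succeed as Base64 come out the other side intact once the client decodes the "img" field. A minimal sketch (the payload here is just a stand-in string, not a real image):

```javascript
// Round-trip sanity check: the client decodes the "img" field from the
// JSON response back into the original bytes.
const original = Buffer.from('not really an image'); // stand-in for image bytes
const response = { img: original.toString('base64') }; // shape of the Lambda's reply
const decoded = Buffer.from(response.img, 'base64');
console.log(decoded.equals(original)); // true
```
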

This can all be dropped into AWS Lambda as inline code and simply tested by calling the resulting URL given in the API Gateway setup via a browser, something like this:


https://xxxxxxx.execute-api.eu-west-1.amazonaws.com/prod/getImage

If everything is working OK, that should give a response something like this:


{"img":"iVBORw0KGgoAAAANSUhEUgAAAIkAAACJCAMAAAAv+uv7AAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAgY0hSTQAAeiYAAICEAAD6AAAAgOgAAHUwAADqYAAAOpgAABdwnLpRPAAAAYBQTFRF////1eDpw9LgoLjNepy8Z4+yc5i4vc7dz9vm5+3zscXWQXGcUYCrU4WwToa4Uo3BVJDHUo3DU4WyUoKtq8DT4env2+TsmbPLW4atTHumUoa2W5vVUYe3Tn2oUX6nka3Hydfj8/b5m7XMTXqjUH+qToe6VpPKWJbOT4i7U4OvRXSegaLAT32mU4m6WJfQWprTUYvAjqzG7fL2+fv8WZjSUoq8TXymqL7SUH2nUX+pXoivVIi3V4SrVIGrVIKstMfZoLnOVYe1hKTCcpe4VYm5Vo2/ZI2yXIeuVo/EVZLIWYWtV5XMU4OtYYuxb5W3V5HGapG1nbfOWIu6VY7CeJu7wNDfVYaxVoy8ka7JVYi2rsPVUH6oTHqjSnijU4i5orrPbZO1U4/FVoKqt8naU4a0UYi5nrfNYYqvR3ahUIa4pbzQUIq9iafDSHehUoCqU4Cpa5K1WIStWoauV4WuxdXiWIeypb3RuszcVou6VI3AmbTLV4m3f6G/VoOtVYSuq8HUY1MLMAAAAAFiS0dEAIgFHUgAAAAJb0ZGcwAAAAcAAAAHAIhuELEAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAAJdnBBZwAABegAAAL0AM6p+5YAAANMSURBVHja7ZoJXxJREMBXZREEBFy5fIqiHB7JKuCRiqZlZnSZSqlZppYn3ff51ds1K82i3ZlZrH7z/wL7/70383bezJMkhmEYhmEYhmEYhjFIVXWNTT7EXlPrOB0LZ53d5fbUe32H+OsbFJfcGKiwRjAUjjT5xQl8zS3R1raKacTaO+KJpPgNqc6u7h5nZTzO9KZFWdS+SH/Gao+s5pETf2Zg0FqX7NCwIQ99XQbOtscsE3GOjKaNeejkeseCFok48knjHgfBOz5hicg5xeDGHNmiyaksuUfs/AWzHjrTF6kTOjBzCSIixGye9h/gvFyAiWiBe4VSJXbVZKweU8nTbVD22nW4iJZCY2QHy9Q0RkT7L94gyqC5mzgRIeYXSEQWi1gRIW7dJhDJDKp4E6EQJNDCPIGISIbRIlVuChEhlpaxJmHESXKU9DAyfyYmaUS02qkfJZIdNlGQlEddQR21/QNUIkL47ZgluWO6JCnDKqKyrW0gFBF378FN1u5TmhTWwSLOCKWIEO4qqMnGJq3Jg4dQk54+WpPZLahJNEVrIraBJVNmlVhEeKphJtUeapOdXZjJ7g61yd4+zMTmpTaBhqzsozZJRWEmVKXJD3IlmAl5Egvh+tdNOsiqpP9nTdYL1CJqN8zk78liux//7eNA71+hBLWJ1wYzoa1idRIhmEkwTm3SUAszob1j6ESgly/qkIWmjiQ1NtOadLZCTagDpasNakJcF2AaF0OjlCaYvgVNj+0bccyQhTJ7cK025wrdoiDbj3StHFQjRyJsb6mPsDPt5SUaE2TDT4fmTEk/xs8zHAqFyegQWgQ/U9Ehmqs8Qc2ZdAozNLOm7AgyVNJPqeZvmWeom486TjdLDz7HVG/FRTIRSXrxEn7qL70iFJGk1+PADcoV50hFtA0aA/Wrk3nKrflKbO2NeZG3I5a8E3pXNBm36vsP9E8tDlj8aKpw8n36bI2HRsAeN9w/SHbJ4IGBEYKyYsgl6Qm3Wemh4zDgktoMA/viZl1ammfLLEfCHd2ohIdOoHFre9P7i0xS/U2RcMjS+DhBbMNWUnY6fd/L3Jxvr14p2esq8tDwZxy7rXKH65CSvF/jsOj4YBiGYRiGYRiGYRiGOSW+AE9BVtXctxSRAAAAJXRFWHRkYXRlOmNyZWF0ZQAyMDE2LTA4LTE1VDEwOjA3OjM1KzAwOjAwX7Lc1AAAACV0RVh0ZGF0
ZTptb2RpZnkAMjAxNi0wOC0xNVQxMDowNzozNSswMDowMC7vZGgAAAAASUVORK5CYII="}

There's a useful github repo from someone trying the same idea with more involved code.

Next step we'll get this called from a basic web page.

AWS Lambda - Using Image function in a basic web-page

Following on from the previous posting, we're now going to try to use our basic Lambda function which provides a simple HTTP GET method with no parameters and returns JSON with an image in base64 encoding. We're going to make a simple web-page to call the function and display the image.

I'm going to use AngularJS as it just means we don't need to fight with basic HTML and scripting. If you're not familiar, then this or this should give a quick Hello World intro just for what is needed in this example.

First of all we need to think about how to get a Base64 chunk of data to show as an image. Thankfully most browsers now support inline images with data URLs. More details can be found here. The format looks like this:

<img src="data:image/png;base64,iVBORw0KGgoAAAANS..." />


Which all seems quite easy. There are some discussions on stackoverflow of where to go from here to link up to JavaScript.
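Assembling that src attribute is just string concatenation. A tiny sketch, using only the 4-byte PNG signature as a stand-in payload (a real image would obviously be much longer):

```javascript
// Build an inline <img> data URL from a Base64 payload.
// The payload here is just the PNG magic bytes, for illustration only.
const payload = Buffer.from([0x89, 0x50, 0x4e, 0x47]).toString('base64');
const dataUrl = 'data:image/png;base64,' + payload;
console.log(dataUrl); // data:image/png;base64,iVBORw==
```

Note the "iVBOR" prefix: every Base64-encoded PNG starts with it, which is a handy eyeball check that the JSON response really contains a PNG.
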

Taking our basic Angular app it's quite a simple matter to add in a button that pushes the base64 data we had previously into a text box and also into the image data:

<!DOCTYPE html>
<html ng-app="TestApp">
<head>
 <script type="text/javascript"
  src="http://ajax.googleapis.com/ajax/libs/angularjs/1.0.7/angular.min.js"></script>

</head>
<body>
 <div ng-controller="TestController">
        
        <h1>base64: {{data}}</h1>
    
        <img height="100" width="100" data-ng-src="data:image/png;base64,{{data}}" data-err-src="empty.png"/>

        <button ng-click="click()">GetImage</button>
 </div>
    
 <script>
        angular.module('TestApp', [])
            .controller('TestController', function($scope) {
            $scope.data = "empty";
            
            $scope.click = function() {
                $scope.data = "iVBORw0KGgoAAAANSUhEUgAAAIkAAACJCAMAAAAv+uv7AAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAgY0hSTQAAeiYAAICEAAD6AAAAgOgAAHUwAADqYAAAOpgAABdwnLpRPAAAAYBQTFRF////1eDpw9LgoLjNepy8Z4+yc5i4vc7dz9vm5+3zscXWQXGcUYCrU4WwToa4Uo3BVJDHUo3DU4WyUoKtq8DT4env2+TsmbPLW4atTHumUoa2W5vVUYe3Tn2oUX6nka3Hydfj8/b5m7XMTXqjUH+qToe6VpPKWJbOT4i7U4OvRXSegaLAT32mU4m6WJfQWprTUYvAjqzG7fL2+fv8WZjSUoq8TXymqL7SUH2nUX+pXoivVIi3V4SrVIGrVIKstMfZoLnOVYe1hKTCcpe4VYm5Vo2/ZI2yXIeuVo/EVZLIWYWtV5XMU4OtYYuxb5W3V5HGapG1nbfOWIu6VY7CeJu7wNDfVYaxVoy8ka7JVYi2rsPVUH6oTHqjSnijU4i5orrPbZO1U4/FVoKqt8naU4a0UYi5nrfNYYqvR3ahUIa4pbzQUIq9iafDSHehUoCqU4Cpa5K1WIStWoauV4WuxdXiWIeypb3RuszcVou6VI3AmbTLV4m3f6G/VoOtVYSuq8HUY1MLMAAAAAFiS0dEAIgFHUgAAAAJb0ZGcwAAAAcAAAAHAIhuELEAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAAJdnBBZwAABegAAAL0AM6p+5YAAANMSURBVHja7ZoJXxJREMBXZREEBFy5fIqiHB7JKuCRiqZlZnSZSqlZppYn3ff51ds1K82i3ZlZrH7z/wL7/70383bezJMkhmEYhmEYhmEYhjFIVXWNTT7EXlPrOB0LZ53d5fbUe32H+OsbFJfcGKiwRjAUjjT5xQl8zS3R1raKacTaO+KJpPgNqc6u7h5nZTzO9KZFWdS+SH/Gao+s5pETf2Zg0FqX7NCwIQ99XQbOtscsE3GOjKaNeejkeseCFok48knjHgfBOz5hicg5xeDGHNmiyaksuUfs/AWzHjrTF6kTOjBzCSIixGye9h/gvFyAiWiBe4VSJXbVZKweU8nTbVD22nW4iJZCY2QHy9Q0RkT7L94gyqC5mzgRIeYXSEQWi1gRIW7dJhDJDKp4E6EQJNDCPIGISIbRIlVuChEhlpaxJmHESXKU9DAyfyYmaUS02qkfJZIdNlGQlEddQR21/QNUIkL47ZgluWO6JCnDKqKyrW0gFBF378FN1u5TmhTWwSLOCKWIEO4qqMnGJq3Jg4dQk54+WpPZLahJNEVrIraBJVNmlVhEeKphJtUeapOdXZjJ7g61yd4+zMTmpTaBhqzsozZJRWEmVKXJD3IlmAl5Egvh+tdNOsiqpP9nTdYL1CJqN8zk78liux//7eNA71+hBLWJ1wYzoa1idRIhmEkwTm3SUAszob1j6ESgly/qkIWmjiQ1NtOadLZCTagDpasNakJcF2AaF0OjlCaYvgVNj+0bccyQhTJ7cK025wrdoiDbj3StHFQjRyJsb6mPsDPt5SUaE2TDT4fmTEk/xs8zHAqFyegQWgQ/U9Ehmqs8Qc2ZdAozNLOm7AgyVNJPqeZvmWeom486TjdLDz7HVG/FRTIRSXrxEn7qL70iFJGk1+PADcoV50hFtA0aA/Wrk3nKrflKbO2NeZG3I5a8E3pXNBm36vsP9E8tDlj8aKpw8n36bI2HRsAeN9w/SHbJ4IGBEYKyYsgl6Qm3Wemh4zDgktoMA/viZl1ammfLLEfCHd2ohIdOoHFre9P7i0xS/U2RcMjS+DhBbMNWUnY6fd/L3Jxvr14p2esq8tDwZxy7rXKH65CSvF/jsOj4YBiGYRiGYRiGYRiGOSW+AE9BVtXctxSRAAAAJXRFWHRkYXRlOmNyZWF0ZQAyMDE2LTA4LTE1VDEwOjA3OjM1KzAwO
jAwX7Lc1AAAACV0RVh0ZGF0ZTptb2RpZnkAMjAxNi0wOC0xNVQxMDowNzozNSswMDowMC7vZGgAAAAASUVORK5CYII=";
            };
        });
     </script>
</body>
</html>

Now to get it calling the Lambda service, the $scope.click function can simply be changed to make an http request and put the data into the image. Hey presto!


// note: $http must now be injected into the controller:
//   .controller('TestController', function($scope, $http) { ... })
$scope.click = function() {

    var response = $http.get('https://xxxxxxx.execute-api.eu-west-1.amazonaws.com/prod/getImage2');

    response.success(function(data, status, headers, config) {

        $scope.data = data.img;
    });

    response.error(function(data, status, headers, config) {
        alert("Error.");
    });
};

Now, if this isn't working for you (and that's almost the point of this post in itself), it means you have some API Gateway config to get sorted. I must admit the first time I went through this there was a bit of head bashing and using Fiddler to check that everything was OK.

If everything worked OK up to the previous post and you're getting the Base64 data coming back from a browser call, then it's sure to be some fun with JavaScript. Most likely you haven't got CORS set up for your API Gateway binding. The biggest clue here is that you only have a GET method defined in API Gateway and there is no OPTIONS method. AWS makes this pretty easy, so follow the online instructions to configure CORS.

If you're still having problems (and I was bashing my head at this for a while), it seems (for me at least) that the default CORS settings from AWS were not working and I needed to manually add X-Requested-With to the Access-Control-Allow-Headers configuration. Remember you'll need to deploy the API again for the changes to take effect. Thanks to this stackoverflow discussion and this one as well for the help.
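For reference, the preflight OPTIONS response ends up needing to carry headers along these lines. The values below are illustrative for this sketch, not the authoritative AWS defaults; the check at the end just confirms X-Requested-With made it into the allowed list:

```javascript
// Illustrative CORS response headers for the API Gateway OPTIONS method.
// These values are an assumption for this sketch, not the AWS defaults.
const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'GET,OPTIONS',
  'Access-Control-Allow-Headers':
    'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Requested-With'
};

const allowed = corsHeaders['Access-Control-Allow-Headers'].split(',');
console.log(allowed.indexOf('X-Requested-With') >= 0); // true
```
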

The other problem I bumped into first time round (you won't hit it with the example posted) is that if you use context.fail(error); in your Lambda to return an error, this gives an HTTP 403 response, not a 200, and this is not handled in the basic API Gateway bindings, so it looks like the service is unavailable. Very confusing!