September 28, 2010

Further Forays Into the HTML5 Stack - Animation

Short update this one.

I've been doing some simple animations, replacing one image with another every 5 seconds using the SVG SMIL animation framework. I use the 'defs' element to define a number of image elements that are not immediately rendered, then a 'use' element to refer to the first image in the sequence, and finally an 'animate' element to cycle the 'use' element's 'xlink:href' attribute through a set of values every 5 seconds. A sketch of the approach follows below.
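
Something along these lines, where the file names and the 15 second cycle are illustrative:

    <svg xmlns="http://www.w3.org/2000/svg"
         xmlns:xlink="http://www.w3.org/1999/xlink"
         width="400" height="300">
      <defs>
        <!-- Illustrative file names; images inside 'defs' are not rendered directly -->
        <image id="slide1" width="400" height="300" xlink:href="slide1.png"/>
        <image id="slide2" width="400" height="300" xlink:href="slide2.png"/>
        <image id="slide3" width="400" height="300" xlink:href="slide3.png"/>
      </defs>
      <use xlink:href="#slide1">
        <!-- Discrete animation: each of the three values is held for 5 seconds -->
        <animate attributeName="xlink:href" attributeType="XML"
                 calcMode="discrete" dur="15s" repeatCount="indefinite"
                 values="#slide1;#slide2;#slide3"/>
      </use>
    </svg>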

Unfortunately this approach seems to be very CPU intensive even between changes to the 'use' element's 'xlink:href' attribute.

I suspect that the animation timer takes no account of the idle time between changes and simply spins its wheels, wasting CPU the whole time.

I think I need to do some rough benchmarking to decide whether to use CSS3 or SVG SMIL animation and to work out which techniques are more CPU-friendly.

September 20, 2010

Forays Into the HTML5 Stack

While working on Cazcade with Neil, Dan C., Jon et al I'm also working on my own little project - DiarWise. It's still pretty stealth so I'm not going to discuss it in detail on an open forum but I think that now is a good time to discuss some of the technology that I've been using.

For DiarWise I've been looking at a much richer UI experience: highly interactive and completely resolution independent. I wanted to be able to create a UI that would be equally renderable on a smartphone, a tablet and a PC without extensive customisation for each. I also wanted a UI that would interact well with the browser event models for mouse and multi-touch.

I've been focussing on SVG embedded in HTML over the HTML 5 Canvas element for a number of reasons:
  • It is significantly more mature as a specification than the HTML 5 Canvas element.
  • The HTML 5 Canvas element is procedural in its rendering: each command paints pixels directly to the screen, so there is no scene graph. The SVG XML elements are the scene graph, embedded directly in the DOM and manipulable using all the tricks that we are familiar with. This also means that a lot of the heavy lifting for re-drawing is done for us.
  • Text is more of a first class citizen in SVG and so is easier to make available to accessible browsers.
  • SVG graphics elements are style-able using CSS.
  • HTML can also be embedded back into SVG (and manipulated via the DOM) using the foreignObject element.
There are a few weaknesses, as I've discovered while working with it:
  • The default SVG animation framework is built around the SMIL standard, which is pretty low level and not always the easiest to work with, though it is very powerful. I'm intending to see how well SVG interacts with the CSS3 animation effects, which are much nicer to work with.
  • SVG support is present in the latest iteration of all the browsers but is patchy to say the least. The best at the moment is the WebKit family; Firefox is the next best, though it has big issues with text rendering. IE9 claims to be a huge step forward over IE8, but I have yet to try out the beta and there are question marks over how well it will support SVG animation.
  • While its text support is good, there is one glaring omission - word wrap. The neatest work-around is to embed an appropriate HTML element in the SVG using foreignObject, as in the sketch after this list.
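To illustrate that last work-around, a minimal sketch (the dimensions and text are arbitrary):

    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 200">
      <foreignObject x="10" y="10" width="380" height="180">
        <!-- The embedded HTML block supplies the word wrap that SVG text lacks -->
        <div xmlns="http://www.w3.org/1999/xhtml"
             style="font: 14px sans-serif;">
          This paragraph wraps at the edge of the foreignObject region,
          which a plain SVG 'text' element will not do.
        </div>
      </foreignObject>
    </svg>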
The best trick I've found so far with SVG is the simplicity of decoupling the screen resolution from the render resolution, which cascades down to any embedded HTML elements. The 'svg' element defines a local coordinate space for all its contents, and you can override the coordinate system using the viewBox attribute. This means that all embedded elements can be positioned and sized relative to each other in terms of local 'pixels', and the SVG renderer will scale according to the screen resolution. It handles aspect ratios elegantly using the preserveAspectRatio attribute. This gives you the ability to present a consistent interface across a wide range of resolutions and to handle pan and zoom exceedingly smoothly for people with limited eyesight.
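
A minimal sketch, assuming an arbitrary 800 x 600 local coordinate space:

    <svg xmlns="http://www.w3.org/2000/svg"
         viewBox="0 0 800 600" preserveAspectRatio="xMidYMid meet"
         width="100%" height="100%">
      <!-- Everything is positioned in local 'pixels'; the renderer scales
           the whole scene to whatever space the browser gives it -->
      <rect x="40" y="40" width="720" height="520" fill="#eef" stroke="#336"/>
      <text x="400" y="310" text-anchor="middle" font-size="40">
        Resolution independent
      </text>
    </svg>

Because the viewBox stays fixed, the same markup fills a phone screen and a desktop window alike, with the browser doing all the scaling.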

So far my grand experiment with SVG is going well and it seems to be well worth using as one of the technologies that make up the HTML 5 stack.

September 15, 2010

My life with BDD...

This is a blog post that's been very long overdue, but I finally acknowledged that I really need to capture my experiences after a chat with an old friend at Twitter (shameless name drop...).

I first came across Behaviour Driven Development thanks to Mauro Talevi, my first hire on the team that I set up at HSBC back in 2008. He's a major contributor to the JBehave project, one of the major BDD frameworks. I hope that I'm not going to embarrass him too much by saying that he was an invaluable influence on the project.

I'm not going to talk about the technical implementation, as that is well covered elsewhere; instead in this blog I will talk about the general principles and the practical experience of working with BDD.

The basic idea is simple. Any story or feature being implemented in a project should have automated integration tests that prove that the work has been completed successfully. It is important that the tests be comprehensible to the stakeholders requesting the story / feature so that they can confirm that it is done. This provides a very clear goal for the developers and builds up a suite of regression tests that gives healthy confidence in the application. The tests are defined in terms of business scenarios that represent various paths through the business process.

BDD is complementary to Unit Testing and is in no way a replacement.

The key to building an effective suite of BDD test scenarios is the derivation of a clearly understood Domain Specific Language for the tests. This language bridges the gap between the business concerns and the services and components created by the development team. In one sense this should not be a difficult task as the solution domain being developed should reflect the problem domain defined by the customer. Thus the effort needed to bridge the gap between the DSL and the technical solution is relatively small.

We had a lot of difficulty at first working with the stakeholders to agree and define the DSL; they were used to more traditional development practices, and we had to educate them to understand that the scenarios they were producing were testable. What they had to grasp was that the scenarios were the contract between the development team and the project stakeholders: if a scenario signed off by the stakeholders and the analysts was incorrect, then the work produced by the development team would be incorrect too. The scenarios provided a very important structure to the SCRUM methodology we used, with the analysis producing the scenarios typically running 1 to 2 sprints ahead of the implementation work.

JBehave provides a set of tools for building the elements of the DSL and for running the scenarios defined in it. Scenarios in JBehave are written in simplified plain English (or the language of choice for the organisation). A scenario is structured as a list of sentences defining its steps. These sentences all begin with one of three keywords:
  • Given: The remainder of the sentence defines pre-conditions for the next steps of the scenario.
  • When: Defines activity that the system under test is carrying out as part of the test.
  • Then: The assertions that the test should be checking.
These sentences are the reusable elements of your BDD scenarios; the Givens, Whens and Thens can be put together to build different scenarios. The sentences are of course parameterised so that different values can be used in different tests, as the sketch below shows.
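
To make that concrete, here is the shape of a scenario and its step bindings. The banking domain is invented for illustration; it is not the DSL we actually used:

    Scenario: A withdrawal within the available balance succeeds

    Given a customer account with a balance of 100 GBP
    When the customer withdraws 30 GBP
    Then the account balance is 70 GBP

Each sentence is bound to an annotated Java method, with the $amount placeholders carrying the parameters:

    import org.jbehave.core.annotations.Given;
    import org.jbehave.core.annotations.Then;
    import org.jbehave.core.annotations.When;
    import static org.junit.Assert.assertEquals;

    // Hypothetical step bindings for the invented account domain above.
    public class AccountSteps {
        private int balance;

        @Given("a customer account with a balance of $amount GBP")
        public void givenAccountWithBalance(int amount) {
            balance = amount;
        }

        @When("the customer withdraws $amount GBP")
        public void whenCustomerWithdraws(int amount) {
            balance -= amount;
        }

        @Then("the account balance is $amount GBP")
        public void thenBalanceIs(int amount) {
            assertEquals(amount, balance);
        }
    }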

What proved very important is the granularity of the concerns addressed by the DSL. Too coarse and you have very short (but deep) scenarios made of elements that are almost impossible to reuse; too fine and the scenarios balloon in size and become unmaintainable because of the sheer volume of change required as the system evolves.

In the end, on the project at HSBC, the appropriate level emerged organically: we paid close attention to the tests and worked hard to refactor and leverage the common elements that emerged.

Another critical element is the data representation. When we started we bootstrapped the BDD test data management by using XStream to dump the objects held in memory for comparison with saved text files. This was very helpful in the early stages but became an increasing burden as the suite of tests broadened and deepened. The problem was that the test data was acutely sensitive to the domain data model, and as that evolved we found ourselves expending major effort to correct tests that were not testing the changes but were affected because we had dumped the whole domain model.

The solution was to move to a factory and builder based mechanism. All tests used the same basic data sets generated by factories which then used builders to alter them to match the test requirements. When we came to make assertions we found that it was best to focus on the data elements required by the specific test and allow other tests to check their own data elements. This dramatically reduced the amount of data that needed incidental maintenance.
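
A minimal sketch of the pattern; the Trade type and its fields are hypothetical stand-ins for our real domain model:

    import java.math.BigDecimal;

    // Hypothetical domain object standing in for the real model.
    class Trade {
        final String counterparty;
        final BigDecimal notional;
        final String currency;
        Trade(String counterparty, BigDecimal notional, String currency) {
            this.counterparty = counterparty;
            this.notional = notional;
            this.currency = currency;
        }
    }

    // The builder starts from the shared baseline data set and lets each
    // test override only the fields that matter to it.
    class TradeBuilder {
        private String counterparty = "ACME Corp";
        private BigDecimal notional = new BigDecimal("1000000");
        private String currency = "USD";

        TradeBuilder withCounterparty(String c) { counterparty = c; return this; }
        TradeBuilder withNotional(BigDecimal n) { notional = n; return this; }
        TradeBuilder withCurrency(String c) { currency = c; return this; }
        Trade build() { return new Trade(counterparty, notional, currency); }
    }

    class TradeFactory {
        // Every scenario begins from the same well-known baseline object.
        static TradeBuilder standardTrade() { return new TradeBuilder(); }
    }

A test that only cares about the currency writes TradeFactory.standardTrade().withCurrency("GBP").build() and asserts only on the currency of the result, leaving the other fields to the tests that own them.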

As we worked we found that after every major release it was worth spending some time on consolidating the BDD scenarios. The earlier scenarios for new functionality tended to be supplanted by scenarios testing more elaborate versions of the functionality. We worked to actively identify situations where this occurred and reduced the volume of tests while keeping their coverage.

JBehave integrated well with Selenium and proved very effective at automating Web UI testing, but it rapidly exposed shortcomings in Selenium's implementation. By the time I left, work was under way to use the WebDriver integration to address this.
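
For a flavour of the Web UI side, a sketch of a WebDriver-backed step class; the page URL and element ids are invented:

    import org.jbehave.core.annotations.Then;
    import org.jbehave.core.annotations.When;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import static org.junit.Assert.assertEquals;

    public class LoginWebSteps {
        private final WebDriver driver = new FirefoxDriver();

        @When("the user logs in as $username")
        public void whenTheUserLogsIn(String username) {
            driver.get("http://localhost:8080/login");   // invented URL
            driver.findElement(By.id("username")).sendKeys(username);
            driver.findElement(By.id("login")).click();
        }

        @Then("the welcome banner greets $username")
        public void thenBannerGreets(String username) {
            assertEquals("Welcome " + username,
                         driver.findElement(By.id("banner")).getText());
        }
    }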

By working to continually polish the DSL and the data representation we started seeing great reuse and efficiencies in developing provably tested new functionality. Where we originally spent anything up to 50% of the development time on building the BDD scenarios we saw that fall to as little as 5% of the development time.

Where this really began to pay dividends was that the test team now had the tools they needed to build out a much wider suite of tests, and as they developed them we had probably the most thoroughly tested system I have ever worked on. The regression tests ran on our continuous build server, producing near real-time feedback as we developed, which meant that problems were fixed as they occurred rather than after weeks of formal testing. This meant a much smaller testing team was required to test the system thoroughly at release. Fewer regressions were found, and we were much more confident that the acceptance testing cycle would run on budget.

BDD is a key technique that I will use on future projects. It provides real benefits for the work invested and contributes materially to successful delivery.

September 12, 2010

Some New Social Networking Infrastructure

I'm a long time Facebook and LinkedIn user but as part of my involvement in developing Cazcade I've had to get my feet wet with Twitter.

I've been running FlipBoard for a while and I've found an outlet for sharing what I find on it by adding my Twitter account. I've now joined my blog to my Twitter account by using TwitterFeed and this post will test the whole process.

Twittering is very addictive and I suspect that I will make more and more use of Twitter, both to publish and to consume content. Certainly Twitter together with FlipBoard makes consumption easy and pleasurable. I'm very keen to bring together a number of my favourite RSS feeds in one place and read them in FlipBoard, which does not yet have direct RSS support, so TwitterFeed will also be used to aggregate them into one Twitter account.

What's interesting to me is how Cazcade fits into all of this. FlipBoard provides a brilliant mechanism for consuming and re-tweeting individual pieces of information. The limitation is that this is a piecemeal mechanism: you can only present one piece of information at a time. Cazcade, by contrast, allows you to accumulate various sources (web pages, videos, images and Twitter) into pools, present the information as a wider vision or argument, and then pass them on via Twitter.

Cazcade is attempting to provide a mechanism above and beyond the current transient social network conversations.

The question I am asking myself is whether this will meet an unsatisfied need. Only time will tell.