August 15, 2022

A Short Essay on the Relationship Between Problems and Solutions

My professional life involves a lot of problem-solving, whether architectural problems or working with engineers digging through traces, metrics and logging to work out why a system is misbehaving.

My personal life also includes problem solving, whether it is interpersonal or understanding why a drawer in the kitchen just won't shut.

I even like solving problems in my leisure time, working on physical puzzles or solving them in computer games.

A lifetime of problem-solving has led me to one inescapable conclusion.

If you can't see a solution, then you don't actually properly understand the problem.

Most of the time people articulate the immediate concern as the problem.

"My kitchen drawer is stuck."

This definition of the problem is of no use in solving it. A little investigation reveals a little more.

"My kitchen drawer is stuck because the balloon whisk has hooked up on the edge of the drawer above."

This articulation of the problem begins to suggest a simple solution. By manipulating the balloon whisk you should be able to unhook it and open the drawer.

Great! Our problem has been solved... or has it?

"My kitchen drawer is likely to get stuck whenever I put the balloon whisk in such that it hooks on the edge of the drawer above."

This demonstrates an understanding that the problem can recur in certain circumstances and suggests further solutions:

  • Don't keep the balloon whisk in the drawer.
  • Smooth or otherwise change the edge of the drawer above so it doesn't get stuck.
  • Only store the balloon whisk in such an orientation / position in the drawer that it will never get stuck.

Further iterations of understanding the problem can lead to even more solutions which I am not going to go into here.

The point is that by broadening and deepening our understanding of the problem, new and different solutions suggest themselves.

Have you ever fully articulated a problem to such an extent that the best solution just springs out naturally?

June 16, 2016

A Rant on the Decision not to make Optional Serializable in Java 8

I'm in a proper ranty mood this morning having once again barked my shins on the frankly rather short-sighted decision not to make Optional Serializable in Java 8.

Let's get one thing out of the way - I am perfectly well aware of the reasoning behind the decision based on the intended use of Optional in APIs. I'm not going to challenge the decision directly - I am going to challenge the reasoning which has several fundamental flaws. 

The first challenge is on the basis of the "Principle of Least Surprise". Everywhere I look on the Internet on this topic, going all the way back to the early pre-releases, developers were caught out and surprised by this decision. Developers are still being caught out by it and, I am willing to bet, will continue to be for years to come. I have complete sympathy with them: the precursors of Optional, including the Nullables, tended to serialize; Optional is a carrier of other types which are often Serializable; and Optional tends to work well in the streams coming out of Collections, which are themselves serialization friendly.
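The surprise is easy to reproduce. Here is a minimal sketch (the Person class and its fields are my own illustration): an otherwise ordinary Serializable class with an Optional field fails at serialization time.

```java
import java.io.ByteArrayOutputStream;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Optional;

public class OptionalSurprise {
    // A perfectly ordinary-looking Serializable value object...
    static class Person implements Serializable {
        final String name;
        final Optional<String> nickname; // ...with an Optional field

        Person(String name, Optional<String> nickname) {
            this.name = name;
            this.nickname = nickname;
        }
    }

    // Attempts Java serialization; returns the offending class name, or null on success.
    static String trySerialize(Object o) throws Exception {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return null;
        } catch (NotSerializableException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) throws Exception {
        // Fails because java.util.Optional does not implement Serializable.
        System.out.println(trySerialize(new Person("Alice", Optional.of("Al"))));
        // prints "java.util.Optional"
    }
}
```

The exception only surfaces at runtime, deep inside whatever framework happens to be doing the serialization, which is exactly why it keeps catching people out.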

The second challenge is in terms of the attitude behind the decision. I perceive the attitude as being very "Ivory Tower". In my head at least the reasoning was that "We, the great and good of Java, can only conceive of one way in which Optional should be used: Therefore we will do all that we can to prevent the proles from going against our superior understanding."  If that was not the reasoning then it must have been something close... For some reason I hear "Inconceivable!", "You keep using that word, I do not think it means what you think it means."

My third challenge is on the basis of good API design. Good API design is contextual: it needs to be appropriate for the use case. Core language APIs are a special case in that they should provide the most expressive, flexible and robust semantics, as they are a major part of the toolkit with which you build everything else. Before something gets added to a core language API a key question should be asked: "Is this capable of being adapted to many useful use cases and being used in unexpected and novel ways that will aid the evolution of the language and its community?" The fact that Optional was envisaged for only one key use case and deliberately excluded from many others is a strong argument that it should never have been included in the core APIs in the first place.

What really saddens me is that there is a lot of good clear thinking resulting in a set of very flexible, robust and expressive additions to the core Java language in Java 8. Unfortunately Optional is not one of them when it so nearly could have been. 

December 21, 2015

Netgear ReadyNAS NV+ V2 and Larger Disks (WD60EFRX)

Just want to get this out of my head so that I can go back to sleep...

I've been taking a sabbatical after 3 years with Zapp (2 years as Chief Architect) and have been doing a number of personal projects. One of the projects was to sort out my NAS setup as I was running out of space on my ReadyNAS NV+ V2 and wanted to play around with fitting larger disks.

I decided to experiment with installing non-officially supported 6TB Western Digital NAS drives as their siblings (WD10EFRX/WD20EFRX/WD40EFRX) are supported.

I can only report failure though possibly with a glimmer of hope for others...

My previous configuration had 4 x WD30EZRX (3TB) drives in an X-RAID2 setup. After backing up I started to try out hot swapping out the drives individually and resynching before rebooting to realise the increased capacity.

The first problem was that the screws that came with the ReadyNAS were A2 countersunk UNC 6-32 1/4" screws. To mount the WD60EFRX you need 3/16" or even 5/32". A little careful work with some abrasives took care of that as I was impatient.

Over the course of 4 days of swapping and resynching (each resynch took ~16 hours) I was ready for the reboot. The resynch had gone swimmingly!

It was at reboot that things started to go wrong: the expansion failed and the NAS started resynching the drive. Every reboot after resynch completion failed to expand the volume and resulted in another resynch.

Next attempt was to clear the drives down completely (including the existing partitions) and perform a factory reset (boot menu button accessed via a pinhole by the USB, choose 'Factory Defaults'). First I tried X-RAID2 default - file system creation error. Next I tried Flex RAID with a RAID 5 configuration - file system creation error.

I then decided to use the EnableRootSSH plugin and dig into the operating system to see what was going on... going into the detailed logging I saw that the creation of the ext2 filesystem on the large volume was failing.

Sleeping on it (until just now) I woke up with a realisation that ext2 is quite an old filesystem and may have some limitations. At a 4 KiB blocksize (which is what is configured into the ReadyNAS) it can only support up to 16 TiB volume size and file sizes up to 2 TiB. So while the individual drives are supported, the volume size that they created was not.
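The arithmetic behind that limit is straightforward: ext2 addresses blocks with a 32-bit block number, so the maximum volume size is the block size multiplied by 2^32. A quick sketch of the numbers (the RAID capacity comment is my own rough estimate):

```java
public class Ext2Limits {
    // ext2 uses 32-bit block numbers, so the maximum volume size
    // is blockSize * 2^32 bytes.
    static long maxVolumeBytes(long blockSizeBytes) {
        return blockSizeBytes * (1L << 32);
    }

    public static void main(String[] args) {
        long tib = 1L << 40; // one tebibyte
        System.out.println(maxVolumeBytes(4096) / tib); // 16 (TiB) - the ReadyNAS default
        System.out.println(maxVolumeBytes(8192) / tib); // 32 (TiB)
        // Four 6 TB drives in a RAID 5-style layout give roughly 18 TB usable,
        // which is about 16.4 TiB - just over the 16 TiB ext2 limit.
    }
}
```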

Ext2 (and ext3) do support up to 32 TiB if the block size is 8 KiB, but the problem would be how to make the ReadyNAS set up the file system with this block size. I suspect that one could (using SSH) manually fix an incomplete setup by creating the volume with the larger block size and completing the creation of the media and backup directories, but that only pushes the limitation a little way down the road and may cause all sorts of problems with hot-swapping drives in the event of a failure.

My solution was to buy a diskless ReadyNAS 316 enclosure which uses btrfs and tops out at 16EiB and definitely supports the WD60EFRX drives.

I hope that this post will help other people who may contemplate this experiment themselves.

July 14, 2015

How my mind thinks about things...

I'm a great believer in the alliance between the subconscious and the conscious minds to solve technical problems. It started at school with an excellent piece of exam advice from a great teacher (Hello Mr Burgess!). 

The advice was to read a paper fully at the start of the exam, see which questions you already knew the answer to and which ones you didn't. 

You then answered the questions that you knew the answer to before re-reading the others, where you would realise that you knew many of the answers because your subconscious had worked them out. Rinse and repeat.

This advice stood me in good stead through many exams, though obviously your subconscious couldn't help you out if you hadn't done the work in the first place.

This led me to read up on the subconscious and work out a number of ways to work with my subconscious successfully. 

My favourite is to read up and think as fully about a problem as I possibly can one day, then sleep on it; either the answer comes to mind as I wake up, or as soon as I start working on the problem again. Sometimes it takes several rounds of sleeping and working on the problem, but I almost always arrive at an answer.

Unfortunately my mind doesn't always differentiate between a problem I need to work on (IT, personal etc.) and something that I just happened to read about. For example, the other night I arrived at a theory about why certain galaxies appear heavier than they should be, and why super-massive black holes appeared much earlier than expected in the universe.

What is interesting is that I remember a chunk of the rather non-linear thought process that generated these ideas. 

I woke up on Thursday morning last week and as I was waking I started thinking about the experience of a being that had lived its entire life in a closed box with no seams or windows and had no way of knowing if there was anything outside the box or if the box was the entirety of existence. I then thought that this was kind of like the experience of us within our own universe. I then went on to think about what else might enter the box to provide evidence of something outside, and I realised that gravity was a good bet. Theoretically, by constructing some very sensitive instruments, the being inside the box could start to infer the existence of things outside the box by mapping the force and direction of gravity within the box.

The first 'Aha!' moment came when I thought about the fact that gravity is thought to be so weak because it leaks out of our 4 dimensions into others and I realised that what leaks out can leak back in!

This then suggested that any unusual unexplained gravitational effects in the wider universe could be as a result of deeper structures leaking gravity back in. 

I remembered the articles that I had read about dark matter and Modified Newtonian Dynamics trying to explain why there is greater mass / gravitation seen in certain galaxies than expected. 

My brain then jumped to a possible explanation. The fact is that we don't know how the 4 dimensions are folded in the wider dimensions so could the gravity in the 4 dimensions be leaking back into those 4 dimensions at a distance? In effect galaxies and other structures should be mutually reinforcing their gravity as it leaks out of the 4 dimensional universe and leaks back in another place. This could explain the excess gravitational attraction seen in certain galaxies and would explain the early appearance of supermassive black holes as a feedback loop caused by matter concentrating in multiple places in the 4 dimensional universe mutually reinforcing itself across the wider dimensions causing more matter to infall. 

Now as I write I'm wondering whether the bigger structures, galactic clusters and voids, are reflective of this effect.

I expect that this has already been thought of by a physicist but this is what my mind does to me some mornings!

December 10, 2014

Had my own 'Meet the Ancestors' Moment Earlier in the Week.

It all started when I was chatting with friends about the upcoming centennials for various World War 1 campaigns and I mentioned that I believed a family member died at Gallipoli. One of my friends expressed great surprise as they had believed that only Australians and New Zealanders had died there. I decided to find out a little more and a little judicious internet surfing rapidly made it clear that the Gallipoli Campaign had been pretty bloody killing over a hundred thousand soldiers and wounding many more before it ground to a halt. Of those deaths the majority were British.

With that knowledge I asked my mother about it. It turned out that among my father's nicknames were 'The War Memorial' and 'Granite with Knobs On', as he had been named for his uncle who had died at Gallipoli. My mother further added that this was in part an act of atonement, as my paternal grandfather had been his commander at the time.

I then dug a little deeper and the first thing I found was this article about a remembrance last year (2013) for the campaign which quoted the gravestone of a Lieutenant Commander J.R.Boothby (my father was J.R.M.Boothby). The quote just felt right:
With undaunted heart he breasted life's last hill.
Apparently the families were given just 64 characters and I was really glad that, if this was my great-uncle, my family had done so well.

As I dug further I discovered that Lt. Cdr. J.R.Boothby had died early in May 1915 as part of the RNAS Armoured Car Squadron. I dug further and discovered that the RNAS was in fact the Royal Naval Air Service, a forerunner of both the Fleet Air Arm and the RAF. What I hadn't realised was that the RNAS had the only mechanized land forces in the British forces at the start of the war. In fact it formed the nucleus of the unit that created the first tank.

With that I managed to understand how my grandfather fitted in when I discovered this book called the Devil's Chariots and on perusal of the limited pages made available by Google I was able to determine that my grandfather had been in command of the RNAS Armoured Car Division at the time of Gallipoli so it all tied up.

I'm still researching the topic but as a result I'm going to see what I can do for the centenary of my great-uncle's death to remember him and his sacrifice.

November 19, 2014

A Judicious Use of Generics to Simplify Working With Java Maps.

I've always been a little ambivalent about Generics in Java but every now and then I come across a neat usage that really helps out.

In this case the neat usage is with Java Maps.

Java Map generics have always struck me as being a little all-or-nothing: you can use generics to define the types of all the keys and all the values, but it's a blunt instrument, as there is no way to relate an individual key to the type of its value.

To that end I've created a little open source code that certainly works for me in many use cases: TypedMap on GitHub. I'll get round to releasing a Maven-ised version of this on a public repo soon, but the code is pretty trivial.

It introduces an interface called a TypedKey that takes a generic type parameter that indicates the type of the value related to the key. This interface has no methods and can be implemented by any of your classes so that they can be used as keys.

It then introduces an interface called TypedMap which extends the core java.util.Map interface to add three new methods, putTyped, getTyped and removeTyped. These are generically typed variants on the java.util.Map put, get and remove methods taking a TypedKey to define the type of the value being worked with.

Taking advantage of the way Generics works, these interfaces provide compile time type safety and eliminate the need for casting values retrieved from the map as long as an appropriate TypedKey is used.

Two concrete implementations of the interfaces are provided: DefaultTypedKey and TypedMapDecorator.

The DefaultTypedKey is the simplest possible implementation of the TypedKey interface: it is a direct child of java.lang.Object and inherits its hashCode and equals semantics.

TypedMapDecorator delegates all java.util.Map functionality to an embedded Map instance and implements the typed methods backed by the embedded Map.
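To make the idea concrete, here is a minimal sketch of what the pieces might look like. The names (TypedKey, DefaultTypedKey, TypedMapDecorator, putTyped, getTyped, removeTyped) are as described above, but the bodies are my own simplification rather than the actual GitHub code, and the sketch omits the full java.util.Map delegation for brevity:

```java
import java.util.HashMap;
import java.util.Map;

// A key that carries the type of its associated value.
// Marker interface: no methods, so any class can implement it.
interface TypedKey<T> {
}

// The simplest possible key: inherits Object's hashCode/equals semantics.
class DefaultTypedKey<T> implements TypedKey<T> {
}

// Decorates an ordinary Map with type-safe access via TypedKeys.
// (The real TypedMap extends java.util.Map; only the typed methods are shown.)
class TypedMapDecorator {
    private final Map<Object, Object> delegate = new HashMap<>();

    @SuppressWarnings("unchecked")
    public <T> T putTyped(TypedKey<T> key, T value) {
        return (T) delegate.put(key, value);
    }

    @SuppressWarnings("unchecked")
    public <T> T getTyped(TypedKey<T> key) {
        return (T) delegate.get(key);
    }

    @SuppressWarnings("unchecked")
    public <T> T removeTyped(TypedKey<T> key) {
        return (T) delegate.remove(key);
    }
}

public class TypedMapExample {
    public static void main(String[] args) {
        TypedKey<String> name = new DefaultTypedKey<>();
        TypedKey<Integer> age = new DefaultTypedKey<>();

        TypedMapDecorator map = new TypedMapDecorator();
        map.putTyped(name, "Alice");
        map.putTyped(age, 42);

        // No casts needed; the compiler knows each key's value type.
        String n = map.getTyped(name);
        Integer a = map.getTyped(age);
        System.out.println(n + " is " + a); // prints "Alice is 42"
        // map.putTyped(age, "oops"); // would not compile
    }
}
```

Because the value's type rides along on the key's type parameter, a mismatched put or get is a compile error rather than a ClassCastException at runtime.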

I'm still working on the generics usage to make it more elegant and to provide a greater degree of control to the user, but even in this version I believe that the use of a TypedMap with TypedKeys can significantly simplify code in many use cases.

June 30, 2014

Spring-WS Default Endpoint Configuration

We've recently been trying to set up some web services that provide common security functionality such as signing and signature verification. We really wanted to keep them generic as we did not want to have to set up individual operations and services. We chose to use Spring-WS because it had the concept of a default endpoint that could be configured to consume any web service not otherwise handled. For the security element we chose to use WSS4J Interceptors.

When it came to setting up the default endpoint I could find nothing at all that clearly defined how to do it, Spring Documentation, Spring Forums and even blog posts and Stack Overflow did not seem to have the answer. In the end it took trial and error and a certain amount of reading the source code to work out the magic sauce.

To save others the pain I thought that I would record a how-to here.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:sws="http://www.springframework.org/schema/web-services"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/web-services http://www.springframework.org/schema/web-services/web-services-2.0.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:component-scan base-package="com.yourpackage"/>

    <sws:annotation-driven/>

    <bean id="messageEndpointAdapter"
          class="org.springframework.ws.server.endpoint.adapter.MessageEndpointAdapter"
          lazy-init="false"/>

    <bean id="endpointMapping"
          class="org.springframework.ws.server.endpoint.mapping.PayloadRootAnnotationMethodEndpointMapping"
          lazy-init="false">
        <property name="defaultEndpoint">
            <ref bean="defaultEndpoint"/>
        </property>
    </bean>

    <bean id="defaultEndpoint" class="com.yourpackage.DefaultEndpoint"/>

</beans>

The Spring beans configuration supports annotation-based component scanning, which is how I would normally set up endpoints, but in the case of a default endpoint there appears to be no annotation-based approach. I've set up a MessageEndpointAdapter that will be used by the PayloadRootAnnotationMethodEndpointMapping class to communicate with the default endpoint. Finally, we explicitly set up the endpoint mapping with our default endpoint set as a property.

The default endpoint implements the MessageEndpoint interface which uses a generic mechanism for representing the incoming message called the MessageContext. Below you can see an example where the default endpoint echoes the request message to be the response.

package com.yourpackage;

import org.springframework.ws.context.MessageContext;
import org.springframework.ws.server.endpoint.MessageEndpoint;

public class DefaultEndpoint implements MessageEndpoint {

    @Override
    public void invoke(MessageContext messageContext) throws Exception {
        messageContext.setResponse(messageContext.getRequest());
    }
}


To be honest setting up a default endpoint is not hard once you know how. Shame the documentation isn't really there.