Modify java.library.path at runtime

Linking to native code in Java is always a hassle. JNI isn’t exactly nice, and there are some oddities around classloaders and native libraries which are annoying if you run into them.

One thing I wasn’t aware of is exactly how hard it is to load a library if it isn’t already in one of the directories specified by the java.library.path system property.

Initially, I thought I’d just be able to alter that property and the JVM would pick up the new locations. That turns out not to be the case, as is shown by this (closed) bug report.
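
To make that concrete, here’s a minimal sketch of the naive approach that doesn’t work (the directory and library name below are made-up placeholders):

	// Naive attempt – has no effect, because java.library.path is only read at JVM startup
	System.setProperty("java.library.path", "/opt/myapp/native");
	System.loadLibrary("mynativelib"); // still throws UnsatisfiedLinkError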

However, there is a solution, outlined in this post on the Sun forums, which revolves around using reflection to alter the usr_paths field of the ClassLoader class.

	import java.io.File;
	import java.io.IOException;
	import java.lang.reflect.Field;

	public static void addDir(String s) throws IOException {
		try {
			// This enables the java.library.path to be modified at runtime
			// From a Sun engineer at http://forums.sun.com/thread.jspa?threadID=707176
			Field field = ClassLoader.class.getDeclaredField("usr_paths");
			field.setAccessible(true);
			String[] paths = (String[])field.get(null);
			for (int i = 0; i < paths.length; i++) {
				if (s.equals(paths[i])) {
					return;
				}
			}
			String[] tmp = new String[paths.length+1];
			System.arraycopy(paths,0,tmp,0,paths.length);
			tmp[paths.length] = s;
			field.set(null,tmp);
			System.setProperty("java.library.path", System.getProperty("java.library.path") + File.pathSeparator + s);
		} catch (IllegalAccessException e) {
			throw new IOException("Failed to get permissions to set library path");
		} catch (NoSuchFieldException e) {
			throw new IOException("Failed to get field handle to set library path");
		}
	}

Obviously, I don’t think that’s portable across JVMs, though.
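
For what it’s worth, here’s roughly how you’d call it – again just a sketch, with a made-up directory and library name:

	// Hypothetical usage – append the directory, then load the native library as usual
	addDir("/opt/myapp/native");
	System.loadLibrary("mynativelib"); // now resolvable via the freshly-appended usr_paths entry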

Webdesign

For the last while I’ve been doing webdesign (yeah, the actual visual UI stuff, not just AJAX or something) at work, and – remarkably – for the first time since 1997 (yes – 1997!) I’ve enjoyed it.

Generally speaking my design tastes are different – or perhaps it would be better to say they reflect my unique sense of humour. For example, the original – and best – design for nicklothian.com featured a colour scheme generated by converting universal constants (the speed of light, e, etc.) to hex values. It was unique, and has yet to be duplicated (!!).

But doing serious webdesign led me to dig out an old, old review of the first website I ever built and maintained. This was when the web was young, CSS didn’t really work, Netscape 4 (!) was my browser of choice and I think I was running a pre-Slackware 1.0 Linux install, which I’d downloaded onto 12 floppies.

Website review, 1997 (actually, the article is from the May 1998 edition of Adelaide Review but I did the site in ’97)

Nick’s Two Laws of Software Engineering

So I’ve been working in software for over 10 years, and I figured it’s about time I came out with some highly scientific principles of software engineering. Follow these, and I personally guarantee success…

  1. Don’t try and do too much at once.
  2. Hire smart people.
It’s important that you do both of these – leave either out and you’ll probably fail. 

Firstly, software is too hard to try and get too much of it done at once. Don’t bother trying – no matter how smart your team is you’ll fail – or have to redefine success to something less than 100% satisfactory. If you’ve got a big project to do, then incremental delivery is the only way forward.

Secondly, no matter how small the project is, you need smart people. Software is too hard, and it has an amplifying effect on stupidity. Since the smallest bug can cause big problems, and the number of bugs decreases exponentially with smartness, you just can’t afford to have anything less than the smartest people available (I’ve got no evidence this bug vs smartness relationship is true, but it sounds kinda scientific & shit, so I’ll leave it in). However, if you have smart people, you have to stop them trying to do too much at once. See rule one.

That’s it. I now declare software development to be a solved field.

Android

So Android is out (a couple of months after I got a Nokia E71 – which is great BTW). People seem to think it will succeed merely on the basis of having an open platform. I’m sure that will help, but the real issue holding back mobile is the carriers, not the software platform. Getting a decent deal out of carriers was Apple’s biggest move with the iPhone, and Android phones will need to get similar deals. Ask a mobile developer how to get “on deck” on most carriers, and how much money they get from that (usually around 20%!), and compare that to the AppStore deal.

On not going to Google

So contrary to some self-inflicted rumors, I’m not off to Google London or Sydney. My wife has taken a new contract here in Adelaide so we’ll be staying here for at least another 12 months.

It is kind of disappointing, but I’ve been doing the Google recruiting process for a week short of 3 months now, been through 5 interviews (with somewhat mixed results – yes, I’ll do a post about that), turned down the opportunity to do more interviews for some position I had no interest in, and the last message I had from Google HR was that they would have “concrete clarification” about other positions for me on Wednesday. That was Wednesday three weeks ago.

So apparently Google have done studies which show that slowing page load times from 0.5 to 0.9 seconds cuts traffic by 20% – people just don’t wait around. Perhaps I might recommend a similar study for the recruitment process?

OTOH, all the Google engineering staff I met or talked to were great. So all in all it was a mixed experience – if you are thinking about trying it I’d recommend it, but I’d also recommend not trying anything too complicated (like saying yes to a recruiter from another continent..)

FriendFeed – 6 likes, 3 don’t likes and 2 requests.

I’m a huge fan of FriendFeed.

I think that too many internet pundits see it as competition for Twitter, which distracts from the real innovations it has provided. FriendFeed is the first genuinely new way of reading RSS feeds since the distinction between river-of-news and source-centric reading emerged.

The addition of a social layer which allows people to define who they are (by what data they produce) makes the subscription process much easier. I could see myself suggesting that family members sign up to it so they’ll see what I’ve been doing – rather than the alternative of them having to remember to visit Flickr, and my 4 blogs etc, etc, or use a traditional blog reader which will just confuse them.

At work, we started building a similar service for a vertical market group (educators in Australia) in roughly the same time frame as FriendFeed. There are a number of differences (we’ve only had 1.5 developers working on it over that time, for a start!), but the one thing FriendFeed has which I want is the commenting on items.

We actually built a proof-of-concept for a social-bookmarking service which would allow commentary on posted items (as well as on tags), but that hasn’t been incorporated into our service yet. However, we didn’t think of building that directly into the newsfeed, which is such a small thing, but makes such a difference.

Apart from that, here’s some things which I love about FriendFeed but most people probably don’t think about:

  1. The way your profile shows who you follow, not who follows you. Showing the number of people who follow you turns it into a popularity contest, which is kind of silly.
  2. The subtle influence of “Like” by your friends on what shows in your feed. If one of your friends “Likes” something then it will show up in your feed as posted by a friend-of-your-friend.
  3. The ordering of who liked something by how far away they are from you in the social graph (or showing your friends first, anyway).
  4. The search. The fact that it is easier to find one of my own del.icio.us posts on FriendFeed than in del.icio.us itself is scary.
  5. Imaginary Friends is a brilliant idea, which solves a really big problem. Whoever thought of that – in particular the name! – deserves lots of “Like”s
  6. The “multi-level” nature of it. You can start with using it just as a way of following people, then discover features just as you think “wouldn’t it be great if I could….”

Things I don’t like:

  1. “Like”. The meaning of the word makes it difficult to use in some circumstances (eg, reading about something bad which you want to remember). In our social bookmarking prototype we used a system very much like the “Rating an Object” pattern from the Yahoo design pattern library (which is a great resource for social software patterns, BTW). I think that actually rating an object may not quite have the correct social connotations for FriendFeed, but perhaps a “Star” (like in GMail) might be appropriate?
  2. I don’t think FriendFeed rooms work, yet. Maybe it is that I don’t quite get them, but firstly I can’t see a good way to find them, and secondly the fact that items need to be specifically re-shared into them is confusing. (OTOH, I’m probably wrong here, and they are probably already heavily used. We built a similar feature called communities (eg), and they have worked pretty well. We do have a discovery mechanism, though (as well as a more subtle recommendation thing when a person edits their profile)).
  3. Sometimes a friend will share an item, and because people keep commenting it will remain towards the top of my feed for a long time. I know I can hide it, but I’d prefer the reverse – if I don’t “Like” it or comment on it then it should go away quicker.

Feature requests (some of these are things we’ll probably build into our system anyway):

  1. Tagging. I want to use FriendFeed as a social bookmark system.
  2. Direct messages, or an “@”-like syntax which will make sure a message is received by the target.

Anyway, like I said: I’m a big fan. I think FriendFeed is already beautiful software, and it’s a joy to see the subtle, continuous refinement and evolution it goes through.

Google Developer Day 2008

I went to the Google Developer Day in Sydney today. It was pretty interesting, even if I can’t say I learnt much new, perhaps because I concentrated on sessions about AppEngine and OpenSocial.

The AppEngine ones were excellent, but unfortunately I’d previously watched the video of the same talk from Google I/O. The OpenSocial sessions weren’t as good. I think they suffered a little from being unclear about the exact audience they were pitched at. In particular they seemed to jump around from assuming you knew nothing at all about OpenSocial to assuming you understood the difference between a gadget, a container, a REST server and Shindig.

The final session was an OpenSocial code lab, which involved the audience attempting to copy down random tiny URLs to pages we saw for a few seconds. If you missed one of the URLs then nothing you tried would work for you afterwards. We used the iGoogle sandbox as a container, which seems to be pretty buggy and confusing. On the upside I did have a good conversation with John Hjelmstad about Shindig etc, as well as some interesting ideas he had for CSS spriting.

By all accounts, the Gears & Android sessions were really good, so perhaps I should have gone to them instead.

Finally, my team at work were fortunate enough to be one of six finalists for the Google Speedgeeking prize, which seems to be a prize for the best mashups done using a Google API. We didn’t win, but we got a trophy anyway.

Google Speedgeeking 2008 finalist

The (s|S)emantic (w|W)eb

“The semantic web is the future of the web and always will be”

Peter Norvig, speaking at YCombinator Startup School

I’m sick of Semantic Web hype from people who don’t understand what they are talking about. In the past I’ve often said <insert Semantic Web rant here> – now it’s time to write it down.

There are two things people mean when they say the “semantic web”. They might mean the W3C vision of the “Semantic Web” (note the capitalization) of intelligent data, usually in the form of RDF, but sometimes microformats. Most of the time people who talk about this aren’t really having a technology discussion but are attempting a religious conversion. I’ve been down that particular road to Damascus, and the bright light turned out to be yet another demonstrator system which worked well on a very limited dataset, but couldn’t cope with this thing we call the web.

The other thing people mean by the “semantic web” is the use of algorithms to attempt to extract meaning (semantics) from data. Personally I think there’s a lot of evidence to show that this approach works well and can cope with real world data (from the web or elsewhere). For example, the Google search engine (ignoring Google Base) is primarily an algorithmic way of extracting meaning from data and works adequately in many situations. Bayesian filtering on email is another example – while it’s true that email spam remains a huge problem it’s also true that algorithmic approaches to filtering it have been the best solution we’ve found.

The problem with this dual meaning is that many people use it to weasel out of addressing challenges. Typically, the conversation will go something like this:

Semantic Web great, solve world hunger, cure the black plague bring peace and freedom to the world blah blah blah…

But what about spam?

Semantic Web great, trusted data sources automagically discovered, queries can take advantage of these relationships blah blah blah…

But isn’t that hard?

No, it’s what search engines have to do at the moment. The semantic web (note the case change!) will also extract relationships in the same way.

So.. we just have to mark up all our data using a strict format, and then we still have to do the thing that is hard about writing a search engine now – spam detection.

Yes, but it’s much easier because the data is much better.

Well, it’s sort of easier to parse, and in RDF form it is more self descriptive (but more complicated), but that only helps if you trust it already.

Well that’s easy then – you only use it from trusted sources

Excellent – let’s create another demo system that works well on limited data but can’t cope with this thing called the web.

Look – I don’t think the RDF data model is bad – in fact, I’m just starting a new project where I’m basing my data model on it. But the problem is that people claim that RDF, microformats and other “Semantic Web” technologies will somehow make extracting information from the web easier. That’s true as far as it goes – extracting information will be easier. But the hard problem – working out what is trustworthy and useful – is ignored.

The Semantic Web needs a tagline – I’d suggest something like:

Semantic Web technologies: talking about trying to solve easy problems since 2001.

RDF could have one, too:

RDF: Static Typing for the web – now with added complexity tax.

So that’s my rant over. One day I promise to write something other than rants here – I’ve actually been studying Java versions of Quicksort quite hard, and I’ve got some interesting observations about micro optimizations. One day.. I promise…

The problem with OpenID is…

The problem with OpenID is branding – people get (very) confused when they get taken off-site to log in. I’ve watched usability testing of this, and it is truly horrible. Obviously this isn’t unique to OpenID – it applies equally to any federated identity solution (in fact, Shibboleth-based federations are even worse than OpenID in this respect).

I think user education will help, but it would be really good to extend OpenID so that a logo can be put on the identity provider’s site, letting the user see they are logging into site “blah” via whatever OpenID provider they use.