Plaxo revisited

So I got some helpful feedback on my last post from John McCrea, who is VP of Marketing for Plaxo. So I logged back into Plaxo, had a little hunt around and, after about 5-10 minutes, found my addresses.

My initial reaction was "Doh! What an idiot I was", but then after a few minutes' reflection I started to think about user interface design and intuitive learning. When I showcase new functionality to my users, they will often give feedback about how easy a feature is to use or find. Often, as developers or familiar users of a system, we don't see that something might not be named or located in the most natural place.

So I thought I might also give some feedback to Plaxo that might help improve the user interface.

This is where the confusion started for me:

[Screenshot: Plaxo options]

Option 1 leads you to the old Plaxo address book system

Option 2 leads you to an import wizard to get your address book from Outlook, Gmail, etc.

Option 3 doesn’t contain any of your old address book “connections”

So these are my thoughts:

  • Links with nearly the same names but leading to totally different actions are confusing
  • Option 3: the domain language seems confused; surely the people I have already shared my address book with are already "connected" in some way. Maybe that connection only allows address book features and not all the new features.
  • Option 2 doesn't seem to support importing from option 1 (you wouldn't need this option if they were already connected)
  • Option 1 feels like a skinned version of the old app and not really integrated into the whole experience

Early morning rant…

I've been an occasional user of Plaxo (just the address book) for the last 3 years, using it to keep a few important addresses up to date. Anyway, today was the first time I had used it since they launched Pulse, and guess what: all my contacts have gone!

Now that's not very clever for a social networking site, and it reminds me of what Duncan Cragg would say about Web 2.0 and making sure all your data is public, readable in some open format and ideally distributed over the interweb. In fact this really goes back to why databases were created in the first place: data lives longer than applications. Well, not in Plaxo's case.

More Ubuntu 7.10 notes for Dell D630

So the wifi seemed to work pretty well out of the box, but I noticed that after prolonged use it would just suddenly freeze, and the only way to make it come back was to reboot. (You couldn't even reload the module or restart the networking stack.)

You can tell if this will be a problem for you by running iwconfig: if you see lots of "Invalid misc" errors and the signal and noise levels are fixed at -60dBm, then you will want to switch from the ipw3945 module to iwl3945.

To try it out:

sudo modprobe -r ipw3945
sudo modprobe -r ieee80211
sudo modprobe -r ieee80211_crypt
sudo modprobe -r mac80211
sudo modprobe iwlwifi_mac80211
sudo modprobe iwl3945

Now if you run iwconfig you should see wlan0 (plus wmaster0, which we'll just ignore). If it's called wlan0_rename then run

sudo nano /etc/udev/rules.d/70-persistent-net.rules

Comment out the line reserving eth1 for the ipw3945 module.

To make this all permanent, you need to add the iwl3945 and iwlwifi_mac80211 modules to /etc/modules:

echo iwlwifi_mac80211 | sudo tee -a /etc/modules
echo iwl3945 | sudo tee -a /etc/modules

And now stop ipw3945 and its dependencies from loading by blacklisting them:

echo blacklist ipw3945 | sudo tee -a /etc/modprobe.d/blacklist
echo blacklist ieee80211 | sudo tee -a /etc/modprobe.d/blacklist
echo blacklist ieee80211_crypt | sudo tee -a /etc/modprobe.d/blacklist
echo blacklist mac80211 | sudo tee -a /etc/modprobe.d/blacklist

Notes on installing Ubuntu Gutsy Gibbon 64bit edition on a Dell D630

I had to start the install CD in safe VGA mode, and I have not got the splash screen to display yet.

To get sound to work you don't need to recompile your kernel or go back to an earlier version; just follow method G on this page:

Gutsy Intel HD Audio Controller

or

sudo aptitude install linux-backports-modules-generic

sudo gedit /etc/modprobe.d/alsa-base

In the editor, add the following line at the end of the file:

options snd-hda-intel model=dell-m42

Save the file and reboot to get sound working correctly.

If sound is too low, go to Volume Control’s Preferences and add “Front” (and any other playback tracks) and make sure they are set to the maximum.

What I would love to do

Smart Contracts
http://www.erights.org/smart-contracts/index.html
http://www.waterken.com/dev/IOU/Design/

We may end up getting RESTful smart contracts for free due to all the goodness in the current design. This could lead (my client).com to an unheard-of level of security and possibly make it the most high-profile public implementation of smart contracts to date. Watch this space.

Auto Save Enhancement

We currently auto save on page transition, but for JavaScript clients we could do this on any change. Imagine a world where your browser behaves like IntelliJ and you never lose any data you type in.

Web 3.0 Semantic Web

Because our data is so dynamic we may have the opportunity to go from Web 2.0 to Web 3.0, and again this could make it one of the first killer semantic web applications.

What we will be doing in the future

Decoupling Resource selection from Representation selection in our URLs

This will allow resources (domain objects/documents) to be selected (think SQL WHERE clauses) and presenters to be re-used by the client editors without specific coding by developers. So a URL like "/video/skins/latest" could return all videos tagged with skins, or possibly any resource tagged with video and skins. What we are aiming for is totally dynamic, fluid URLs with uses we never imagined.
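
To make this concrete, here is a minimal sketch of how such a URL might be interpreted (the class and method names are purely illustrative, not our actual routing code): the path segments become the selection criteria, while the representation is negotiated separately.

import java.util.Arrays;
import java.util.List;

// Illustrative only: turn "/video/skins/latest" into selection criteria.
public class UrlSelection {

    public static List<String> tagsFrom(String path) {
        // Every path segment is treated as a tag; "latest" could equally be
        // interpreted as an ordering rather than a tag.
        return Arrays.asList(path.replaceFirst("^/", "").split("/"));
    }

    public static void main(String[] args) {
        // Prints [video, skins, latest]; the WHERE clause is built from these
        // and the representation (HTML, RSS, etc.) is chosen separately.
        System.out.println(tagsFrom("/video/skins/latest"));
    }
}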

Mashups

Editors will be able to mash up YouTube videos, Facebook items, MySpace pages, Flickr images, etc. pretty much anywhere on the site. Users will be able to mash up client content / videos onto any other site.

Social Networking

Allowing users to rate each other's content, interact with client content / users and other social networking sites, and generally use the site in new and interesting ways.

Caching

We will be totally embracing web caching so that all non user-specific content will be cacheable. User-specific content will be included client side (in an accessible way) so that containing pages do not need to be made non-cacheable.
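
As a rough illustration of the intent (this is a hypothetical filter, not our production code, and the "userSpecific" request attribute is invented for the example), a servlet filter could mark everything that is not user specific as publicly cacheable:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: anything not flagged as user specific gets public
// cache headers so shared caches can serve it.
public class CacheHeaderFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletResponse http = (HttpServletResponse) response;
        if (request.getAttribute("userSpecific") == null) {
            http.setHeader("Cache-Control", "public, max-age=300");
        } else {
            http.setHeader("Cache-Control", "private, no-store");
        }
        chain.doFilter(request, response);
    }

    public void init(FilterConfig config) {}
    public void destroy() {}
}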

What I have been doing for the last few months

So I thought I would start my first post with what I have been working on for the last couple of months. Naturally I have removed all references to my current client.

What we have built already:

RESTful Web Site and Web Components

So just as REST exposes your web service API / end points as URLs and embraces simple message / state transfer, we are going a step further and making all our web components (think HTML widgets) addressable by a URL. Composition of components can then be done either server side or client side, and is therefore not language specific, so one could compose a widget out of other widgets written in Ruby, Python, C# etc. In fact the widgets don't even need to be on our server or written by us – think mashups on steroids.
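
As a rough sketch of the composition idea (the widget URL is made up and this is not our actual composer), server-side composition amounts to fetching a widget's HTML from its URL and inlining it:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Hypothetical sketch: a widget is just a resource with a URL, so composing a
// page server side is little more than fetching its HTML and inlining it.
public class WidgetComposer {

    static String fetch(String widgetUrl) throws Exception {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(widgetUrl).openStream(), "UTF-8"));
        StringBuilder html = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            html.append(line).append('\n');
        }
        in.close();
        return html.toString();
    }

    public static void main(String[] args) throws Exception {
        // The widget could live on our server or anyone else's, and be written
        // in Ruby, Python or C# - the composer neither knows nor cares.
        String page = "<div class=\"sidebar\">"
                + fetch("http://example.com/widgets/latest-videos") + "</div>";
        System.out.println(page);
    }
}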

Post/Redirect/Get pattern
http://en.wikipedia.org/wiki/Post/Redirect/Get

We have built a simple interface contract that enforces that all POSTs return a redirect so that the Back button will always work. This combined with the transaction boundaries ensures that all GET requests are idempotent.
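
Something along these lines illustrates the contract (the interface and dispatcher here are hypothetical simplifications, not the code we actually shipped):

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch of the contract: a POST handler cannot render a page,
// it can only say where to redirect to, so Back always lands on a GET.
interface PostHandler {
    /** Handle the POST and return the URL the browser should be sent to. */
    String handlePost(HttpServletRequest request);
}

class PostDispatcher {
    void dispatch(PostHandler handler, HttpServletRequest request,
                  HttpServletResponse response) throws IOException {
        String location = handler.handlePost(request);
        // Redirect after POST so a refresh or Back never re-submits the form.
        response.sendRedirect(location);
    }
}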

Simple transaction boundaries

Transaction boundaries have been enforced so that developers do not need to worry about them at all in production code. GET requests run in a transaction that will always be rolled back, POST requests will always commit if successful and always roll back if an error is thrown. Some serious work went into taming Hibernate so that it did not auto commit, had no mutable static state and was completely encapsulated.
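
A stripped-down sketch of that boundary rule using plain Hibernate (the class here is hypothetical and far simpler than the real thing):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

// Hypothetical sketch: GETs always roll back, POSTs commit on success and
// roll back on any exception.
public class TransactionBoundary {

    private final SessionFactory sessionFactory;

    public TransactionBoundary(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void around(String httpMethod, Runnable handler) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            handler.run();
            if ("POST".equals(httpMethod)) {
                tx.commit();   // writes only ever happen here
            } else {
                tx.rollback(); // GETs can never change the database
            }
        } catch (RuntimeException e) {
            tx.rollback();     // failed POSTs leave no partial state behind
            throw e;
        } finally {
            session.close();
        }
    }
}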

Extended SiteMesh
http://www.opensymphony.com/sitemesh/

We are using SiteMesh not only for decoration of the site but also for the composition of web components and the ability to extract any content from a page, so that when combined with AHAH only the smallest payload is returned to the client, keeping response times down.

AHAH (Asynchronous HTML and HTTP)
http://microformats.org/wiki/rest/ahah

This enables much simpler JavaScript to be written (think JavaScript that never needs to replicate the domain model or business rules on the client) and allows for complete reuse of server-side logic. Combined with behaviour CSS bindings ( http://www.bennolan.com/behaviour/ ), this leads to NO in-line JavaScript nastiness and more semantic HTML.

No Session State just persistent documents

We have NO session state; all state changes to documents are persisted, which leads to a number of advantages: users never lose data they have filled in, and marketing can see exactly how far a user got before bailing out of a workflow. Users can fill in data in any order they want, and only when they are ready do we action their request. Domain objects are only updated once the document has been validated.
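
A toy sketch of the document idea (all the class and field names are invented for illustration): every save just records the user's answers, and the domain object is only touched once the document validates.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: partial input is persisted on every save, in any order;
// the domain model is only updated once the whole document is valid.
public class RegistrationDocument {

    private final Map<String, String> answers = new HashMap<String, String>();

    public void record(String field, String value) {
        answers.put(field, value); // saved immediately, so nothing is ever lost
    }

    public boolean isValid() {
        return answers.containsKey("name") && answers.containsKey("email");
    }

    public void applyTo(UserAccount account) {
        if (!isValid()) {
            throw new IllegalStateException("document not yet valid");
        }
        account.update(answers.get("name"), answers.get("email"));
    }
}

// Minimal stand-in for a domain object, just to keep the sketch self-contained.
class UserAccount {
    void update(String name, String email) { /* domain behaviour lives here */ }
}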

In memory web acceptance testing

Think super fast builds, no deployment (in fact no need for a web server), pure Java acceptance tests (i.e. refactorable) and the 80/20 rule on what is good enough to give you confidence that the system works.

Progressive Enhancement and Accessibility
http://en.wikipedia.org/wiki/Progressive_Enhancement

All stories are played as a vanilla HTML version first, with a second story for the JavaScript enhancements. This leads to cleaner, simpler, more semantic HTML and also allows feedback from the first story to be tacked onto the second story.

NO Logic in the View
http://www.stringtemplate.org/

We are using StringTemplate to enforce that NO logic can be written in the view; plus it's super fast and has no "for loops" (think Ruby each blocks).
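
For example, a minimal StringTemplate 3 snippet (illustrative only, not one of our real templates): the template can only display the attributes it is handed, so any decision about what to show has to be made in Java.

import org.antlr.stringtemplate.StringTemplate;

public class NoLogicInTheView {
    public static void main(String[] args) {
        // There is no for loop; iteration is expressed by applying a
        // sub-template to each value of the "videos" attribute.
        StringTemplate page = new StringTemplate(
                "<h1>$title$</h1><ul>$videos:{v|<li>$v$</li>}$</ul>");
        page.setAttribute("title", "Latest videos");
        page.setAttribute("videos", "Skins trailer");    // repeated calls build a list
        page.setAttribute("videos", "Behind the scenes");
        System.out.println(page.toString());
    }
}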
