Misc

7th February 2022

I have worked in many big companies as a freelancer and as a Scrum Master, and one take-away for me has been how often there is mistrust between “the business side” and “the development team”. As a Scrum Master it has been my job to bring the sides together, remind them that we are nothing without each other, and hopefully also bring everyone involved a bit of understanding of all aspects of developing software and doing business with it.

As a Scrum Master, I have coached or taught many Product Owners, and the main lessons have been: Trust the team. Tell them (somewhat clearly) what you need and why, and then work with them to get to a good solution. Listen to their concerns – it will help you long term.

I have also had to work with many teams to (re)build trust in the Product Owner – to help them communicate their concerns clearly, help them understand the role, and understand why we are not always aligned in our wishes for the product. And hopefully to highlight when the Product Owner listens to the team, and to show why the Product Owner makes those product decisions.

What has been true for all the journeys my teams have been through is that the work has flowed much better with trust and understanding between the people involved.

A trick I use often is to nudge the language used. Nudge the Product Owner to always say “we”, and the team to include the Product Owner in the “we” that they already have in their heads. It is a dirty trick, because it seems like such a small thing, but it has a big effect on the team's self-image. Seen over a long period of time, you can really appreciate the difference. No more “us and them” language. “We” take on responsibility for mistakes made and “we” are the actors in the successes achieved.

Talking to all team members about how we can play different roles with different viewpoints, and that these viewpoints need to be balanced and respected, is key. Maybe a few team members are especially good at remembering the architecture view, the security view, the code quality view, the business view or the user feedback view, and all of these are important. Probably not equally important, but weighed in a certain way that depends on our situation and that can change over time. We can disagree on the trade-offs we make, but we have to respect that there are trade-offs to be made, and trust that when we make those decisions, we do it with good intentions and to the best of our knowledge at the time.

There is an excellent way of expressing this trust: the Prime Directive:

“Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.”

– Norm Kerth, Project Retrospectives: A Handbook for Team Reviews

The Prime Directive is usually mentioned in the context of the retrospective, but these are really words to live by. I say: hang a poster with it in your workspace and remind everyone about it often. To build software well as a team, we need trust.

25th March 2014

One day when I was surfing cat videos – I mean, professionally relevant videos – on YouTube, I noticed a red progress indicator at the top of the page:

My first thought was – I want this in my apps. How did they do it?
If you lower the bandwidth, a pattern emerges:

[Animation: YouTube's progress bar on a slow connection]

Aha! When you click on a video, YouTube starts a request to fetch information about it, animates the bar to 60%, where it waits until the call is completed, and finally animates it to 100%.

Utter deception, but without doubt a well thought-out solution. As long as you are on a sufficiently fast connection and the amount of data that needs to be transferred is limited, the illusion is complete.
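The pattern is easy to reproduce. A minimal sketch of the trick – the element id, the transition and the 60% threshold are my own assumptions, not YouTube's actual code:

var bar = document.getElementById('progress-bar');     // assumed element, styled as a thin red bar

function loadVideoInfo(url) {
  bar.style.transition = 'width 0.5s';
  bar.style.width = '60%';                              // animate optimistically to 60% right away ...

  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    bar.style.width = '100%';                           // ... and only jump to 100% when the call completes
  };
  xhr.onerror = function () {
    bar.style.width = '0';                              // reset (or show an error) if the request fails
  };
  xhr.send();
}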

It has to be said that even if YouTube cheats a little, it is a much better solution than the spinners you see on the majority of sites with asynchronous requests today.

A spinner gives no sense of progress and no indication of whether the transfer has stalled. I’ve also experienced many sites where errors aren’t handled correctly and you end up with an eternal spinner – or at least until you lose patience and refresh the page.

Dan Saffer expresses it in simple terms in the book Designing Gestural Interfaces: Touchscreens and Interactive Devices

Progress bars are an excellent example of responsive feedback: they don’t decrease waiting time, but they make it seem as though they do. They’re responsive.

With the very diverse connection speeds we have today, I’d say that the need is even greater – the YouTube example from before might hit the sweet spot on an average connection, but if you are sitting far from high-speed connectivity, maybe on a mobile EDGE connection (never mind that you probably cannot see the video itself), the wait can easily outlast your patience.

As Jakob Nielsen writes in Usability Engineering, feedback is important, especially if the response time varies:

10 seconds is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.

Requirements for the solution

There are many ways to add continuous feedback, but each comes with its own limitations. To be able to use the solution in most problem areas, we need to set some requirements:

The solution should:

  • Be easy to integrate into existing solutions without too much extra work and (almost) without changes to the server.
  • Work across domains, when the client lives on one domain and communicates with a server on a different one (also known as Cross-Origin Resource Sharing, or CORS).
  • Work in all browsers.
  • Be able to send a considerable amount of data in both directions.

 

The first attempts

When communicating with a server from JavaScript, you normally start a request and a little while later get a status indicating whether the call succeeded or not – so no continuous feedback. The naive solution would be to make multiple small requests, but the overhead of each request is relatively expensive, so the combined overhead would be too big.

If you look at HTTP/1.1, it is possible to split a response from the server into multiple parts – the mechanism is called chunked transfer encoding:

[Diagram: an HTTP response split into chunks]

If we send an agreed number of chunks, we are able to tell the user how far the request has progressed.
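As a sketch of the idea, here is what a chunked response could look like on the server – Node.js is used purely as an illustration, the actual server-side part of this post is ASP.NET MVC:

// Illustrative Node.js sketch: when no Content-Length is set, the response
// is sent with chunked transfer encoding, one piece at a time.
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/html' });

  var total = 10, sent = 0;
  var timer = setInterval(function () {
    sent++;
    res.write('<!-- chunk ' + sent + ' of ' + total + ' -->\n');  // each flushed chunk lets the client update its progress
    if (sent === total) {
      clearInterval(timer);
      res.end('{"done":true}');                                   // the final chunk carries the actual payload
    }
  }, 200);
}).listen(8080);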

 

Traditional AJAX call

Traditional AJAX calls – typically executed via XMLHttpRequest – generally don’t give any frequent feedback. If you want to execute them as CORS calls you need to make changes to the server, and an extra preflight request is added if the call carries authorization. Support in older browsers is also limited.
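For comparison, a minimal sketch of the traditional pattern – a single callback when the whole response has arrived and nothing in between (the URL is just a placeholder):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://example');                     // a CORS call additionally requires Access-Control-* headers on the server
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {                          // we only hear back when everything is done
    if (xhr.status === 200) {
      console.log(xhr.responseText);
    } else {
      console.error('Request failed: ' + xhr.status);
    }
  }
};
xhr.send();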

 

JSONP

JSONP is an old technique to circumvent the cross-origin restrictions on AJAX: you basically add a script tag pointing to the remote server, and the returned script is then executed within the scope of the current page.
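A stripped-down JSONP call looks roughly like this – the URL and the callback name are placeholders:

// Minimal JSONP sketch: load a script from the remote server and let it call a global function.
function jsonp(url, callbackName) {
  var script = document.createElement('script');
  window[callbackName] = function (data) {             // the remote script is expected to call this function
    console.log('Got data', data);
    delete window[callbackName];
    document.head.removeChild(script);
  };
  script.src = url + '?callback=' + callbackName;
  document.head.appendChild(script);
}

// The server responds with something like: handleData({"name": "Ford"});
jsonp('http://example/api', 'handleData');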

In a weak moment I tried to implement chunked transfer in a JSONP call – multiple JavaScript method calls in one script that would then be executed as they arrived. Of course it didn’t work – the browser won’t execute any code until the JavaScript file has been fully loaded.

 

Other technologies

I also looked at Server-Sent Events, but that’s not supported by Internet Explorer. WebSockets require bigger changes on the server side and also bring some challenges with security, cookies, etc., since we are outside the traditional HTTP model.

“Failure is always an option.”

– Adam Savage

The final solution

HiddenFrame is a technique where you create a hidden iframe and let it fetch a lump of HTML from the server. If there are any script tags in this lump, they are executed as the browser encounters each closing tag. So there we have a potential solution.

Sending data is no problem either, because we can start the request itself by submitting a form post that targets the hidden iframe.
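A rough sketch of the mechanism – my own simplified version, not the actual LittleConvoy internals. The page posts a form into a hidden iframe, and the server streams back HTML whose script tags report progress to the parent page; in this sketch the reporting is done via postMessage, which also works across domains:

// Listen for progress messages from the hidden iframe.
window.addEventListener('message', function (event) {
  console.log('progress', event.data);                 // e.g. { percent: 40 } ... { percent: 100, result: ... }
});

var frame = document.createElement('iframe');
frame.name = 'transport';
frame.style.display = 'none';
document.body.appendChild(frame);

var form = document.createElement('form');
form.method = 'POST';
form.action = 'http://example';                        // the same placeholder URL as in the client example below
form.target = 'transport';                             // the response loads inside the hidden iframe
var input = document.createElement('input');
input.type = 'hidden';
input.name = 'name';
input.value = 'Ford';
form.appendChild(input);
document.body.appendChild(form);
form.submit();

// The server streams the response in pieces, for example:
//   <script>parent.postMessage({ percent: 10 }, '*')</script>
//   <script>parent.postMessage({ percent: 20 }, '*')</script>
//   ...
//   <script>parent.postMessage({ percent: 100, result: { echo: 'Ford' } }, '*')</script>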

And where do JavaScript Promises fit into all this?

Well, to get a good API I’ve used a Promise implementation that offers continuous feedback via a progress method and handles the success and failure scenarios:

new LittleConvoy.Client('HiddenFrame').send({ url: 'http://example' }, { name: 'Ford'})
    .progressed(function (progress) {
      // progress contains the progress in percent and will be called 10 times
    })
    .then(function (data) {
      // the call was a success and data contains the result
    })
    .catch(function (message) {
      // the call failed and message contains the error message object
    });

On the server side a library is added, currently only available for Microsoft ASP.NET MVC. The JSON-producing methods that you already have are simply decorated with an attribute that makes sure everything works:

public class DemoController : Controller
{
    [LittleConvoyAction(StartPercent = 40)]
    public ActionResult Echo(object source)
    {
      return new JsonResult {Data = source };
    }
}

 

Demo or it didn’t happen

  • A small demo is available here.
  • The code is available at GitHub
  • The library can be installed via the .NET package manager NuGet as LittleConvoy.

 

The future

  • The transport layer itself is separate from the client, so extra transports can be added – adding traditional AJAX and WebSockets would be an obvious choice in the future, if it can be done without too many changes on the server side.
  • Some Promise implementations offer cancellation; it would be great if you could cancel a call.
  • Sending data gives no feedback – it would be great if that were somehow possible.

 

Related work

  • Comet is a collection of technologies that offer push for the browser; it also contains an implementation of iframe communication, but it is targeted at permanent communication channels.
  • SignalR is, like Comet, built for push and permanent communication channels.
  • Socket.IO is an abstraction over WebSockets that contains a fallback to iframe communication, but it is more targeted at Node.js.

This post is also available in Danish at QED.dk

3rd March 2014

In my previous post (in Danish) I looked at how to perform asynchronous calls by using promises. Now the time has come to pick the library that fits the next project.

There are a lot of variants and the spread is huge. A search for promise via the node package manager npmjs.org gave 1150 libraries which either provide or depend on promises. Of these I have picked 12 different libraries to look at; all are open source and all offer a promise-like structure.

Updates:

  • 2014/03/06 – Fixed a few misspellings (@rauschma via Twitter)
  • 2014/03/07 – Removed raw sizes, since they didn’t make much sense (@x-skeww via Reddit)
  • 2014/03/07 – Added that catiline uses lie underneath. (@CWMma via Twitter)
  • 2014/03/07 – Added clarification on what the test does. (@CWMma via Twitter)

The APIs across the libraries are almost alike, so I’ve decided to look at:

Features
What kind of generic promise related features does each library offer?

Size
And I’m thinking mostly browsers here – how many extra bytes will this add to my site?

Speed
How fast are the basic promise operations in the library? You would expect these to be executed many times, so this is important.

 

The libraries

First an overview of the selected candidates, their license and author. Note that each name links to the source of the library (typically GitHub).

 

Bluebird (MIT, Petka Antonov)
Loaded with features and should be one of the fastest around, with special emphasis on error handling via good stack traces. Features can be toggled via custom builds.

Catiline (MIT, Calvin Metcalf)
Mostly designed for handling web workers, but contains a promise implementation. Uses lie underneath.

ES6 Promise polyfill (MIT, Jake Archibald)
Borrows code from RSVP, but implemented according to the ECMAScript 6 specification.

jQuery (MIT, The jQuery Foundation)
Classic library for cross-browser DOM manipulation.

kew (Apache 2.0, The Obvious Corporation)
I’m guessing it is pronounced ‘Q’; it can be considered an optimized edition of Q with a smaller feature set.

lie (MIT, Calvin Metcalf)

MyDeferred (MIT, RubaXa)
Small Gist-style implementation.

MyPromise (MIT, [email protected])
Small Gist-style implementation.

Q (MIT, Kris Kowal)
Well-known implementation; a light edition of it can be found in the popular AngularJS framework from Google.

RSVP (MIT, Tilde)

when (MIT, cujoJS)

YUI (BSD, Yahoo!)
Yahoo’s library for cross-browser DOM manipulation.

 

Features

The following is a look at the library feature set, looking only at features directly linked to promises:

 

Promises/A+ Progression Delayed promise Parallel synchronization Web Workers Cancellation Generators Wrap jQuery
Bluebird ✓ (+389 B) ✓ (+615 B) ✓ (+272 B) ✓ (+396 B) ✓ (+276 B)
Catiline
ES6 Promise polyfill
JQuery
kew
lie
MyDeferred
MyPromise
Q
RSVP
when
Yui


The numbers in parentheses next to Bluebird are the additional size in bytes that each feature adds.

Promises/A+
Is the Promises/A+ specification implemented?

Progression
Are methods provided for notifications about the status of an asynchronous task before it completes?

Delayed promise
Can you create a promise that is resolved after a specified delay?

Parallel synchronization
Are there methods for synchronizing multiple operations – can we get a promise that resolves when a bunch of other promises have resolved?

Web Workers
Can asynchronous code be executed via a web worker – pushed to a separate execution thread?

Cancellation
Can promise execution be stopped before it is finished?

Generators
Are the upcoming JavaScript generator functions supported?

Wrap jQuery
Can promises produced by jQuery be converted to this library’s promises?
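To make the feature names concrete, here is a small sketch using Q's API as an example (assuming Q is loaded; the other libraries offer similar methods under different names):

// Delayed promise: resolved after the given number of milliseconds.
Q.delay(500).then(function () { console.log('half a second later'); });

// Parallel synchronization: one promise that resolves when all the others have resolved.
Q.all([Q.delay(100), Q.delay(200)]).then(function () { console.log('both delays are done'); });

// Progression: notify listeners before the task completes.
function slowTask() {
  var deferred = Q.defer();
  var done = 0;
  var timer = setInterval(function () {
    done += 25;
    deferred.notify(done);                             // fires the .progress callback below
    if (done === 100) {
      clearInterval(timer);
      deferred.resolve('finished');
    }
  }, 100);
  return deferred.promise;
}

slowTask()
  .progress(function (percent) { console.log(percent + '%'); })
  .then(function (result) { console.log(result); });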

 

Size

Every library has been minified via Google's Closure Compiler, all run with the 'Simple' optimization level to prevent any damaging changes. For libraries that support custom builds I have picked the smallest configuration that still supports promises. The result includes the compression applied in the HTTP stack, so it is the actual number of bytes one would expect to be added to the application when using each library:

 

[Chart: minified and compressed size per library]

Speed

The speed has been measured via the site jsPerf, which makes it possible to execute the same tests across a lot of different browsers and platforms, including mobile and tablets. The test creates a new promise with each library and measures how much latency is imposed on the execution of the asynchronous block (see a more detailed explanation here). Note that the test was not created by me but by a lot of fantastic people (the current version is 91). The numbers are averages across platforms:

 

[Chart: operations per second per library]
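Roughly speaking, each test case measures how quickly a freshly resolved promise gets around to running its callback. A simplified illustration of the idea – my own sketch, not the actual jsPerf test code – using the ES6-style API as an example:

function benchCase(done) {
  new Promise(function (resolve) {
    resolve(42);                                       // resolve immediately ...
  }).then(function (value) {
    done();                                            // ... and measure how long until the callback actually runs
  });
}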

Conclusion

Over half of the world's websites already use jQuery. If you have worked with promises in jQuery, you quickly find that they are inadequate. I have previously had problems with failing code that doesn't reject the promise on error as you would expect; instead the error still bubbles up and ends up as a global browser error. The promise specification dictates that errors should be caught and the promise rejected, which is not what happens in jQuery.
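A small example of the difference, with jQuery 1.x/2.x behaviour as I have experienced it and Q standing in for the Promises/A+ side (the URL is a placeholder):

// jQuery (before 3.0): a throw inside a callback is not caught by the promise chain;
// it escapes as a global error and the .fail handler is never called.
$.getJSON('/api/items')
  .then(function (items) { throw new Error('boom'); })
  .fail(function () { console.log('never reached for the thrown error'); });

// Promises/A+ (here via Q, which can also wrap jQuery's promise): the same throw
// rejects the next promise in the chain and ends up in .catch.
Q($.getJSON('/api/items'))
  .then(function (items) { throw new Error('boom'); })
  .catch(function (err) { console.log('caught: ' + err.message); });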

So if you have a site based on jQuery today, the obvious choice is to pick one of the libraries that offer conversion from jQuery's unsafe promises to the safer kind. If size is a priority, either Q or when is a good suggestion – loaded with features and at a decent speed.

If you are less worried about size, Bluebird is a better choice. The modularity makes it easy to toggle features, and it has a significant performance test suite that covers a lot of other aspects than the single one covered by this post.

If performance is essential, kew is a good bet. A team has picked up Q and looked into lowering its resource requirements, resulting in a lightweight but very fast library.

If you are looking for a more limited solution with good speed and without big libraries, the ES6 Promise polyfill is a good choice – then in the long term when the browsers catch up, the library can be removed completely.

This post is also available in Danish at QED.dk

21st January 2011

Are you looking for a new business model for your project? Do you have a great idea for a site, but no idea how to monetize it?

You could of course be traditional and offer an ad-based freemium model like spotify.com, with a side dish of premium service. That has been done many times, but it is more flexible than the even more traditional model where you just let your users pay.

Maybe you could offer a free service and use it to gather large amounts of data and sell them, like Patientslikeme.com? Or simply take a commission (or a posting fee) for facilitating contact/services to/from other companies, like flattr.com, airbnb.com or GroupOn.com?

There is also a model where you let your customers pay what they want (encouraged by an anchor price of what other users have paid) and even let the users decide how much of the money should go to charity. An example of this model can be found at humblebundle.com.

Or if your main product is free, how about “in-app” sales like Haypi Kingdom or my favorite example, Farmville – and if you want the user to lose track of the “real-world cost”, then make your own monetary system.

If you want your users to create something of value, then make a platform that lets them co-create and get a share of the profit, like Quicky.com. Helping other creative people monetize their ideas – that’s a great business model! Almost a meta business model.

Source: These were all picked from the presentation “10 business models that rocked 2010”.

14th January 2011

Less is more. That was my big lesson in 2010. I used to have clutter, mess, piles and heaps of stuff – in my home and in my office. Now my things fit into a suitcase and a backpack. I can’t buy things that I am not willing to carry with me every day, so I never shop anymore except to replace other things. Material things have never meant less to me than they do now.

It reminds me of the saying: “If you own more than seven things, the things will own you”. The simplification I have done in my life really feels like freedom. I can honestly say that I don’t miss any of my stuff. Back home we had a “game room” with several Xboxes, a Wii and a PlayStation as well as a home movie theatre; I loved it and spent a lot of time there, and I really thought I would miss it, but I don’t. What I miss from back home are the people: friends, colleagues and family, but actually I speak more to my close family now than I did when I lived less than 100 kilometers away from them (thank you, Skype).

Money has never been a big thing for me, and that is probably because I have been lucky enough to make a fine living doing what I love. The IT business is a generous place to be. Now I think even less about it and also spend much less. Living in Asia can be cheap even while enjoying some luxury. Cutting down on our spending also has the nice side effect that we don’t have to work as many hours on profitable projects and can devote more time to pet projects, sightseeing or just each other.

Some days I wake up and can’t believe how lucky I am, thinking that this can’t last. But I just can’t stay worried; the sun is shining and I just keep telling myself: Don’t worry – be happy.

We Danes are known for our happiness, having been listed as the happiest people in the world several times by the OECD; the reason often cited (by Americans) is that we expect less from life. I don’t think that is the true answer; we expect a lot from life, just not only material things. We value life experiences and quality over quantity, and right now I’m taking that to an extreme and loving every minute of it.

Less IS more.

10th December 2010

To add to our blog post series about fun machines in Lego (Turing, most useless, 3D printer and so on), here is a video of the Antikythera mechanism built in Lego – that is, the functionality is simulated with a machine built in Lego, though it certainly doesn’t look like the original.

If you don’t remember what the Antikythera mechanism is, then let Wikipedia enlighten you:

“The Antikythera mechanism … is an ancient mechanical computer[1][2] designed to calculate astronomical positions. It was recovered in 1900–01 from the Antikythera wreck,[3] but its complexity and significance were not understood until decades later. It is now thought to have been built about 150–100 BCE. The degree of mechanical sophistication is comparable to late medieval Swiss watchmaking.[citation needed] Technological artifacts of similar complexity and workmanship did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.[4]”

The modern version is explained in this video:

A really really old computer rebuilt in Lego – what’s not to like?

21st October 2010

This is almost as cool as the Lego Turing Machine and much cooler than the regular Lego printer: a Lego 3D printer.

Challenge: make a Lego 3D printer that prints Lego 3D printers.

18th October 2010

I have been to a lot of conferences and seen a lot of presentations from brilliant people, but sometimes those brilliant people fail to make a presentation that connects with the audience. As an audience member (and not speaking as one of the brilliant people presenting), I have just one piece of advice for those speakers.

Speak at conferences because you are on a mission. Don’t give a presentation just because people ask you to, and you are flattered. Make sure to think about what you are giving the audience – what the audience should take away from your talk (and make it simple). At conferences most attendees are on information overload, so you have to inspire them for further investigation. Tell jokes, tell anecdotes, use images to let your audience connect with your material. Be enthusiastic. Be memorable. Be tweetable. Be bloggable. Be the odd one out. Make sure that everyone knows why you are on that stage and what you are talking about.

That’s it.

11th September 2010

These days I’m hiding out at home at my parents’ house. I look like I have been beaten up, with a swollen face, a broken nose and blue/black/red/yellow circles around my eyes, but this was all done on purpose and with my consent. Two days ago I underwent surgery on my nose to try to straighten it out after it broke in three places last January. This is my third surgery this year, but in the previous two they were not able to put all of the breaks back into place, so they had to do this last one after I had healed.

For months I have been waiting by the phone for them to call with a time for my surgery, and last Monday they did. At first they offered me a time slot during my favorite IT conference, JAOO, and I reluctantly took it, because I had to get this done before we leave the country on November 1. Then they called me again on Tuesday asking if I could do it this Thursday, and even though I had to cancel a few things I jumped at the chance. This way I get to heal before JAOO, and I can make a surgery follow-up appointment just before we leave the country. The timing couldn’t have been better.

So now I just have to get through the next few weeks with bruises and painkillers…

26th August 2010

I just found this great blog post on the MoMA blog. This is what happens when you give MoMA employees a Friday afternoon with Lego: they start copying the art!

My favorite was this yellow piece that they made. It is probably the most complex of the pieces, so you can imagine that most of the pieces are quite simple, but so is the artwork they copy. (You will have to go to the MoMA blog to see the other photos.)

[Photo: MoMA pieces in Lego]

I would love to see an exhibit at MoMA just with Lego art. The Lego company should just send loads of Lego bricks to artists to see what would come of it.
