Templated Components with Knockout Continued

As promised in my last post, I am going to explain today how to write more complex templated components. I have already mentioned our dilemma with table components and that we wanted to use templates to solve it. When creating a table component, we want to be able to pass markup for the table header as well as for the table rows. So we need a way to define different parts of the template and to decide which part to insert at which place in the component's view.

Defining Different Parts of the Template

For passing multiple template parts into the component, we decided to place each part in its own template tag. In order to distinguish between the roles of the different template parts, we use the 'data-template-function' attribute. In this example, 'data-template-function' can have the values 'table-header' or 'table-row'. So this is the markup that is passed to the templated-table component:
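The original code sample is not preserved in this excerpt, but a minimal sketch of such markup could look like this (the component name, columns and bindings are assumptions for illustration):

    <!-- illustration only: column names and params are assumed -->
    <templated-table params="items: patients">
        <template data-template-function="table-header">
            <tr>
                <th>Name</th>
                <th>Birth date</th>
            </tr>
        </template>
        <template data-template-function="table-row">
            <tr>
                <td data-bind="text: name"></td>
                <td data-bind="text: birthDate"></td>
            </tr>
        </template>
    </templated-table>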

Accessing the Template in the Viewmodel

In the templated list example from the last blog post, the template could be injected into the component directly by using $componentTemplateNodes. This won't work here because the template consists of multiple parts which we wish to use in different places within the component. So the template needs to be processed by the component's viewmodel first.

The component's viewmodel can access the template via componentInfo.templateNodes, an array which contains all the DOM nodes of the template (as a reminder: componentInfo is passed to the component's createViewModel function). The viewmodel should provide a variable of type HTMLElement[] for each part of the template (important: it needs to be an array!). In this example, we use the variables tableHeader and tableRow. With the help of our data-template-function attribute, we can decide which nodes to assign to which variable:
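A sketch of what such a viewmodel could look like (the viewmodel name and the items parameter are assumptions, not the original code):

    // illustration only: names are assumed, not taken from the original post
    function TemplatedTableViewModel(params, componentInfo) {
        var self = this;
        self.items = params.items;

        // the template binding expects plain arrays of DOM nodes (HTMLElement[])
        self.tableHeader = [];
        self.tableRow = [];

        componentInfo.templateNodes
            .filter(function (node) { return node.nodeType === 1; }) // element nodes only
            .forEach(function (node) {
                // the children of a <template> tag live in its 'content' fragment
                var templateNodes = Array.prototype.slice.call((node.content || node).childNodes);
                switch (node.getAttribute('data-template-function')) {
                    case 'table-header':
                        self.tableHeader = templateNodes;
                        break;
                    case 'table-row':
                        self.tableRow = templateNodes;
                        break;
                }
            });
    }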

Inserting the Template into the View

Once set in the viewmodel, the nodes can be inserted into the html at the correct place:
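A sketch of what the component's view could then look like (again an assumed reconstruction, not the original markup):

    <!-- sketch: templated-table view -->
    <div class="templated-table">
        <table>
            <thead data-bind="template: { nodes: tableHeader }"></thead>
            <tbody data-bind="template: { nodes: tableRow, foreach: items }"></tbody>
        </table>
    </div>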

Templated Components are Awesome!

We have been using templated components for some time now. Every time I reuse one of them, I am very happy to see how little effort is needed to integrate it into the code. Templated components have really saved us a lot of time and lines of code!


Templated Components with Knockout

Templated Components

Some time ago I wrote about our best practices for creating components. Even though components have been a great relief to us when developing new features, there have been limitations to their flexibility and reusability. For example, when using a component for different tables, each of the tables needed to have more or less the same layout and the same number of columns. To a certain degree, we could adapt the tables with the help of boolean variables, but this did not exactly increase the readability and quality of our code. So we ended up creating a whole range of different table components, even though their basic logic was the same: they all needed to do sorting, scrolling, filtering, etc.

This was when we started to investigate and found out that Knockout now allows passing markup into components. Thus we can have the same logic for all tables but use completely different layouts for them.

Example: Templated List

Here is a very simple example of a templated list. For passing a template into the component, any html code can be placed inside the component's custom html element:
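A sketch of what this might look like for the patient list shown below (the component name, params and bindings are assumptions):

    <!-- illustration only: markup passed into the component -->
    <templated-list params="title: 'Patients', items: patients">
        Name: <span data-bind="text: name"></span>,
        birth date: <span data-bind="text: birthDate"></span>
    </templated-list>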
The component can inject any template into its html by using the knockout template binding. The parameter 'nodes' specifies an array of DOM nodes that are to be used as a template; the 'data' parameter supplies the data for the template to render.

The html we placed between the component tags can be accessed via $componentTemplateNodes:
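For example, the component's view could use it like this (a sketch under the assumption that the component receives a title and an items array):

    <!-- sketch: templated-list view -->
    <div class="templated-list">
        <h3 data-bind="text: title"></h3>
        <ul data-bind="foreach: items">
            <li data-bind="template: { nodes: $componentTemplateNodes, data: $data }"></li>
        </ul>
    </div>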
The resulting list looks like this:

Patients

  • Name: John Smith, birth date: 23.09.1952
  • Name: Jenny Smith, birth date: 14.02.1963

Since we do not only want to use the component for patients, but also for orders, we use the same component again in a different view, and pass a different template this time:
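Again only a sketch with assumed property names:

    <!-- illustration only: a different template passed into the same component -->
    <templated-list params="title: 'Orders', items: orders">
        Name: <span data-bind="text: name"></span>,
        address: <span data-bind="text: address"></span>,
        status: <span data-bind="text: status"></span>
    </templated-list>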
So we get a completely different list:

Orders

  • Name: John Smith, address: 111 Kirby Corner Rd, Coventry CV4 8GL, status: open
  • Name: Jenny Smith, address: Waterside, Stratford-upon-Avon CV37 7LS, status: delivered

Having understood the principle of how templated components work, we can now move on to creating more complex templated components that inject different parts of the template into different places in their view. I will explain in one of my next blog posts how this can be done.


Projekt "Mediheld" erfolgreich abgeschlossen

Wir freuen uns sehr, dass unser Projekt "Mediheld" erfolgreich abgeschlossen ist. Naja, so richtig abgeschlossen ist ein solches IT-Projekt eigentlich nie, aber so berichtet es jedenfalls "Der Neue Tag - Oberpfälzischer Kurier".

„Mediheld - gesamtheitliche Versorgung im ländlichen Raum“ verfolgt das Ziel, die medizinische Versorgung in dünn besiedelten Gebieten zu verbessern. Gerade in ländlichen Regionen müssen in der häuslichen und ambulanten Pflege oftmals beträchtliche Fahrtstrecken zwischen Arzt, Apotheke, Sanitätshaus, Pflegdienst und Patient bzw. seinen Angehörigen zurückgelegt werden. Hinzu kommt, dass in den Apotheken Arzneimittel häufig nicht vorrätig sind oder aufgrund mangelnder Kommunikation und Koordination die Patienten mit falschen Medikamenten versorgt werden.

Um diesen Problemen entgegen zu wirken, haben sich Akteure aus der Wirtschaft, Wissenschaft, Gesundheitsversorgung und Politik zusammengeschlossen, um eine zukunftsträchtige Lösung zur gesamtheitlichen Versorgung im ländlichen Raum zu erarbeiten. Gefördert wurde das Vorhaben durch das Bayerische Staatsministerium für Wirtschaft und Medien, Energie und Technologie.

Unter der wissenschaftlichen Begleitung des Fraunhofer-Institutes für Angewandte Informationstechnik in Bayreuth und dem Lehrstuhl für Öffentliches Recht, Sozialwirtschafts- und Gesundheitsrecht der Universität Bayreuth haben wir eine einfach zu bedienende App entwickelt, die alle Beteiligten des Versorgungsprozesses unter Beachtung der rechtlichen Vorgaben miteinander vernetzt und so die Kommunikation sowie die Koordination zwischen Arzt, Apotheke, Sanitätshaus, Pflegedienst und Patient  bzw. seinen Angehörigen erleichtert.

Bedient werden kann die App mittels Smartphone, Tablet oder PC. Jeder der Beteiligten hat zu jeder Zeit Zugriff auf die für ihn relevanten und für seine Augen bestimmten Daten. So ist beispielsweise der Patient immer über den Status seiner Bestellung informiert, die Apotheke und das Sanitätshaus haben direkten Zugriff auf die vom Arzt oder Pflegedienst getätigten Aufträge und der Arzt kann die vom Patienten benötigten Rezepte sofort bearbeiten. So wird auf ganz bequeme Weise eine effiziente und effektive ambulante Pflege ermöglicht.

Die Schug-Gruppe aus Eschenbach hat uns als Projektpartner bei der Modellierung der komplexen Abläufe und mit dem Einsatz der Prototypen in ihren Einrichtungen bei der Weiterentwicklung der "Mediheld"-App hilfreich unterstützt. Durch die Erprobung in der Praxis, das konstruktive Feedback der Anwender und die gute Interaktion der Beteiligten konnte der "Mediheld" von uns fortlaufend optimiert werden.

Jetzt geht ein Projekt zu Ende, das in den letzten Monaten im Mittelpunkt unseres Alltags stand. Wir sind mächtig stolz auf uns, einen so wichtigen Beitrag zur Versorgung im ländlichen Raum geleistet zu haben - ganz zu schweigen von all den kleinen und großen technischen Herausforderungen, die wir auf dem Weg gemeistert haben! Aber ein bisschen traurig sind wir auch, denn unser "Mediheld" ist uns allen ans Herz gewachsen … Aber ganz vorbei ist es zum Glück doch noch nicht: Wir streben an die App "Mediheld" deutschlandweit, später international, an Ärzte, Apotheken, Sanitätshäuser und Pflegedienste zu vertreiben.

Wenn auch Sie mit zu den Benutzern von Mediheld gehören möchten, besuchen Sie unsere Webseite www.mediheld.info und fordern Sie noch heute eine Demoversion an!



Best Practices for Creating Knockout Components

With the release of version 3.2.0, Knockout introduced components, which are a clean and powerful way of organizing and structuring code. We have increasingly used components in our projects since then and have sharpened our understanding of what makes a good component and what does not. In this blog post, I will present our best practices for creating components.

Components - general principles

The idea behind components is to create self-contained chunks of code that fulfill two main purposes:

  • reusing pieces of code
  • simplifying code by breaking it into smaller pieces which each encapsulate a certain functionality; this makes it easier to maintain and understand the code

In order to be self-contained and thus easily reusable, components must be only loosely coupled to the part of the application they are embedded in.

Registration

Knockout components combine a viewmodel and a template. In order to start using a component, it must be registered with the ko.components.register function. The function specifies the viewmodel and template, both of which should be loaded from external files and not hardcoded into the registration.

Of all the different ways of specifying the viewmodel, we use the createViewModel factory function. It is called with the parameters params and componentInfo, where params is an object whose key/value pairs are the parameters passed from the component binding or custom element, and componentInfo.element is the element into which the component is being injected. By passing both params and componentInfo into the constructor, we ensure that both are accessible in the viewmodel:
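A sketch of such a registration (the component name, viewmodel and file path are assumptions; loading the template via the 'text' plugin assumes an AMD setup such as require.js):

    // illustration only: names and paths are assumed
    ko.components.register('order-list', {
        viewModel: {
            createViewModel: function (params, componentInfo) {
                // pass both params and componentInfo on to the viewmodel
                return new OrderListViewModel(params, componentInfo);
            }
        },
        // load the template from an external file instead of hardcoding it here
        template: { require: 'text!components/order-list/order-list.html' }
    });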

View

For injecting a component into a view, we use it as a custom element. We prefer this over the knockout component binding, which binds the components to regular div elements, because it is much more elegant and straightforward:
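For example (the component and parameter names are assumed):

    <!-- sketch: the component used as a custom element -->
    <order-list params="orders: orders, cancelCallback: cancelOrder"></order-list>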
To supply parameters to the component, we provide a key-value object as params attribute. The properties of this key-value object are defined in the containing viewmodel and will be received by the component's viewmodel constructor.

The component's view should have as its outer element a distinctive div container with the component name as its class name:
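The original samples are not preserved here, but the idea can be sketched like this (component name and content assumed):

    <!-- sketch: order-list view - the outer div carries the component name as its class -->
    <div class="order-list">
        <ul data-bind="foreach: orders">
            <li data-bind="text: name"></li>
        </ul>
    </div>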
This makes it easier to address the component and to provide component-specific css.

Since we want the components to be reusable, the view must not contain any ids. Having ids in the component would make it impossible to use a component multiple times within one page.

Params

The properties which are passed over as params into the component are the only means of communication between the component and the containing viewmodel. In order to maintain a loose coupling, we must never pass the complete parent viewmodel into the component. This would make the component very hard to reuse and the viewmodel very hard to change.
Instead, only those properties and functions should be sent to the component that are really necessary for displaying and manipulating the component's view elements.

To keep better track of what the properties are meant to be used for, we have established the following naming convention: if the component needs to call a function of its containing viewmodel, the parameter for this function should have the word 'callback' in its name (such as 'cancelCallback'). If, on the other hand, the containing viewmodel needs to call a function of the component, the function's name should contain 'ComponentAction', such as 'findAddressComponentAction'.

Since a component should solely be concerned with itself and not with the container in which it is embedded, we should never pass over a view element from outside the component. Moreover, trying to access a component via its containing elements is a dangerous thing to do, especially if a component is used more than once within the container. Instead, a component can be accessed unambiguously via the componentInfo which knockout provides to the createViewModel function and which should be passed into the constructor if the viewmodel needs to access it.

Viewmodel

A component should act as a black box which takes care of all the functionality that is expected from it. Therefore it should implement all the component-related logic itself. For example, if we create a table component, its viewmodel, rather than the containing viewmodel, must provide the functionality for searching, sorting, scrolling, etc.

Besides, the viewmodel class should always have a dispose function in which any resources are released that are not collected by the garbage collector, such as subscriptions, event handlers or ko.computed properties (if they are not pure computed properties). The viewmodel should have this dispose function even if the function is empty, as a reminder that we do need to remember to clean up when we extend the component later. The dispose function is called every time just before the container element is removed from the DOM.
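A sketch of what such a viewmodel with a dispose function can look like (the names are assumed; the computed and the subscription stand for typical resources that need manual clean-up):

    // illustration only: names are assumed
    function OrderListViewModel(params, componentInfo) {
        var self = this;
        self.orders = params.orders;

        // a (non-pure) computed and a subscription both need to be released manually
        self.openOrderCount = ko.computed(function () {
            return self.orders().filter(function (order) { return order.status === 'open'; }).length;
        });
        self.subscription = params.selectedOrder.subscribe(function (order) {
            // react to a change in the containing viewmodel ...
        });

        // called by knockout just before the component is removed from the DOM
        self.dispose = function () {
            self.openOrderCount.dispose();
            self.subscription.dispose();
        };
    }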


Checklists for improving the development process, part three: adaptations of the checklist for creating pull requests

In my last two blog posts about checklists I presented some checklists for handling pull requests and described their effects and the team's reaction. I also stated that checklists continuously need to be altered and adapted in order to be truly helpful. In this blog post I am going to explain which changes I have made to the checklist for creating pull requests and what the motivations behind them were.

Alterations and adaptations of the original checklist

Putting the checklists on trial in our real-world workflow revealed several shortcomings and flaws that needed to be fixed. For example, I soon discovered that even though it leaves the programmer more freedom in going about their tasks, the list for creating pull requests did not work out as a do-confirm list. The process turned out to be more complex than I had at first imagined; for example, it is important that the latest master is merged into the branch before running the _common solution gulp task on the latest _common master branch, since otherwise running the gulp task would propagate changes into my branch that are already in the master branch. So I changed the checklist into a read-do list.

Another thing I realized was that it was impossible to use one and the same checklist for creating pull requests for a regular branch and for a hotfix branch. The list requires us to merge the master branch into our branch, but a hotfix branch will end up in the production branch, where the master branch should by no means end up! So I created a separate list for dealing with hotfix branches, and added some more steps to it.

Moreover, having a look at our application's unit tests made me add a further step to the checklists. Many of the tests failed because they had not been run for quite some time. By adding the step 'run tests and make sure they succeed' to the checklists, I hope that in the future the tests will be run more frequently. This will not only ensure that the tests are always up-to-date, it will also help us discover bugs that our changes would otherwise have introduced into the system.

So, this is the updated version of the checklist for creating pull requests:

new version of the checklist for creating pull requests

Since it had been me who ended up fixing all the broken tests, I am especially happy to see that the team members carefully follow this new step 'run the tests …' :)


Checklists for improving the development process, part two: effects and reactions

A couple of weeks ago I wrote a blog post about checklists and how I created several checklists of my own to improve the way we handle pull requests. Today I'd like to share the results of the first weeks' trial period: what effects the checklists had on the development process and how the team feels about them.

Here are the two checklists again as I originally drafted them:

checklist for branching

checklist for creating pull requests

Effects on the development process

After I had read so much about how checklists seem to be able to work wonders in other professional areas, the results of the first week of the trial period were somewhat sobering to me: during this week pretty much everything happened, from a broken build to changes not being forwarded to the global _common solution. Well, you cannot change a programmer's habits overnight, and to be fair, not all of those incidents were caused by not using the checklists...

As the weeks passed, however, there were several situations in which the checklist helped me detect changes that I had forgotten to forward to the global _common solution and that would otherwise have been overwritten the next time the _common solution gulp task was run. And I was delighted to observe that almost all of the pull requests had the latest master branch merged into them. Also, there were far fewer cases in which some branch other than the master branch ended up in a pull request, and when there were, it was absolutely necessary and explicitly allowed.

What the team members think about the checklists

Even though the checklists have undeniably shown some positive effects, I know that not everyone uses the lists at all times. To be honest, every now and again I catch myself thinking 'do I really need to go through that checklist now?', especially if I am in a hurry or think that I have already done all the steps. But someone needs to set a good example…

So, I think while we generally agree on the importance of executing the steps set down in the checklists (haven't we all been bothered by merge conflicts because we forgot to merge the latest master?), we consider it somewhat below us to actually use the list for that. After all, we are all smart developers who have coped without checklists for many years… so we just try to remember everything by heart, and maybe save the couple of minutes it takes to open the checklist and make sure we REALLY have done everything that the list requires. Needless to say, this is not exactly what a read-do list is meant for…

Conclusion

Even though the checklists are not used as regularly as I'd have wished for, they have increased the team's awareness of the necessity of executing these steps. Doing the steps from memory and in the wrong order is still better than not doing them at all.

We all generally agree that checklists are a good idea and that they are helpful, and we all know in our hearts that actually we should be using them. I firmly believe that if we continuously optimize the checklists and constantly adapt them to our processes, the acceptance of them will grow and we will slowly get accustomed to using them.


Command Query Responsibility Segregation (CQRS)

Command Query Responsibility Segregation (CQRS) is an architectural design pattern whose origins are attributed to Greg Young. It is based on Bertrand Meyer’s Command and Query Separation Principle (CQS), according to which a method should either be a command that performs an action, or a query that returns data, but never both at the same time.

CQRS - what it is

Following this train of thought, CQRS's key principle is to strictly separate querying from executing commands. Thus, what has been a single service in classical architectural models now is split into two separate services: a command service accommodating the command methods (methods that change the state of the system) and a query service accommodating the query methods (methods providing a view of the system's state without changing the state itself).
The result is two services which are completely independent of each other and can be designed and scaled according to their different needs.

Being able to scale each side independently is important because the number of queries is typically much higher than the number of commands. Also, when it comes to data storage, the two sides have completely different requirements. Whereas commands need to store their data in normalized tables, the data storage on the query side should be optimized to enable fast reads. Since normalized data requires complex, time-consuming queries, CQRS usually uses separate, de-normalized data tables for the query side, thus minimizing the number of joins needed for a query to get the requested information.
Of course, if we decide to work with different data storages for querying and commanding, we also need to provide a mechanism that ensures consistency across all tables.

Example: Booking system

Let me illustrate the CQRS pattern with the example of a simple booking system that allows the user to make room reservations, to get an overview of which rooms are still vacant and which are not, and to view a list of all the bookings they have made.

In the system, our basic queries are getAllBookings and getBookingsByUserId. Each query is executed by its own query object and returns the information requested by the client as its search result. A query can also carry search criteria and paging options.
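A sketch of what such a query object and its handler could look like (the class and method names are assumptions, not code from an actual system):

    // illustration only: a query carries the search criteria and paging options ...
    class GetBookingsByUserIdQuery {
        constructor(userId, pageNumber, pageSize) {
            this.userId = userId;
            this.pageNumber = pageNumber || 1;
            this.pageSize = pageSize || 20;
        }
    }

    // ... and its handler only reads from the (de-normalized) read store
    class GetBookingsByUserIdQueryHandler {
        constructor(readStore) {
            this.readStore = readStore; // assumed data access abstraction
        }

        handle(query) {
            return this.readStore.findBookingsByUserId(query.userId, query.pageNumber, query.pageSize);
        }
    }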

On the command side, we have the commands CreateBookingCommand, EditBookingCommand and DeleteBookingCommand, which are issued by the user through UI elements. Each command is modelled in a separate class and contains the minimal amount of information that is necessary for executing it (for booking a room, we would need the date, number of people, name, contact details, etc., depending on the business logic).

For each of these commands, the command model provides a separate command handler whose responsibility it is to validate the command and to make changes in the persistent data storage. If, for example, the user issues a CreateBookingCommand, the command handler assigned to deal with this kind of command will first validate the command (e.g. check if all required attributes such as date and user name are provided, check if the email address is in a correct format, etc.) and check if the requested room is still available. If everything is ok, it will then save the new booking into the database. Even though commands do not return data by definition, they may issue status and/or error messages in order to let the user know if the request was processed successfully.
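To make this concrete, here is a sketch of a command and its handler (the names, validation rules and repository methods are assumptions; a real system would be more elaborate):

    // illustration only: the command carries just the data needed to execute it
    class CreateBookingCommand {
        constructor(roomId, date, userName, email, numberOfPeople) {
            this.roomId = roomId;
            this.date = date;
            this.userName = userName;
            this.email = email;
            this.numberOfPeople = numberOfPeople;
        }
    }

    // the handler validates the command and changes the persistent state
    class CreateBookingCommandHandler {
        constructor(bookingRepository) {
            this.bookingRepository = bookingRepository; // assumed persistence abstraction
        }

        handle(command) {
            if (!command.date || !command.userName) {
                return { success: false, error: 'date and user name are required' };
            }
            if (!this.bookingRepository.isRoomAvailable(command.roomId, command.date)) {
                return { success: false, error: 'the requested room is no longer available' };
            }
            this.bookingRepository.saveBooking(command);
            // commands do not return data, but they may report a status
            return { success: true };
        }
    }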

Diagram: CQRS booking system

Advantages

With the distinction between command and query services and the separation of querying and commanding concerns, we are able to create a solution that is optimized for each side. The advantages of such a system can be summed up as follows:

  • Scalability: scale command and query sides independently from each other
  • Flexibility: CQRS allows the use of different representations of objects for commands and querying and is therefore more flexible than CRUD, which requires using the same classes for queries and commands
  • Better performance: separation allows optimizing each operation (e.g. quicker querying through de-normalized data storage)
  • Testability: separation of concerns leads to better testability
  • Maintainability: independent elements are easier to adapt to changing business requirements and thus easier to maintain
  • Collaborative systems: CQRS is particularly useful in collaborative systems in which multiple users operate on the shared data simultaneously. If any conflicts arise, it might not always be desirable to apply the rule 'last one wins'. CQRS allows defining more complex rules that capture the business logic.

Summary: when to use and when not to use

Even though the list of advantages above looks intriguing, not every project is suited for applying CQRS. CQRS should primarily be used in highly collaborative and complex systems in which the underlying business rules change frequently. Simple, static or non-collaborative systems, however, usually do not benefit from using the CQRS pattern.

The CQRS pattern has been very popular and widely used since it was introduced, but that does not mean that it should be used everywhere. To put it in Udi Dahan's words, "Most people using CQRS […] shouldn't have done so". So, before employing CQRS in any project of your own, you should carefully reflect on what will be gained from using it.


Checklists for improving the development process

The idea

Having recently been entrusted with the task of reviewing the team's pull requests, I realized that many things can go wrong in the process of merging branches into master if it is not handled with care. For example, it happens from time to time that changes are accidentally overwritten because someone edited an automatically generated file and forgot to forward the changes to the template from which the file is generated; or important changes get lost while resolving merge conflicts because the person resolving the conflict is not aware of the changes a colleague has made; at other times, a pull request waits to be merged into the master branch for too long (thus increasing the likelihood of merge conflicts) because someone merged another branch into it, which is not yet ready to be merged itself; or merging the PR results in a broken build because someone forgot to make sure there are no build errors in their branch. Such things just happen, and when they do, it takes a lot of time and effort to fix them.

So, being on the lookout for ways to improve the way we handle pull requests, we came up with a new idea: checklists! Checklists are already in use in many areas such as aviation, surgery or the building industry. So why not use them in our development process as well?

A brief history of checklists

Aviation checklists reach back as far as October 30, 1935, the day on which the US Army Air Corps held an aircraft competition in order to decide on the next generation of military bombers. Among the competitors was Boeing's Model 299, which had outshone the other competitors at the preceding evaluations and was considered the sure winner. However, after the aircraft had taken off and started to climb into the sky, it suddenly stalled, turned on one wing and exploded.

The cause of this crash was attributed to a 'pilot error'. Apparently, the pilot, being unfamiliar with an aircraft that was considerably more complex than anything he had ever flown before, had neglected a crucial step before take-off. Because of its complexity, the newspapers declared the aircraft to be 'too much airplane for one man to fly'. Still, the army ordered a couple of the Boeing aircraft and had its test pilots deliberate on what to do. And they came up with a pilot's checklist with step-by-step checks for takeoff, flight, landing and taxiing.

This was the hour of birth for aviation checklists. Not only have checklists prevented countless plane crashes since then; they have also saved myriads of lives in surgery and helped builders schedule large-scale building projects and make sure that the resulting buildings do not crumble because an individual missed some crucial point, just to name a few examples. With checklists, we have found a way to come to terms with tasks that are too complex for one mind alone to remember.

How to write a good checklist

Seeing that checklists can have such positive effects, the next question was how to write a good checklist for my own purpose.

The first important thing to know about checklists is that they are not intended to spell out every single step in minute detail. They are not intended to turn off your brain completely and substitute the thinking process. Rather, they assume that you are aware of what you are doing and simply want to remind you of the most critical steps. In order for a checklist to be actually used by people, it needs to be simple, short and precise. If it is vague, imprecise or too long, it will be more bothersome than helpful.

There are two different kinds of checklists: read-do lists and do-confirm lists. With the former, you carry out each task as you check it off. With the latter, you carry out the tasks from memory; in the end, you pause to run through the checklist and ensure you have not forgotten anything. Read-do lists should be employed when the order in which steps are carried out is important, or when a wrong step has unwanted consequences. Do-confirm lists, on the other hand, leave the people who use them more freedom to execute their tasks as they see fit.

Checklists for branching and creating pull requests

With this in mind, I set out to write my own checklists. For a start, I decided to write one for creating new branches and one for creating pull requests.

The list for creating branches is a read-do list as errors here cannot easily be erased. It aims at decreasing merge conflicts and preventing the wrong branches from ending up in your new branch:

checklist for branching

The list for creating pull requests is a do-confirm list that makes sure you have done what is in your power to minimize sources of errors and losses of code:

checklist for creating pull requests

Conclusion

We are going to try out these checklists in our team for the next couple of weeks. I am quite curious to see how they work and how they are accepted by the team members. Will the team members like using them? Will they consider them helpful? Will the lists fulfill their purpose and improve the process of handling pull requests? I will keep you informed about the result of this little experiment. If the checklists turn out to be successful, I am going to create more lists to use in other parts of our development process. With those steps off our minds, we can focus our brain-power on our most challenging task - producing good code.


Iterative Performance Tuning of Web Apps

In my last blog post I described how to use Firebug's Profiler in order to find out which parts of your code are most time-consuming and responsible for the performance issues you might experience. Today I will show you how I used the Profiler as a step in the iterative process of tuning the performance of a web app.

Steps of the iterative performance tuning process

According to Wikipedia, the systematic tuning of performance comprises the following steps:

  1. Assess the problem and establish numeric values that categorize acceptable behavior.
  2. Measure the performance of the system before modification.
  3. Identify the part of the system that is critical for improving the performance. This is called the bottleneck.
  4. Modify that part of the system to remove the bottleneck.
  5. Measure the performance of the system after modification.
  6. If the modification makes the performance better, adopt it. If the modification makes the performance worse, put it back the way it was.
  7. Start again at step 1.

The Problem & Performance before the test

In my application, the performance issues concerned the search functionality. I have a list with more than 45,000 items which is searched for a term the user can enter in a search field. The search itself uses an implementation of the Soundex algorithm (I described the Soundex algorithm and the adaptations I made in order to use it for the German language in an earlier post) to match the search term against the items. The performance was awful, especially on mobile devices.

I did not establish a numeric value to define acceptable behaviour. I just wanted it to improve significantly (being well aware that "improve significantly" is a heavy violation of the SMART principle…) so that the user does not deem the application dead while waiting for the search to finish…
Repeatedly running the Firebug profiler gave me an average of about 4500ms per search.

Identifying and removing the bottleneck

Using the Firebug profiler, I received the following profiling report (excerpt):

profiling report before the tuning

With the help of this report, I could easily identify the first three function calls as the bottleneck of my search. The first of them is an access to the database to retrieve all items of the list, the second the transformation of a search term into its Soundex code, and the third a filtering operation.

Generally, there are two options for dealing with such functions: either improve the function so that less time is spent on its execution, or reduce the number of calls to that function; which of the two options is best of course depends on the function's internal logic.

I started by analyzing the first of the functions and checked where it was called. I found out that accessing the database could be avoided entirely here. Until now, the same variable had been used for storing the list as it appears when it is not filtered and for storing the list when it contains only the search results. So what happened during each search was that first, the list was emptied completely. Then a list with all available items was requested from the database and filtered for the search term. I introduced a separate variable to store the unfiltered items. Now the search can use this variable instead of retrieving the list from the database, as sketched below.
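In code, the idea looks roughly like this (a sketch with assumed names; 'soundex' stands for the Soundex implementation from the earlier post):

    // illustration only: keep the unfiltered items in memory so that a search
    // no longer has to retrieve the whole list from the database every time
    var allItems = [];                          // filled once after the initial load
    var visibleItems = ko.observableArray([]);  // what the view actually displays

    function search(searchTerm) {
        var searchCode = soundex(searchTerm);   // assumed Soundex helper
        visibleItems(allItems.filter(function (item) {
            return soundex(item.name) === searchCode;
        }));
    }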

Assessing the improvement

Having made this modification, I ran the profiler again in order to check if the performance had indeed improved. As I had expected, the function that accesses the database was not invoked a single time, leading to an improvement of about 1500ms! This was a great success, so I kept the changes.

I also had a look at the other two bottlenecks. I could not improve the Soundex algorithm, nor could I reduce the number of calls to that function. However, I was able to make some further improvement by making minor changes to the filterBy function. Even though the modification decreased the average time spent in this function by only about 0.015ms, this adds up to quite a lot when multiplied by 45,426, the number of calls to this function.

In total, a search now takes about half the time it did before. Here is an excerpt of the profiling report after the tuning:

Profiling report after optimization


Using the Firebug Profiler for profiling web apps

I am sure every programmer is quite familiar with such situations: you are bursting with pride and self-satisfaction because your app is finally running without errors. You try it out on a mobile device for the first time, and you are thoroughly disappointed to realize that the performance of your masterpiece is just awful …
Then it is just about time for some profiling. This blog post will show you how profiling a JavaScript application can easily be done using Firebug's Profiler.

Alternative Profiling Tools

Although many other browsers provide built-in profiling tools (such as Chrome, Internet Explorer or Safari), I liked Firebug's Profiler best for its highly detailed profiling report and its way of presenting the results so that you get the most important information at a glance.

How to use the profiler

To start using Firebug's Profiler, you need to open Firebug. Select the Console tab and click on "Profile". The profiler is now running and observing all your JavaScript activity, collecting statistics about time consumption.

starting the firefox profiler

All you need to do now is trigger some activity. I am having performance issues with searching for a particular entry in a table in my app. So I enter a search term, start the search and wait until the search results are displayed. Then I click "Profile" again in order to stop the Profiler. Firebug now opens a huge table (well, that actually depends on the complexity of your code…) containing the profiling results. Here is an excerpt from the table I got:

profiling report

How to read the results

The great art now lies in correctly interpreting the results in order to know which part of the code is causing the trouble. Therefore it is vital to know which information is contained in the report:

In the top left corner, the profiling report specifies the total amount of time for executing the activity, as well as the total number of function calls that were involved. The table below lists every function that was called during the profiled activity. The columns provide the following information:

  • Function: the name of the invoked function
  • Calls: how often has this function been called?
  • Percent: the percentage of time this function consumed in relation to all other functions within the sorting
  • Own time: total time spent within the function itself, excluding functions called by it (summed over all calls)
  • Time: total time spent within the function (summed over all calls); unlike 'Own time', this also includes the time spent within functions that were called by that function
  • Avg: average time for one call of the function
  • Min: minimal time for one call of the function
  • Max: maximal time for one call of the function
  • File: the name of the file in which the function is located and the line number of the function; a link leads directly to the file

By default the table is sorted so that those functions accounting for the highest percentage are listed first. Thus your culprits are easy to spot!

Having conducted the profiling and identified the functions responsible for the high time consumption, we need to start analyzing those functions and think of ways to improve their performance. I will describe in a following blog post how I used the profiling report to improve the performance of my search.