18 Aug

Best Practices for Creating Knockout Components

With the release of version 3.2.0, Knockout introduced components, which are a clean and powerful way of organizing and structuring code. We have increasingly used components in our projects since then and sharpened our understanding of what makes a good component and what does not. In this blog post, I will present our best practices for creating components.

Components – general principles

The idea behind components is to create self-contained chunks of code that fulfill two main purposes:

  • reusing pieces of code
  • simplifying code by breaking it into smaller pieces which each encapsulate a certain functionality; this makes it easier to maintain and understand the code

In order to be self-contained and thus easily reusable, components must be only loosely coupled to the part of the application they are embedded in.


Knockout components combine a viewmodel and a template. Before a component can be used, it must be registered with the ko.components.register function. The registration specifies the viewmodel and template, both of which should be loaded from external files rather than hardcoded into the registration.

Of all the different ways of specifying the viewmodel, we use the createViewModel factory function. It is called with the parameters params and componentInfo, where params is an object whose key/value pairs are the parameters passed from the component binding or custom element, and componentInfo.element is the element into which the component is being injected. By passing both params and componentInfo into the constructor, we ensure that both are accessible in the viewmodel:
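A registration following this pattern might look like the sketch below. The component name, file paths and viewmodel are illustrative, and a minimal stand-in for the global `ko` object is declared at the top only so the snippet is self-contained; in a real page, `ko` comes from Knockout 3.2+:

```javascript
// Minimal stand-in for the global Knockout (>= 3.2.0) object, declared only
// so this sketch runs on its own; in a real page, Knockout provides `ko`.
var ko = ko || {
    components: {
        _registry: {},
        register: function (name, config) { ko.components._registry[name] = config; }
    }
};

// Hypothetical component viewmodel: the constructor receives both params
// and componentInfo, so both remain accessible in the viewmodel.
function AddressSearchViewModel(params, componentInfo) {
    this.element = componentInfo.element;   // element the component is injected into
    this.searchText = params.searchText;    // parameter passed from the component binding
}

ko.components.register('address-search', {
    viewModel: {
        createViewModel: function (params, componentInfo) {
            return new AddressSearchViewModel(params, componentInfo);
        }
    },
    // The template lives in an external file, not hardcoded here:
    template: { require: 'text!components/address-search/view.html' }
});
```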


To inject a component into a view, we use it as a custom element. We prefer this over Knockout's component binding, which binds the component to a regular element such as a div, because it is much more elegant and straightforward:
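Embedded as a custom element, a hypothetical address-search component would look like this (all names are illustrative):

```html
<address-search params="searchText: currentSearchText,
                        cancelCallback: closeSearch"></address-search>
```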

To supply parameters to the component, we provide a key-value object as the params attribute. The properties of this object are defined in the containing viewmodel and are received by the component’s viewmodel constructor.

The component’s view should have as its outer element a distinctive div container with the component name as its class name:
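For a hypothetical address-search component, the view would be wrapped as follows (the markup inside the container is illustrative):

```html
<div class="address-search">
    <input data-bind="value: searchText" />
    <button data-bind="click: cancelCallback">Cancel</button>
</div>
```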

This makes it easier to address the component and to provide component-specific CSS.

Since we want the components to be reusable, the view must not contain any ids. Having ids in the component would make it impossible to use a component multiple times within one page.


The properties which are passed over as params into the component are the only means of communication between the component and the containing viewmodel. In order to maintain a loose coupling, we must never pass the complete parent viewmodel into the component. This would make the component very hard to reuse and the viewmodel very hard to change.
Instead, only those properties and functions should be sent to the component that are really necessary for displaying and manipulating the component’s view elements.

To keep better track of what the properties are meant to be used for, we have established the following naming convention: if the component needs to call a function of its containing viewmodel, the parameter for this function should have the word ‘callback’ in its name (such as ‘cancelCallback’). If, on the other hand, the containing viewmodel needs to call a function from the component, the function’s name should contain ‘ComponentAction’, such as ‘findAddressComponentAction’.
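Applied to a hypothetical address-search component, the convention reads like this (all names are illustrative): cancelCallback is a parent function the component calls, while findAddressComponentAction is a property the component fills with one of its own functions for the parent to call:

```html
<address-search params="cancelCallback: closeSearch,
                        findAddressComponentAction: findAddressAction"></address-search>
```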

Since a component should solely be concerned with itself and not with the container in which it is embedded, we should never pass a view element from outside into the component. Moreover, trying to access a component via its containing elements is dangerous, especially if the component is used more than once within the container. Instead, a component can be accessed unambiguously via the componentInfo object that Knockout provides to the createViewModel factory, which should be passed into the constructor if the viewmodel needs to access it.


A component should act as a black box which takes care of all the functionality that is expected from it. It should therefore implement all the component-related logic itself. For example, if we create a table component, its viewmodel must provide the functionality for searching, sorting, scrolling, etc., rather than leaving this to the containing viewmodel.

In addition, the viewmodel class should always have a dispose function in which any resources are released that are not collected by the garbage collector, such as subscriptions, event handlers or ko.computed properties (unless they are pure computeds). The viewmodel should have this dispose function even if it is empty, as a reminder that we need to clean up when we extend the component later. Knockout calls the dispose function just before the container element is removed from the DOM.
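A sketch of this pattern, with illustrative names, tracking subscriptions explicitly so they can be released in dispose:

```javascript
// Hypothetical table component viewmodel (all names are illustrative).
function TableViewModel(params) {
    this.subscriptions = [];
    // Track every subscription so it can be released in dispose():
    this.subscriptions.push(params.rows.subscribe(function (newRows) {
        // react to row changes, e.g. re-apply the current sorting
    }));
}

// Called by Knockout just before the container element is removed from the DOM.
TableViewModel.prototype.dispose = function () {
    this.subscriptions.forEach(function (subscription) {
        subscription.dispose();
    });
};
```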

07 Aug

Checklists for improving the development process, part three: adaptations of the checklist for creating pull requests

In my last two blog posts about checklists I presented some checklists for handling pull requests and described their effects and the team’s reaction. I also stated that checklists continuously need to be altered and adapted in order to be truly helpful. In this blog post I am going to explain which changes I have made to the checklist for creating pull requests, and the motivation behind them.

Alterations and adaptations of the original checklist

Putting the checklists to the test in our real-world workflow revealed several shortcomings and flaws that needed to be fixed. For example, I soon discovered that the list for creating pull requests did not work out as a do-confirm list, even though that format leaves the programmer more freedom in going about his tasks. The process turned out to be more complex than I had first imagined: for example, the latest master must be merged into the branch before running the _common solution gulp task on the latest _common master branch, otherwise the gulp task would propagate changes into my branch that are already in the master branch. So I changed the checklist into a read-do list.

Another thing I realized was that it was impossible to use one and the same checklist for creating pull requests for a regular branch and for a hotfix branch. The list requires us to merge the master branch into our branch, but a hotfix branch will end up in the production branch, where the master branch should by no means end up! So I created a separate list for dealing with hotfix branches, and added some more steps to it.

Moreover, having a look at our application’s unit tests made me add a further step to the checklists. Many of the tests failed because they had not been run for quite some time. Adding the step ‘run tests and make sure they succeed’ to the checklists, I hope that in the future the tests will be used more frequently. This will not only ensure that the tests are always up-to-date, it will also help us discover bugs our new changes would have introduced to the system.

So, this is the updated version of the checklist for creating pull requests:

new version of the checklist for creating pull requests

Since I was the one who ended up fixing all the broken tests, I am especially happy to see the team members carefully following this new step ‘run the tests …’ 🙂

05 Aug

How to use knockout-bindings to set style of DOM-element

Knockout.js is a powerful MVVM framework for building web applications.
In this post I want to share a trick for binding the style of a DOM element at runtime.
The simplest way is the css binding:

<label data-bind="css: 'red'">

This gives your label the class ‘red’:

<label class="red">

Another way goes through the attr binding. It lets you set any attribute of a DOM element directly; to set the ‘class’ attribute, you use the name ‘class’ as follows:

<label data-bind="attr: {class: 'red'}">

The css binding adds a class name to the element’s existing list of classes:

<label class="warning" data-bind="css: {red: isDangerous}">

The class ‘red’ is appended to the class ‘warning’ if the function isDangerous() returns true.
In contrast, binding the ‘class’ attribute through attr overwrites the element’s ‘class’ attribute completely.
Now, imagine the following problem: you want to display a collection whose elements all share a common class ‘nice’. A loop over the collection looks like this:

<!-- ko foreach: myCollection -->
<label class="nice" data-bind="text: someText"></label>
<!-- /ko -->

Then you want to enable or disable some rows via a function isEnabled(). That is quite easy too:

<label class="nice" data-bind="text: someText, css: {enabled: isEnabled}"></label>

Finally, you want to give each row a unique style based on its position. You might try something like ‘row’ + $index():

<label class="nice" data-bind="text: someText, css: {enabled: isEnabled, 'row'+$index(): 1}"></label>

But this does not work, because Knockout does not support dynamic names in the css binding’s parameter object!
In such a case the ‘class’ attribute helps. Since it overwrites the element’s list of classes completely, we move the class declarations from the DOM element into the Knockout binding:

<label data-bind="text: someText, css: {enabled: isEnabled}, attr: {class: 'nice row'+$index()}"></label>

It works and seems fine, but you will soon notice that the class ‘enabled’ is never set. Why? Because the attr binding overwrites it.
The solution is very simple: the ‘class’ attribute must always be placed before the css binding!

<label data-bind="text: someText, attr: {class: 'nice row'+$index()}, css: {enabled: isEnabled}"></label>

Now it works very well.

31 Jul

Checklists for improving the development process, part two: effects and reactions

A couple of weeks ago I wrote a blog post about checklists and how I created several checklists of my own to improve the way we handle pull requests. Today I’d like to share the results of the first weeks of the trial period: what effects the checklists had on the development process and how the team feels about them.

Here are the two checklists again as I originally drafted them:

checklist for branching

checklist for creating pull requests

Effects on the development process

After reading so much about how checklists seem to work wonders in other professional areas, the results of the first week of the trial period were somewhat sobering to me: during this week almost everything that could go wrong did, from a broken build to changes not being forwarded to the global _common solution. Well, you cannot change a programmer’s habits overnight, and to be fair, not all of those incidents were caused by not using the checklist…

As the weeks passed, however, there were several situations in which the checklist helped me detect changes that I had forgotten to forward to the global _common solution and that would have been overwritten by the next run of the _common solution gulp task. And I was delighted to observe that almost all of the pull requests had the latest master branch merged into them. There were also far fewer cases in which some branch other than the master branch ended up in a pull request, and where there were, it was absolutely necessary and explicitly allowed.

What the team members think about the checklists

Even though the checklists have undeniably shown some positive effects, I know that not everyone uses the lists at all times. To be honest, every now and again I catch myself thinking ‘do I really need to go through that checklist now?’, especially if I am in a hurry or think that I have already done all the steps. But someone needs to set a good example…

So, while we generally agree on the importance of executing the steps set down in the checklists (haven’t we all been bothered by merge conflicts because we forgot to merge the latest master?), I think we consider it somewhat beneath us to actually use the list for that. After all, we are all smart developers who have coped without checklists for many years… so we just try to remember everything by heart, and maybe save the couple of minutes it takes to open the checklist and make sure we REALLY have done everything the list requires. Needless to say, this is not exactly what a read-do list is meant for…


Even though the checklists are not used as regularly as I would have wished, they have increased the team’s awareness of the necessity of executing these steps. Doing the steps by heart and in the wrong order is still better than not doing them at all.

We all generally agree that checklists are a good idea and that they are helpful, and we all know in our hearts that actually we should be using them. I firmly believe that if we continuously optimize the checklists and constantly adapt them to our processes, the acceptance of them will grow and we will slowly get accustomed to using them.

30 Jul

DHL Kurier Courier Service Extension for Magento

With this extension, your customers receive their ordered goods exactly when they want them! Thanks to DHL Paket’s new DHL Kurier service, this Magento module lets you offer your customers an unprecedented range of delivery options.

The DHL Kurier plugin was developed and tailored specifically for Magento. The intelligent selection of available shipping methods in particular makes this extension very intuitive and easy to use, both for you as a shop owner and especially for your customers. With the new refrigerated shipping option, the extension in combination with DHL Kurier makes it possible to transport food and other perishable goods to the customer quickly and safely.

Make it easier for your customers to receive their goods quickly and without complications. It measurably increases customer satisfaction!

Order now in our Magento plugin shop!

20 May

Securing nginx against LogJam

Server security is like an iron padlock – if you do not maintain and check it, it rusts and becomes less and less secure.

We are very aware of the responsibility that comes with you entrusting us with your data. That is why we continually test our servers against common and newly discovered attacks.

Today the so-called LogJam vulnerability became public. It is primarily the browsers that open up this hole, but in this particular case server operators can also protect their users through a correct, hardened server configuration.

We use nginx as our edge server to shield our backends from the open internet. Inspired by LogJam, we therefore took another close look at our nginx configuration today.

The most important nginx directives for SSL security are ssl_ciphers and ssl_prefer_server_ciphers. They let you forbid weak SSL ciphers and instruct browser and server to give the server’s preferences more weight when negotiating the SSL cipher.
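A hardened TLS section along these lines might look as follows (the protocol and cipher list shown is an illustrative example, not our exact production configuration):

```nginx
# Only allow strong ciphers and let the server's preference win the negotiation.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5';
ssl_prefer_server_ciphers on;
```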

In addition, it is always advisable to back SSL with as strong a Diffie-Hellman group as possible. Once generated, the group can be included in nginx via the ssl_dhparam directive.
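For example, a strong Diffie-Hellman group can be generated with OpenSSL (the file path is illustrative):

```shell
# Generate a 4096-bit Diffie-Hellman group; this can take several minutes.
openssl dhparam -out /etc/nginx/dhparams.pem 4096
```

The generated file is then referenced in the nginx configuration with `ssl_dhparam /etc/nginx/dhparams.pem;`.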

You can verify your server’s settings at weakdh.org. We are pleased that our servers announce “Good News!” there. We are curious which vulnerability will put our servers to the test next – in any case, we stay on the ball for you and your data!

15 May

New security hole in Magento

Magento has just released a new patch called SUPEE-5994 with multiple critical security fixes. It addresses issues including scenarios in which attackers can gain access to your customer data. All versions of Magento CE are affected. We strongly recommend that you deploy this critical patch to your server immediately. More information on this can be found here.

To download the latest patches, please go to the Community Edition download page and look for SUPEE-5994. The patch is available for CE 1.4.1–

Be sure to implement and test the patch in a development environment first to confirm that it works as expected before deploying it to a production site.

If you have any trouble with this, don’t hesitate to contact us!

14 May

MTM vs. Selenium – How to run tests in parallel?

You need to set up an environment with multiple machines in the same role. The test controller organizes the tests into buckets of 10 or 100 tests (depending on how many tests are in your test run) and sends them bucket by bucket to its agents/machines.

When you change this parameter in your test settings, the tests will be executed in parallel. But the hub is not able to realize that it should wait until an instance of the grid becomes free before broadcasting the next test request. Because of that, the next test will fail with a timeout error, because the other tests took longer than the timeout.

11 May

MTM vs. Selenium – Is it possible to run automated tests on Android and iOS devices?

We already have our test environment set up with Selenium. But we also use VSO (Visual Studio Online), which means it could be better to switch to, or additionally use, Microsoft Test Manager (MTM) to organize and execute manual and automated tests. I wrote down some questions and started to analyse whether any of the nice features would be lost by changing from Selenium to MTM.

During my research I stumbled over some articles and posts about how to integrate Selenium tests into MTM. But my questions are still relevant to the decision of whether I want to continue writing my tests in Selenium and then add them to MTM, or start creating them in MTM.

You need to extend your VSO with a plugin. Those extensions give you the opportunity to record manual tests just as with the Selenium IDE – but for any mobile device or operating system instead of Firefox. They can also export a recorded test into code, which you can add to MTM.
There are various different plugins, but none of them is free – and I’m very happy with Selenium.

Here you need to set up a hub/node and connect a mobile device. Then you can record tests via the Selenium IDE in Firefox and generate the test code. As a last step, you change the WebDriver used from Firefox to Android or iOS.

08 May

Command Query Responsibility Segregation (CQRS)

Command Query Responsibility Segregation (CQRS) is an architectural design pattern whose origins are attributed to Greg Young. It is based on Bertrand Meyer’s Command and Query Separation Principle (CQS), according to which a method should either be a command that performs an action, or a query that returns data, but never both at the same time.

CQRS – what it is

Following this train of thought, CQRS’s key principle is to strictly separate querying from executing commands. Thus, what was a single service in classical architectural models is now split into two separate services: a command service accommodating the command methods (methods that change the state of the system) and a query service accommodating the query methods (methods providing a view of the system’s state without changing the state itself).
The result is two services which are completely independent of each other and can be designed and scaled according to their different needs.

Being able to scale each side independently is important because the number of queries is typically much higher than the number of commands. Also, when it comes to data storage, the two sides have completely different requirements. Whereas commands need to store their data in normalized tables, the data storage on the query side should be optimized to enable fast reads. Since normalized data requires complex, time-consuming queries, CQRS usually uses separate, de-normalized data tables for the query side, thus minimizing the number of joins needed for a query to get the requested information.
Of course, if we decide to work with different data stores for querying and commanding, we also need to provide a mechanism that ensures consistency across all tables.

Example: Booking system

Let me illustrate the CQRS pattern with the example of a simple booking system that allows the user to make room reservations, to get an overview of which rooms are still vacant and which are not, and to view a list of all the bookings they have made.

In the system, our basic queries are getAllBookings and getBookingsByUserId. Each query is executed by its own query object and returns the information requested by the client as its search result. A query can also carry search criteria and paging options.
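A query object along these lines could be sketched as follows; the read-store shape and all names are illustrative, not taken from a specific framework. It reads from a de-normalized view and applies paging:

```javascript
// Query object for the read side: it returns data, never changes state.
function GetBookingsByUserIdQuery(readStore) {
    this.readStore = readStore;   // de-normalized, read-optimized storage
}

GetBookingsByUserIdQuery.prototype.execute = function (criteria) {
    var rows = this.readStore.bookingsByUser[criteria.userId] || [];
    // Optional paging options carried by the query:
    var page = criteria.page || 0;
    var pageSize = criteria.pageSize || 10;
    return rows.slice(page * pageSize, (page + 1) * pageSize);
};
```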

On the command side, we have the commands CreateBookingCommand, EditBookingCommand and DeleteBookingCommand, which are issued by the user through UI elements. Each command is modelled in a separate class and contains the minimal amount of information necessary for executing it (for booking a room, we would need the date, number of people, name, contact details, etc., depending on the business logic).

For each of these commands, the command model provides a separate command handler whose responsibility it is to validate the command and to make the changes in the persistent data storage. If, for example, the user issues a CreateBookingCommand, the command handler assigned to this kind of command will first validate the command (e.g. check that all required attributes such as date and user name are provided, that the email address is in a correct format, etc.) and check whether the requested room is still available. If everything is OK, it will then save the new booking to the database. Even though commands by definition do not return data, they may issue status and/or error messages to let the user know whether the request was processed successfully.
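Sketched in code, a CreateBookingCommand and its handler might look like this; the validation rules and the storage interface are illustrative assumptions, not a definitive implementation:

```javascript
// A command carries only the data needed to execute it.
function CreateBookingCommand(data) {
    this.date = data.date;
    this.userName = data.userName;
    this.email = data.email;
}

function CreateBookingCommandHandler(bookingStore) {
    this.bookingStore = bookingStore;   // persistent, normalized storage (illustrative interface)
}

CreateBookingCommandHandler.prototype.handle = function (command) {
    // 1. Validate the command's attributes.
    if (!command.date || !command.userName) {
        return { success: false, error: 'date and userName are required' };
    }
    if (command.email && !/^[^@\s]+@[^@\s]+$/.test(command.email)) {
        return { success: false, error: 'invalid email address' };
    }
    // 2. Check the business rule: is a room still available on that date?
    if (!this.bookingStore.isRoomAvailable(command.date)) {
        return { success: false, error: 'no room available' };
    }
    // 3. Persist the booking; only a status message is returned, never data.
    this.bookingStore.save(command);
    return { success: true };
};
```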



With the distinction between command and query services and the separation of querying and commanding concerns, we are able to create a solution that is optimized for each side. The advantages of such a system can be summed up as follows:

  • Scalability: scale command and query sides independently from each other
  • Flexibility: CQRS allows the use of different representations of objects for commands and querying and is therefore more flexible than CRUD, which requires using the same classes for queries and commands
  • Better performance: separation allows optimizing each operation (e.g. quicker querying through de-normalized data storage)
  • Testability: separation of concerns leads to better testability
  • Maintainability: independent elements are easier to adapt to changing business requirements and thus easier to maintain
  • Collaborative systems: CQRS is particularly useful in collaborative systems in which multiple users operate on the shared data simultaneously. If any conflicts arise, it might not always be desirable to apply the rule ‘last one wins’. CQRS allows defining more complex rules that capture the business logic.

Summary: when to use and when not to use

Even though the list of advantages above looks intriguing, not every project is suited for applying CQRS. CQRS should primarily be used in highly collaborative and complex systems in which the underlying business rules change frequently. Simple, static or non-collaborative systems, however, usually do not benefit from using the CQRS pattern.

The CQRS pattern has been very popular and widely used since it was introduced, but that does not mean it should be used everywhere. To put it in Udi Dahan’s words, “Most people using CQRS […] shouldn’t have done so”. So, before employing CQRS in any project of your own, you should carefully consider what will be gained from using it.


© 2015 groupXS Solutions GmbH. All rights reserved.