WPF Data Grid with extensible columns

When working with WPF, a lot of developers adopt the MVVM pattern for building larger-scale applications. With the support the Prism platform gives for building MVVM applications in WPF, and the many resources and forums on using it, it's a no-brainer that a lot of WPF devs turn to that pattern. To say a few words on Prism briefly: it has been a simple yet effective framework for WPF applications, helping you adopt the MVVM architecture to build extensible, loosely coupled applications which are easy to grow and easy to maintain.

This blog post hopes to shed some light on one of the issues I faced during development with WPF and Prism. Actually, it's a problem I faced with WPF, and how I used Prism to get out of that hole. There are quite a number of problems and solutions I've thought about sharing through this space; if time and motivation permit, all will be shared in due time. I will assume that whoever stumbles upon this in their searches already has some idea about Prism and the MVVM pattern, because unfortunately I will not delve much into those explanations here. But I will try to make it all as self-explanatory as possible. So please bear with me.

The Problem

In the project I was working on, there came a requirement to build a dashboard in the form of a data grid. My immediate choice was the Xceed DataGrid for WPF Community Edition, because it was used elsewhere in the same application, so I had already gained the skills to apply it and style it the way I want. In this data grid, as in the standard one, we define the various columns in the XAML itself and define the bindings to those columns. Our project had many other modules which insert their components into the existing backbone using the Prism MVVM pattern. So naturally I thought it would be awesome to do that with the dashboard too, allowing any module to insert its columns into the data grid. But obviously I wouldn't be able to use the default Prism regions to implement this behavior, so a custom region adapter was required. While implementing it, the biggest problem I faced was communicating the data context of each row to the cells in the column.

With the custom region adapter for the columns collection, I'm able to pass the whole data grid's data context to the column, but that doesn't help when defining the bindings of the individual cells, as those cells need access to the data context of the row they are in. So I came up with a solution to that multidimensional problem.

I will go through each step as best I can. This will be a memory exercise for me too, as it’s been a while since I tackled this problem.

Xceed DataGrid

This data grid has many capabilities. Usually I prefer using the defaults provided by whatever technology I'm using, but I went ahead and used this one because it was already in the application, and styles had already been defined for it, so it would look uniform when used for my dashboard.

Defining columns for the datagrid is pretty straightforward, as it is for the default WPF datagrid.

Each column can be bound to properties in the DataGridControl data context. But for my purposes, I wanted a separate view model for each column so they would be independent of the base DataGridControl. So the columns would not be bound straight to the data grid control's data context; rather they would be bound to a separate view model through the Column.CellContentTemplate property. Each of the separate columns looks like this:

<xcdg:UnboundColumn  
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:xcdg="http://schemas.xceed.com/wpf/xaml/datagrid"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             FieldName="Location" Title="Location"
             ReadOnly="True"
             Width="100"
             MaxWidth="210">
    <xcdg:UnboundColumn.CellContentTemplate>
        <DataTemplate>
            <StackPanel Loaded="Cell_Loaded" Style="{StaticResource UpToDateStyle}" Unloaded="Cell_Unloaded">
                <TextBlock Text="{Binding Location}"/>
            </StackPanel>
        </DataTemplate>
    </xcdg:UnboundColumn.CellContentTemplate>
</xcdg:UnboundColumn>

The columns are unbound columns. Each column has a CellContentTemplate defined, and we bind data to the DataContext of the element inside the template. That data context is fully taken care of by the view model we bind the column to.

In order to do this, we need the ability to make the column collection a Prism region, so that we can have separate column user controls, each backed by its own view model, and insert those columns into the region we define. Other modules added to the project can then also define their own columns and add them to the column region. That's basically the gist of what I tried to achieve.

Column Region Adapter

We can't use the default Prism collection region adapters for the Columns collection, because DataGridControl does not inherit from ItemsControl. So a custom region adapter was required. At that point I hadn't built a custom region adapter yet, so it was another opportunity to build one and master that area of Prism too — a welcome requirement. I implemented it as an all-active region.

public class DataGridControlRegionAdapter : RegionAdapterBase<DataGridControl>
{
    public DataGridControlRegionAdapter(IRegionBehaviorFactory regionBehaviorFactory)
        : base(regionBehaviorFactory) { }

    protected override void Adapt(IRegion region, DataGridControl regionTarget)
    {
        if (region == null) throw new ArgumentNullException("region");
        if (regionTarget == null) throw new ArgumentNullException("regionTarget");

        region.Views.CollectionChanged += (s, e) =>
        {
            if (e.Action == NotifyCollectionChangedAction.Add)
            {
                // Views added to this region are expected to be columns.
                foreach (var item in e.NewItems)
                {
                    var column = item as ColumnBase;
                    if (column != null)
                        regionTarget.Columns.Add(column);
                }
            }
            if (e.Action == NotifyCollectionChangedAction.Remove)
            {
                foreach (var item in e.OldItems)
                {
                    var column = item as ColumnBase;
                    if (column != null &&
                        regionTarget.Columns.Any(c => c.Title == column.Title))
                        regionTarget.Columns.Remove(column);
                }
            }
        };
    }

    protected override IRegion CreateRegion()
    {
        return new AllActiveRegion();
    }
}
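Before the region can be used, the adapter has to be registered with Prism's region adapter mappings. A minimal sketch, assuming a standard Prism Unity bootstrapper:

protected override RegionAdapterMappings ConfigureRegionAdapterMappings()
{
    var mappings = base.ConfigureRegionAdapterMappings();
    // Map DataGridControl to our custom adapter so regions declared on it
    // populate the Columns collection instead of using default behavior.
    mappings.RegisterMapping(typeof(DataGridControl),
        Container.Resolve<DataGridControlRegionAdapter>());
    return mappings;
}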

Then on the DataGridControl we can just declare the region like this:

prism:RegionManager.RegionName="{x:Static mod:ModuleRegionNames.GeneralDashboardColumnRegion}"
prism:RegionManager.RegionContext="{Binding}"

Data Context Binding

A column view model helper class takes care of the data binding and getting the correct model to our views. We have to use it because Prism won't do that for us in this case as it normally would. Each data cell view model inherits from a DataCellViewModelBase, and each column view model is typed according to the column type.

public class ColumnViewModelHelper<T> where T : DataCellViewModelBase
{
    private readonly IUnityContainer _container;

    public ColumnViewModelHelper(IUnityContainer container)
    {
        _container = container;
    }

    public T SetupDataContext(FrameworkElement cellContent)
    {
        var templatedFE = cellContent.TemplatedParent as FrameworkElement;
        var dataGridDataCell = templatedFE.Parent as FrameworkElement;
        var parent = dataGridDataCell.Parent as FrameworkElement;
        var cellDataContext = dataGridDataCell.DataContext;

        if (!(cellDataContext is T))
        {
            // The cell is still bound to the row's view model, so resolve
            // a dedicated cell view model from the row's data.
            var dataRowViewModel = cellDataContext as DashboardEntityViewModel;
            var viewModel = _container.Resolve<T>(
                new ParameterOverride("device", dataRowViewModel.Device));
            viewModel.SetRowModel(dataRowViewModel);
            dataGridDataCell.DataContext = viewModel;
            cellContent.DataContext = viewModel;
            return viewModel;
        }

        if (cellContent.DataContext == null)
        {
            cellContent.DataContext = cellDataContext;
            var dataRowViewModel = parent.DataContext as DashboardEntityViewModel;

            // If the current cell context no longer matches the row it sits
            // in (the grid recycles cell containers), resolve a fresh one.
            if (dataRowViewModel.Device.Id != (cellDataContext as T).RowModel.Device.Id)
            {
                var viewModel = _container.Resolve<T>(
                    new ParameterOverride("device", dataRowViewModel.Device));
                viewModel.SetRowModel(dataRowViewModel);
                dataGridDataCell.DataContext = viewModel;
                cellContent.DataContext = viewModel;
                return viewModel;
            }
        }

        return cellDataContext as T;
    }
}

So in the column control's code-behind, we implement the Loaded handler, which calls the view model's ViewLoaded method as we usually do in MVVM. We pass the loaded element to ViewLoaded, which in turn passes it to the column view model helper's SetupDataContext method. There we extract the cell's data context. If the cell data context is not yet bound to our cell view model subtype, then it is still bound to the element corresponding to the row the cell is in, so we get the row data context from that. Using that row data context we resolve a cell data context corresponding to the column type, through the Unity container. Each of the cell view models holds a reference to the row view model too, so we set that up. Then we replace the cell's data context with the cell view model we just resolved.
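A minimal sketch of what that code-behind hook might look like (the view model type here, LocationColumnViewModel, is named just for illustration):

// Handler wired up in the CellContentTemplate above. It forwards the
// loaded cell element to the column view model, which delegates to
// ColumnViewModelHelper.SetupDataContext.
private void Cell_Loaded(object sender, RoutedEventArgs e)
{
    var viewModel = DataContext as LocationColumnViewModel;
    if (viewModel != null)
        viewModel.ViewLoaded((FrameworkElement)sender);
}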

In some instances the cell content's data context becomes null, and we have to reset it to the cell data context anyway. In these cases the row view model the cell belonged to might have changed, so we check whether that has happened by comparing the current cell data context's row model with the parent's row view model. If they don't match, we reset the cell data context based on the parent row view model's data.

This process ensures that each cell has its own view model which takes care of its data binding, and the SetupDataContext method handles any data context replacement the data grid control might do. So far this has not failed me in keeping an extensible data grid control with fine-grained control over each cell. The possibilities are endless, because with cell content templates we can have basically anything in a cell, bound to anything we want, using the method described here. I hope this helps someone, and that somebody will extend it to even more capabilities.


Immediately Invoked Function Expressions

Back at the writing board, after a while. A lot of changes happened in my personal life, which put off writing this article. This is a must-have addendum to the article series I started on JavaScript.

We know that almost everything in JavaScript can be treated as an object. Objects don't always need a name, especially when we only want that specific object to do one thing and go away. This is not specific to JavaScript, as you may know; we have anonymous functions or classes in languages like Java and C# too, used for exactly the same reason. But in JavaScript this is a pivotal concept, because there can be misunderstandings and confusion around the whole thing, as JS is loosely typed. These immediately-run anonymous functions are called Immediately Invoked Function Expressions (IIFEs).

If you just read its purpose as in the previous paragraph, it doesn't seem like there's a lot more to it, right? That's why I too was surprised when I went over the whole concept in this very thorough article. Through it I learned a lot about some fundamentals of JavaScript itself, from the analysis of the name IIFE through to its function.

Some devs don't pay much attention to terminology at all. But with proper terminology, comprehension and communication of concepts in any subject improve tremendously. That's why it's important to narrow down the semantics of IIFEs as much as anything. IIFEs are sometimes called other things as well, like "self-executing anonymous functions". At first glance the two terms seem to mean the same thing, but they really don't. The original article goes more in depth as to why the second term is wrong.

As a special feature of functions as objects in JavaScript, functions, when invoked, create their own scope, an execution context, which is like a private space owned by that function object only. This is a concept we discussed in one of the previous articles. Invocation of a function is a way to encapsulate, a way to create privacy.

 
function makeCounter() {

    // i is a private variable, only accessible inside makeCounter
    var i = 0;
    return function () {
        console.log(++i); // pre-increment so the first call logs 1
    };
}

The function we return from invoking makeCounter is accessible outside the scope of makeCounter. But since it's accessing the private variable i and exposing its value to the outside, it's a privileged member of makeCounter.

Now let's invoke makeCounter twice to get two instances. Note that counter and counter2 each have their own scoped i.


var counter = makeCounter();
counter(); // logs: 1
counter(); // logs: 2

var counter2 = makeCounter();
counter2(); // logs: 1
counter2(); // logs: 2

That wasn't so hard, was it? Each call to makeCounter creates its own personal space, and the variable is not accessible outside of that space.


i; // ReferenceError: i is not defined (it only exists inside makeCounter)

There are several ways we can define a function in Javascript :


function foo1() {}
var foo2 = function() {}

Both foo1 and foo2 are invoked by adding () to the end, you know, because they are functions. So foo1 and foo2 are basically just names we gave to this code: function () { /* executed code within the function */ }. And that can be invoked by just adding () to the end. So it stands to reason that we could drop the name business altogether and just do this:

function () { /* executed code with the function */ } ()

But try it; it just doesn't work like that. It reports a syntax error at '('. So what's the deal? This is because of the great old lady who only knows what it's reading at the moment and nothing beyond, the JavaScript parser. When the parser sees the keyword function like above, it starts to think: ah, this is the start of a function declaration, NOT a function expression as we expect it to be. A function declaration is a statement declaring this function to the scope, and that is all. So, as a statement, the parser expects the function to have a name, and because it doesn't see a name before the '(' it throws an error. So we must have a way of telling the parser 'hey lady, this is not a statement, treat this as an expression will ya' before she throws a tantrum. How do we do that? That's where the IIFE concept comes in.

Before going into that, let's look into this whole parenthesis business again, shall we? The problem was that we didn't have a name for our function declaration. Fine, so we add a name, like in the case of foo1 above, and try to invoke it by adding parentheses at the end:


function foo1 () {} ()

Again this fails, but for a totally different reason. Even if we try to invoke foo1 by adding parentheses at the end, the parser still doesn't treat it as an expression. It's still a function declaration, which ends at the }. Remember: we can invoke function expressions; function declarations cannot be invoked. Any parentheses we add after the function declaration are the start of something new. In this case they are parsed as a grouping operator, and since there's nothing inside them, the parser throws an error. We can have any expression inside such parentheses. That's why this code works:


function foo1 () {} (1)

Which is equivalent to :

function foo1 () {} 

(1);

Solving that syntax error and introducing the IIFE rides on this parenthesis thing. As we said, we can have any expression inside parentheses when they are used as a grouping operator, as in the example above. Any expression, but not statements. So if we put our nameless function expression inside parentheses, the parser knows it's an expression and will not treat it as a statement when it encounters the function keyword.

So intuitively we can think of two ways now to introduce IIFEs :


(function () {/*code*/} ());
(function () {/*code*/} )();

Both work fine. The whole point of the parentheses is to tell the parser to expect a function expression. In positions where an expression is already expected, we can drop the extra parentheses, like in the examples below:


var i = function(){ /* code */ }();
true && function(){ /* code */ }();
0, function(){ /* code */ }()

These, although slightly harder to read, also work, because the unary operators force what follows to be parsed as a function expression.


!function(){ /* code */ }();
~function(){ /* code */ }();
-function(){ /* code */ }();
+function(){ /* code */ }();

And this works too. It only needs the extra parentheses if we pass an argument.

new function(){ /* code */ };
new function(i) { /*code */ } (3);

But as a rule of thumb it is generally better to use the extra parentheses surrounding the function expression as a convention. When reading the code, they disambiguate between the invoked result of the expression and the expression itself. Especially when the expression is really long, the reader doesn't have to scan to the end to see whether it's invoked.
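For instance, compare these two (the bodies are elided; the point is what the reader sees up front):

// Without wrapping parens, only the trailing () at the very end tells
// you that x holds the function's result, not the function itself.
var x = function(){ /* ...many lines... */ }();

// With them, the convention signals 'this will be invoked' right away.
var y = (function(){ /* ...many lines... */ }());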

So that's the deal with IIFEs: they create a private execution context where we can do stuff without affecting the global scope. Let's see how this plays with closures. I intend to write a separate article on closures; for now, this page will shed some light on the area.

Just like named functions, IIFEs can take arguments. By the principle of closures, the IIFE has access to the outer scope's variables, and since the IIFE's scope is private, we can lock outer values in.

Let's look at an example to understand this. This code block gets every anchor tag in the page and overrides its click event to alert the tag's index among all the anchor tags in the page. At least, that's what it's supposed to do:


var elems = document.getElementsByTagName( 'a' );

for ( var i = 0; i < elems.length; i++ ) {
    elems[ i ].addEventListener( 'click', function(e){
        e.preventDefault();
        alert( 'I am link #' + i );
    }, false ); // note: the capture flag is a boolean; the string 'false' is truthy
}

The function we pass as the second argument of addEventListener is an anonymous function itself, but it's not invoked; it will only be invoked when we click the a tag. But by the time that happens, the loop has already run, and the value of i is the total number of a tags in the page. So whichever link we click, it shows the same value: 'I am link #' with the same i every time. That's not what we want. This is because we only assign the function to the click event; the value of i at assignment time is never locked in with the function. So when the click handler executes, it looks up the i declared outside and uses that. We need to lock in the value of i privately for each click handler. IIFE to the rescue.


var elems = document.getElementsByTagName( 'a' );

for ( var i = 0; i < elems.length; i++ ) {

    (function( lockedInIndex ){
        elems[ i ].addEventListener( 'click', function(e){
            e.preventDefault();
            alert( 'I am link #' + lockedInIndex );
        }, false );
    })( i );
}

Here, for each iteration of the loop, we do what we did before, but inside an IIFE. When invoking it we pass in the i of that iteration, and lockedInIndex is used inside the click handler; since i's value is passed at the IIFE's invocation, it is locked in. So when the click event fires, it doesn't have to look for an outer i. Each link gets its own handler with the right value locked in.

There's another way to do it, by using an IIFE when assigning the click handler, like this:


var elems = document.getElementsByTagName( 'a' );

for ( var i = 0; i < elems.length; i++ ) {

    elems[ i ].addEventListener( 'click', (function( lockedInIndex ){
        return function(e){
            e.preventDefault();
            alert( 'I am link #' + lockedInIndex );
        };
    })( i ), false );

}

Here, the second argument is an IIFE which we invoke at the time of the addEventListener call, so that it returns the function to be executed on click, with the locked-in index. Both methods are equally valid, made easy by IIFEs.

Another great advantage of using an IIFE here is that it doesn't pollute the global scope. You may realize we could have done this by introducing another named function and invoking that inside the loop, as sketched below, but that would be unnecessary pollution of the scope.
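For comparison, here's that named-function alternative (addClickHandler is a name made up for illustration):

// Works the same, but leaves addClickHandler hanging around in the
// enclosing scope even though it's needed nowhere else.
function addClickHandler( elem, lockedInIndex ) {
    elem.addEventListener( 'click', function(e){
        e.preventDefault();
        alert( 'I am link #' + lockedInIndex );
    }, false );
}

for ( var i = 0; i < elems.length; i++ ) {
    addClickHandler( elems[ i ], i );
}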

We've also talked about the module pattern. In the module pattern we basically return an object instead of a function, as we did in the first example of this article (the makeCounter one). Now we can use an IIFE in place of the makeCounter function, so we don't need to declare makeCounter at all. By doing so we are using an IIFE to make a module.


var counter = (function(){
    var i = 0;

    return {
        get: function(){
            return i;
        },
        set: function( val ){
            i = val;
        },
        increment: function() {
            return ++i;
        }
    };
}());

Actually there's nothing special here other than dropping the middleman and using an IIFE to create our module directly. But if we need another counter, it's probably best to go back to a named factory instead of repeating the IIFE.
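A quick sketch of that, reusing the module shape from above:

// A named factory for when we need more than one counter module.
function makeCounterModule() {
    var i = 0;
    return {
        get: function() { return i; },
        set: function( val ) { i = val; },
        increment: function() { return ++i; }
    };
}

var counterA = makeCounterModule();
var counterB = makeCounterModule();
counterA.increment(); // 1
counterB.get();       // 0 -- each module locks in its own i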

So yeah, that’s the gist of IIFEs. They are incredibly useful whenever we need to encapsulate data in execution contexts.

Summary :

  • Immediately Invoked Function Expressions are functions we invoke at the point of definition, so they do one thing and are never used again.
  • Some call them self-executing anonymous functions, which is misleading, as the function doesn't execute or invoke itself the way a recursive function does.
  • Functions, when invoked, create their own execution context, a private scope, which can be used for encapsulation.
  • There are different ways to define a named function in JS.
  • The JavaScript parser treats function declarations and function expressions differently.
  • Inside grouping parentheses we can only have expressions, not statements. We use that to create IIFEs.
  • In some cases we can drop the surrounding parentheses, at the cost of readability.
  • Using an IIFE in combination with a closure lets us encapsulate values inside function objects.
  • IIFEs help keep the global context clean.
  • An IIFE can be used in the module pattern when we want to create a singleton module.

 

 


Module Pattern : Part 1

In any project we do, modularization helps keep everything nice, clean, and separate. It's essential for the important goal of OO design: loosely coupled and highly cohesive systems. One might say JavaScript is not an obviously object-oriented language, and hence not that hospitable to a modularized environment. But that notion is easily debunked, because there are ways to have modularization in our JS applications:

  • Module pattern
  • Object literal notation

There are more non-trivial methods as well. We’ll hopefully venture into them in a later post.

Before we get into the pattern, we must pay a visit to a concept which would have been better discussed in the previous post: object literals. We did mention that one way to make a new object is:

var newObject = {};

What goes inside the curlies? Just a standard concept from our generation of programming languages: key-value pairs. A key can be any identifier or a string.
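For example:

var newObject = {
    count: 42,              // identifier key
    "a string key": true    // string key
};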

So if we write something as {}, that is the most basic of modules. But we can't use a bare {} at the start of a statement, because the engine might interpret it as the start of a block. And there's also the question of how we refer to that module later on. So it's understandable why we use the notation above, with a proper variable name for the module. Later on we can refer to that object and add more properties, like:

newObject.property = 'value';

Of course we covered that in the previous post.

Let’s look at a more thorough example and see what we can derive from that about modules :


var myModule = {
    myProperty: "someValue",

    myConfig: {
        useCaching: true,
        language: "en"
    },
    
    // a very basic method
    saySomething: function() {
        console.log( "Where in the world is Paul Irish today?");
    },
    
    // output a value based on the current configuration
    reportMyConfig: function() {
        console.log( "Caching is: "+ ( this.myConfig.useCaching ? "enabled": "disabled") );
    },
    
    // override the current configuration
    updateMyConfig: function( newConfig ) {
        if( typeof newConfig === "object") {
            this.myConfig = newConfig;
            console.log( this.myConfig.language );
        }
    }
};

// Outputs: Where in the world is Paul Irish today?
myModule.saySomething();
// Outputs: Caching is: enabled
myModule.reportMyConfig();
// Outputs: fr
myModule.updateMyConfig({
    language: "fr",
    useCaching: false
});
// Outputs: Caching is: disabled
myModule.reportMyConfig();

  • Object properties are simple to define.
  • We can define new objects as properties of an object.
  • We can have functions defined here too.
    • One important thing to note here is 'this'. When we define a function inside an object as a property, 'this' inside that function refers to the parent object. This is in contrast to a constructor function defined in the global scope, which is really a way of defining a class in JS: as we discussed in the previous post, 'this' inside such a function refers to the instance created from it via the 'new' keyword. If you haven't read that post, I suggest you go read it first.
    • Now, you may realize we could have used a constructor function like that to define a similar 'module'. But the key difference is that we can create multiple instances of a constructor function using the new keyword. Not so when we define a module like this: this is a pure module, there to be used as is. We define properties here directly, not on a prototype.

Let's look more in depth into using the object literal notation to create modules. A thorough analysis of it can be found here.

An object literal is a way to encapsulate a set of related behaviors. Encapsulating behavior, or modularizing, to stick to the subject at hand, is quite important as it doesn't pollute the global namespace. That is pivotal in large-scale applications. Here we just encapsulate some methods inside an object, simple as that:

var myObjectLiteral = {
    myBehavior1 : function() {
        /* do something */
    },

    myBehavior2 : function() {
        /* do something else */
    }
};

One might think: what's the big fuss about, we can declare our functions anywhere in a JS file, why do we need an object literal anyway? The need comes, as you might guess, with configuration. Let's have a look at a small example. This one goes beyond basic JS into jQuery, as used in the article mentioned above. I'm gonna use the same example as it gets the point across nicely.


$(document).ready(function() {
    // note: '<div/>' creates an element; a bare 'div' string would be
    // appended as plain text
    $('#myFeature li').append('<div/>').each( function() {
        $(this).find('div').load('foo.php?item='+$(this).attr('id'));
    })
    .click( function() {
        $(this).find('div').show();
        $(this).siblings().find('div').hide();
    });
});

What this does: for each li item in #myFeature it appends a div, and into each of those divs it loads the data from the URL built with the item's id. It also binds a click handler which toggles the visibility of the divs. That's a nifty piece of code. But note that it runs once, on the document ready event; if we want to replicate this behavior for a different kind of DOM element, we need to write another function with all the changing parts replaced. We all know that's not how we want to roll in programming. So let's identify the things that can change in this block, so we can apply the object literal pattern to create a configurable and reusable module:

  • The wrapper element, as in the example it’s #myFeature.
  • The container element, as in the example it’s the ‘div’.
  • urlBase, where we get the content. This will only depend on the id that we get from the list items.

I believe it could be made even more configurable than that, but let's focus on these things for now.

So if we modularize this taking the configuration and behavior into consideration, it would look like this :


var myFeature = {
    config : {
        wrapper : '#myFeature',
        container : 'div',
        urlBase : 'foo.php?item='
    },

    init : function(config) {
        // Using jQuery's built-in extend function we can merge configuration overrides easily.
        $.extend(myFeature.config, config);
        $(myFeature.config.wrapper).find('li').
            each(function() {
                myFeature.getContent($(this));
            }).
            click(function() {
                myFeature.showContent($(this));
            });
    },

    buildUrl : function($li) {
        return myFeature.config.urlBase + $li.attr('id');
    },

    getContent : function($li) {
        // build an element from the configured tag name; appending the
        // bare string would insert it as text
        $li.append('<' + myFeature.config.container + '/>');
        var url = myFeature.buildUrl($li);
        $li.find(myFeature.config.container).load(url);
    },

    showContent : function($li) {
        $li.find('div').show();
        myFeature.hideContent($li.siblings());
    },

    hideContent : function($elements) {
        $elements.find('div').hide();
    }
};

How convenient is that? For any element we want to show and hide, given that we have the content mapped to the URLs, we can use this module. We only need to call the init() function with our own modified config object, and the behavior gets applied wherever we want.
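For instance, a sketch of reusing it elsewhere (the selector and URL here are made up):

// Same behavior on a different list, fed by a different endpoint;
// config keys we don't override keep their defaults.
myFeature.init({
    wrapper : '#myOtherFeature',
    urlBase : 'bar.php?item='
});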

Summary

  • Modularity is important in any object-oriented programming language.
  • The most basic of modules is an object: { }
  • Object properties can take various forms.
  • 'this' in a function which is defined as an object property refers to the parent object itself.
  • The object literal notation encapsulates a set of behaviors and properties, hence a module.

So I guess that made the use of object literal notation evident. We will have an in-depth look into the module pattern itself in the next post.

 


Constructor Pattern

Constructors are one of the first things any IT school student learns when it comes to object-oriented programming. I'm not gonna go much into the basic concept; it's enough to know that a constructor is a method which initializes an instance of a class after memory has been allocated to it. Every object-oriented programming language has a variation of it. It may be hidden, called beneath the top layer of execution, but it needs to be there, and so it is in JavaScript.

In JavaScript almost everything is an object. As it goes, there are three ways to create an object in JS:

  1.  var newObject = {};
  2.  var newObject = Object.create( Object.prototype ); 
  3.  var newObject = new Object();

Note: if we pass a value to the Object() constructor, it will create a new object wrapping the value passed. Otherwise it returns just an empty object.
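For example:

var empty   = new Object();     // {}
var wrapped = new Object( 42 ); // a Number object wrapping the primitive 42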

Each of these creates the object newObject. I guess the most popular is the first option, because of its compactness.

There are several ways to define properties for this object too. I know, there are MANY ways to do one thing in JS; it's part of the complexity that comes with the language.

  1. Using the dot syntax :
    • newObject.prop = "some value";
    • We can get the value back the same way, the usual dot syntax.
  2. Square bracket syntax. I’d like to call this the map syntax.
    • newObject["key"] = "some value";
    • Notice that the key has to be a string. Think of it as a map with string values as key and ANYTHING as value.
    • value can be returned with newObject["key"]
  3. Object.defineProperty :
        This is kind of a hardcore method, if you ask me, and it's only available from ECMAScript 5 onwards.

    Eg:

    Object.defineProperty( newObject, "someKey", {
        value: "for more control of the property's behavior",
        writable: true,
        enumerable: true,
        configurable: true
    });
    • One advantage of this method over the two above is that it gives us more control over the property's behavior, via a method defined on the Object class itself. We can configure whether the property is writable, enumerable and configurable.
      1. Writable : the property's value can be changed with the assignment operator. Note that, unlike with plain assignment, this defaults to false when a property is defined through defineProperty.
      2. Configurable : the property descriptor can be changed and the property can be deleted. Defaults to false.
      3. Enumerable : if true, the property shows up when we enumerate the object's properties, for example in a for-in loop or through Object.keys. Defaults to false.
    • There's a lot of information on this method from the Mozilla Developer Network, as is the case for any JavaScript base knowledge. Thank you, MDN!
      • Object properties, or property descriptors as they call them, are either data descriptors or accessor descriptors. The descriptors are themselves objects, and both kinds have the keys 'configurable' and 'enumerable'. Data descriptors, the ones with values, additionally have the optional keys 'writable' and 'value', the latter defaulting to undefined... that's the basic origin of the undefined values we run into as JS noobs. Accessor descriptors instead have the optional keys get and set, functions to get and set the property; both also default to undefined. I haven't really seen them used much, but that could be just me; there's a small sketch after this list.
      • As you can see, there are levels below this book of design patterns which I'll consider out of scope for these articles. If you want more on the method, please go to the linked MDN article.
  4. There’s also the Object.defineProperties() method
    Object.defineProperties( newObject, {

      "someKey": {
        value: "Hello World",
        writable: true
      },
      "anotherKey": {
        value: "Foo bar",
        writable: false
      }
     });
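Here's the accessor-descriptor sketch promised above, assuming nothing beyond plain ES5:

// get/set take the place of 'value': reads and writes go through them.
var temperature = {};
Object.defineProperty( temperature, "fahrenheit", {
    get: function() { return this.celsius * 9 / 5 + 32; },
    set: function( val ) { this.celsius = ( val - 32 ) * 5 / 9; },
    enumerable: true
});

temperature.fahrenheit = 212;
temperature.celsius;    // 100
temperature.fahrenheit; // 212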

How do we do inheritance between objects? One way we saw above is Object.create, which makes a new object whose prototype is the object we pass in:

var individual = Object.create( person );

individual now inherits all the properties of person, and we can define new properties on individual using the Object.defineProperty or defineProperties methods.

So we've been using Object like a class: we defined new objects using new Object() and so on. In the same way we can define our own sort of classes. Somewhere in JS this exists:

function Object () {
...
};

So we can use that with the new keyword to create instances of Object, and likewise we can create our own classes. Inside such a function, the this keyword refers to the new object being created:

function Car (model, year, miles) {
    this.model = model;
    this.year  = year;
    this.miles = miles;
    this.toString = function() {
        return this.model + " has done " + this.miles + " miles";
    };
}

Here we defined a basic constructor for the class Car, so we can create a new Car object using the new keyword:

var superCar = new Car('Porsche', 2011, 1500);
console.log(superCar.toString());

That is as basic as a constructor can get in JS. But it's not perfect. In Java or C++, do we define methods inside our constructors? Every time we call the constructor above, the toString function is redefined for the new instance. We would rather have the function defined once and shared by all instances, like in other languages. Here comes 'prototype' to the rescue.

Every function in JS has a prototype object. This prototype object and its properties are shared between all the instances we create with the new keyword. So instead of defining the function inside Car as above, we can do this:

Car.prototype.toString = function() {
    return this.model + " has done " + this.miles + " miles";
};

Now this single toString method is called for all instances of Car, instead of each instance having its own separate copy. Just what we want.

That's all we'll look into with the constructor pattern in JS. This hopefully covers how objects can be created in JS. Let's move on to other interesting patterns in the coming articles.

SUMMARY :

  • Constructors play an important role in any object-oriented language. The constructor pattern in JS shows how we can have basic constructors in JavaScript.
  • There are multiple ways to create an object in JavaScript, since almost everything in JS is an object.
  • There are several ways to define properties for objects too.
  • defineProperty()/defineProperties() are effective ECMAScript 5 ways to define and customize properties the way we want.
  • Objects can inherit the properties of other objects, for example via Object.create.
  • We can define 'classes' by using the function keyword, and instantiate those 'classes' using the new keyword.
  • Every function has a 'prototype' object. We can use it to share functions between all instances of that function class.

Design patterns in JS

During my initial time as a Software Engineer, I did not give JavaScript much heed, effort, or the respect it deserves. But as you advance in the local software development arena, staying out of or being ignorant of JavaScript is almost impossible. Although I didn't have much respect for JS earlier, it didn't take long to develop a profound respect for the language. I like Python as a programming language too; maybe I just love scripting languages. Anyway, I found that JS is a language that is very popular but, maybe 90% of the time, not used the right way. For most developers JS is about a framework, and they don't give a rat's ass about the basic underlying mechanics and intricacies of the language. I had the opportunity to play with a lot of JS frameworks, but it sort of felt like going on autopilot without knowing the mechanics of it all.

I must say the number of sources for learning JavaScript is overwhelming. This is part of the reason it's all a jumble. Most of the material talks about derivations, or the authors' own interpretations of the language. Now, I do realize some of those interpretations are accepted in the community as a standard because they just work and get the job done; people take them for granted, and there's nothing wrong with that. But isn't it a bit like believing in a religion? Someone says this is what's going on in the world and this is how we should face it, but don't you really want to know what's going on underneath? Perhaps too harsh a metaphor, but it came to mind. Anyway, that's why whenever I'm reading an article on JS, I follow the sources as deep as they go. This works because the article writers are nice enough to reference them. In this way I've found quite a few interesting articles about the underpinnings of JS. I will share those articles in the coming posts as best I can. Be patient, grasshopper.

We all learn about design patterns in our university days, get to use them in various implementations in our projects, and, if we are lucky, get to really apply them after school as well. There are many resources about design patterns, and most describe them using languages like Java or C++. So when I stumbled upon an article describing design patterns from a JS perspective, it immediately grabbed my attention.

[book cover image]

This book is very thorough, just the way I like it. It doesn't have the most attractive use of language, but it gets the big points across. Although I had been aware of design patterns and had been using them in my work, I never got as comfortable a hang of them as I would have liked. Wouldn't it be really cool, as a software developer, to know the design patterns and come up with solutions using them instantly as the need arises? That's the ideal scenario. I have been reading this book for a while now, and I realize its ideas need to be shared. In the coming series of JS design pattern posts I will strive, to the best of my ability, to describe the patterns in it along with my analysis.

 


Configurable SQL access method

Recently I was doing a lot of SQL Server development. One activity involved accessing three SQL Server instances to compare and update data among them. One identity provider server sat at the top of the hierarchy. There were two other services: one a legacy system, the other a newly introduced service, and both basically had to use the same data. The identity service handled the users, plus the authentication and authorization of the common entities these two services would be using.

All the common tables between the databases had only small differences in their schema, but differences nonetheless. A service was in place so that when a new entity was introduced or an entity changed, the other services would learn of the change and update accordingly. But due to unforeseen circumstances, as was often the case in this scenario, the syncing mechanism would fail, ultimately leaving discrepancies between the data in the separate services. Therefore a separate syncing mechanism was needed. This mechanism, which materialized as a separate syncing tool, was to first compare the entries in the identity service tables and the client service tables to identify the disparities, and then perform the usual CRUD operations to bring everything back in line.

The requirements for the tool were roughly:

  1. It should not depend too much on the models of the separate programs.
  2. It should be expandable to additional applications we add to the system in the future.
  3. Should be configurable.
  4. The data and the processing should be disjoint.

Working on this, I really experienced the pain of not getting the requirements set and clarified early on. A lot of the requirements were communicated at different stages, so at first the tool was a simple husk of a program. I had to do a LOT of refactoring and polishing later on, adding more complexity into the mix to make it more configurable.

Everything in each of the services was simplified by the use of Entity Framework. It's really nice and easy to use, IF our models are fairly settled and we work with a single data model. Besides, at even the smallest change in the model we would have to run Entity Framework migrations and rebuild just to get the tool up and running again. Ain't nobody got time for that in a simple tool like the one we built. So Entity Framework was not an option, and good old SQL transactions were the way to go. Additionally, the tool was designed so that the data access method itself was pluggable.

The perfect tool for the situation would have been a super tool where we only define the data sources for the databases, run it, and it syncs. Achieving that was blocked by the slight differences in the database schemas. As I said, the requirement of extending the tool came later; before that, a single tool was made which synced the legacy application with the newly introduced identity provider. Then we had to derive a common tool from that. What I did was introduce a common library, and build separate tools for the client services on top of it, moving as many of the common elements as possible into the common library.

Gradually I got the common library to a satisfactory level of generality. All the logic for comparing and preparing the data for syncing went into the common library; the separate tools only had to supply their own SQL queries for the CRUD work.

Being the lazy guy I am, I didn't want to put all the SQL queries in the code itself and make it a hassle to rebuild the whole thing whenever the schema changed. So I thought of reading the SQL queries from a separate file, making it easy to change the queries whenever needed, and keeping them all in one place. The solution I used was resource strings: all the SQL queries went into a resource file. This way all the queries are in one place and we can change them whenever we want, without rebuilding the tool every time. I can't argue it's the best solution out there; that's why the SQL string provider was made pluggable. The method I used is just one way of sourcing the SQL queries. If a better method comes along, one only needs to know the query string keys used, map them in the new way, and introduce a new SQL string provider.
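A minimal sketch of that pluggable provider idea (the interface, class, and key names here are illustrative, not the tool's actual code):

using System.Resources;

public interface ISqlStringProvider
{
    string GetQuery(string key);
}

public class ResourceSqlStringProvider : ISqlStringProvider
{
    private readonly ResourceManager _resources;

    public ResourceSqlStringProvider(ResourceManager resources)
    {
        _resources = resources;
    }

    public string GetQuery(string key)
    {
        // Each query lives in a .resx file under a well-known key,
        // e.g. "SelectAllUsers" -> "SELECT Id, Name FROM Users".
        return _resources.GetString(key);
    }
}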

This way, building the new separate tool was easy: I only had to introduce a new resource string file with the queries specific to those databases. That's it. Thinking back, there must have been more to it, because I can't quite accept that it was that simple.

Anyway, it worked. It's a simple solution that really made my days as the developer of those tools easier. I would love to discuss the drawbacks and learn how I could have done it better, so please feel free to point out flaws and suggest improvements. Actually, I'm looking forward to it. Thanks!


Ease through effort

We as developers, or programmers, or coders, or whatever you like to call yourself, face a particular problem common to all of us, unless you're a Brian Kernighan or a Guido van Rossum: remembering, or keeping track of, the knowledge you gain. For programmers, the stuff learned at school lays only the foundation of what we encounter at work. The real struggle starts after we leave school, because the knowledge base of IT is rapidly and constantly changing. That is fact. It can be troublesome and wearying to keep up to date on all the things you want to know. The foundation you gain at school can only help you so much. I'm not saying it doesn't matter; it's of utmost importance for gaining real comprehension over just following steps some other nerd has listed. But staying up to par with the rapidly changing technical world must always be a self-driven learning process.

Completing a technical task requires a lot of practical knowledge of whatever technology you are using. More often than not, we have to deal with more than one technology at once, or we have to reapply a solution we researched a year or more earlier. We put a lot of effort into finding a solution for a coding problem; when programming, thinking takes more time than the actual coding. If we do not keep track of what we have done, we may forget the solution altogether, and all that research effort is a huge waste.

That is why I saw the importance of personal documentation. It has helped me immensely in my work, which is why I thought of sharing it with you. This might not be much of a surprise to you fellow bloggers out there. But there are some who cringe at the very thought of documentation or writing anything; ironically, they are sometimes coders too. I hope this post shines some light on the importance of the matter.

I first saw the use of this when I was working on Blender as a committer for my Google Summer of Code projects. There it was necessary to keep a log of your work to complete the project.

http://wiki.blender.org/index.php/User:Phabtar/Full_COLLADA_Animation_Support_for_Blender

http://wiki.blender.org/index.php/User:Phabtar/Improve_COLLADA_constrained_animations_and_Morph_animation_support.

Those are not personal documentation as such, but I gathered a lot of my process and habits from them. It was later, when I started working, that I really began using personal documentation. Early on I faced a challenging problem to do with serialization in .NET. There was a lot of material I researched, and pretty soon it was overwhelming; I needed a place, a method, to organize my thoughts and process.

So I naturally turned to writing. It came intuitively that writing down my thoughts would help me solve the problem. And it did! This has since helped me numerous times in solving pretty challenging problems in various environments.

Later on I started doing it more methodically. I still use a personal Word document to track stuff. That may sound primitive, what with all the online services, but I didn't put much thought into it when I started; I just thought about writing and went with the first solution that came to mind. I still do it this way, but I'm on the lookout for more elegant solutions. http://dillinger.io/ looks promising, and would straightaway help with blogs too. I'm open to other suggestions; please drop a comment and enlighten me. Useful features would be collapsible headers and easy navigation, and version control would be a plus.

You can use any number of categories you like, but what I have kept using, from the Blender days, are these:

  • Todo
    • To keep track of the various tasks you've got to do. I list anything and everything I want to do here.
  • In progress
    • Keeps track of what you are doing at the moment, possibly several things in parallel. The usefulness is obvious: for any particularly challenging problem, thoughts, experiments, and plans can go here and be trialed as you want. For example, for a problem with several candidate solutions, I can strike out the ones I try, which narrows things down to the best solution there is. You just can't miss anything.
  • Completed
    • Keeps track of what has been completed. Really a copy-paste of whatever was finished from In progress, so you don't forget the solutions.
  • Discussions
    • While researching a particular problem, I go on trains of thought about the general use of the technology, which might not relate to the problem at hand. Anything interesting I see while researching goes here for later reference. It's not just material; I record my thoughts here too. This is the section most important to you if you are a blogger, because here you discuss universal things.
  • Notes
    • Any other notes, research data, debug values, whatever.

Be creative in finding the categories that truly call to you. Experiment and have fun. Not all of my projects have these exact categories; I keep different categories for different projects.

Just recently this helped me a great deal. I was working on a bug in a new project and, as per my habit, documented everything, including the complex file-importing process this project used, while searching for a solution. Only a month later I faced another, similar bug, and my sorry mind had forgotten the flow I went through before. I was delighted to find I had documented it the first time. Saved me a heck of a lot of time.

This is clearly not a selfish thing; I have used my documentation for blogging as well. Any solution you discover is precious to you, and equally or more precious to other developers. Because the knowledge base is expanding so fast, solutions are very valuable. The technologies have their own tutorials, guides, and API documentation, but when it comes to practical problems, we are the ones who discover the real solutions. It's practically a crime against programming to let those precious solutions go unshared. The knowledge base is highly disorganized; Google helps only if we have the content. So don't think of your blog as merely something that lands you jobs. It is of real importance to the whole IT world. I'm sure the records I kept of my work on Blender have helped others on the matter; at least I wish so.

One might argue that you don't always have the time for this. That's a dilemma we face as developers our whole frikking lives, don't we? But overall, the effort you put into this is not a waste at all; it will ease your life and the lives of your fellow developers. Whereas any effort you put into just doing your work without sharing your knowledge is, in the big picture, partly wasted, since others, and even you, may have to research the same solutions all over again. So let's ease each other's developer lives with a little bit of effort.
