Front-end Development in dotnet.sg

yeoman-bower-npm-gulp

The web development team in my office at Changi Airport is a rather small team. We have one designer, one UI/UX expert, and one front-end developer. Sometimes, when there are many projects happening at the same time, I will also work on the front-end tasks with the front-end developer.

In the dotnet.sg project, I have the chance to work on the front-end part too. Well, currently I am the only one actively contributing to the dotnet.sg website anyway. =)

Official website for Singapore .NET Developers Community: http://dotnet.sg

Tools

Unlike the projects I have at work, the dotnet.sg project allows me to work with tools that I'd like to explore and tools that help me work more efficiently. Currently, for the front-end of dotnet.sg, I am using the following tools:

  • npm;
  • Yeoman;
  • Bower;
  • Gulp.

Getting Started

I am building the dotnet.sg website, which is an ASP .NET Core web app, on Mac with Visual Studio Code. Hence, before I work on the project, I have to install Node.js to get npm. npm is a package manager that helps to install tools like Yeoman, Bower, and Gulp.

After these tools are installed, I proceed to get a starter template for my ASP .NET Core web app using Yeoman. Bower then follows up immediately to install the required dependencies in the web project.
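
For reference, the commands involved look roughly like this (a sketch; it assumes the generator-aspnet Yeoman generator for the ASP .NET Core template):

$ npm install -g yo bower generator-aspnet

$ yo aspnet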

Starting a new ASP .NET Core project with Yeoman and Bower.

From Bower with bower.json…

Working on the dotnet.sg project helps me to explore more. Bower is one of the new things that I learnt in this project.

To develop a website, I normally make use of several common JS and CSS libraries, such as jQuery, jQuery UI, Bootstrap, Font Awesome, and so on. With so many libraries to manage, things could get quite messy. This is where Bower comes in to help.

Bower helps me to manage 3rd-party resources, such as JavaScript libraries and frameworks, without the need to locate the script files for each resource myself.

For example, we can search for a library we want to use with Bower.

Search the Font Awesome library in Bower.

To install a library, for example Font Awesome in this case, we can easily do it with just one command.

$ bower install fontawesome

The libraries will be installed in the directory as specified in the Bower Configuration file, .bowerrc. By default, the libraries will be located at the lib folder in wwwroot.

Downloaded libraries will be kept in wwwroot/lib as specified in .bowerrc.
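
For reference, a minimal .bowerrc pointing Bower at that folder would look something like this (a sketch based on the default ASP .NET Core template):

{
  "directory": "wwwroot/lib"
}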

Finally, to check the available versions of a library, simply use the following command to find out more about the library.

$ bower info fontawesome

I like Bower because checking bower.json into source control ensures that every developer in the team uses exactly the same libraries. On top of that, Bower also allows us to lock the libraries to a specific version. This prevents developers from downloading different versions of the same library from different sources themselves.
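
To illustrate, a trimmed-down bower.json with the versions pinned might look like the sketch below (the package names and version numbers are only examples):

{
  "name": "dotnet.sg",
  "private": true,
  "dependencies": {
    "bootstrap": "3.3.7",
    "jquery": "2.2.0",
    "font-awesome": "4.7.0"
  }
}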

…to npm with package.json

So, now some of you may wonder, why are we using Bower when we have npm?

Currently, there are also developers advocating that we stop using Bower and switch to npm. Libraries such as jQuery, jQuery UI, and Font Awesome can be found on npm too. So, why do I still talk about Bower so much?

Searching for packages in npm.

For an ASP .NET Core project, I face a problem with referring to node_modules from the View. Similar to Bower, npm also puts the downloaded packages in a local folder. That folder turns out to be node_modules, which sits at the same level as the wwwroot folder in the project directory.

As ASP .NET Core serves the CSS, JS, and other static files from the wwwroot folder, which doesn't contain node_modules, the libraries downloaded from npm cannot be loaded. One way would be to use a Gulp task to copy the files over, but that is too troublesome for my project, so I choose not to go that way.

Please share with me how to do it with npm in an easier way than with Bower, if you know any. Thanks!

Goodbye, Gulp

I first learnt about Gulp when Riza introduced it one year ago at a .NET Developers Community Singapore meetup. He was talking about the tooling in ASP .NET Core 1.0 projects.

Riza is sharing knowledge about Gulp during dotnet.sg meetup in Feb 2016.

However, about four months after the meetup, I came across a video on Channel 9 announcing that the team had removed Gulp from the default ASP .NET template. I'm okay with this change because bundling and minifying CSS and JS can now be done without Gulp: using bundleconfig.json in BundleMinifier seems straightforward.

Discussion on Channel 9 about the removal of Gulp in Jun 2016.
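
As a side note, a minimal bundleconfig.json for BundleMinifier looks something like the sketch below (the file paths are just examples):

[
  {
    "outputFileName": "wwwroot/css/site.min.css",
    "inputFiles": [ "wwwroot/css/site.css" ]
  },
  {
    "outputFileName": "wwwroot/js/site.min.js",
    "inputFiles": [ "wwwroot/js/site.js" ],
    "minify": { "enabled": true }
  }
]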

However, SCSS compilation is something I don't know how to do without Gulp (please tell me if you know a better way, thanks!).

To add back Gulp to my ASP .NET Core project, I do the following four steps.

  1. Create a package.json with only the two compulsory properties, i.e. name and version (Do this step only when package.json does not exist in the project directory);
  2. $ npm install --save-dev gulp
  3. $ npm install --save-dev gulp-sass
  4. Set up the gulpfile.js as shown below.
var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('compile-scss', function () {
    return gulp.src('wwwroot/sass/**/*.scss')
        .pipe(sass().on('error', sass.logError))
        .pipe(gulp.dest('wwwroot/css/'));
});

// Watch task
gulp.task('default', function () {
    gulp.watch('wwwroot/sass/**/*.scss', ['compile-scss']);
});

After that, I just need to execute the following command to run Gulp. Changes made to the .scss files in the sass directory will then trigger the Gulp task to compile the SCSS into the corresponding CSS.

$ gulp

There is also a very detailed online tutorial written by Ryan Christiani, the Head Instructor and Development Lead at HackerYou, explaining each step above.

Oh ya, in case you are wondering what the difference is between --save and --save-dev in the npm commands above, I like how it is summarized on Stack Overflow by Tuong Le, as shown below.

  • --save-dev is used to save the package for development purposes. Example: unit tests, minification.
  • --save is used to save the package required for the application to run.
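
So in this project, gulp and gulp-sass are installed with --save-dev because they are only needed at development time; a runtime library installed through npm would go in with --save instead, for example:

$ npm install --save jquery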

Conclusion

I once heard people say that web developers are the cheap labour of the software development industry because some people still have the mindset that web developers just plug and play modules on WordPress.

After working on the dotnet.sg project and helping out in front-end development at work, I realize that web development is not an easy plug-and-play job at all.


IBM Connect 2015: SoftLayer and Bluemix

IBM Connect 2015 - SoftLayer - Bluemix

With different challenges emerging every other day, startups nowadays have to innovate and operate rapidly in order to achieve exponential growth in a short period of time. Hence, my friends working in startups always complain about the abuse of the 4-letter word "asap". Every task they receive comes with one requirement: it must be done asap. However, as pointed out in the book Rework by Jason Fried from Basecamp, when everything is expected to be done asap, nothing in fact can really be asap. So, how are startups going to monetize their ideas fast enough?

To answer the question, this year IBM Connect Singapore highlighted two cloud platforms, SoftLayer and Bluemix, which help startups build and launch their products at speed.

IBM Connect 2015 at Singapore Resorts World Sentosa

SoftLayer, IaaS from IBM

SoftLayer is a very well-known IaaS cloud service provider from IBM. Currently, SoftLayer has data centres across Asia, Australia, Europe, Brazil, and the United States. William Lim, APAC Channel Development Manager at SoftLayer, stated during the event that, on average, two new data centres are introduced every two months. In addition, each data centre is connected to the Global Private Network, which enables startups to deploy and manage their business applications worldwide.

With the Global Private Network, SoftLayer users won't be charged for any bandwidth usage across the network. Yup, free! Bandwidth between servers on the Global Private Network is unmetered and free. So, with this exciting feature, startups are now able to build a true disaster recovery solution, which requires file transfers from one server to another.

William Lim sharing story about Global Private Network.

What excited me during the event is the concept of the Bare Metal Server. With Microsoft Azure and Amazon Web Services (AWS), users do not get predictable and consistent performance, especially for I/O intensive tasks, when their applications are running on virtual-machine based hosting. In order to handle I/O intensive workloads, IBM SoftLayer offers its users a new type of server, the Bare Metal Server.

A Bare Metal Server is a physical server which is fully dedicated to a single user. It can be set up with cutting-edge Intel server-grade processors to maximize the server's processing power. Hence, startups that would like to build Big Data applications can make use of Bare Metal Servers from SoftLayer to perform data-intensive functions without worrying about latency and overhead delays.

Bluemix, PaaS from IBM

As a user of Microsoft Azure Cloud Service (PaaS), I am very glad to see that Bluemix, the PaaS developed by IBM, was also introduced at the IBM Connect event.

Amelia Johasky, IBM Cloud Leader (ASEAN), sharing how Bluemix works together with three key open compute technologies: Cloud Foundry, Docker, and OpenStack.

One of the reasons why I prefer PaaS over IaaS is because in a startup environment, developers always have too many todos and too little time. Hence, it is not a good idea to add the burden of managing servers to the developers. Instead, developers should just focus on innovation and development. In the world of PaaS, tons of useful libraries are made available and packaged nicely which allows developers to code, test, and deploy easily without worrying too much about the server configuration, database administration, and load balancing. (You can read about my pain of hosting web applications on Azure IaaS virtual machines here.)

After the IBM Connect event, I decide to try out Bluemix to see how it’s different from Azure Cloud Service.

The registration process is pretty straightforward. I started with the Web Application Template. In Bluemix, there are many programming languages supported, including the latest ASP .NET 5, the new open-source and cross-platform framework from Microsoft team!

Many web development platforms are available on Bluemix!

I like how Bluemix is integrated with Git. It allows us to create a hosted Git repository that deploys to Bluemix automatically. The entire Git setup process is also very simple with just one click of the “Git” button. So every time after I push my commits to the repository, my app will be automatically updated on the server as well. Cool!

Bluemix enables us to deploy our web apps with Git.

You can click on the button below to try out my simple YouTube related web app deployed on Bluemix.

Try out my app hosted on Bluemix at http://youtube-replayer.mybluemix.net/.

Bluemix is underpinned by three key open compute technologies, i.e. Cloud Foundry, Docker, and OpenStack. What I have played with is just the Cloud Foundry part. In Bluemix, there is also an option that enables developers to deploy virtual machines. However, this option is currently in beta and users can only access it if they are invited by IBM. Hence, I haven't tried the VM option.

Finally, Bluemix currently only offers two regions, UK and US South. So for those who would like to have their apps hosted in other parts of the world, it may not be a good time to use Bluemix now.

YouTube RePlayer is now hosted on Bluemix.

Azure Blob Storage and File API

Azure Blob Storage - Azure SDK - ASP .NET MVC - Entity Framework - HTML5

When my applications were hosted on Windows Azure Virtual Machines (VM), we stored the images uploaded via our web applications in the hard disks of the VMs (except the temporary disk). However, when we started load balancing, we soon encountered a problem that the uploaded images were only found in one of the VMs. So we needed to find a centralized storage for those images.

Recently, when we were using Azure PaaS (aka Cloud Service), even without load balancing, we encountered the same issue. That is simply because the hard drives used in Cloud Service instances are not persistent. Hence, a persistent file storage on the cloud is needed.

IaaS vs. PaaS

Blob Storage

Azure Blob Storage, according to the Azure documentation, is a service for storing large amounts of unstructured data that can be accessed from everywhere via HTTP or HTTPS. Hence, it is an ideal tool that we can use as the persistent image storage on the cloud.

There are two types of blob, Page Blob and Block Blob. Page Blob is commonly used for storing VHD files for VMs because it is optimized for random read and write operations.

For most of the files uploaded, it’s recommended to store as Block Blobs because large files will be split into smaller blocks and then uploaded concurrently. Hence, Block Blob is designed to give us faster upload and better throughput, which is great for image upload.

The maximum size of a Block Blob that can be uploaded in a single operation is 64 MB. Hence, if the uploaded file is more than 64 MB, we must upload it as a set of blocks; otherwise, we will receive status code 413 (Request Entity Too Large). For my web applications, there is no need to upload an image of more than 5 MB most of the time. Hence, I can just limit the size of images before the user uploads them.

HttpPostedFileBase imageUpload;
...
if (imageUpload.ContentLength > 0 && imageUpload.ContentLength <= 5242880)
{
    // size is within the 5 MB limit, proceed with the upload
}
else
{
    // warn the user to resize the image
}

Let’s Try Uploading Images

I'm going to share how to upload more than one image to Azure Blob Storage from an ASP .NET MVC 5 application. If you are going to upload just one image, simply remove the foreach loop and change the List property to a single DBPhoto in the code below.

First of all, I create a class to handle the upload operation to Azure Storage.

public class AzureStorage
{
    public static async Task<CloudBlockBlob> UploadAndSaveBlobAsync(
        HttpPostedFileBase imageFile, CloudBlobContainer container)
    {
        string blobName = Guid.NewGuid().ToString() + 
            Path.GetExtension(imageFile.FileName);

        CloudBlockBlob imageBlob = container.GetBlockBlobReference(blobName);
        using (var fileStream = imageFile.InputStream) 
        {
            await imageBlob.UploadFromStreamAsync(fileStream);
        }

        return imageBlob;
    }
}

So, in my controller, I have the following piece of code which will be called when an image is submitted via the web page.

[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Create(
    [Bind(Include = "ImageUpload")] PhotoViewModel model)
{
    var validImageTypes = new string[] { "image/jpeg", "image/pjpeg", "image/png" };
    
    if (ModelState.IsValid) 
    {
        if (model.ImageUpload != null && model.ImageUpload.Count() > 0)
        {
            var storageAccount = CloudStorageAccount.Parse 
                (WebConfigurationManager.AppSettings["StorageConnectionString"]);

            var blobClient = storageAccount.CreateCloudBlobClient();
            blobClient.DefaultRequestOptions.RetryPolicy = 
                new LinearRetry(TimeSpan.FromSeconds(3), 3);  

            var imagesBlobContainer = blobClient.GetContainerReference("images");
            foreach (var item in model.ImageUpload) 
            { 
                if (item == null) {
                    continue;
                }
                
                if (validImageTypes.Contains(item.ContentType) && 
                    item.ContentLength > 0 && item.ContentLength <= 5242880)
                {
                    var blob = await AzureStorage.UploadAndSaveBlobAsync(item, imagesBlobContainer);
                    DBPhoto newPhoto = new DBPhoto(); 
                    newPhoto.URL = blob.Uri.ToString();
                    db.DBPhoto.Add(newPhoto); 
                } 
                else 
                {
                    // Show user error message 
                    return View(model); 
                }
            }
            db.SaveChanges();
            ... 
        } 
        else
        {
            // No image to upload
        } 
    }
    return View(model);
}

In the code above, there are many new cool things.

Firstly, it is the connection string to Azure Blob Storage, which I store in StorageConnectionString in web.config. The format for secure connection string is as follows.

DefaultEndpointsProtocol=https;AccountName=;AccountKey=;
Retrieve the access keys to the Storage Account.

Secondly, there is LinearRetry. It is basically a retry policy which states how many times the program will retry and how long to wait between retries. In my case, it will wait 3 seconds between retries, up to 3 tries.

Thirdly, I get the URL of the image on the Azure Blob Storage via blob.Uri.ToString() and store it into the database table. The URL will be used later for displaying the image as well as deleting the image.

Fourthly, I actually check whether model.ImageUpload has null entries. This is because if I submit the form without any image to upload, model.ImageUpload has one entry. Not zero, but one. That only entry is actually null. So if I don't check whether the entry in model.ImageUpload is null, an exception will be thrown.

The controller has quite a lot of code. Luckily the code needed in the model and the view is short and simple.

For the model PhotoViewModel, I have the following.

public class PhotoViewModel
{
    ...
    
    [Display(Name = "Current Images")]
    public List<DBPhoto> AvailablePhotos { get; set; }
}

For the view, it is easy to allow selecting multiple files on the same page. The multiple = "true" attribute makes sure more than one file can be selected in the file explorer. You can omit this attribute if you only want at most one file to be selected.

@Html.LabelFor(model => model.ImageUpload, new { style = "font-weight: bold;" })
@Html.TextBoxFor(model => model.ImageUpload, new { type = "file", multiple = "true" })
@Html.ValidationMessageFor(model => model.ImageUpload)

Image Size and HttpException

The image upload function looks fine. However, when an image larger than a certain size is uploaded, an HttpException will be thrown.

There is no way that having exception would be fun too! (Image Credit: Tari Tari)

In order to prevent DoS attacks which upload huge files to the server, ASP .NET by default only allows files smaller than 4 MB to be uploaded. Hence, although I earlier put in a check to prevent images larger than 5 MB from being uploaded, the exception will still be thrown if an image of size between 4 and 5 MB is uploaded.

What if we just change the if clause above to allow only images of at most 4 MB to be uploaded? This won't work because the exception is already thrown before the if condition is reached.

Then, can we just increase the limit from 4 MB to, let's say, 100 MB or something bigger? Sure. This can work. However, it still doesn't stop someone from uploading something bigger than the limit. Also, it makes it easier for attackers to exhaust your server with big files. Hence, expanding the upload size restriction is not really a full solution.
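
For reference, if you do decide to raise the limit, both the ASP .NET and the IIS settings live in web.config; below is a sketch assuming a 100 MB limit (note that maxRequestLength is in KB while maxAllowedContentLength is in bytes):

<system.web>
  <httpRuntime maxRequestLength="102400" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="104857600" />
    </requestFiltering>
  </security>
</system.webServer>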

If you are interested, there are many good articles online discussing this problem. I highlight some interesting ones below.

  1. Use HttpModule to Handle File Uploads;
  2. Use RIA (Rich Internet Application) Services in Silverlight (Seriously, we are talking about Silverlight in year 2015?);
  3. SubStatusCode = 13 in IIS 7;
  4. Catch the Exception in Global.asax.

I don’t really like the methods listed above, especially the 3rd and 4th options. It’s already too late to inform the user when the exception is thrown. Could we do something at client side before the images are being uploaded?

Luckily, we have the File API in HTML5. It allows us to loop through the files in JavaScript to check their size. So, after the submit button is clicked, I will call a JavaScript method to check the size of the images before they are uploaded.

function IsFileSizeAcceptable() {
    if (typeof FileReader !== "undefined") {
        var filesBeingUploaded = document.getElementById('ImageUpload').files;
        for (var i = 0; i < filesBeingUploaded.length; i++) {
            if (filesBeingUploaded[i].size >= 4194304) { // Less than 4MB only
                alert('The file ' + filesBeingUploaded[i].name + ' is too large. Please remove it from your selection.');
                return false;
            }
        }
    }
    return true;
}
File API is currently supported in major modern browsers. (Image Credit: http://caniuse.com/#feat=fileapi)

Remove from Azure Blob Storage

It’s normal that files uploaded to storage will be removed later. So how are we going to implement this feature in our ASP .NET MVC 5 application?

First of all, I added the following code to my AzureStorage.cs.

public static async Task DeleteBlobAsync(Uri blobUri, CloudBlobContainer container)
{
    string blobName = blobUri.Segments[blobUri.Segments.Length - 1];
    CloudBlockBlob blobToDelete = container.GetBlockBlobReference(blobName);

    await blobToDelete.DeleteAsync(); 
}

Secondly, I just pass in the Azure Storage URL of the image that I would like to remove and then call the DeleteBlobAsync method.

Uri blobUri = new Uri(photo.URL); // the Azure Storage URL of the image, as stored in the database earlier
await AzureStorage.DeleteBlobAsync(blobUri, imagesBlobContainer);

Then the image will be deleted from the Azure Storage successfully.

Global.asax.cs and Blob Container

In order to have my application create the blob container automatically if it doesn't already exist, I add a few lines in Global.asax.cs as follows.

var storageAccount = CloudStorageAccount.Parse(
    WebConfigurationManager.AppSettings["StorageConnectionString"]);
var blobClient = storageAccount.CreateCloudBlobClient();
var imagesBlobContainer = blobClient.GetContainerReference("images");
if (imagesBlobContainer.CreateIfNotExists())
{
    imagesBlobContainer.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });
}

Write a Console Program to Upload File to Azure Storage

So, how is it done if we are developing a console application instead of a web application?

Windows Azure Storage NuGet Package needs to be installed first.

The code below shows how I upload an HTML file from my local hard disk to Azure Blob Storage. Then I can share the Azure Storage URL of the file with my friends so that they can read the web page.

Similar to what I do in web application, this is how I connect to the Storage account via https.

var azureStorageAccount = new CloudStorageAccount(
    new StorageCredentials("", ""), true);

This is how I access the container.

var blobClient = new CloudBlobClient(azureStorageAccount.BlobStorageUri, azureStorageAccount.Credentials);
var container = blobClient.GetContainerReference("myfiles");

Then the next thing I do is just upload the local file to Azure Storage by specifying the file name, content type, etc.

CloudBlockBlob blob = container.GetBlockBlobReference("mysimplepage.html");
using (Stream file = System.IO.File.OpenRead(@"C:\Users\ChunLin\Documents\mysimplepage.html")) 
{
    blob.Properties.ContentType = "text/html"; 
    blob.UploadFromStream(file); 
}

Yup, that’s all. =)

Pricing

Hosting your files on cloud storage is surely convenient. However, Azure Blob Storage is not free. The following table shows the current pricing of Azure Block Blob Storage in the South East Asia region. To get the latest pricing details, please visit the Azure Storage Pricing page.

Azure Standard Block Blob Storage in SEA Pricing

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.

Summer Self-Learning Banner

jQuery Plugins: XDSoft DateTimePicker and Jssor Slider

jQuery - DateTimePicker - Jssor Slider

It is quite common that we need our users to input date and time on the web pages. For example, when you search for flight schedules, you normally need to tell the search engine your journey period.

The Datepicker offered by jQuery UI is normally what people use on their websites. It offers a user-friendly way for the user to input a date from a popup calendar. However, it has a disadvantage: Datepicker doesn't come with an interface for users to input the time. The workaround is normally to provide two more drop-down boxes for the user to choose the hour and minute. However, this means that users have to click more times to input both a date and a time.

A long list of time for customer to pick in The Haven Gateway Lounge reservation page.

DateTimePicker

To solve the problem, I use DateTimePicker, a cool jQuery plugin from XDSoft. Besides the calendar view, it also has a time picker view which enables us to specify the time.

To use the plugin, first I need to include the DateTimePicker CSS file as well as the DateTimePicker JavaScript library.
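
The includes look something like this (a sketch; adjust the paths to wherever the plugin files sit in your project):

<link rel="stylesheet" href="~/lib/datetimepicker/jquery.datetimepicker.css" />
<script src="~/lib/datetimepicker/jquery.datetimepicker.js"></script>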

After that, I need to bind the plugin to all HTML elements having “date_field” as their class.

$(function () {
    $('.date_field').datetimepicker({ format: 'Y-m-d H:i', step: 15, minDate: '0' });
});

In the sample code above, the step defines the gap (in terms of minutes) between two time selections in the plugin. The value of minDate is set to 0 so that the earliest date the user can choose is today.

That’s all. Now you can use the plugin in a text field for user to input both date and time.

@Html.EditorFor(model => model.IncidentTime, 
    new { 
        htmlAttributes = new { 
            @class = "form-control date_field", 
            @placeholder = "Incident Date & Time", 
            @style = "max-width: 100%" 
        } 
    }
)
DateTimePicker enables us to specify both date and time in a user-friendly way.

More Options in DateTimePicker

There are many options available in the plugin. I will just highlight some that I use often here.

If you would like to restrict the time options that can be chosen, you can use the allowTimes together with a defaultTime value as demonstrated below.

$('.date_field_2').datetimepicker({
    format: 'Y-m-d H:i', step: 15, defaultTime: '09:00', minDate: '0',
    allowTimes: [
        '09:00', '09:15', '09:30', '09:45',
        '10:00', '10:15', '10:30', '10:45',
        '11:00', '11:15', '11:30', '11:45',
        '12:00'
    ]
});

The time picker will then only show the 13 options specified in the code above. If the user doesn't pick any of the time options, then by default the chosen time will be 9 am.

In case you would like to hide the time picker, you can do so by setting timepicker to false.

$('.date_field').datetimepicker({ format: 'Y-m-d', timepicker: false });

Some users sent me feedback about the visibility of the time picker in the plugin. To them, the time picker is not obvious enough. Hence, I changed to a brighter background colour in one of the class definitions in jquery.datetimepicker.css.

.xdsoft_datetimepicker .xdsoft_timepicker .xdsoft_time_box >div >div {
    background: #5cb85c; /* Updated here */
    border-top: 1px solid #ddd;
    color: #666;
    font-size: 12px;
    text-align: center;
    border-collapse: collapse;
    cursor: pointer;
    border-bottom-width: 0;
    height: 25px;
    line-height: 25px;
}

Image Slider

Jssor Slider is another free jQuery plugin that I like. It provides a convenient way to build an image slider.

I tried it out in my MVC project. Basically, what I do is use one of the examples as a reference, and then include one JS library (suggestion from the author: use 'jssor.slider.mini.js' (40KB, jQuery plugin) or 'jssor.slider.min.js' (60KB, no-jQuery version) for release) and some other JavaScript code together with some inline CSS. I don't want to talk much about it here because, hey, they have hosted the code and samples on GitHub for the public to download!

Jssor Slider with thumbnail navigator is one of the available templates that I like.

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.

Summer Self-Learning Banner

Playing with Google Maps API

Google Maps - Google Developers - Newtonsoft JSON - Bing Maps

“Given an address, how do I get its latitude and longitude?”

I had been finding the solution for this problem for a long time until I discovered the API from Google Maps, the Geocoding Service.

Recently, I found out that my kampung was actually searchable on Google Maps Street View.

Geocoding

According to the definition given in the Geocoding Service, geocoding is the process of converting human-readable address into geographic coordinates, such as latitude and longitude. Sometimes, the results returned can also include other information like postal code and bounds.

To do a latitude-longitude lookup of a given address, I just need to pass a GeocodeRequest object to the Geocoder.geocode method. For example, if I want to find out the latitude and longitude of Changi Airport, I just do the following in JavaScript.

<script src="https://maps.googleapis.com/maps/api/js?libraries=places"></script>


var address = "Changi Airport";
var geocoder = new google.maps.Geocoder();
if (geocoder) {
    geocoder.geocode(
        { address: address }, 
        function (result, status) {
            if (status != google.maps.GeocoderStatus.OK) {
                alert(address + " not found!");
            } else {
                var topPick = result[0]; // The first result returned
                
                var selectedLatitude = topPick.geometry.location.lat();
                var selectedLongitude = topPick.geometry.location.lng();

                alert("Latitude: " + selectedLatitude.toFixed(2));
                alert("Longitude: " + selectedLongitude.toFixed(2));
            }
        }
    );
} else {
    alert("Geocoder is not available.");
}

The above method is recommended for dynamic geocoding, which responds to user input in real time. However, if what you have is a list of known, valid addresses, the Google Geocoding API is another tool that you can use, especially in server applications. The Geocoding API is what I tried out in the beginning too, as shown in the C# code below.

var googleURL = "http://maps.googleapis.com/maps/api/geocode/json?address=" + 
    Server.UrlEncode(address) + "&sensor=false";

using (var webClient = new System.Net.WebClient())
{
    var json = webClient.DownloadString(googleURL);
    dynamic dynObj = JsonConvert.DeserializeObject(json); 
    foreach (var data in dynObj.results) 
    {
        var latitude = data.geometry.location.lat;
        var longitude = data.geometry.location.lng;
        ...
    } 
}

The reason for using a dynamic JSON object here is that the Geocoding API returns a lot of information, as mentioned earlier, and what I need is basically just the latitude and longitude. So dynamic JSON parsing allows me to get the data without mapping the entire API response to a C# data structure. You can read more about this in Rick Strahl's post about Dynamic JSON Parsing with JSON.NET. He also uses it for a Google Maps related API.

The reason I don't use the Geocoding API is that there are usage limits. Each day, we can only call the API 2,500 times, and only 5 calls per second are allowed. In addition, to use the API we have to get an API key from the Google Developer Console first, and the API is recommended for use in server applications. Thus I chose to use the Geocoding Service instead.

Where to Get the Address?

This seems to be a weird question. The reason I worry about this is that it's very easy to have typos in user input. Sometimes, a typo in an address can mean two different places, for example the two famous cities in Malaysia, Klang and Kluang. The one without "u" is located in the Kuala Lumpur area while the one with "u" is near Singapore.

Klang and Kluang

So I use the Place Autocomplete from Google Maps JavaScript API to provide user a list of valid place name suggestions.

<script src="https://maps.googleapis.com/maps/api/js?libraries=places"></script>

...

<input id="LocationName" name="LocationName" type="text" value="">

...


$(function () {
    var input = document.getElementById('LocationName');
    var options = {
        types: ['address'], 
        componentRestrictions: { country: 'tw' }
    };

    autocomplete = new google.maps.places.Autocomplete(input, options);
});

In the code above, I restricted the places which will be suggested by the Place Autocomplete to be only places in Taiwan (tw). Also, what I choose in my code above is “address”, which means the Place Autocomplete will only return me addresses. There are a few Place Types available.

The interesting thing is that even when I input simplified Chinese characters in the LocationName textbox, the Place Autocomplete is able to suggest me the correct addresses in Taiwan, which are displayed in traditional Chinese.

If I search Malaysia places (which are mostly named in Malay or English) with Chinese words, even though the Place Autocomplete will not show anything, the Geocoder is still able to return me accurate results for some popular cities.

Google Place Autocomplete can understand Chinese!

I also notice that if I view the source of the web page, there will be an attribute called "autocomplete" in the LocationName textbox and its value is set to off. However, this should not be a problem for the Place Autocomplete API to work. So don't be frightened if you see that.

<input ... id="LocationName" name="LocationName" type="text" value="" autocomplete="off">

Putting Two Together

Wouldn't it be good if the location of the address could be shown on a Google Map after keying it into the textbox? Well, it's simple to do so.

Remember the script to look for Changi Airport latitude and longitude above? I just put the code in a function called showLatLngOfAddress which accepts a parameter as address. Then call it when the LocationName loses focus.

$('#LocationName').blur(function () {
    showLatLngOfAddress(input.value);
});

In addition, I add a few more lines of code to showLatLngOfAddress to draw a marker on the Google Map to point out the location of the given address.

var marker = null;

function showLatLngOfAddress(address) {
    ...

    var topPick = result[0];

    ...

    //center the map over the result
    map.setCenter(topPick.geometry.location);
    
    //remove existing marker (if any)
    if (marker != null)
    {
        marker.setMap(null);
    }

    //place a marker at the location
    marker = new google.maps.Marker(
    {
        map: map, 
        position: topPick.geometry.location,
        animation: google.maps.Animation.DROP,
        draggable: true
    });
}

Finally, I not only make the marker draggable, but also enable it to update the latitude and longitude of the address when it is dragged to another location on the map.

google.maps.event.addListener(marker, 'drag', function (event) {
    alert('New Latitude: ' + event.latLng.lat().toFixed(2));
    alert('New Longitude: ' + event.latLng.lng().toFixed(2));
});
Do you know where 台北大桥 is? The map will tell you.

Bing Maps

If you are interested in using Bing Maps, there are Bing Maps REST Services available too.

I tried to search “Kluang” using Bing Maps API, it returned me two locations. One was in Malaysia and another one was near to Palembang in Indonesia! Wow, cool! On the other hand, Google Places returned me only the Kluang in Malaysia.

Unlike Place Autocomplete from Google, it is not straightforward to do place name suggestion using Bing Maps. If you are interested, please read a tutorial written by Vivien Chevallier on how to use the Bing Maps REST Services with jQuery to build an autocomplete box and find a location dynamically. I haven’t tried it out though. Anyway, Google APIs are still easier to use. =P

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.

Summer Self-Learning Banner

Protect Your ASP .NET Applications

ASP .NET MVC - Entity Framework - reCAPTCHA - OWASP - JSON

Here are just a few things that I learnt about protecting and securing my web applications in recent ASP .NET projects.

reCAPTCHA in Razor

CAPTCHA is an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart”.

CAPTCHA is a program that finds out whether the current user is a human or a robot by asking the user to complete a challenge-response test. This feature is important on some websites to prevent machines from, for example, automatically logging in, performing online transactions, or registering as members. Luckily, it's now very easy to include CAPTCHA in our ASP .NET MVC web applications.

Register your website here to use reCAPTCHA: https://www.google.com/recaptcha/admin.

reCAPTCHA is a free Google CAPTCHA service that comes in the form of widget that can be added to websites easily. So, how do we implement reCAPTCHA in our ASP .NET MVC sites?

The library that I use is called ReCaptcha for MVC5, which can be downloaded from Codeplex. With the help of it, I am able to easily plugin reCAPTCHA in my MVC5 web applications.

After adding ReCaptcha.Mvc5.dll in my project, I will need to import its namespace to the Razor view of the page which needs to have reCAPTCHA widget.

@using ReCaptcha.Mvc5;

To render the reCAPTCHA widget in, for example, a form, we will do the following.

<div class="form-group">
    @Html.LabelFor(model => model.recaptcha_response_field, new { @class = "control-label col-md-2" })
    <div class="col-md-10">
        <!--Render the recaptcha-->
        @Html.reCAPTCHA("<public key here>")
    </div>
</div>

The public key can be retrieved from the official reCAPTCHA page after you register your website there.

reCAPTCHA Widget on Website

In the code above, there is a field called recaptcha_response_field, which will be added in our model class as demonstrated below.

public class RegistrationViewModel : ReCaptchaViewModel
{
    ...

    [Display(Name = "Word Verification")]
    public override string recaptcha_response_field { get; set; }
}

To do verification in the controller, we will have the following code to help us.

[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Register(RegistrationViewModel registrationVM)
{
    ...

    if (!string.IsNullOrEmpty(registrationVM.recaptcha_response_field))
    {
        // Verify the recaptcha response.
        ReCaptchaResponse response = 
            await this.verifyReCAPTCHA(registrationVM, "<private key here>", true);

        if (response.Success)
        {
            // Yay, the user is human!
        } 
        else 
        {
            ModelState.AddModelError("", "Please enter correct verification word.");
        }
    }
}

The private key can also be found in the official reCAPTCHA page after you have submitted your website.

After doing all these, you should now be able to have a working reCAPTCHA widget on your website.

XSRF: Cross-Site Request Forgery

In the controller code above, there is one attribute called ValidateAntiForgeryToken. The purpose of this attribute is to prevent XSRF by adding an anti-forgery token to both a hidden form field and a cookie, and then verifying the pair on the server.

I drew a diagram to better explain what XSRF is.

XSRF (Cross-Site Request Forgery)

Steps are as follows.

  1. The user logs in to, for example, a bank website.
  2. The response header from the bank site will contain the user's authentication cookie. Since the authentication cookie is a session cookie, it will only be cleared when the browser session ends. Until then, the browser will always include the cookie with each request to the same bank website.
  3. The attacker sends the user a link and somehow encourages the user to click on it. This causes a request to be sent to the attacker's server.
  4. The attacker website has the following form.
    <body onload="document.getElementById('attack-form').submit()">
        <form id="attack-form" action="https://bank.com/TransferMoney" method="post">
            <input name="transferTo" value="attackerAccount" />
            <input name="currency" value="USD" />
            <input name="money" value="7,000,000,000" />
        </form>
    </body>
  5. Because of Step 4, the user's browser is forced to send a request to the bank website, with the user's authentication cookie attached, to transfer money to the attacker's account.

Hence, the ValidateAntiForgeryToken attribute helps to avoid XSRF by checking that both the cookie and the form carry anti-forgery tokens and that their values match.

Mass-Assignment Vulnerability and Over-Posting Attack

A few years ago, GitHub was found to have a Mass-Assignment Vulnerability. The vulnerability allows people to perform an Over-Posting Attack on the site so that the attackers can modify data items which they are not normally allowed to access. Due to the fact that ASP .NET MVC web applications use model binding, the same vulnerability can happen in the ASP .NET MVC environment as well.

You want to control what is being passed into the binder.

Here are my two personal favourite solutions to avoid the Over-Posting Attack.

One is using the Bind attribute on the controller method. For example, in order to prevent users from editing the value of IsAdmin when they update their profile, I can do something as follows.

[HttpPost]
public ViewResult Edit([Bind(Exclude = "IsAdmin")] User user)
{
    ...
}

Alternatively, we can also use "Include" to define those fields that should be included in the binding, as shown below.
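
For example, a sketch of the inclusion approach, where only the fields that are safe to bind are listed (the property names here are just examples):

[HttpPost]
public ViewResult Edit([Bind(Include = "FirstName,LastName,Email")] User user)
{
    ...
}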

The second solution is using a view model. For example, the following class does not contain properties such as IsAdmin which are not allowed to be edited in the form post of a profile edit.

public class UserProfileUpdateViewModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    ...
}

XSS: Cross-Site Scripting

According to OWASP (Open Web Application Security Project), XSS attacks

…are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites… Flaws that allow these attacks are quite widespread and occur anywhere a web application uses input from a user within the output it generates without validating or encoding it.

Kirill Saltanov from NUS is explaining to guests about XSS during 5th STePS event.

Currently, by default, ASP .NET throws an exception if potentially dangerous content is detected in the request. In addition, the Razor view engine protects us against most XSS attacks by encoding data which is displayed on the web page via the @ tag.

In the View, we also need to encode any user-generated data that we put into our JavaScript code. Starting from ASP .NET 4.0, we can call HttpUtility.JavaScriptStringEncode, which encodes a string so that it is safe to display, escaping characters in a way that JavaScript can understand.

In order to keep malicious markup and script out of our database, we need to encode the user inputs in the Controller as well, using Server.HtmlEncode.
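
A minimal sketch of doing that in a controller action (the CommentViewModel and its Comment property here are made-up names for illustration):

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult AddComment(CommentViewModel model)
{
    // Encode the user-generated text before it is stored in the database
    model.Comment = Server.HtmlEncode(model.Comment);
    ...
}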

[AllowHtml]

There are some cases where our web application should accept HTML tags. For example, if we have a <textarea> element in our blogging system where the user writes the content of a post, then we need to skip the default checking done by ASP .NET.

To post HTML back to our Model, we can simply add the [AllowHtml] attribute to the corresponding property in the Model, for example

public class BlogPost {
    [Key]
    public int ID { get; set; }
    ...
    [AllowHtml]
    public string Content { get; set; }
}

Then in the View, we will need to use @Html.Raw to tell Razor not to encode the HTML markup.

@Html.Raw(post.Content)

Wait… Won’t this make XSS attack possible in our website? Yup, of course. So, we must be very careful whenever we are trying to bypass the Razor encoding. The solution will then be using AntiXSS encoding library from Microsoft.

AntiXSS uses a safe-list approach to encoding. With its help, we will then be able to remove any malicious script from the user input in the Controller, as demonstrated below.

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult CreatePost(BlogPost post)
{
    if (ModelState.IsValid)
    {
        ...
        post.Content = Sanitizer.GetSafeHtmlFragment(post.Content);
        ...
        db.BlogPosts.Add(post);
        db.SaveChanges();
        return RedirectToAction("Index");
    }
    return View(post);
}

ASP .NET Request Validation

Previously in the discussion of XSS, we know that by default ASP .NET throws exception if potentially dangerous content is detected in the request. This is because of the existence of ASP .NET Request Validation.

However, according to OWASP, Request Validation should not be used as our only method of XSS protection because it does not guarantee to catch every type of invalid input.

HttpOnly Cookies

In order to reduce the risk of XSS, popular modern browsers have added a new attribute to cookies called the HttpOnly attribute. This attribute specifies that a cookie is not accessible through script. Hence, it prevents the sensitive data contained in the cookie from being sent to the attacker via malicious JavaScript in an XSS attack.

When a cookie is labelled as HttpOnly, it tells the browser that the cookie should only be accessed by the server. It is very easy to check which cookies are HttpOnly in the developer tool of modern browsers.

Microsoft Edge F12 Developer Tools can tell which are the HttpOnly cookies.

So, how do we create HttpOnly cookies in ASP .NET web applications? Just add a line to set the HttpOnly attribute of the cookie to true.

HttpCookie myCookie = new HttpCookie("MyHttpOnlyCookie");
myCookie["Message"] = "Hello, world!";
myCookie.Expires = DateTime.Now.AddDays(30);
myCookie.HttpOnly = true;
Response.Cookies.Add(myCookie);

Alternatively, HttpOnly attribute can be set in web.config.

<httpCookies httpOnlyCookies="true" ...>

However, as pointed out by OWASP, if a browser is too old to support HttpOnly cookies, the attribute will be ignored by the browser and thus the cookies will be vulnerable to XSS attacks. Also, according to MSDN, HttpOnly does not prevent an attacker with access to the network channel from accessing the cookie directly, so it recommends the use of SSL in addition to the HttpOnly attribute.

The HttpOnly cookie was introduced in 2002 in IE6. Firefox only supported the HttpOnly attribute in 2007, 5 years later, in Firefox 2.0.0.5. However, people soon realized that there was still a bug in Firefox's HttpOnly implementation: Firefox allowed attackers to make an XMLHttpRequest and read the cookie values from the HTTP response headers. Two years later, in 2009, Mozilla finally fixed the bug. Since then, XMLHttpRequest can no longer access the Set-Cookie and Set-Cookie2 headers of any response, regardless of whether the HttpOnly attribute is set.

Browserscope provides a good overview about the security functionalities in major browsers.

SQL Injection and Entity SQL

When I first learned SQL at university, I always thought escaping user inputs helped to prevent SQL Injection. Actually, this approach doesn't work. I just read an article written by Steve Friedl on how escaping the input strings does not protect our applications from SQL Injection attacks. The following is the example Steve gave.

SELECT fieldlist
FROM table
WHERE id = 23 OR 1=1;  -- Boom! Always matches!

When I was working in the Summer Fellowship Programme, I started to use Parameterized SQL.

SqlConnection conn = new SqlConnection(connectionString);
conn.Open();
string sql = "SELECT fieldlist FROM table WHERE id = @id";
SqlCommand cmd = new SqlCommand(sql, conn);
cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
SqlDataReader reader = cmd.ExecuteReader();

This approach provides huge security and performance benefits.

In January, I started to learn Entity Framework. In Entity Framework, there are three types of queries:

  • Native SQL
  • Entity SQL
  • LINQ to Entities

In the first two types, there is a risk of allowing SQL Injection if the developers are not careful enough. Hence, it’s recommended to use parameterized queries. In addition, we can also use Query Builder Methods to safely construct Entity SQL, for example

ObjectQuery<Flight> query =
    context.Flights
    .Where("it.FlightCode = @code",
    new ObjectParameter("code", flightCode));

However, if we choose to use LINQ to Entities, which does not compose queries using string manipulation and concatenation, we will not have the problem of being attacked by traditional SQL Injection.
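
For example, the same flight lookup written with LINQ to Entities would look something like this (a sketch reusing the Flights set and flightCode variable from the Entity SQL example above):

var flight = context.Flights
    .Where(f => f.FlightCode == flightCode)
    .FirstOrDefault();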

JsonResult and JSON Hijacking

Using the MVC JsonResult, we are able to make a controller in our ASP .NET MVC application return JSON. However, by default, ASP .NET MVC does not allow us to respond to an HTTP GET request with a JSON payload (Book: Professional ASP .NET MVC 5). Hence, if we test the controller by typing the URL directly into the browser, we will receive the following error message.

This request has been blocked because sensitive information could be disclosed to third party web sites when this is used in a GET request. To allow GET requests, set JsonRequestBehavior to AllowGet.
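
If a GET really is needed for data which is not sensitive, the opt-in mentioned in the message is just an extra argument to Json(); a sketch (the action and data here are made up for illustration):

public JsonResult OpeningHours()
{
    // Non-sensitive data, wrapped in an object instead of a bare array
    var data = new { weekdays = "9am - 6pm", weekends = "Closed" };
    return Json(data, JsonRequestBehavior.AllowGet);
}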

Since the action only accepts POST requests by default, the browser is able to protect our data by not handing the JSON result to other domains, unless Cross-Origin Resource Sharing (CORS) is implemented.

This is actually a feature introduced by ASP .NET MVC team in order to mitigate a security threat known as JSON Hijacking. JSON Hijacking is an attack similar to XSRF where attacker can access cross-domain JSON data which is returned as array literals.

The reason why “returning JSON data as array” is dangerous is that although browsers nowadays stop us from making cross domain HTTP request via JavaScript, we are still able to use a <script> tag to make the browser load a script from another domain.

<script src="https://www.bank.com/Home/AccountBalance/12"></script>

This is dangerous because a JSON array is treated as a valid JavaScript script and can thus be executed. So, we need to wrap the JSON result in an object, just like what ASP .NET and WCF do. The ASP.NET AJAX library, for example, automatically wraps JSON data with a { d: [] } construct to make the returned value an invalid JavaScript statement which cannot be executed:

{"d" : ["balance", "$7,000,000,000.00"] }

So, to avoid JSON Hijacking, we need to

  1. never return a JSON array;
  2. never allow an HTTP GET request to return sensitive data.

Nowadays, even though JSON Hijacking is no longer a known problem in modern browsers, it is still a concern because “you shouldn’t stop plugging a security hole just because it isn’t likely to be exploited“.

By the way, GMail was successfully exploited via JSON Hijacking. =)

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.

Summer Self-Learning Banner

Nice Report Always Comes with Colourful Charts

Normally, we will show only tabular data in our online report module. However, tables are not exciting. So, why don’t we add some charts to the report to better describe the data? Thanks to Google Charts, web developers can add colourful charts to the web pages easily.

Step 1: The Required Libraries

The first thing we need is the Google API Loader which allows us later to easily load APIs from Google. So, with the help of it, we can make use of the Google Hosted Libraries.

<script type="text/javascript" src="https://www.google.com/jsapi"></script>

After that, we will load the API which will help us draw charts. To do so, we will have the following JavaScript code.

google.load("visualization", "1.1", { packages: ["corechart"] });

Material Design

What is Google Material Design? There is an interesting video from Google Design team sharing the ideas behind it. (Image Credit: Google Design)

Recently, in order to support a common look-and-feel across all Google properties and apps running on Google platforms, Material Design is introduced to Google Charts as well.

There are more than 20 charts available in the Google Chart Gallery. Depending on which chart you would like to use, you need to load a different package in order to use the Material version of that chart. I played around with some of them which are useful in my use cases.

The first chart that I use is the Column Chart which is to draw vertical bar chart. So, the API can be called as follows.

google.load("visualization", "1.1", { packages: ["bar"] });

Other than the vertical bar chart, another common graph used in reports is the Line Chart. The API of the Line Chart can be loaded as follows.

google.load("visualization", "1.1", { packages: ["line"] });
Line Chart is available on Google Charts.

The third chart that I use is Timeline. This chart is different from the two charts introduced above because so far I still can’t find the non-Material version of it. So, the only way to call the API of Timeline is as follows.

google.load("visualization", "1.1", { packages: ["timeline"] });
Timeline is a chart describing the happening events over time.

Step 2: The Data

After we have loaded API of the chart that we want to use, then we need to pump in the data.

var data = new google.visualization.DataTable();
data.addColumn('string', 'Branch');
data.addColumn('number', 'Sales');
data.addRows([
    ['Ang Mo Kio', 1205.80],
    ['Bedok', 828.90],
    ['Clementi', 2222.10],
    ['Dhoby Ghaut', 3180.00]
]);

In the code above, I created a data table with 2 columns. Then I added 4 rows of data with addRows.

The addRows part can be done with a simple for loop. However, due to the fact that not all browsers support a trailing comma, the for loop needs an additional step to remove the trailing comma.

Step 3: Chart Render

After we have the data, now we can proceed to draw the chart. For example, if we want to draw a Column Chart for the data table above, then we will use the following code.

var chart = new google.visualization.ColumnChart(document.getElementById('chart_div'));
chart.draw(data);

The HTML element chart_div above is just an empty div where the chart should be rendered at.

We will then be able to get the following diagram (Non-Material).

A simple Column Chart.

Customization of Charts with Options

Google Charts allows us to customize the diagram. For example, we can add title for the diagram, horizontal axis, and vertical axis.

var options = {
    title: 'Sales of Branches',
    hAxis: {
        title: 'Branch'
    },
    vAxis: {
        title: 'Amount (SGD)',
        minValue: 0
    },
    legend: {
        position: 'none'
    }
 };

chart.draw(data, options);

In the code above, I not only added titles, but I also force the vertical axis to start from 0 and hide the legend by setting its position to none.

Column chart is now updated with helpful titles.

What we have seen so far is the non-Material Column Chart. So what will a Material Column Chart look like?

To get a Material Column Chart, we will change the code above to the following.

var options = {
    chart: {
        title: 'Sales of Branches',
        subtitle: '2015 First Quarter'
    },
    axes: {
        y: { 0: { label: 'Amount (SGD)' } },
        x: { 0: { label: 'Branch' } } 
    },
    legend: {
        position: 'none'
    }
};

var chart = new google.charts.Bar(document.getElementById('chart_div'));

chart.draw(data, options);

Then we will be able to get the Material version of the chart. Now, we are even able to define a subtitle for the diagram.

Yup, we successfully upgraded our chart to the Material version.

In case you would like to convert a non-Material Column Chart to the Material version, you can also do so with the code below.

chart.draw(data, google.charts.Bar.convertOptions(options));

Challenge with Tab in Bootstrap

When I was adding Google Charts to one of my web pages with tabs using the Bootstrap framework, I realized there was a problem with the display of the rendered diagram: the labels in the chart were incorrectly positioned. This problem has been discussed on Stack Overflow as well.

 

Display Issues of Google Charts in Bootstrap Tabs

Interestingly, if the chart happens to be in the first tab, which is visible by default, then there won't be any display issue with the chart. This problem only occurs when the charts are located in subsequent tabs which are hidden during the first load of the page. When the user clicks on any of those subsequent tabs, the display issue will happen.

So one obvious solution is to call the Google API to render the graph only when the tab is clicked. To be safe, I actually put a delay on the click event of the tab so that the chart is drawn 3 seconds after the corresponding tab is clicked. This seems to help fix the display problem.

Similar to my solution, there is also a better alternative, which is to bind the draw function to the show event of the tab, as discussed on Stack Overflow. I like that solution too because of its cleaner code.

Try More with Google Charts

Google Charts is undoubtedly a very easy-to-use solution for web developers to present their data. So, please try it out and be amazed by the number of charts available on Google Charts.

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.

Summer Self-Learning Banner