Burger and Cheese


As a web developer, I don't have many chances to play with mobile app projects. So rather than limiting myself to just one field, I love to explore other technologies, especially mobile app development.

Burger Project: My First Xamarin App

Last month, I attended a Xamarin talk at the Microsoft Singapore office with my colleague. The talk was about authentication and authorization with social networks such as Facebook and Twitter via Azure App Service: Mobile App.

Ben Ishiyama-Levy talking about how Xamarin and Microsoft Azure work together.

The speaker was Ben Ishiyama-Levy, a Xamarin evangelist. His talk inspired me to further explore how I could retrieve user info from social networks after authenticating the users.

Because I am geek-first and really wanted to find out more, I continued to read up on this topic. With help from my colleague, I developed a simple Xamarin.Android app to demonstrate the authentication and the retrieval of the logged-in user's info.

The demo app is called Burger and it can be found in my GitHub repository: https://github.com/goh-chunlin/Burger.

Challenges in Burger Project

Retrieving a user's info from a social network.

In the Burger project, the first big challenge was to understand how Azure App Service: Mobile App works in Xamarin. Luckily, with the material and tutorial given in Ben's Xamarin talk, I was able to get a quick start on this.

My colleague also shared another tutorial about getting an authenticated user's personal details on the Universal Windows Platform (UWP). It helped me a lot in understanding how a mobile app and Azure App Service can work together.

My second challenge in this project was to understand the Facebook Graph API. I still remember spending quite some time finding out why I could not retrieve the friend list of a logged-in Facebook user. Since the introduction of Facebook Graph API 2.0, access to a user's friends list via /me/friends has been limited to just friends using the same app. Hence, after reading a few other online tutorials, I was finally able to get another subset of a user's friends via /me/taggable_friends.
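
For illustration, the following is a minimal sketch of how such a Graph API call could look in C#. This is not the exact code in Burger; the access token parameter and the API version in the URL are assumptions.

using System.Net.Http;
using System.Threading.Tasks;

public static async Task<string> GetTaggableFriendsAsync(string accessToken)
{
    using (var client = new HttpClient())
    {
        // /me/taggable_friends returns a paged JSON list of friend names and
        // profile pictures (but not their user IDs)
        string url = "https://graph.facebook.com/v2.5/me/taggable_friends?access_token="
            + accessToken;
        return await client.GetStringAsync(url);
    }
}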

In this project, it's also the first time I applied Reflection in a personal project. It helps me retrieve the corresponding social network login class while keeping the code neat and organized.
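
The following is a minimal sketch of the idea; the class and namespace names here are hypothetical, not the actual ones in Burger. Given the name of a social network, Reflection locates and instantiates the matching login class.

using System;

namespace MyApp
{
    public interface ISocialNetworkLogin
    {
        void Login();
    }

    public class FacebookLogin : ISocialNetworkLogin
    {
        public void Login() { /* Facebook-specific login here */ }
    }

    public static class LoginFactory
    {
        // e.g. networkName = "Facebook" resolves to MyApp.FacebookLogin
        public static ISocialNetworkLogin Create(string networkName)
        {
            Type type = Type.GetType("MyApp." + networkName + "Login");
            return (ISocialNetworkLogin)Activator.CreateInstance(type);
        }
    }
}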

Microsoft Developer Day at NUS, Singapore in May 2016

Cheese Project: When Google Speech Meets MS LUIS on Android

A few months ago, I was fortunate to represent my company at Microsoft Developer Day 2016 in the National University of Singapore (NUS).

The day marked the first time Microsoft CEO Satya Nadella came to Singapore. It was also my first time learning about the powerful Cognitive Services and LUIS (Language Understanding Intelligence Service) in Microsoft Azure, in Riza's talk.

Riza’s presentation about Microsoft Cognitive APIs during Microsoft Developer Day.

Challenges in Cheese Project

Every day, it takes about one hour for me to reach home from the office. Hence, I only have two to three hours every night to work on personal projects and learning. During weekends, when people are having fun out there, I spend time researching some exciting new technologies.

There are many advanced topics in LUIS. I still remember that when I was learning how LUIS works, my friend was playing Rise of the Tomb Raider beside me. So while he was there phew-phew-phew, I was doing data training on the LUIS web interface.

Microsoft LUIS (Language Understanding Intelligence Service) and Intents

Currently, I have only worked on some simple intents, such as returning the current date and time as well as understanding which language I want to translate to.

My first idea in the Cheese project was to build an Android app such that if I say “Please translate blah-blah to xxx language”, the app will understand and do the translation accordingly. This can be done quite easily with the help of both LUIS and Google Translate.
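
As a rough illustration, this is how an utterance could be sent to LUIS from C#. This is a sketch only: the app ID and subscription key are placeholders, and the endpoint format follows the LUIS v1 preview API that was available at the time.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<string> QueryLuisAsync(string utterance)
{
    using (var client = new HttpClient())
    {
        string url = "https://api.projectoxford.ai/luis/v1/application"
            + "?id=YOUR_APP_ID&subscription-key=YOUR_KEY"
            + "&q=" + Uri.EscapeDataString(utterance);

        // The JSON response contains the ranked intents and detected entities,
        // e.g. a translation intent together with a target language entity
        return await client.GetStringAsync(url);
    }
}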

After showing this app to my colleagues, we realized one problem with it: it's too troublesome for users to keep saying “Please translate blah-blah to xxx language” every time they need to translate something. Hence, I recently changed it to use a GUI to provide language selection. This, however, reduces the role played by LUIS in this project.

VoiceText provides a range of speakers and voices with emotions!

To make the project even more fun, I implemented the VoiceText Web API from Japan in the Android app. The cool thing about this TTS (Text-To-Speech) API is that it allows developers to specify the mood and characteristics of the voice. The challenge, of course, is to read the API documentation written in Japanese. =P
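
Below is a minimal sketch of how a VoiceText request could look in C#. The API key variable is a placeholder, and the speaker/emotion parameter values are based on my reading of the (Japanese) VoiceText Web API documentation.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static async Task<byte[]> SynthesizeAsync(string apiKey, string text)
{
    using (var client = new HttpClient())
    {
        // VoiceText uses HTTP Basic auth with the API key as the username
        var authBytes = Encoding.ASCII.GetBytes(apiKey + ":");
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", Convert.ToBase64String(authBytes));

        var parameters = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "text", text },
            { "speaker", "haruka" },    // one of the available voices
            { "emotion", "happiness" }, // mood of the voice
            { "emotion_level", "2" }
        });

        // The response body is the synthesized speech as a WAV byte stream
        var response = await client.PostAsync("https://api.voicetext.jp/v1/tts", parameters);
        return await response.Content.ReadAsByteArrayAsync();
    }
}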

Oh ya, this is the link to my Cheese repository on GitHub: https://github.com/goh-chunlin/Cheese. I will continue to work on this project while exploring more about LUIS. Stay tuned.


After-Work Personal Projects

There are still more things in mobile app development for me to learn. Even though most of the time I feel exhausted after a long workday, working on new and exciting technologies helps me get energized again in the evening.

I'm not as hardworking as my friends who are willing to sacrifice their sleep for their hobby projects and learning, hence the progress of my personal project development is kind of slow. Oh well, at least now I have my little app to help me talk to people when I travel to Hong Kong and Japan next year!

Growth Hacker: Hybrid of Marketer and Programmer

Growth Hacker Marketing and Growth Hacking Handbook

I don't know why people like to call software developers hackers. Even though calling them hackers is way better than using other names like ninjas or code monkeys, I think it's still not appropriate to call developers hackers. So, I found it quite weird when I met someone who is a Growth Hacker. Err… hack what?

Growth Hacker

Andrew Chen, an investor in tech startups, describes a growth hacker as

a hybrid of marketer and coder, one who looks at the traditional question of “How do I get customers for my product?” and answers with A/B tests, landing pages, viral factor, email deliverability, and Open Graph.

A growth hacker is someone who has thrown out the playbook of traditional marketing and replaced it with only what is testable, trackable, and scalable. Their tools are e-mails, pay-per-click ads, blogs, and platform APIs instead of commercials, publicity, and money.
(Ryan Holiday, Growth Hacker Marketing. New York: Penguin Group, 2013)

Hence, a growth hacker plays an important role in a startup or SME which has little to no resources. Growth hackers need to make use of their programming skills to provide a more scientific way of understanding who their customers are and where they are.

Developer + Marketer = Growth Hacker

Growth hacking highlights the importance of Product-Market Fit. Instead of expecting marketers to market a product that nobody wants, a company with a growth hacking mindset will spend time trying different ways of improving the product based on customer feedback.

A growth hacker tests ideas by letting customers try out different versions of the product and then asking the customers what they like about it. One way to receive feedback is by looking at the conversion rate or the number of Facebook likes received. Hence, this helps the company to publish a product which is worth marketing and which the majority of its customers love to use.

Different Stages in Growth Hacking

I'm glad to have talked to people who are experienced in growth hacking. They recommended two books for me to read. One is Growth Hacker Marketing by Ryan Holiday.

In Ryan Holiday’s book, growth hacking basically has 3 stages as follows.

  1. Finding your growth hack;
  2. Going viral;
  3. Closing the loop: Retention and optimization.
3 stages in Growth Hacker Marketing.

Growth Hacking Tactics

Another book recommended to me is the Growth Hacking Handbook written by Jon Yongfook. The book suggests 100 growth tactics which many startups have successfully applied over the last two decades.

I am not going to list all 100 items here. So, I will just highlight some of them which I find interesting.

Tactic 01: The We Can’t Go Back Jack Hack

On the signup page or checkout page, disable or remove all navigational elements that would enable a user to go back to the previous page. This includes disabling your site logo, which is often linked to the homepage.

The reason for having this tactic is that preventing users from leaving a process is sometimes good enough to force them to complete the entire signup/booking process.

In the Amazon checkout page, users can't go back to the homepage by clicking on the logo. Instead, they have to click on the tiny link at the bottom.

I don't really like this idea because it's sort of like locking customers in your shop and then telling them that they can't leave until they make a payment. It creates a bad user experience.

Growth Hacking: Shut the door, release the dog and then lock the customers in our shop!

Tactic 02: The Pre-filled Form Hack

Instead of forcing customers to buy our goods before leaving our shop, why don’t we just create a fast signup/checkout process?

One way of doing that is to make the form short and pre-fill form fields with information we already have, such as the customer's email address.

For example, the checkout form below will scare customers away not only because it's too long, but also because it requests too much personal information from the customers without explaining why the information is needed.

Booking ferry tickets from Singapore to Batam on Easybook requires you to enter every single passenger's name, gender, DOB, nationality, passport, and passport expiry date. On top of that, the system also requires you to manually specify the number of adults and children even though the DOB of each passenger is already given.

Tactic 03: The As Seen On TV Hack

Users feel more comfortable when they know the products they are using are something that famous people have already used. So having logos of media outlets that have mentioned our product on our homepage is another form of social proof.

Drive.SG is using the As-Seen-On hack.

Tactic 04: The Multi Post Hack

This tactic is also quite straightforward. It basically means giving the option to post the website content to other social networks with just one click. That helps push our product to a wider audience via social networks.

Tactic 05: The Timebomb Hack

This is a pressure tactic where a time limit is set when a user is making a critical decision such as a purchase. I don't like to use this tactic in my ecommerce website because, as a consumer, I prefer to choose what to buy without any pressure. I normally just quit a website when it forces me to finish a task in very little time.

Customers need to complete their transaction within 5 minutes on CurrencyBooking, a money-changing platform.

Tactic 06: The Winback Hack and the Negative Follow Up Hack

These two tactics are quite similar. The goal is to get feedback from users who are inactive or have incomplete transactions, and to incentivize them to return to the website.

A simple winback campaign can be automatically sending a highly personalized email to those users who have not logged in for the past 30 days. Normally, a winback campaign is our last chance at communicating with our users before they unsubscribe from us.

Sending email to customers who have signed up but have not bought any of our products is also important. That will help us better understand why some customers do not want to buy from us (and sometimes it could be because of bugs in the payment module).

The following is an email I received after subscribing to an online service.

Hi Chun Lin,

I noticed you recently added our product to your cart but did not submit the order – wondering if you have any questions about our platform or pricing?

We’re always here to help so please do not hesitate to contact me! You can find my contact info, including my direct number, in the email signature below.

Direct number and contact info! Wow!

I didn't sign up with them in the end because I didn't really need their product at that moment. However, I was still very impressed by their friendly email.

Tactic 07: The Intro Video Hack

If a picture is worth a thousand words, then a video is worth a million. Most people will find video content more interesting than standard text content. Hence, video content is useful to attract a significant number of inbound links and social shares for our website.

Recently, the Google Search results page has also started displaying Rich Snippets which contain information about the video embedded on a page. The Rich Snippet helps our web page stand out from the other search results on the page. Therefore, users will be more inclined to click on the link pointing to our website.

Tactic 08: The Register to Save Hack

Instead of getting users to sign up first, we can choose to only ask users for their email after they have gone through some steps. This encourages users to try our product first and understand it better, so that the eventual conversion to signup is easy.

Singapore Real Estate Exchange allows users to freely search, then requires login/signup when users want to shortlist the property.

Tactic 09: The Widget Hack and The Affiliate Program Hack

Instead of asking users to share our website via a link in text, why not give them an embeddable widget for our product which can easily be added to any other website or blog? From there, we can get those who embed our widget on their websites/blogs to participate in our affiliate program so that people are motivated to refer customers to our website.

Bus ticket search widget from BusOnlineTicket.com Affiliate Program.

Tactic 10: The Remarketing Tag Hack

Nowadays, on Facebook, there is an option to create Custom Audiences from our website, which enables us to target our Facebook Adverts only to those who have visited our website. Alternatively, it also allows excluding existing customers so that we can focus on new customer acquisition campaigns.

IBM Connect 2015: SoftLayer and Bluemix

IBM Connect 2015 - SoftLayer - Bluemix

With different challenges emerging every other day, startups nowadays have to innovate and operate rapidly in order to achieve exponential growth in a short period of time. Hence, my friends working in startups always complain about the abuse of the 4-letter word “asap”. Every task they receive comes with one requirement: it must be done asap. However, as pointed out in the book Rework by Jason Fried from Basecamp, when everything is expected to be done asap, nothing really can be. So, how are startups going to monetize their ideas fast enough?

To answer that question, this year IBM Connect Singapore highlighted two cloud platforms, SoftLayer and Bluemix, which help startups build and launch their products at speed.

IBM Connect 2015 at Singapore Resorts World Sentosa

SoftLayer, IaaS from IBM

SoftLayer is a very well-known IaaS cloud service provider from IBM. Currently, SoftLayer has data centres across Asia, Australia, Europe, Brazil, and the United States. William Lim, APAC Channel Development Manager at SoftLayer, stated during the event that two new data centres are introduced every two months on average. In addition, each data centre is connected to the Global Private Network, which enables startups to deploy and manage their business applications worldwide.

With the Global Private Network, SoftLayer users won't be charged for any bandwidth usage across the network. Yup, free! Bandwidth between servers on the Global Private Network is unmetered and free. So, with this exciting feature, startups are now able to build true disaster recovery solutions which require file transfer from one server to another.

William Lim sharing the story of the Global Private Network.

What excited me most during the event was the concept of the Bare Metal Server. With Microsoft Azure and Amazon Web Services (AWS), users do not get predictable and consistent performance, especially for I/O intensive tasks, when their applications are running on virtual-machine based hosting. In order to handle I/O intensive workloads, IBM SoftLayer offers its users a new type of server, the Bare Metal Server.

A Bare Metal Server is a physical server which is fully dedicated to one single user. Bare Metal Servers can be set up with cutting-edge Intel server-grade processors to maximize the server's processing power. Hence, startups that would like to build Big Data applications can make use of a Bare Metal Server from SoftLayer to perform data-intensive functions without worrying about latency and overhead delays.

Bluemix, PaaS from IBM

As a user of Microsoft Azure Cloud Service (PaaS), I was very glad to see Bluemix, the PaaS developed by IBM, also being introduced at the IBM Connect event.

Amelia Johasky, IBM Cloud Leader (ASEAN), sharing how Bluemix works together with three key open compute technologies: Cloud Foundry, Docker, and OpenStack.

One of the reasons why I prefer PaaS over IaaS is that in a startup environment, developers always have too many todos and too little time. Hence, it is not a good idea to add the burden of managing servers to the developers. Instead, developers should just focus on innovation and development. In the world of PaaS, tons of useful libraries are made available and packaged nicely, which allows developers to code, test, and deploy easily without worrying too much about server configuration, database administration, and load balancing. (You can read about my pain of hosting web applications on Azure IaaS virtual machines here.)

After the IBM Connect event, I decided to try out Bluemix to see how it's different from Azure Cloud Service.

The registration process is pretty straightforward. I started with the Web Application Template. Bluemix supports many programming languages, including the latest ASP .NET 5, the new open-source and cross-platform framework from the Microsoft team!

Many web development platforms are available on Bluemix!

I like how Bluemix is integrated with Git. It allows us to create a hosted Git repository that deploys to Bluemix automatically. The entire Git setup process is also very simple, with just one click of the “Git” button. So every time I push my commits to the repository, my app is automatically updated on the server as well. Cool!

Bluemix enables us to deploy our web apps with Git.

You can click on the button below to try out my simple YouTube related web app deployed on Bluemix.

Try out my app hosted on Bluemix at http://youtube-replayer.mybluemix.net/.

Bluemix is underpinned by three key open compute technologies, i.e. Cloud Foundry, Docker, and OpenStack. What I have played with is just the Cloud Foundry part. In Bluemix, there is also an option that enables developers to deploy virtual machines. However, this option is currently in beta and users can only access it if they are invited by IBM. Hence, I haven't tried their VM option.

Finally, Bluemix currently offers only two regions, UK and US South. So for those who would like to have their apps hosted in other parts of the world, it may not be a good time to use Bluemix yet.

YouTube RePlayer is now hosted on Bluemix.

Azure Blob Storage and File API

Azure Blob Storage - Azure SDK - ASP .NET MVC - Entity Framework - HTML5

When my applications were hosted on Windows Azure Virtual Machines (VM), we stored the images uploaded via our web applications on the hard disks of the VMs (except the temporary disk). However, when we started load balancing, we soon encountered a problem: the uploaded images were only found on one of the VMs. So we needed to find a centralized storage for those images.

Recently, when we were using Azure PaaS (aka Cloud Service), even without load balancing, we encountered the same issue. That is simply because the hard drives used in Cloud Service instances are not persistent. Hence, a persistent file storage on the cloud is needed.

IaaS vs. PaaS

Blob Storage

Azure Blob Storage, according to the Azure Documentation, is a service for storing large amounts of unstructured data that can be accessed everywhere via HTTP or HTTPS. Hence, it is an ideal tool that we can use as the persistent image cloud storage.

There are two types of blobs, the Page Blob and the Block Blob. The Page Blob is commonly used for storing VHD files for VMs because it is optimized for random read and write operations.

For most uploaded files, it's recommended to store them as Block Blobs because large files will be split into smaller blocks and then uploaded concurrently. Hence, the Block Blob is designed to give us faster upload and better throughput, which is great for image upload.

The maximum size for a Block Blob uploaded in a single operation is 64 MB. Hence, if the uploaded file is more than 64 MB, we must upload it as a set of blocks; otherwise, we will receive status code 413 (Request Entity Too Large). For my web applications, there is no need to upload an image of more than 5 MB most of the time. Hence, I can just limit the size of images before the user uploads them.

HttpPostedFileBase imageUpload;
...
// 5242880 bytes = 5 MB
if (imageUpload.ContentLength > 0 && imageUpload.ContentLength <= 5242880)
{
    // proceed with the upload
}
else
{
    // warn the user to resize the image
}

Let’s Try Uploading Images

I'm going to share how to upload more than one image to Azure Blob Storage from an ASP .NET MVC 5 application. If you are going to upload just one image, simply remove the for loop and change the List<DBPhoto> to just DBPhoto in the code below.

First of all, I create a class to handle the upload to Azure Storage.

public class AzureStorage
{
    public static async Task<CloudBlockBlob> UploadAndSaveBlobAsync(
        HttpPostedFileBase imageFile, CloudBlobContainer container)
    {
        string blobName = Guid.NewGuid().ToString() + 
            Path.GetExtension(imageFile.FileName);

        CloudBlockBlob imageBlob = container.GetBlockBlobReference(blobName);
        using (var fileStream = imageFile.InputStream) 
        {
            await imageBlob.UploadFromStreamAsync(fileStream);
        }

        return imageBlob;
    }
}

So, in my controller, I have the following piece of code which will be called when an image is submitted via the web page.

[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Create(
    [Bind(Include = "ImageUpload")] PhotoViewModel model)
{
    var validImageTypes = new string[] { "image/jpeg", "image/pjpeg", "image/png" };
    
    if (ModelState.IsValid) 
    {
        if (model.ImageUpload != null && model.ImageUpload.Count() > 0)
        {
            var storageAccount = CloudStorageAccount.Parse 
                (WebConfigurationManager.AppSettings["StorageConnectionString"]);

            var blobClient = storageAccount.CreateCloudBlobClient();
            blobClient.DefaultRequestOptions.RetryPolicy = 
                new LinearRetry(TimeSpan.FromSeconds(3), 3);  

            var imagesBlobContainer = blobClient.GetContainerReference("images");
            foreach (var item in model.ImageUpload) 
            { 
                if (item == null) {
                    continue;
                }
                
                if (validImageTypes.Contains(item.ContentType) && 
                    item.ContentLength > 0 && item.ContentLength <= 5242880)
                {
                    var blob = await AzureStorage.UploadAndSaveBlobAsync(item, imagesBlobContainer);
                    DBPhoto newPhoto = new DBPhoto(); 
                    newPhoto.URL = blob.Uri.ToString();
                    db.DBPhoto.Add(newPhoto); 
                } 
                else 
                {
                    // Show user error message 
                    return View(model); 
                }
            }
            db.SaveChanges();
            ... 
        } 
        else
        {
            // No image to upload
        } 
    }
    return View(model);
}

In the code above, there are many cool new things.

Firstly, there is the connection string to Azure Blob Storage, which I store as StorageConnectionString in web.config. The format of a secure connection string is as follows.

DefaultEndpointsProtocol=https;AccountName=;AccountKey=;
Retrieve the access keys to the Storage Account.

Secondly, there is the LinearRetry. It is basically a retry policy which states how many times the program will retry and how much time is needed between retries. In my case, it will wait 3 seconds after each try, for up to 3 tries.

Thirdly, I get the URL of the image on the Azure Blob Storage via blob.Uri.ToString() and store it into the database table. The URL will be used later for displaying the image as well as deleting the image.

Fourthly, I actually check whether model.ImageUpload has null entries. This is because if I submit the form without any image to upload, model.ImageUpload has one entry. Not zero, but one. That one entry is actually null. So if I don't check whether the entry in model.ImageUpload is null, an exception will be thrown.

The controller code is quite long. Luckily, the code needed in the model and view is short and simple.

For the model PhotoViewModel, I have the following.

public class PhotoViewModel
{
    ...
    
    [Display(Name = "Current Images")]
    public List<DBPhoto> AvailablePhotos { get; set; }
}

For the view, it is easy to allow selecting multiple files in the same view page. The multiple = "true" attribute is to make sure more than one file can be selected in the File Explorer. You can omit this attribute if you want at most one file to be selected.

@Html.LabelFor(model => model.ImageUpload, new { style = "font-weight: bold;" })
@Html.TextBoxFor(model => model.ImageUpload, new { type = "file", multiple = "true" })
@Html.ValidationMessageFor(model => model.ImageUpload)

Image Size and HttpException

The image upload function looks fine. However, when an image larger than a certain size is uploaded, an HttpException will be thrown.

There is no way that having exception would be fun too! (Image Credit: Tari Tari)

In order to prevent DOS attacks which upload huge files to the server, IIS by default only allows files smaller than 4 MB to be uploaded. Hence, although I earlier put a check to prevent images larger than 5 MB from being uploaded, the exception will still be thrown if an image of size between 4 and 5 MB is uploaded.

What if we just change the if clause above to allow only at most 4MB of image being uploaded? This won’t work because the exception is already thrown before the if condition is reached.

Then, can we just increase the IIS limit from 4 MB to, let's say, 100 MB or something bigger? Sure, this can work. However, it still doesn't stop someone from uploading something bigger than the limit. Also, it makes it easier for attackers to exhaust the server with big files. Hence, expanding the upload size restriction is not really a full solution.
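
For reference, this is roughly how the limits could be raised in web.config; the 100 MB values below are just an example, not a recommendation. Note that maxRequestLength is in kilobytes while maxAllowedContentLength is in bytes.

<system.web>
  <!-- maxRequestLength is in KB: 102400 KB = 100 MB -->
  <httpRuntime maxRequestLength="102400" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <!-- maxAllowedContentLength is in bytes: 104857600 bytes = 100 MB -->
      <requestLimits maxAllowedContentLength="104857600" />
    </requestFiltering>
  </security>
</system.webServer>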

If you are interested, there are many good articles online discussing this problem. I highlight some interesting ones below.

  1. Use HttpModule to Handle File Uploads;
  2. Use RIA (Rich Internet Application) Services in Silverlight (Seriously, we are talking about Silverlight in year 2015?);
  3. SubStatusCode = 13 in IIS 7;
  4. Catch the Exception in Global.asax.

I don't really like the methods listed above, especially the 3rd and 4th options. It's already too late to inform the user when the exception is thrown. Could we do something on the client side before the images are uploaded?

Luckily, we have the File API in HTML5. It allows us to loop through the files in JavaScript to check their size. So, after the submit button is clicked, I will call a JavaScript method to check the size of the images before they are uploaded.

function IsFileSizeAcceptable() {
    if (typeof FileReader !== "undefined") {
        var filesBeingUploaded = document.getElementById('ImageUpload').files;
        for (var i = 0; i < filesBeingUploaded.length; i++) {
            if (filesBeingUploaded[i].size >= 4194304) { // 4194304 bytes = 4 MB
                alert('The file ' + filesBeingUploaded[i].name + ' is too large. Please remove it from your selection.');
                return false;
            }
        }
    }
    return true;
}
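
To wire this up, the check can be attached to the form's submit event, for example as follows (the controller and action names here are illustrative):

@using (Html.BeginForm("Create", "Photo", FormMethod.Post,
    new { enctype = "multipart/form-data", onsubmit = "return IsFileSizeAcceptable();" }))
{
    ...
}
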
File API is currently supported in major modern browsers. (Image Credit: http://caniuse.com/#feat=fileapi)

Remove from Azure Blob Storage

It’s normal that files uploaded to storage will be removed later. So how are we going to implement this feature in our ASP .NET MVC 5 application?

First of all, I added the following code to my AzureStorage.cs.

public static async Task DeleteBlobAsync(Uri blobUri, CloudBlobContainer container)
{
    string blobName = blobUri.Segments[blobUri.Segments.Length - 1];
    CloudBlockBlob blobToDelete = container.GetBlockBlobReference(blobName);

    await blobToDelete.DeleteAsync(); 
}

Secondly, I just pass in the Azure Storage URL of the image that I would like to remove and then call the DeleteBlobAsync method.

// photoUrl below is a placeholder for the blob URL we stored in the database earlier
Uri blobUri = new Uri(photoUrl);
await AzureStorage.DeleteBlobAsync(blobUri, imagesBlobContainer);

Then the image will be deleted from the Azure Storage successfully.

Global.asax.cs and Blob Container

In order to have my application create a blob container automatically if it doesn't already exist, I added a few lines in Global.asax.cs as follows.

var storageAccount = CloudStorageAccount.Parse(
    WebConfigurationManager.AppSettings["StorageConnectionString"]);
var blobClient = storageAccount.CreateCloudBlobClient();
var imagesBlobContainer = blobClient.GetContainerReference("images");
if (imagesBlobContainer.CreateIfNotExists())
{
    imagesBlobContainer.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });
}

Write a Console Program to Upload File to Azure Storage

So, how is it done if we are developing a console application instead of a web application?

Windows Azure Storage NuGet Package needs to be installed first.

The code below shows how I upload an HTML file from my local hard disk to Azure Blob Storage. Then I can share the Azure Storage URL of the file with my friends so that they can read the web page.

Similar to what I do in the web application, this is how I connect to the Storage account via HTTPS.

var azureStorageAccount = new CloudStorageAccount(
    new StorageCredentials("", ""), true);

This is how I access the container.

var blobClient = new CloudBlobClient(azureStorageAccount.BlobStorageUri, azureStorageAccount.Credentials);
var container = blobClient.GetContainerReference("myfiles");

Then the next thing I do is just upload the local file to Azure Storage by specifying the file name, content type, etc.

CloudBlockBlob blob = container.GetBlockBlobReference("mysimplepage.html");
using (Stream file = System.IO.File.OpenRead(@"C:\Users\ChunLin\Documents\mysimplepage.html")) 
{
    blob.Properties.ContentType = "text/html"; 
    blob.UploadFromStream(file); 
}

Yup, that’s all. =)

Pricing

Hosting your files on cloud storage is surely convenient. However, Azure Blob Storage is not free. The following table shows the current pricing of Azure Block Blob Storage in the South East Asia region. To get the latest pricing details, please visit the Azure Storage Pricing page.

Azure Standard Block Blob Storage in SEA Pricing

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.

Summer Self-Learning Banner

Role Management and Social Network Login

ASP .NET MVC - Entity Framework - Facebook - Google - Twitter

Often, we need to specify which resources the users of our web application are allowed to access. For example, the sales report can only be seen by managers. The control panel can only be accessed by the admin of the company.

Individual User Account

In Visual Studio 2013, when we first create an ASP .NET MVC 5 project, we will always have the option to choose an authentication mode. One of the available modes is Individual User Account.

Individual User Account is the default Authentication method.

Individual User Account offers two channels for users to log in.

Firstly, a user can register on the web application by entering an email and password. The application will then create an account with the password hashed and stored in the database. The next time, the user can just log in with that email and password, which will be verified by ASP .NET Identity.

Secondly, a user can also register and log in with an external service, such as Facebook, Twitter, or Google+. Interestingly, no password will be stored in our database for this method. Instead, the user will be authenticated by signing in to the external service.

Logging in to our ASP .NET web application via Twitter.

Identity and Entity Framework 6 Code First

When an ASP .NET MVC 5 web application is created with Individual User Account as the authentication mode, a new ASP .NET Identity Provider using EF6 Code First will be added to the project as well.

Calling Code First “Code-Based Modeling” is more suitable. (Reference)

Code First APIs will create a new database if no existing database is attached to the web application. Code First will map our entity classes to the database using default conventions. Hence, with the Code First approach, developers can focus on the domain design and later have the database tables created according to the entity classes.

Because of Code First, on the first run of an application which has no database attached to it, EF6 will automatically create a database. If we attempt to access any Identity functionality, the following 5 tables will be created automatically.

  • AspNetRoles
  • AspNetUserClaims
  • AspNetUserLogins
  • AspNetUserRoles
  • AspNetUsers

Role Based Security

Besides the AspNetUserClaims table, the other four tables are used in the role-based security of our ASP .NET web application.

The AspNetUsers table stores the profile information of a user, such as Email, Password, and Phone Number. To add more fields to the table, simply add the new fields to the ApplicationUser class in IdentityModels.cs.

public class ApplicationUser : IdentityUser
{
    ...

    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
}

To create a new role, we can do the following in the Seed() method in Configuration.cs, as suggested in an online tutorial about ASP .NET Identity.

using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
...

internal sealed class Configuration : DbMigrationsConfiguration<ApplicationDbContext>
{
    ...

    protected override void Seed(ApplicationDbContext context)
    {
        var roleManager = 
            new RoleManager<IdentityRole>(new RoleStore<IdentityRole>(context));
       
        //Create Role Admin if it does not exist
        if (!roleManager.RoleExists("Admin"))
        {
            roleManager.Create(new IdentityRole("Admin"));
        }
    }
}

To add a user to one or more roles, we can do the following. This way, we can assign roles to a new user upon registration.

var roleManager = 
    new RoleManager<IdentityRole>(new RoleStore<IdentityRole>(context));
var roles = roleManager.Roles.ToList();

foreach (var role in roles) 
{ 
    // IsInRoleAsync and AddToRoleAsync take the role name, not the IdentityRole object
    var isInRole = await UserManager.IsInRoleAsync(userId, role.Name); 
    if (!isInRole) 
    { 
         await UserManager.AddToRoleAsync(userId, role.Name); 
    }
}

So, when a user is accessing a page which is only allowed for members having a certain role, we first need to check if the user is logged in, with the following code.

if (Request.IsAuthenticated)
{
    ...
}

Inside the if statement, we can continue to check whether the user has a certain role, as shown in the following code.

if (Request.IsAuthenticated && User.IsInRole("Admin"))
{
    ...
}

Alternatively, if we only allow the page to be accessed by an Admin user, then we can use the AuthorizeAttribute.

[Authorize(Roles="Admin")]
public ActionResult Report()
{
    ...
}

Facebook OAuth2 Authentication

As said earlier, Individual User Account also allows users to log in to the web application via an external service, such as Facebook. Before we can use Facebook OAuth2 authentication, we need to register as a Facebook developer (instructions here). I had already registered as a Facebook developer a few years ago, so I just started directly from the Facebook Developers page.

First of all, we will click on the “Add a New App” button to begin. Then we will choose “Website” as our platform.

Adding a new app in Facebook Developers.

Secondly, we will key in the name of our web application before we can create a new Facebook App ID. After that, we will select a category for our app.

Entering app name.

Thirdly, we have to provide the URL of our website. Fortunately, Facebook allows us to key in a non-https localhost URL. =)

Yup, tell them about our site!

After that, we just scroll up to the top of the page and then click on the “Skip Quick Start” button. It will then bring us to a page with more details about the new Facebook App that we have just created.

Facebook App ID and App Secret can be found in the Dashboard of our app.

With the App ID and App Secret, we can now put these values into the sample code in Startup.Auth.cs to activate Facebook login. Yup, now users can just log in to our web application with their Facebook account!
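
The relevant line in Startup.Auth.cs looks roughly like this, with the two placeholder values replaced by the ones from the Dashboard.

app.UseFacebookAuthentication(
   appId: "YOUR_FACEBOOK_APP_ID",
   appSecret: "YOUR_FACEBOOK_APP_SECRET");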

After logging in, the user still needs to enter their email address in order to finish the new user registration process on our website. Without this step, both the AspNetUserLogins and AspNetUsers tables in our database will have no record of this user.

Once the user finishes the registration, we will be able to see their info in both of the tables mentioned above. The AspNetUserLogins table keeps data such as the Login Provider (Facebook), the Provider Key (a reference key to the Facebook users table), and the UserId (a reference key to the AspNetUsers table).

Interestingly, as Facebook says, “(The web app) may still have the data you share with them” even after we unlink the app from our Facebook account.

Link with Google

To enable users to log in to our ASP .NET website using a Google account, we head to the Google Developers Console to configure it.

In the first step, we need to give a name to our project. Next, we can just click on the “Create” button to add the project to the console.

Adding a new project in Google Developers Console.

After the project is created, we will proceed to the Credentials under the APIs & Auth section.

"You do not have sufficient permissions to view this page." What?

If you encounter an issue viewing the Credentials page because it keeps complaining “You do not have sufficient permissions to view this page”, please switch to another browser which has no Google account already signed in. In my case, I used the new browser from Microsoft, Edge.

Create new Client ID.

Click on the “Create New Client ID” button under OAuth. It will then ask for the Application Type. For our case, it will be the default option, “Web application”.

Select application type.

Do you notice the little warning there saying we need to provide a Product Name? So after that, we will be brought to the Consent Screen page to fill in our Product Name. On the same page, we can also key in the URL of our homepage, product logo, Google+ page, privacy policy, and ToS.

After saving the updates on the Consent Screen page, we will be prompted to key in two important pieces of information: Authorized JS Origins and Authorized Redirect URIs. For local testing purposes, it accepts non-https localhost URLs as well.

After that, we should receive a Client ID for our web application.

Google Client ID and Client Secret.

Before going back to Visual Studio, we will proceed to the APIs section under the APIs & Auth. There, we can enable the Google+ API.

Enabling Google+ API.

Same as with Facebook, with the Client ID and Client Secret, we can now put these values into the sample code in Startup.Auth.cs to activate Google login. Yup, now users can just log in to our web application with their Google account!
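
Again in Startup.Auth.cs, the corresponding snippet looks roughly like this, with placeholders for the values from the Developers Console.

app.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions
{
    ClientId = "YOUR_CLIENT_ID",
    ClientSecret = "YOUR_CLIENT_SECRET"
});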

Interestingly, I was not able to access the Credentials page after this again. =P

Logging In with Twitter

To get the Consumer Key and Consumer Secret from Twitter, we first need to log in to Twitter Apps.

According to Twitter, we must add a mobile phone number to our Twitter profile before creating an application. For the Callback URL field, although it is optional, we have to put in our localhost URL (for the testing environment) first; otherwise, we will receive a 401 Unauthorized error. Also, Twitter considers “localhost” invalid in URLs, so we have to use “127.0.0.1” instead.

After creating the new app, we will be given the Consumer Key and Consumer Secret that we can put in our Startup.Auth.cs.
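
As with the other two providers, the Startup.Auth.cs snippet looks roughly like this:

app.UseTwitterAuthentication(
   consumerKey: "YOUR_CONSUMER_KEY",
   consumerSecret: "YOUR_CONSUMER_SECRET");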

Twitter Consumer Key and Consumer Secret.

More External Services Providing Login

If you would like to read more about allowing users to log in to your ASP .NET website with 3rd party services, I would like to suggest a few articles to you.

Customizing Association Form

As mentioned earlier, we can modify the AspNetUsers table to store other profile information of a user by adding new fields to the ApplicationUser class in IdentityModels.cs.

public class ApplicationUser : IdentityUser
{
    ...

    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
}
Association Form

For external login, we need to add the fields to the Association Form as well so that no matter where the user comes from, we always capture the same set of user info.

Firstly, in the AccountViewModels.cs, we need to add the three new fields to the ExternalLoginConfirmationViewModel.

public class ExternalLoginConfirmationViewModel
{
    [Required]
    [Display(Name = "Email")]
    public string Email { get; set; }

    [Required]
    [Display(Name = "First Name")]
    public string FirstName { get; set; }

    [Required]
    [Display(Name = "Last Name")]
    public string LastName { get; set; }

    [Required]
    [Display(Name = "Date of Birth")]
    public DateTime DateOfBirth { get; set; }
}

Then we update the Views accordingly to enable the user to key in that info.

In AccountController.cs, we then add logic to the ExternalLoginConfirmation HttpPost method to store the data of the three new fields in the AspNetUsers table.

var user = new ApplicationUser {
    ...
    FirstName = model.FirstName,
    LastName = model.LastName,
    DateOfBirth = model.DateOfBirth
};

If you are still not clear about what I am writing here, please read a more detailed tutorial written by Rick Anderson about adding new fields to the Association Form.

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.

Summer Self-Learning Banner

Playing with Fiddler

Fiddler - HTTPS

I just downloaded Fiddler. I would like to see how I can make use of it, so I noted down some of the things that I have tried out.

Experiment 01: Process Filter

The first thing that I realized when I used Fiddler is that there is too much information being displayed, especially when there are many programs accessing the Internet. This is because, as advertised, Fiddler is a web debugging proxy that works independently with any browser (Microsoft Edge included as well!).

Fortunately, Fiddler provides a filtering function, “Process Filter”, to enable us to capture traffic coming from a particular browser, instead of all browsers.

Just drag and drop the icon on the browser you want to track.

Experiment 02: Performance Profiling

By just filtering and selecting the relevant sessions, we are able to generate a web page performance report covering the total number of requests, total bytes sent and received, response time, DNS lookup time, response bytes by content type in a pie chart, etc.

Performance profiling of id.easybook.com, an Indonesian bus ticket booking website.

By clicking on the “Timeline” tab, we get an overview of the activities recorded. It is one of the useful features for starting to investigate performance issues in our web application.

Transfer Timeline diagram of id.easybook.com.

Experiment 03: Decrypt HTTPS Traffic

By default, Fiddler disables HTTPS decryption. However, nowadays most of the websites that we would like to debug use HTTPS encryption. So, it's sometimes necessary to set Fiddler up to work with HTTPS traffic.

HTTPS decryption is disabled by default.

First of all, we just click Tools -> Fiddler Options.

In the “HTTPS” tab of the popup window, we need to enable both “Capture HTTPS CONNECTs” and “Decrypt HTTPS Traffic”. To intercept HTTPS traffic, Fiddler generates a unique root certificate. In order to suppress Windows security warnings, Fiddler recommends having our PC trust the cert. Hence, a warning message will be shown after we click on the “OK” button.

Yes, scary text! Are you sure you want to trust the certificate?

However, Windows cannot validate the certificate properly, so we will be asked if we really want to install the cert.

Are you sure you want to install certificate from DO_NOT_TRUST_FiddlerRoot?

Finally, we will also be asked if we wish to add the cert to our PC’s Trusted Root List.

Adding cert to PC Trusted Root List.

If we want to remove the cert from the PC’s Trusted Root List, we can always do so by clicking on the “Remove Interception Certificate” button in the Fiddler Options window.

Removing cert from PC Trusted Root List.

To understand the implications of decrypting HTTPS traffic and installing the cert, you can read a discussion on Information Security Stack Exchange about 3rd party root certificates.

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.

Summer Self-Learning Banner

Entity Framework and Database

By using Entity Framework, we can save a lot of time on writing SQL ourselves because Entity Framework, a Microsoft-supported ORM for .NET, is able to generate the SQL for us.

I started to use ADO .NET when I was building .NET web applications in my first job. I learnt how to call stored procedures with ADO .NET. I witnessed how my colleague wrote a 400-line SQL script to complete a task which we would normally choose to do in C#. I also experienced the pain of forgetting to update the stored procedure when the C# code had already changed.

After that, my friend introduced me to Entity Framework when I was working on my first ASP .NET MVC project. Since then, I have been using Entity Framework because it enables me to deliver my web applications faster without writing (and debugging) any SQL myself. I read a very interesting article comparing Entity Framework and ADO .NET. The author acknowledged that the performance of Entity Framework was slower than hand-coded ADO .NET. He emphasized, however, that Entity Framework did maximize his productivity.

How I react when I read a 400-line stored procedure submitted by my colleague.

What Is Happening in Database with Entity Framework?

The SQL generated by Entity Framework is believed to be pretty good. However, it’s still nice to be aware of what SQL is being generated. For example, I have the following code to retrieve Singapore weather info.

using (var db = new ApplicationDbContext())
{
    var forecastRecords = db.SingaporeWeathers.ToList();
}

In Visual Studio, I can just mouse over “SingaporeWeathers” to get the following query.

SELECT 
    [Extent1].[RecordID] AS [RecordID], 
    [Extent1].[LocationID] AS [LocationID], 
    [Extent1].[WeatherDescription] AS [WeatherDescription], 
    [Extent1].[Temperature] AS [Temperature], 
    [Extent1].[UpdateDate] AS [UpdateDate]
FROM [dbo].[SingaporeWeathers] AS [Extent1]

If I have the following code, which retrieves only the records having temperature greater than 37, then I can use ToString().

using (var db = new ApplicationDbContext())
{
    var query = from sw in db.SingaporeWeathers where sw.Temperature > 37 select sw;
    Console.WriteLine(query.ToString());
}

SELECT
     [Extent1].[RecordID] AS [RecordID],
     [Extent1].[LocationID] AS [LocationID],
     [Extent1].[WeatherDescription] AS [WeatherDescription],
     [Extent1].[Temperature] AS [Temperature],
     [Extent1].[UpdateDate] AS [UpdateDate]
FROM [dbo].[SingaporeWeathers] AS [Extent1]
WHERE [Extent1].[Temperature] > cast(37 as decimal(18))

I am using the DbContext API, so I can just use ToString(). Alternatively, you can use ToTraceString(), a method of ObjectQuery, to get the generated SQL.

SQL Logging in Entity Framework 6

It was great news for developers when the SQL Logging feature was announced for Entity Framework 6. For example, to write database logs to a file, I just need to do as follows.

using (var db = new ApplicationDbContext())
using (var logFile = new StreamWriter("C:\\temp\\log.txt"))
{
    // Every SQL statement EF generates is written to the log file
    db.Database.Log = logFile.Write;
    var forecastRecords = db.SingaporeWeathers.Where(x => x.Temperature > 37).ToList();
}

Then in the log file, I can see logs as follows.

...
Closed connection at 6/6/2015 10:59:32 PM +08:00
Opened connection at 6/6/2015 10:59:32 PM +08:00
SELECT TOP (1) 
    [Project1].[C1] AS [C1], 
    [Project1].[MigrationId] AS [MigrationId], 
    [Project1].[Model] AS [Model], 
    [Project1].[ProductVersion] AS [ProductVersion]
FROM ( SELECT 
    [Extent1].[MigrationId] AS [MigrationId], 
    [Extent1].[Model] AS [Model], 
    [Extent1].[ProductVersion] AS [ProductVersion], 
    1 AS [C1]
    FROM [dbo].[__MigrationHistory] AS [Extent1]
    WHERE [Extent1].[ContextKey] = @p__linq__0
) AS [Project1]
ORDER BY [Project1].[MigrationId] DESC
-- p__linq__0: 'MyWeb.Migrations.Configuration' (Type = String, Size = 4000)
-- Executing at 6/6/2015 10:59:32 PM +08:00
-- Completed in 70 ms with result: SqlDataReader

Closed connection at 6/6/2015 10:59:32 PM +08:00
Opened connection at 6/6/2015 10:59:32 PM +08:00
SELECT 
    [Extent1].[RecordID] AS [RecordID], 
    [Extent1].[WeatherDate] AS [WeatherDate], 
    [Extent1].[WeatherDescription] AS [WeatherDescription], 
    [Extent1].[WeatherSecondaryDescription] AS [WeatherSecondaryDescription], 
    [Extent1].[IconFileName] AS [IconFileName], 
    [Extent1].[Temperature] AS [Temperature], 
    [Extent1].[UpdateDate] AS [UpdateDate]
FROM [dbo].[Weathers] AS [Extent1]
WHERE [Extent1].[Temperature] > cast(37 as decimal(18))
-- Executing at 6/6/2015 10:59:33 PM +08:00
-- Completed in 28 ms with result: SqlDataReader
...

So, as you can see, even the Code First migration related activity is logged as well. If you would like to know what is being logged, you can read an article about SQL Logging in EF6 which was written before the feature was released.

Migration and the Verbose Flag

Speaking of Code First migrations, if you would like to find out the SQL being generated when Update-Database is executed, you can add the Verbose flag to the command.

Update-Database -Verbose

Navigation Property

“I have no idea why tables in our database don’t have any relationship especially when we are using relational database.”

I heard from my friend that my ex-colleague shouted this in the office. He left his job a few days later. I think bad code and bad design do anger some developers. So, how do we do “relationships” in Entity Framework Code First? How do we specify the foreign key?

I quit!

In Entity Framework, we use the Navigation Property to represent the foreign key relationship inside the database. With Navigation Properties, we can define relationships between entities.

If we have a 1-to-1 Relationship between two entities, then we can have the following code.

public class Entity1
{
    [Key]
    public int Entity1ID { get; set; }
    public virtual Entity2 Entity2 { get; set; }
}

public class Entity2
{
    [Key, ForeignKey("Entity1")]
    public int Entity1ID { get; set; }
    public virtual Entity1 Entity1 { get; set; }
}

By default, navigation properties are not loaded. Here, the virtual keyword is used to achieve lazy loading, so that the entity is automatically loaded from the database the first time a property referring to the entity is accessed.

However, there are people against using the virtual keyword because they claim that lazy loading can cause subtle performance issues in the application. What they suggest instead is to use the Include method, for example

dbContext.Entity1.Include(x => x.Entity2).ToArray();

By specifying the ForeignKey attribute for Entity1ID in Entity2 class, Code First will then create a 1-to-1 Relationship between Entity1 and Entity2 using the DataAnnotations attributes.

For a 1-to-n Relationship, we then need to change the navigation property in, for example, the Entity1 class to use a collection, as demonstrated in the code below.

public class Entity1
{
    [Key]
    public int Entity1ID { get; set; }
    public virtual ICollection<Entity2> Entity2s { get; set; }
}

Finally, how about an n-to-m Relationship? We just need to change the navigation properties in both the Entity1 and Entity2 classes to use collections.

public class Entity2
{
    [Key]
    public int Entity2ID { get; set; }
    public virtual ICollection<Entity1> Entity1s { get; set; }
}

Together with the following model builder statement.

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Entity2>()
        .HasMany(e2 => e2.Entity1s)
        .WithMany(e1 => e1.Entity2s)
        .Map(e12 => 
            {
                e12.MapLeftKey("Entity1ID");
                e12.MapRightKey("Entity2ID");
                e12.ToTable("Entity12");
            });
}

The code above is using Fluent API which won’t be discussed in this post.

Database Context Disposal

When I first used Scaffolding in MVC 5, I noticed that the controller class template it generates looks something like the following.

public class MyController : Controller
{
    private MyContext db = new MyContext();
    
    protected override void Dispose(bool disposing)
    {
        if (disposing) 
        {
            db.Dispose(); 
        } 
        base.Dispose(disposing);
    }
}

Before using Scaffolding, I had always been using the using block, so I only create a database context where I have to, as recommended in a discussion on StackOverflow. Also, the using block has Dispose() called automatically at the end of the block, so I don't need to worry about forgetting to call Dispose() on the database context in my controller.
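
For comparison, here is a minimal sketch of the using-block style; MyContext is the same context class as in the scaffolded controller above, and the DBPhotos DbSet name is hypothetical.

using (var db = new MyContext())
{
    var photos = db.DBPhotos.ToList();
} // db.Dispose() is called automatically here, even if an exception is thrown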

Azure SQL: Database Backup and Restore

Before ending this post, I would like to share how DB backup and restore are done in Azure SQL Database.

First of all, Azure SQL Database has built-in backups and even self-service point-in-time restores. Yay!

For each active database, Azure SQL will create a backup and geo-replicate it every hour to achieve a 1-hour Recovery Point Objective (RPO).

If there is a need to migrate the database or archive it, we can also export the database from Azure SQL Database. Simply click on the Export button in the SQL Databases section of Azure Management Portal and then choose an Azure blob storage account to export the database to.

Finally, just provide the server login name and password to the database and you are good to go.

Export DB from Azure SQL Database.

Later, we can also create a new database using the BACPAC file generated by the Export function. In the Azure Management Portal, click New > Data Services > SQL Database > Import. This will open the Import Database dialog, as shown in the screenshot below.

Create a new database in Azure SQL Database by importing a BACPAC file.

Okai, that’s all for this post on Entity Framework, database, and Azure SQL Database. Thank you for your time and have a nice day!

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.

Summer Self-Learning Banner