Never Share Your Secrets (Secret Manager and Azure Application Settings)


It’s important to keep app secrets out of our code. Most app secrets, however, are still found in .config files. This way of handling app secrets becomes very risky when the code is in a public repository.

Thus, there are people who put dummy text in the .config files and inform their teammates to fill in their respective app secrets. Things get ugly when this kind of “common understanding” among teammates breaks down.

The moment when your app secrets are published on Github public repo. (Image from “Kono Aozora ni Yakusoku o”)

Secret Manager Tool

When I am working on the dotnet.sg website, which is an ASP .NET Core project, I use the Secret Manager tool. It offers a way to store sensitive data, such as app secrets, on our local development machine.

To use the tool, I first need to add it to project.json as follows.

{
    "userSecretsId": "aspnet-CommunityWeb-...",
    ...
    "tools": {
        ...
        "Microsoft.Extensions.SecretManager.Tools": "1.0.0-preview2-final"
    }
}

Because the Secret Manager tool makes use of project-specific configuration settings kept in the user profile, we need to specify a userSecretsId value in project.json as well.

After that, I can start storing the app secrets in the Secret Manager tool by entering the following command in the project directory.

$ dotnet user-secrets set AppSettings:MeetupWebApiKey ""

Take note that currently (Jan 2017) the values stored by the Secret Manager tool are not encrypted. So, it is meant for development use only.
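To double-check what has been stored (the values sit in plain text in a secrets.json file under the user profile), we can also list all the secrets saved for the project.

$ dotnet user-secrets list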

As shown in the example above, the name of the secret is “AppSettings:MeetupWebApiKey”. This is because in the appsettings.json, I have the following.

{
    "AppSettings": {
        "MeetupWebApiKey": ""
    },
    ...
}

Alright, now that the API key is stored in the Secret Manager tool, how do we access it from the code?

By default, appsettings.json is already loaded in Startup.cs. However, we still need to add the following lines to the Startup constructor to enable User Secrets as part of our configuration.

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);
            
        if (env.IsDevelopment())
        {
            builder.AddUserSecrets();
        }

        builder.AddEnvironmentVariables();

        Configuration = builder.Build();
    }
    ...
}

Then in the Models folder, I create a new class called AppSettings which will be used later when we load the app secrets:

public class AppSettings
{
    public string MeetupWebApiKey { get; set; }

    ...
}

So, let’s say I want to use the key in the HomeController, I just need to do the following.

public class HomeController : Controller
{
    private readonly AppSettings _appSettings;

    public HomeController(IOptions<AppSettings> appSettings)
    {
        _appSettings = appSettings.Value;
    }

    public async Task<IActionResult> Index()
    {
        string meetupWebApiKey = _appSettings.MeetupWebApiKey;
        ...
    }
    
    ...
}
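One step not shown above: for IOptions<AppSettings> to be injectable into the controller, the AppSettings section has to be registered in ConfigureServices. A minimal sketch, assuming the class and section names used earlier:

public void ConfigureServices(IServiceCollection services)
{
    // Bind the "AppSettings" section (including any User Secrets overrides)
    // to the AppSettings class so that IOptions<AppSettings> can be injected.
    services.Configure<AppSettings>(Configuration.GetSection("AppSettings"));

    services.AddMvc();
}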

Azure Application Settings

The Secret Manager tool has helped us manage app secrets in the local development environment. How about when we deploy our web app to Microsoft Azure?

For dotnet.sg, I am hosting the website on Azure App Service. What’s so great about Azure App Service is a feature called Application Settings.

Application Settings option is available in Azure App Service.

For .NET applications, the settings in “App Settings” are injected into the app settings at runtime, overriding existing values. Thus, even though I have empty strings in the appsettings.json file in the project, as long as the correct values are stored in App Settings, there is no need to worry.
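For the example above, that means an App Settings entry named after the full configuration path, like the one below (the value is just a placeholder), will override the empty string in appsettings.json.

AppSettings:MeetupWebApiKey            <the actual Meetup API key here>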

Thus, when we deploy a web app to Azure App Service, we should never put our app secrets and connection strings in our .config and .json files or, even worse, hardcode them.

Application Settings and Timezone

Oh ya, one more cool feature in App Settings, introduced in 2015, is that we can easily change the server time zone of a web app hosted on Azure App Service by adding a new entry as follows in the App Settings.

WEBSITE_TIME_ZONE            Singapore Standard Time

The setting above will change the server time zone to use Singapore local time. So DateTime.Now will return the current local time in Singapore.


Azure Blob Storage and File API

Azure Blob Storage - Azure SDK - ASP .NET MVC - Entity Framework - HTML5

When my applications were hosted on Windows Azure Virtual Machines (VM), we stored the images uploaded via our web applications on the hard disks of the VMs (except the temporary disk). However, when we started load balancing, we soon encountered a problem: the uploaded images could only be found on one of the VMs. So we needed to find a centralized storage for those images.

Recently, when using Azure PaaS (aka Cloud Service), we encountered the same issue even without load balancing. That is simply because the hard drives used in Cloud Service instances are not persistent. Hence, a persistent file storage on the cloud is needed.

IaaS vs. PaaS

Blob Storage

Azure Blob Storage, according to the Azure Documentation, is a service for storing large amounts of unstructured data that can be accessed anywhere via HTTP or HTTPS. Hence, it is an ideal tool that we can use as the persistent image storage on the cloud.

There are two types of blob, Page Blob and Block Blob. Page Blob is commonly used for storing VHD files for VMs because it is optimized for random read and write operations.

For most uploaded files, it’s recommended to store them as Block Blobs because large files will be split into smaller blocks and then uploaded concurrently. Hence, Block Blob is designed to give us faster upload and better throughput, which is great for image upload.

The maximum size for a Block Blob uploaded in a single operation is 64 MB. Hence, if the uploaded file is more than 64 MB, we must upload it as a set of blocks; otherwise, we will receive status code 413 (Request Entity Too Large). For my web applications, there is no need to upload an image of more than 5 MB most of the time. Hence, I can just limit the size of images before the user uploads them.

HttpPostedFileBase imageUpload;
...
if (imageUpload.ContentLength <= 0 || imageUpload.ContentLength > 5242880)
{
    // Warn the user to resize the image: it is empty or larger than 5 MB
}

Let’s Try Uploading Images

I’m going to share how to upload more than one image to Azure Blob Storage from an ASP .NET MVC 5 application. If you are going to upload just one image, simply remove the for loop and change List<DBPhoto> to just DBPhoto in the code below.

First of all, I create a class to handle upload to Azure Storage operation.

public class AzureStorage
{
    public static async Task<CloudBlockBlob> UploadAndSaveBlobAsync(
        HttpPostedFileBase imageFile, CloudBlobContainer container)
    {
        string blobName = Guid.NewGuid().ToString() + 
            Path.GetExtension(imageFile.FileName);

        CloudBlockBlob imageBlob = container.GetBlockBlobReference(blobName);
        using (var fileStream = imageFile.InputStream) 
        {
            await imageBlob.UploadFromStreamAsync(fileStream);
        }

        return imageBlob;
    }
}

So, in my controller, I have the following piece of code which will be called when an image is submitted via web page.

[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Create(
    [Bind(Include = "ImageUpload")] PhotoViewModel model)
{
    var validImageTypes = new string[] { "image/jpeg", "image/pjpeg", "image/png" };
    
    if (ModelState.IsValid) 
    {
        if (model.ImageUpload != null && model.ImageUpload.Count() > 0)
        {
            var storageAccount = CloudStorageAccount.Parse 
                (WebConfigurationManager.AppSettings["StorageConnectionString"]);

            var blobClient = storageAccount.CreateCloudBlobClient();
            blobClient.DefaultRequestOptions.RetryPolicy = 
                new LinearRetry(TimeSpan.FromSeconds(3), 3);  

            var imagesBlobContainer = blobClient.GetContainerReference("images");
            foreach (var item in model.ImageUpload) 
            { 
                if (item == null) {
                    continue;
                }
                
                if (validImageTypes.Contains(item.ContentType) && 
                    item.ContentLength > 0 && item.ContentLength <= 5242880)
                {
                    var blob = await AzureStorage.UploadAndSaveBlobAsync(item, imagesBlobContainer);
                    DBPhoto newPhoto = new DBPhoto(); 
                    newPhoto.URL = blob.Uri.ToString();
                    db.DBPhoto.Add(newPhoto); 
                } 
                else 
                {
                    // Show user error message 
                    return View(model); 
                }
            }
            db.SaveChanges();
            ... 
        } 
        else
        {
            // No image to upload
        } 
    }
    return View(model);
}

In the code above, there are a few cool things worth highlighting.

Firstly, there is the connection string to Azure Blob Storage, which I store as StorageConnectionString in web.config. The format of a secure connection string is as follows.

DefaultEndpointsProtocol=https;AccountName=;AccountKey=;

Retrieve the access keys to the Storage Account.

Secondly, it’s LinearRetry. It is basically a retry policy which states how many times the program will retry and how long it waits between retries. In my case, it waits 3 seconds between attempts, up to 3 tries.

Thirdly, I get the URL of the image on Azure Blob Storage via blob.Uri.ToString() and store it in the database table. The URL will be used later for displaying the image as well as deleting it.

Fourthly, I actually check whether model.ImageUpload has null entries. This is because if I submit the form without any image to upload, model.ImageUpload has one entry. Not zero, but one, and that single entry is null. So if I don’t check whether an entry in model.ImageUpload is null, an exception will be thrown.

The controller code is quite long. Luckily, the code needed in the model and view is short and simple.

For the model PhotoViewModel, I have the following.

public class PhotoViewModel
{
    ...
    
    [Display(Name = "Current Images")]
    public List<DBPhoto> AvailablePhotos { get; set; }
}

For the view, it is easy to allow selecting multiple files on the same view page. The multiple = "true" attribute makes sure that more than one file can be selected in the File Explorer. You can omit this attribute if you want at most one file to be selected.

@Html.LabelFor(model => model.ImageUpload, new { style = "font-weight: bold;" })
@Html.TextBoxFor(model => model.ImageUpload, new { type = "file", multiple = "true" })
@Html.ValidationMessageFor(model => model.ImageUpload)

Image Size and HttpException

The image upload function looks fine. However, when an image larger than a certain size is uploaded, an HttpException will be thrown.

There is no way that having exception would be fun too! (Image Credit: Tari Tari)

In order to prevent DoS attacks which upload huge files to the server, ASP .NET by default only allows uploads of less than 4MB. Hence, although I earlier put a check to prevent images larger than 5MB from being uploaded, the exception will still be thrown if an image of between 4 and 5MB is uploaded.

What if we just change the if clause above to allow only images of at most 4MB to be uploaded? This won’t work because the exception is already thrown before the if condition is reached.

Then, can we just increase the limit from 4MB to, let’s say, 100MB or something bigger? Sure. This can work, and the configuration is shown below. However, it still doesn’t stop someone from uploading something bigger than the limit. Also, it makes it easier for attackers to exhaust your server with big files. Hence, expanding the upload size restriction is not really a full solution.
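For reference, here is roughly how the limits can be raised in web.config; note that maxRequestLength is in kilobytes while maxAllowedContentLength is in bytes (the 100 MB values below are just an example, not from my actual setup).

<system.web>
    <httpRuntime maxRequestLength="102400" />
</system.web>
<system.webServer>
    <security>
        <requestFiltering>
            <requestLimits maxAllowedContentLength="104857600" />
        </requestFiltering>
    </security>
</system.webServer>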

If you are interested, there are many good articles online discussing this problem. I highlight some interesting ones below.

  1. Use HttpModule to Handle File Uploads;
  2. Use RIA (Rich Internet Application) Services in Silverlight (Seriously, we are talking about Silverlight in year 2015?);
  3. SubStatusCode = 13 in IIS 7;
  4. Catch the Exception in Global.asax.

I don’t really like the methods listed above, especially the 3rd and 4th options. It’s already too late to inform the user when the exception is thrown. Could we do something on the client side before the images are uploaded?

Luckily, we have the File API in HTML5. It allows us to loop through the files in JavaScript to check their size. So, after the submit button is clicked, I call a JavaScript method to check the size of the images before they are uploaded.

function IsFileSizeAcceptable() {
    if (typeof FileReader !== "undefined") {
        var filesBeingUploaded = document.getElementById('ImageUpload').files;
        for (var i = 0; i < filesBeingUploaded.length; i++) {
            if (filesBeingUploaded[i].size >= 4194304) { // Reject files of 4MB or larger
                alert('The file ' + filesBeingUploaded[i].name + ' is too large. Please remove it from your selection.');
                return false;
            }
        }
    }
    return true;
}

File API is currently supported in major modern browsers. (Image Credit: http://caniuse.com/#feat=fileapi)

Remove from Azure Blob Storage

It’s normal that files uploaded to storage need to be removed later. So how are we going to implement this feature in our ASP .NET MVC 5 application?

First of all, I added the following code to my AzureStorage.cs.

public static async Task DeleteBlobAsync(Uri blobUri, CloudBlobContainer container)
{
    string blobName = blobUri.Segments[blobUri.Segments.Length - 1];
    CloudBlockBlob blobToDelete = container.GetBlockBlobReference(blobName);

    await blobToDelete.DeleteAsync(); 
}

Secondly, I just pass in the Azure Storage URL of the image that I would like to remove and then call the DeleteBlobAsync method.

Uri blobUri = new Uri(photoUrl); // photoUrl (a hypothetical variable) holds the blob URL saved in the database earlier
await AzureStorage.DeleteBlobAsync(blobUri, imagesBlobContainer);

Then the image will be deleted from the Azure Storage successfully.

Global.asax.cs and Blob Container

In order to have my application create a blob container automatically if it doesn’t already exist, I add a few lines to Global.asax.cs as follows.

var storageAccount = CloudStorageAccount.Parse(
    WebConfigurationManager.AppSettings["StorageConnectionString"]);
var blobClient = storageAccount.CreateCloudBlobClient();
var imagesBlobContainer = blobClient.GetContainerReference("images");
if (imagesBlobContainer.CreateIfNotExists())
{
    imagesBlobContainer.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });
}

Write a Console Program to Upload File to Azure Storage

So, how is it done if we are developing a console application, instead of web application?

Windows Azure Storage NuGet Package needs to be installed first.

The code below shows how I upload an HTML file from my local hard disk to Azure Blob Storage. Then I can share the Azure Storage URL of the file with my friends so that they can read the web page.

Similar to what I do in the web application, this is how I connect to the Storage account via HTTPS.

var azureStorageAccount = new CloudStorageAccount(
    new StorageCredentials("", ""), true);

This is how I access the container.

var blobClient = new CloudBlobClient(azureStorageAccount.BlobStorageUri, azureStorageAccount.Credentials);
var container = blobClient.GetContainerReference("myfiles");

Then the next thing I do is just upload the local file to Azure Storage by specifying the file name, content type, etc.

CloudBlockBlob blob = container.GetBlockBlobReference("mysimplepage.html");
using (Stream file = System.IO.File.OpenRead(@"C:\Users\ChunLin\Documents\mysimplepage.html")) 
{
    blob.Properties.ContentType = "text/html"; 
    blob.UploadFromStream(file); 
}

Yup, that’s all. =)

Pricing

Hosting your files on cloud storage is surely convenient. However, Azure Blob Storage is not free. The following table shows the pricing of Azure Block Blob Storage in the South East Asia region at the time of writing. To get the latest pricing details, please visit the Azure Storage Pricing page.

Azure Standard Block Blob Storage in SEA Pricing

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.


Role Management and Social Network Login

ASP .NET MVC - Entity Framework - Facebook - Google - Twitter

Often, we need to specify which resources the users of our web application are allowed to access. For example, the sales report can only be seen by managers, while the control panel can only be accessed by the company admin.

Individual User Account

In Visual Studio 2013, when we first create an ASP .NET MVC5 project, we will always have the option to choose the authentication mode. One of the available modes is Individual User Account.

Individual User Account is the default Authentication method.

Individual User Account offers two channels for users to log in.

Firstly, a user can register on the web application by entering an email and password. The application will then create an account, with the password hashed and stored in the database. Next time, the user can just log in with the email and password, which will be verified by ASP .NET Identity.

Secondly, a user can also register and log in with an external service, such as Facebook, Twitter, or Google+. Interestingly, no password will be stored in our database for this method. Instead, the user will be authenticated by signing in to the external service.

Login to our ASP .NET web application via Twitter.

Identity and Entity Framework 6 Code First

When an ASP .NET MVC 5 web application with Individual User Account as Authentication is created, a new ASP .NET Identity Provider using EF6 Code First will be added to the project as well.

Calling Code First “Code-Based Modeling” is more suitable. (Reference)

Code First APIs will create a new database if no existing database is attached to the web application. Code First will map our entity classes to the database using default conventions. Hence, with the Code First approach, developers can focus on the domain design and later have the database tables created according to the entity classes.

Because of Code First, on the first run of an application which has no database attached to it, EF6 will automatically create a database. If we have attempted to access any Identity functionality, the following 5 tables will be created automatically.

  • AspNetRoles
  • AspNetUserClaims
  • AspNetUserLogins
  • AspNetUserRoles
  • AspNetUsers

Role Based Security

Besides the AspNetUserClaims table, the other four tables are used for role-based security in our ASP .NET web application.

AspNetUsers table stores the profile information of a user, such as Email, Password, and Phone Number. To add more fields to the table, simply add the new fields in ApplicationUser class in IdentityModels.cs.

public class ApplicationUser : IdentityUser
{
    ...

    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
}

To create a new role, we can do the following in the Seed() method in Configuration.cs, as suggested in an online tutorial about ASP .NET Identity.

using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
...

internal sealed class Configuration : DbMigrationsConfiguration<ApplicationDbContext>
{
    ...

    protected override void Seed(ApplicationDbContext context)
    {
        var roleManager = 
            new RoleManager<IdentityRole>(new RoleStore<IdentityRole>(context));
       
        //Create Role Admin if it does not exist
        if (!roleManager.RoleExists("Admin"))
        {
            roleManager.Create(new IdentityRole("Admin"));
        }
    }
}

To add a user to one or many roles, we can do the following. Hence, we can assign roles to a new user upon registration.

var roleManager = 
    new RoleManager<IdentityRole>(new RoleStore<IdentityRole>(context));
var roles = roleManager.Roles.ToList();

foreach (var role in roles) 
{ 
    var isInRole = await UserManager.IsInRoleAsync(userId, role.Name); 
    if (!isInRole) 
    { 
        await UserManager.AddToRoleAsync(userId, role.Name); 
    }
}

So, when a user is accessing a page which is only allowed for members with a certain role, we first need to check whether the user is logged in, with the following code.

if (Request.IsAuthenticated)
{
    ...
}

Inside the if statement, we can continue to check whether the user has a certain role, as shown in the following code.

if (Request.IsAuthenticated && User.IsInRole("Admin"))
{
    ...
}

Alternatively, if we only allow the page to be accessed by Admin users, then we can use the AuthorizeAttribute.

[Authorize(Roles="Admin")]
public ActionResult Report()
{
    ...
}

Facebook OAuth2 Authentication

As said earlier, Individual User Account also allows users to log in to the web application via an external service, such as Facebook. Before we can use Facebook OAuth2 authentication, we need to register as a Facebook developer. I already registered as a Facebook developer a few years ago, so I just start directly from the Facebook Developers page.

First of all, we will click on the “Add a New App” button to begin. Then we will choose “Website” as our platform.

Adding a new app in Facebook Developers.

Secondly, we will key in the name of our web application before we can create a new Facebook App ID. After that, we will select a category for our app.

Entering app name.

Thirdly, we have to provide the URL of our website. Fortunately, Facebook allows us to key in a non-https localhost URL. =)

Yup, tell them about our site!

After that, we just scroll up to the top of the page and then click on the “Skip Quick Start” button. It will then bring us to a page with more details about the new Facebook App that we have just created.

Facebook App ID and App Secret can be found in the Dashboard of our app.

With the App ID and App Secret, we can now put these values into the sample code in Startup.Auth.cs to activate Facebook login. Yup, now users can just log in to our web application with their Facebook account!
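For reference, the relevant line in Startup.Auth.cs looks roughly like this (the keys are placeholders; the Google and Twitter middleware mentioned later follow the same pattern):

app.UseFacebookAuthentication(
    appId: "<App ID here>",
    appSecret: "<App Secret here>");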

After logging in, the user still needs to enter an email address in order to finish the new user registration process on our website. Without this step, both the AspNetUserLogins and AspNetUsers tables in our database will have no record of this user.

Once the user finishes the registration, we will be able to see their info in both of the tables mentioned above. The AspNetUserLogins table keeps data such as the Login Provider (Facebook), the Provider Key (a reference key to the Facebook users table), and the UserId (a reference key to the AspNetUsers table).

Interestingly, as Facebook says, “(The web app) may still have the data you share with them” even though we unlink the app from our Facebook account.

Link with Google

To enable users to log in to our ASP .NET website using a Google account, we head to the Google Developers Console to configure it.

In the first step, we need to give a name to our project. Next, we can just click on the “Create” button to add the project to the console.

Adding a new project in Google Developers Console.

After the project is created, we will proceed to the Credentials under the APIs & Auth section.

"You do not have sufficient permissions to view this page." What?
“You do not have sufficient permissions to view this page.” What?

If you encounter an issue viewing the Credentials page because it keeps complaining “You do not have sufficient permissions to view this page”, please switch to another browser which has no Google account signed in. In my case, I used the new browser from Microsoft, Edge.

Create new Client ID.

Click on the “Create New Client ID” button under OAuth. It will then ask for the Application Type. For our case, it is the default option, “Web application”.

Select application type.

Do you notice the little warning there saying we need to provide a Product Name? So after that, we will be brought to the Consent Screen page to fill in our Product Name. On the same page, we can also key in the URL of our homepage, a product logo, a Google+ page, a privacy policy, and ToS.

After saving the updates on the Consent Screen page, we will be prompted to key in two important pieces of information: Authorized JavaScript Origins and Authorized Redirect URIs. For local testing purposes, it accepts non-https localhost URLs as well.

After that, we should receive a Client ID for our web application.

Google Client ID and Client Secret.

Before going back to Visual Studio, we will proceed to the APIs section under the APIs & Auth. There, we can enable the Google+ API.

Enabling Google+ API.

Same as with Facebook, with the Client ID and Client Secret, we can now put these values into the sample code in Startup.Auth.cs to activate Google login. Yup, now users can just log in to our web application with their Google account!

Interestingly, I am not able to access the Credentials page after this again. =P

Logging In with Twitter

To get the Consumer Key and Consumer Secret from Twitter, we first need to log in to Twitter Apps.

According to Twitter, we must add a mobile phone number to our Twitter profile before creating an application. For the Callback URL field, although it is optional, we have to put in our localhost URL (for the testing environment) first. Otherwise, we will receive a 401 Unauthorized error. Also, Twitter considers “localhost” invalid in a URL, so we have to use “127.0.0.1” instead.

After creating the new app, we will be given the Consumer Key and Consumer Secret that we can use to put in our Startup.Auth.cs.

Twitter Consumer Key and Consumer Secret.


Customizing Association Form

As mentioned earlier, we can modify the AspNetUsers table to store other profile information of a user by adding new fields to the ApplicationUser class in IdentityModels.cs.

public class ApplicationUser : IdentityUser
{
    ...

    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
}

Association Form

For external login, we need to add the new fields to the Association Form as well, so that no matter where the user comes from, we always capture the same set of user info.

Firstly, in the AccountViewModels.cs, we need to add the three new fields to the ExternalLoginConfirmationViewModel.

public class ExternalLoginConfirmationViewModel
{
    [Required]
    [Display(Name = "Email")]
    public string Email { get; set; }

    [Required]
    [Display(Name = "First Name")]
    public string FirstName { get; set; }

    [Required]
    [Display(Name = "Last Name")]
    public string LastName { get; set; }

    [Required]
    [Display(Name = "Date of Birth")]
    public DateTime DateOfBirth { get; set; }
}

Then we will update the Views accordingly to enable users to key in that info.

In AccountController.cs, we will then add logic to the ExternalLoginConfirmation HttpPost method to store the data of the three new fields in the AspNetUsers table.

var user = new ApplicationUser {
    ...
    FirstName = model.FirstName,
    LastName = model.LastName,
    DateOfBirth = model.DateOfBirth
};

If you are still not clear about what I am writing here, please read a more detailed tutorial written by Rick Anderson about adding new fields to the Association Form.

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.


Entity Framework and Database

By using Entity Framework, we can save a lot of time on writing SQL ourselves because Entity Framework, a Microsoft-supported ORM for .NET, is able to generate the SQL for us.

I started to use ADO .NET when I was building .NET web applications in my first job. I learnt how to call stored procedures with ADO .NET. I witnessed how my colleague wrote a 400-line SQL script to complete a task which we would normally choose to do in C#. I also realized the pain of forgetting to update the stored procedure when the C# code had already changed.

After that, my friend introduced me to Entity Framework when I was working on my first ASP .NET MVC project. Since then, I have been using Entity Framework because it enables me to deliver my web applications faster without writing (and debugging) any SQL myself. I read a very interesting article comparing Entity Framework and ADO .NET. The author acknowledged that the performance of Entity Framework was slower than hand-coded ADO .NET. He emphasized, however, that Entity Framework did maximize his productivity.

How I react when I read a 400-line stored procedure submitted by my colleague.

What Is Happening in Database with Entity Framework?

The SQL generated by Entity Framework is believed to be pretty good. However, it’s still nice to be aware of what SQL is being generated. For example, I have the following code to retrieve Singapore weather info.

using (var db = new ApplicationDbContext())
{
    var forecastRecords = db.SingaporeWeathers.ToList();
}

In Visual Studio, I can just mouse over “SingaporeWeathers” to get the following query.

SELECT 
    [Extent1].[RecordID] AS [RecordID], 
    [Extent1].[LocationID] AS [LocationID], 
    [Extent1].[WeatherDescription] AS [WeatherDescription], 
    [Extent1].[Temperature] AS [Temperature], 
    [Extent1].[UpdateDate] AS [UpdateDate]
FROM [dbo].[SingaporeWeathers] AS [Extent1]

If I have the following code, which retrieves only records having a temperature greater than 37, then I can use ToString() to see the generated SQL.

using (var db = new ApplicationDbContext())
{
    var query = from sw in db.SingaporeWeathers where sw.Temperature > 37 select sw;
    Console.WriteLine(query.ToString());
}

SELECT
     [Extent1].[RecordID] AS [RecordID],
     [Extent1].[LocationID] AS [LocationID],
     [Extent1].[WeatherDescription] AS [WeatherDescription],
     [Extent1].[Temperature] AS [Temperature],
     [Extent1].[UpdateDate] AS [UpdateDate]
FROM [dbo].[SingaporeWeathers] AS [Extent1]
WHERE [Extent1].[Temperature] > cast(37 as decimal(18))

I am using the DbContext API, so I can just use ToString(). Alternatively, you can use ToTraceString(), which is a method of ObjectQuery, to get the generated SQL.

SQL Logging in Entity Framework 6

It was great news for developers when Entity Framework 6 was announced to have the SQL Logging feature added. For example, to write database logs to a file, I just need to do as follows.

using (var db = new ApplicationDbContext())
{
    var logFile = new StreamWriter("C:\\temp\\log.txt");
    db.Database.Log = logFile.Write;
    var forecastRecords = db.SingaporeWeathers.Where(x => x.Temperature > 37).ToList();
    logFile.Close();
}

Then in the log file, I can see logs as follows.

...
Closed connection at 6/6/2015 10:59:32 PM +08:00
Opened connection at 6/6/2015 10:59:32 PM +08:00
SELECT TOP (1) 
    [Project1].[C1] AS [C1], 
    [Project1].[MigrationId] AS [MigrationId], 
    [Project1].[Model] AS [Model], 
    [Project1].[ProductVersion] AS [ProductVersion]
FROM ( SELECT 
    [Extent1].[MigrationId] AS [MigrationId], 
    [Extent1].[Model] AS [Model], 
    [Extent1].[ProductVersion] AS [ProductVersion], 
    1 AS [C1]
    FROM [dbo].[__MigrationHistory] AS [Extent1]
    WHERE [Extent1].[ContextKey] = @p__linq__0
) AS [Project1]
ORDER BY [Project1].[MigrationId] DESC
-- p__linq__0: 'MyWeb.Migrations.Configuration' (Type = String, Size = 4000)
-- Executing at 6/6/2015 10:59:32 PM +08:00
-- Completed in 70 ms with result: SqlDataReader

Closed connection at 6/6/2015 10:59:32 PM +08:00
Opened connection at 6/6/2015 10:59:32 PM +08:00
SELECT 
    [Extent1].[RecordID] AS [RecordID], 
    [Extent1].[WeatherDate] AS [WeatherDate], 
    [Extent1].[WeatherDescription] AS [WeatherDescription], 
    [Extent1].[WeatherSecondaryDescription] AS [WeatherSecondaryDescription], 
    [Extent1].[IconFileName] AS [IconFileName], 
    [Extent1].[Temperature] AS [Temperature], 
    [Extent1].[UpdateDate] AS [UpdateDate]
FROM [dbo].[Weathers] AS [Extent1]
WHERE [Extent1].[Temperature] > cast(37 as decimal(18))
-- Executing at 6/6/2015 10:59:33 PM +08:00
-- Completed in 28 ms with result: SqlDataReader
...

So, as you can see, even the Code First migration related activity is logged as well. If you would like to know what is being logged, you can read an article about SQL Logging in EF6 which was written before the feature was released.
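As a side note, the Log property is just an Action<string>, so (a small variation, not from the original setup) the output can also be sent to the Visual Studio output window instead of a file.

db.Database.Log = s => System.Diagnostics.Debug.WriteLine(s);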

Migration and the Verbose Flag

Speaking of Code First migration, if you would like to find out the SQL being generated when Update-Database is executed, you can add a Verbose flag to the command.

Update-Database -Verbose

Navigation Property

“I have no idea why tables in our database don’t have any relationship especially when we are using relational database.”

I heard from my friend that my ex-colleague shouted this in the office. He left his job a few days after. I think bad code and bad design do anger some developers. So, how do we do “relationships” in Entity Framework Code First? How do we specify the foreign key?

I quit!

In Entity Framework, we use a Navigation Property to represent the foreign key relationship inside the database. With Navigation Properties, we can define relationships between entities.

If we have a 1-to-1 Relationship between two entities, then we can have the following code.

public class Entity1
{
    [Key]
    public int Entity1ID { get; set; }
    public virtual Entity2 Entity2 { get; set; }
}

public class Entity2
{
    [Key, ForeignKey("Entity1")]
    public int Entity1ID { get; set; }
    public virtual Entity1 Entity1 { get; set; }
}

By default, navigation properties are not loaded. Here, the virtual keyword is used to achieve lazy loading, so that the entity is automatically loaded from the database the first time a property referring to it is accessed.

However, there are people against using the virtual keyword because they claim that lazy loading causes subtle performance issues in applications using it. What they suggest instead is eager loading with the Include method, for example

dbContext.Entity1.Include(x => x.Entity2).ToArray();

By specifying the ForeignKey attribute for Entity1ID in the Entity2 class, Code First will create a 1-to-1 relationship between Entity1 and Entity2 using DataAnnotations attributes.

For a 1-to-n relationship, we then need to change the navigation property, for example in the Entity1 class, to use a collection, as demonstrated in the code below.

public class Entity1
{
    [Key]
    public int Entity1ID { get; set; }
    public virtual ICollection<Entity2> Entity2s { get; set; }
}

Finally, how about an n-to-m relationship? We just need to change the navigation properties in both the Entity1 and Entity2 classes to use collections.

public class Entity2
{
    [Key]
    public int Entity2ID { get; set; }
    public virtual ICollection<Entity1> Entity1s { get; set; }
}

This goes together with the following model builder statement.

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Entity2>()
        .HasMany(e2 => e2.Entity1s)
        .WithMany(e1 => e1.Entity2s)
        .Map(e12 => 
            {
                e12.MapLeftKey("Entity1ID");
                e12.MapRightKey("Entity2ID");
                e12.ToTable("Entity12");
            });
}

The code above uses the Fluent API, which won’t be discussed in this post.

Database Context Disposal

When I first used Scaffolding in MVC 5, I noticed that the controller class template it generates looks something like the following.

public class MyController : Controller
{
    private MyContext db = new MyContext();
    
    protected override void Dispose(bool disposing)
    {
        if (disposing) 
        {
            db.Dispose(); 
        } 
        base.Dispose(disposing);
    }
}

Before using Scaffolding, I had always been using the using block, so I only create a database context where I have to, as recommended in a discussion on StackOverflow. Also, the using block calls Dispose() automatically at the end of the block, so I don’t need to worry about forgetting to dispose the database context in my controller.
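As a quick illustration of that pattern (reusing the context and entity names from the examples above):

using (var db = new MyContext())
{
    // Work with the context here; at the end of the block,
    // Dispose() is called on db automatically, even if an exception is thrown.
    var photos = db.DBPhoto.ToList();
}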

Azure SQL: Database Backup and Restore

Before ending this post, I would like to share how DB backup and restore are done in Azure SQL Database.

First of all, Azure SQL Database has built-in backups and even self-service point-in-time restore. Yay!

For each active database, Azure SQL will create a backup and geo-replicate it every hour to achieve a 1-hour Recovery Point Objective (RPO).

If there is a need to migrate the database or archive it, we can also export the database from Azure SQL Database. Simply click on the Export button in the SQL Databases section of Azure Management Portal and then choose an Azure blob storage account to export the database to.

Finally, just provide the server login name and password to the database and you are good to go.

Export DB from Azure SQL Database.

Later, we can also create a new database using the BACPAC file generated by the Export function. In the Azure Management Portal, click New > Data Services > SQL Database > Import. This will open the Import Database dialog, as shown in the screenshot below.

Create a new database in Azure SQL Database by importing a BACPAC file.

Okai, that’s all for this post on Entity Framework, database, and Azure SQL Database. Thank you for your time and have a nice day!

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.


Playing with Google Maps API

Google Maps - Google Developers - Newtonsoft JSON - Bing Maps

“Given an address, how do I get its latitude and longitude?”

I had been looking for a solution to this problem for a long time until I discovered the Geocoding Service API from Google Maps.

Recently, I found out that my kampung was actually searchable on Google Maps Street View.

Geocoding

According to the definition given in the Geocoding Service documentation, geocoding is the process of converting a human-readable address into geographic coordinates, such as latitude and longitude. Sometimes, the results returned can also include other information like the postal code and bounds.

To do a latitude-longitude lookup of a given address, I just need to pass a GeocodeRequest object to the Geocoder.geocode method. For example, if I want to find out the latitude and longitude of Changi Airport, I just do the following in JavaScript.

<script src="https://maps.googleapis.com/maps/api/js?libraries=places"></script>


var address = "Changi Airport";
var geocoder = new google.maps.Geocoder();
if (geocoder) {
    geocoder.geocode(
        { address: address }, 
        function (result, status) {
            if (status != google.maps.GeocoderStatus.OK) {
                alert(address + " not found!");
            } else {
                var topPick = result[0]; // The first result returned
                
                var selectedLatitude = topPick.geometry.location.lat();
                var selectedLongitude = topPick.geometry.location.lng();

                alert("Latitude: " + selectedLatitude.toFixed(2));
                alert("Longitude: " + selectedLongitude.toFixed(2));
            }
        }
    );
} else {
    alert("Geocoder is not available.");
}

The method above is recommended for dynamic geocoding, which responds to user input in real time. However, if what you have is a list of valid addresses, the Google Geocoding API is another tool that you can use, especially in server applications. The Geocoding API is what I tried out in the beginning too, as shown in the C# code below.

var googleURL = "http://maps.googleapis.com/maps/api/geocode/json?address=" + 
    Server.UrlEncode(address) + "&sensor=false";

using (var webClient = new System.Net.WebClient())
{
    var json = webClient.DownloadString(googleURL);
    dynamic dynObj = JsonConvert.DeserializeObject(json); 
    foreach (var data in dynObj.results) 
    {
        var latitude = data.geometry.location.lat;
        var longitude = data.geometry.location.lng;
        ...
    } 
}

The reason for using a dynamic JSON object here is that the Geocoding API returns a lot of information, as mentioned earlier, and what I need is basically just the latitude and longitude. So dynamic JSON parsing allows me to get the data without mapping the entire API response to a C# data structure. You can read more about this in Rick Strahl’s post about Dynamic JSON Parsing with JSON.NET. He also uses it for Google Maps related APIs.

The reason I don’t use the Geocoding API is that it has usage limits. For each day, we can only call the API 2,500 times, and only 5 calls per second are allowed. This also means that in order to use the API, we have to get an API Key from the Google Developer Console first. In addition, it is recommended for use in server applications, while my lookup happens in the browser. Thus I switched to the Geocoding Service.

Where to Get the Address?

This seems to be a weird question. The reason I worry about this is that it’s very easy to have typos in user input. Sometimes, a typo in an address can mean two different places, for example the two famous cities in Malaysia, Klang and Kluang. The one without “u” is located in the Kuala Lumpur area while the one with “u” is near Singapore.

Klang and Kluang

So I use Place Autocomplete from the Google Maps JavaScript API to provide the user with a list of valid place name suggestions.

<script src="https://maps.googleapis.com/maps/api/js?libraries=places"></script>

...

<input id="LocationName" name="LocationName" type="text" value="">

...


$(function () {
    var input = document.getElementById('LocationName');
    var options = {
        types: ['address'], 
        componentRestrictions: { country: 'tw' }
    };

    autocomplete = new google.maps.places.Autocomplete(input, options);
});

In the code above, I restricted the places suggested by Place Autocomplete to places in Taiwan (tw) only. Also, the type I chose in my code above is “address”, which means Place Autocomplete will only return addresses. There are a few other Place Types available.

The interesting thing is that even when I input simplified Chinese characters in the LocationName textbox, Place Autocomplete is able to suggest the correct addresses in Taiwan, which are displayed in traditional Chinese.

If I search for Malaysian places (which are mostly named in Malay or English) with Chinese words, even though Place Autocomplete will not show anything, the Geocoder is still able to return accurate results for some popular cities.

Google Place Autocomplete can understand Chinese!

I also notice that if I view the source of the web page, there is an attribute called “autocomplete” on the LocationName textbox and its value is set to off. However, this does not stop the Place Autocomplete API from working. So don’t be frightened if you see that.

<input ... id="LocationName" name="LocationName" type="text" value="" autocomplete="off">

Putting Two Together

Isn’t it good if the location of an address can be shown on a Google Map right after the address is keyed into the textbox? Well, it’s simple to do so.

Remember the script above that looks up the latitude and longitude of Changi Airport? I just put the code in a function called showLatLngOfAddress which accepts the address as a parameter. Then I call it when LocationName loses focus.

$('#LocationName').blur(function () {
    showLatLngOfAddress(input.value);
});

In addition, I add a few more lines of code to showLatLngOfAddress to draw a marker on the Google Map to point out the location of the given address.

var marker = null;

function showLatLngOfAddress(address) {
    ...

    var topPick = result[0];

    ...

    //center the map over the result
    map.setCenter(topPick.geometry.location);
    
    //remove existing marker (if any)
    if (marker != null)
    {
        marker.setMap(null);
    }

    //place a marker at the location
    marker = new google.maps.Marker(
    {
        map: map, 
        position: topPick.geometry.location,
        animation: google.maps.Animation.DROP,
        draggable: true
    });
}

Finally, I not only make the marker draggable, but also make it update the latitude and longitude of the address when it is dragged to another location on the map.

google.maps.event.addListener(marker, 'drag', function (event) {
    alert('New Latitude: ' + event.latLng.lat().toFixed(2));
    alert('New Longitude: ' + event.latLng.lng().toFixed(2));
});

Do you know where 台北大桥 is? The map will tell you.

Bing Maps

If you are interested in using Bing Maps, there are Bing Maps REST Services available too.

I tried to search “Kluang” using the Bing Maps API, and it returned two locations. One was in Malaysia and the other was near Palembang in Indonesia! Wow, cool! On the other hand, Google Places returned only the Kluang in Malaysia.

Unlike with Place Autocomplete from Google, it is not straightforward to do place name suggestion using Bing Maps. If you are interested, please read the tutorial written by Vivien Chevallier on how to use the Bing Maps REST Services with jQuery to build an autocomplete box and find a location dynamically. I haven’t tried it out myself though. Anyway, Google APIs are still easier to use. =P

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.


Protect Your ASP .NET Applications

ASP .NET MVC - Entity Framework - reCAPTCHA - OWASP - JSON

Here are just a few things I learnt about protecting and securing my web applications in recent ASP .NET projects.

reCAPTCHA in Razor

CAPTCHA is an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart”.

CAPTCHA is a program to find out whether the current user is a human or a robot by asking the user to complete a challenge-response test. This feature is important on some websites to prevent machines from, for example, automatically logging in to the websites, doing online transactions, or registering as members. Luckily, it’s now very easy to include CAPTCHA in our ASP .NET MVC web applications.

Register your website here to use reCAPTCHA: https://www.google.com/recaptcha/admin.

reCAPTCHA is a free Google CAPTCHA service that comes in the form of a widget that can be added to websites easily. So, how do we implement reCAPTCHA in our ASP .NET MVC sites?

The library that I use is called ReCaptcha for MVC5, which can be downloaded from CodePlex. With its help, I am able to easily plug reCAPTCHA into my MVC5 web applications.

After adding ReCaptcha.Mvc5.dll to my project, I need to import its namespace into the Razor view of the page which will have the reCAPTCHA widget.

@using ReCaptcha.Mvc5;

To render the reCAPTCHA widget in, for example, a form, we will do the following.

<div class="form-group">
    @Html.LabelFor(model => model.recaptcha_response_field, new { @class = "control-label col-md-2" })
    <div class="col-md-10">
        <!--Render the recaptcha-->
        @Html.reCAPTCHA("<public key here>")
    </div>
</div>

The public key can be retrieved from the official reCAPTCHA page after you register your website there.

reCAPTCHA Widget on Website

In the code above, there is a field called recaptcha_response_field, which is added to our model class as demonstrated below.

public class RegistrationViewModel : ReCaptchaViewModel
{
    ...

    [Display(Name = "Word Verification")]
    public override string recaptcha_response_field { get; set; }
}

To do verification in the controller, we will have the following code to help us.

[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Register(RegistrationViewModel registrationVM)
{
    ...

    if (!string.IsNullOrEmpty(registrationVM.recaptcha_response_field))
    {
        // Verify the recaptcha response.
        ReCaptchaResponse response = 
            await this.verifyReCAPTCHA(registrationVM, "<private key here>", true);

        if (response.Success)
        {
            // Yay, the user is human!
        } 
        else 
        {
            ModelState.AddModelError("", "Please enter correct verification word.");
        }
    }
}

The private key can also be found in the official reCAPTCHA page after you have submitted your website.

After doing all these steps, you should now be able to have a working reCAPTCHA widget on your website.

XSRF: Cross-Site Request Forgery

In the controller code above, there is one attribute called ValidateAntiForgeryToken. The purpose of this attribute is to prevent XSRF by requiring matching anti-forgery tokens, one in a hidden form field and one in a cookie, to be submitted to the server.

I drew a graph to better explain what XSRF is.

XSRF (Cross-Site Request Forgery)

Steps are as follows.

  1. The user logs in to, for example, a bank website.
  2. The response header from the bank site will contain the user’s authentication cookie. Since the authentication cookie is a session cookie, it will only be cleared when the browser session ends. Thus, until then, the browser will always include the cookie with each request to the same bank website.
  3. The attacker sends the user a link and somehow encourages the user to click on it. This causes a request to be sent to the attacker’s server.
  4. The attacker website has the following form.
    <body onload="document.getElementById('fm1').submit()">
        <form id="fm1" action="https://bank.com/TransferMoney" method="post">
            <input name="transferTo" value="attackerAccount" />
            <input name="currency" value="USD" />
            <input name="money" value="7,000,000,000" />
        </form>
    </body>
  5. Because of Step 4, the user will be forced to send a request to the bank website to transfer money to the attacker’s account, with the user’s authentication cookie attached.

Hence, the ValidateAntiForgeryToken attribute helps to avoid XSRF by checking that both the cookie and the form carry anti-forgery tokens and that their values match.
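The other half of the pairing is the Razor helper which emits both the hidden form field and the cookie token inside the form (the controller and action names here are just for illustration):

@using (Html.BeginForm("Register", "Account", FormMethod.Post))
{
    @Html.AntiForgeryToken()

    <!-- form fields here -->
}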

Mass-Assignment Vulnerability and Over-Posting Attack

A few years ago, GitHub was found to have a Mass-Assignment Vulnerability. The vulnerability allows people to perform an Over-Posting Attack on the site, so that attackers can modify data items which they are not normally allowed to access. Due to the fact that ASP .NET MVC web applications use Model Binding, the same vulnerability can exist in the ASP .NET MVC environment as well.

You want to control what is being passed into the binder.

I have two personal favourite solutions to avoid Over-Posting Attacks.

One is using the Bind attribute in the controller method. For example, in order to prevent users from editing the value of IsAdmin when they update their profile, I can do something as follows.

[HttpPost]
public ViewResult Edit([Bind(Exclude = "IsAdmin")] User user)
{
    ...
}

Alternatively, we can also use “Include” to define the fields that should be included in the binding, as sketched below.
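A minimal sketch of the Include variant (the field names are assumed):

[HttpPost]
public ViewResult Edit([Bind(Include = "FirstName,LastName,Email")] User user)
{
    // Only FirstName, LastName and Email will be bound;
    // everything else, including IsAdmin, is ignored by the binder.
    ...
}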

The second solution is using a view model. For example, the following class does not contain properties such as IsAdmin which are not allowed to be edited in the form post of a profile edit.

public class UserProfileUpdateViewModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    ...
}

XSS: Cross-Site Scripting

According to OWASP (Open Web Application Security Project), XSS attacks

…are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites… Flaws that allow these attacks are quite widespread and occur anywhere a web application uses input from a user within the output it generates without validating or encoding it.

Kirill Saltanov from NUS is explaining to guests about XSS during 5th STePS event.

Currently, by default, ASP .NET will throw an exception if potentially dangerous content is detected in the request. In addition, the Razor view engine protects us against most XSS attacks by encoding data which is displayed on the web page via the @ tag.

In the View, we also need to encode any user-generated data that we put into our JavaScript code. Starting from ASP .NET 4.0, we can call HttpUtility.JavaScriptStringEncode, which encodes a string so that it is safe to display, with characters escaped in a way that JavaScript can understand.

In order to avoid storing malicious markup and script in our database, we need to encode the user inputs in the Controller as well, using Server.HtmlEncode.
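A minimal sketch of both encodings (the CommentText property is an assumed example, not from the projects above). In the Controller:

// HTML-encode user input before persisting it to the database.
comment.CommentText = Server.HtmlEncode(model.CommentText);

And in the View, when emitting user data into JavaScript:

<script>
    var commentText = '@Html.Raw(HttpUtility.JavaScriptStringEncode(Model.CommentText))';
</script>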

[AllowHtml]

There are some cases where our web application should accept HTML tags. For example, if we have a <textarea> element in our blogging system where users write the content of a post, then we need to skip the default checking by ASP .NET.

To post HTML back to our Model, we can simply add the [AllowHtml] attribute to the corresponding property in the Model, for example

public class BlogPost {
    [Key]
    public int ID { get; set; }
    ...
    [AllowHtml]
    public string Content { get; set; }
}

Then in the View, we will need to use @Html.Raw to tell Razor not to encode the HTML markup.

@Html.Raw(post.Content)

Wait… Won’t this make XSS attacks possible on our website? Yup, of course. So, we must be very careful whenever we try to bypass the Razor encoding. The solution is to use the AntiXSS encoding library from Microsoft.

AntiXSS uses a safe-list approach to encoding. With its help, we are able to remove any malicious script from the user input in the Controller, as demonstrated below.

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult CreatePost(BlogPost post)
{
    if (ModelState.IsValid)
    {
        ...
        post.Content = Sanitizer.GetSafeHtmlFragment(post.Content);
        ...
        db.BlogPosts.Add(post);
        db.SaveChanges();
        return RedirectToAction("Index");
    }
    return View(post);
}

ASP .NET Request Validation

Previously, in the discussion of XSS, we saw that by default ASP .NET throws an exception if potentially dangerous content is detected in the request. This is because of ASP .NET Request Validation.

However, according to OWASP, Request Validation should not be used as our only method of XSS protection because it is not guaranteed to catch every type of invalid input.

HttpOnly Cookies

In order to reduce the risk of XSS, popular modern browsers support a cookie attribute called HttpOnly. This attribute specifies that a cookie is not accessible through script. Hence, it prevents the sensitive data contained in the cookie from being sent to the attacker via malicious JavaScript in an XSS attack.

When a cookie is labelled as HttpOnly, it tells the browser that the cookie should only be accessed by the server. It is very easy to check which cookies are HttpOnly in the developer tools of modern browsers.

Microsoft Edge F12 Developer Tools can tell which are the HttpOnly cookies.

So, how do we create HttpOnly cookies in ASP .NET web applications? Just add a line to set the HttpOnly attribute of the cookie to true, as shown below.

HttpCookie myCookie = new HttpCookie("MyHttpOnlyCookie");
myCookie["Message"] = "Hello, world!";
myCookie.Expires = DateTime.Now.AddDays(30);
myCookie.HttpOnly = true;
Response.Cookies.Add(myCookie);

Alternatively, the HttpOnly attribute can be set in web.config.

<httpCookies httpOnlyCookies="true" ...>

However, as pointed out by OWASP, if a browser is too old to support HttpOnly cookies, the attribute will be ignored by the browser, and the cookies will thus be vulnerable to XSS attacks. Also, according to MSDN, HttpOnly does not prevent an attacker with access to the network channel from accessing the cookie directly, so MSDN recommends the use of SSL in addition to the HttpOnly attribute.
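
Putting the two recommendations together, here is a sketch of the relevant web.config section; requireSSL assumes the site is actually served over HTTPS:

<system.web>
  <!-- Mark cookies HttpOnly and only transmit them over HTTPS -->
  <httpCookies httpOnlyCookies="true" requireSSL="true" />
</system.web>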

The HttpOnly cookie was introduced in 2002 in IE6. Firefox only supported the HttpOnly attribute in 2007, 5 years later, with Firefox 2.0.0.5. However, people soon realized that there was still a bug in Firefox’s HttpOnly implementation: Firefox allowed attackers to do an XMLHttpRequest to read the cookie values from the HTTP response headers. Two years later, in 2009, Mozilla finally fixed the bug. Since then, XMLHttpRequest can no longer access the Set-Cookie and Set-Cookie2 headers of any response, whether or not the HttpOnly attribute is set.

Browserscope provides a good overview about the security functionalities in major browsers.

SQL Injection and Entity SQL

When I first learned SQL in university, I always thought escaping user inputs helped to prevent SQL Injection. Actually, this approach alone doesn’t work. I just read an article written by Steve Friedl on how escaping the input strings does not protect our applications from SQL Injection. The following is the example Steve gave.

SELECT fieldlist
FROM table
WHERE id = 23 OR 1=1;  -- Boom! Always matches!

When I was working in the Summer Fellowship Programme, I started to use Parameterized SQL.

SqlConnection conn = new SqlConnection(connectionString);
conn.Open();
string sql = "SELECT fieldlist FROM table WHERE id = @id";
SqlCommand cmd = new SqlCommand(sql, conn);          // attach the command to the connection
cmd.Parameters.Add("@id", SqlDbType.Int).Value = id; // the value travels as a parameter, never as SQL text
SqlDataReader reader = cmd.ExecuteReader();

This approach provides huge security and performance benefits.

In January, I started to learn Entity Framework. In Entity Framework, there are three types of queries:

  • Native SQL
  • Entity SQL
  • LINQ to Entities

In the first two types, there is a risk of allowing SQL Injection if the developers are not careful enough. Hence, it’s recommended to use parameterized queries. In addition, we can also use Query Builder Methods to safely construct Entity SQL, for example

ObjectQuery<Flight> query =
    context.Flights
    .Where("it.FlightCode = @code",
    new ObjectParameter("code", flightCode));

However, if we choose to use LINQ to Entities, which does not compose queries by using string manipulation and concatenation, we will not be vulnerable to traditional SQL Injection.
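
For example, here is a sketch of the same flight lookup in LINQ to Entities, reusing the hypothetical Flights set from above; flightCode travels as a query parameter instead of being concatenated into SQL text:

var query = from f in context.Flights
            where f.FlightCode == flightCode
            select f;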

JsonResult and JSON Hijacking

Using the MVC JsonResult, we are able to make a controller in our ASP .NET MVC application return JSON. However, by default, ASP .NET MVC does not allow us to respond to an HTTP GET request with a JSON payload (Book: Professional ASP .NET MVC 5). Hence, if we test the controller by typing the URL directly into the browser, we will receive the following error message.

This request has been blocked because sensitive information could be disclosed to third party web sites when this is used in a GET request. To allow GET requests, set JsonRequestBehavior to AllowGet.

Since the method only accepts POST requests, the browser is able to protect our data from being returned as a JSON result to other domains, unless Cross-Origin Resource Sharing (CORS) is implemented.
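
If an action returns data that is genuinely non-sensitive, we can opt in to GET explicitly, as the error message suggests. A minimal sketch with hypothetical data:

public JsonResult PublicStats()
{
    // Hypothetical, non-sensitive data that is safe to expose over GET
    var stats = new { Visitors = 1024, Posts = 42 };
    return Json(stats, JsonRequestBehavior.AllowGet);
}

For anything sensitive, however, keeping the default POST-only behaviour is the safer choice.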

This is actually a feature introduced by the ASP .NET MVC team in order to mitigate a security threat known as JSON Hijacking. JSON Hijacking is an attack, similar to XSRF, where the attacker accesses cross-domain JSON data which is returned as array literals.

The reason why “returning JSON data as an array” is dangerous is that, although browsers nowadays stop us from making cross-domain HTTP requests via JavaScript, we are still able to use a <script> tag to make the browser load a script from another domain.

<script src="https://www.bank.com/Home/AccountBalance/12"></script>

This is because a JSON array is treated as a valid JavaScript script and can thus be executed. So, we need to wrap the JSON result in an object, just like what ASP .NET and WCF do. The ASP.NET AJAX library, for example, automatically wraps JSON data with a { d: [] } construct, making the returned value an invalid JavaScript statement which cannot be executed:

{"d" : ["balance", "$7,000,000,000.00"] }

So, to avoid JSON Hijacking, we need to

  1. never return a JSON array (see the sketch after this list)
  2. not allow HTTP GET requests to return sensitive data
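
In ASP .NET MVC terms, a minimal sketch of the first point might look as follows; the data is hypothetical and the d wrapper mirrors what the ASP.NET AJAX library does:

public JsonResult Sessions()
{
    // Hypothetical data for illustration
    var sessions = new[] { new { Name = "Talk 1" }, new { Name = "Talk 2" } };

    // Wrapping the array in an object means the response is no longer
    // a valid JavaScript statement, so a cross-site <script> tag cannot execute it.
    return Json(new { d = sessions });
}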

Nowadays, even though JSON Hijacking is no longer a known problem in modern browsers, it is still a concern because “you shouldn’t stop plugging a security hole just because it isn’t likely to be exploited”.

By the way, GMail was successfully exploited via JSON Hijacking. =)

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.


Razor Learning Notes

Razor is still a new markup language to me, after working on ASP .NET Web Forms projects for 3 years. I started to learn Razor at the beginning of this year (2015), so I decided to note down what I find interesting in my Razor learning journey and share it here.

A Powerful HTML Helper Method: EditorFor

When I first started learning Razor, I was confused by EditorFor and TextboxFor. So, what is the difference between the two?

TextboxFor is easy to understand. It is a helper method used to render a textbox on the web page, no matter what type of data is passed to the method.

EditorFor is more powerful than TextboxFor. Besides rendering a textbox, it can also render other HTML elements based on the type of the data. For example, if the datatype is boolean, it renders a checkbox. If the datatype is DateTime, it renders a textbox with a calendar popup (which is not as nice looking as the calendar popup created using the XDSoft DateTimePicker).

Even though the datatype is specified to be DateTime, EditorFor does not provide a control for the time section.
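
A quick sketch of the difference, assuming a view model with Name (string), IsActive (bool), and StartDate (DateTime) properties:

@Html.TextBoxFor(m => m.Name)     @* always renders <input type="text"> *@
@Html.EditorFor(m => m.IsActive)  @* bool: renders a checkbox *@
@Html.EditorFor(m => m.StartDate) @* DateTime: renders a textbox with calendar popup *@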

EditorFor Accepts Custom Templates

By default, EditorFor creates a textbox with a calendar popup for data which is DateTime. We can change the look-and-feel by creating our own template.

So let’s say we want the EditorFor to render five dropdown boxes for each of the following components in DateTime: Year, Month, Day, Hour, and Minute.

Firstly, I create a file called DropDownDateTime.cshtml which is in charge of the look-and-feel of the custom template. This file needs to be put under /Views/Shared/EditorTemplates. Also, please take note that the filename must be the same as the corresponding data type.

@model MyProject.Models.DropDownDateTime
@{
    List<SelectListItem> years = new List<SelectListItem>();
    int prevYearCount = Model.PreviousYearCount;
    int nextYearCount = Model.NextYearCount;
    for (int i = Model.DateTime.Year - prevYearCount; i <= Model.DateTime.Year + nextYearCount; i++)
    {
        years.Add(new SelectListItem() { 
            Text = i.ToString(), 
            Value = i.ToString(), 
            Selected = (i == Model.DateTime.Year) });
    }

    List<SelectListItem> months = new List<SelectListItem>();
    for (int i = 1; i <= 12; i++)
    {
        months.Add(new SelectListItem() {
            Text = i.ToString("00"), 
            Value = i.ToString(), 
            Selected = (i == Model.DateTime.Month) });
    }
 
    List<SelectListItem> days = new List<SelectListItem>();
    for (int i = 1; i <= 31; i++)
    {
        days.Add(new SelectListItem() { 
            Text = i.ToString("00"), 
            Value = i.ToString(), 
            Selected = (i == Model.DateTime.Day) });
    }

    List<SelectListItem> hours = new List<SelectListItem>();
    for (int i = 0; i < 24; i++)
    {
        hours.Add(new SelectListItem() { 
            Text = i.ToString("00"), 
            Value = i.ToString(), 
            Selected = (i == Model.DateTime.Hour) });
    }

    List<SelectListItem> minutes = new List<SelectListItem>();
    for (int i = 0; i < 60; i += 15)
    {
        minutes.Add(new SelectListItem() { 
            Text = i.ToString("00"), 
            Value = i.ToString(), 
            Selected = (i == Model.DateTime.Minute) });
    }
}

@Html.DropDownList("years", years)
@Html.DropDownList("months", months)
@Html.DropDownList("days", days) at 
@Html.DropDownList("hours", hours) : 
@Html.DropDownList("minutes", minutes)

So, with the help of this template, whenever I pass in data whose type is DropDownDateTime, EditorFor will automatically render the five dropdown lists, as shown in the screenshot below.

Yup, customized look-and-feel for DateTime picker.

Razor and Content

When I was building apps using ASP .NET Web Forms, I often needed to add some server code within the HTML in the .aspx file. For example,

<h2>Animes</h2>

<ul>
    <% foreach (var anime in animeCollection) { %>
        <li><%= anime.Name %></li>
    <% } %>
</ul>

In Razor, we do not need to explicitly denote the start and end of server blocks within our HTML. The Razor parser is smart enough to implicitly identify where a server block ends by looking for HTML tags. Hence, Razor is able to keep the HTML clean, as shown in the sample code below.

<h2>Animes</h2>

<ul>
    @foreach (var anime in animeCollection) {
        <li>@anime.Name</li>
    }
</ul>

However, there are times when the Razor parser cannot do the job properly, especially when we need to mix Razor and JavaScript code, for example when rendering a chart using Google Charts, as shown in the sample below.

@{ int counter = 0; }

<script type="text/javascript">
    var data = new google.visualization.DataTable();

    data.addColumn('string', 'Sales Date');
    data.addColumn('number', 'Sales');

    data.addRows([
        @foreach (var record in dailySalesSummaryRecords)
        {
            counter++;
            if (counter == dailySalesSummaryRecords.Count())
            {
                @:['@record.Key', @record.TotalSales.ToString("0.00")]
            }
            else
            {
                @:['@record.Key', @record.TotalSales.ToString("0.00")],
            }
        }]
    );
    
    ...
</script>

The two @: lines above are Razor code within the JavaScript. Both of them do the same thing, except that one of them has no trailing comma, which prevents browsers like Internet Explorer 8 from throwing errors.

The @: character sequence is used to explicitly tell Razor to interpret the following content as content, not code. However, @: only works on a single line. So, for scenarios with multiple lines of content, we need to do something as follows.

...
@: Line one of content
@: Line two of content
@: Line three of content
...

Alternatively, we can just use the <text> tag to effectively mark the start and end of the content.

...
<text>
    Line one of content
    Line two of content
    Line three of content
</text>
...

To read more about the @: character sequence and the <text> tag, please refer to the detailed blog post about them on ScottGu’s Blog.

Razor Encodes String by Default

Yes, Razor encodes strings by default. So what should we do if we want to render without encoding? Well, we can do something as follows.

@Html.Raw(" alert('Hello World!'); ")

<text> Tag, Html.Raw, and Encoding

There is a very interesting discussion on Stack Overflow about how to correctly do encoding/decoding in Razor with JavaScript. It starts with a question on how to avoid the apostrophe characters in s.Name being rendered as &#39;.

    $(function () { 
        $('#calendar').fullCalendar({
        header: { left: '', center: 'title', right: 'month,agendaWeek,agendaDay' },
        month: 5,
        year: 2011,
        editable: false,
        events: [
            @foreach (var s in ViewBag.Sessions)
            {
                @:{
                @:title: '@s.Name',
                @:start: new Date(@s.Starts.Year, @s.Starts.Month-1, @s.Starts.Day),
                @:end: new Date(@s.Ends.Year, @s.Ends.Month-1, @s.Ends.Day)
                @:},
            }
        ]});
    });

First of all, we change the code to use <text>, because the content spans multiple lines and using @: repeatedly seems a bit strange to me.

<text>
    {
        title: '@s.Name',
        start: new Date(@s.Starts.Year, @s.Starts.Month-1, @s.Starts.Day),
        end: new Date(@s.Ends.Year, @s.Ends.Month-1, @s.Ends.Day)
    }
</text>

Next, we apply Html.Raw so that apostrophes won’t be encoded as &#39;.

<text>
    {
        title: '@Html.Raw(s.Name)',
        start: new Date(@s.Starts.Year, @s.Starts.Month-1, @s.Starts.Day),
        end: new Date(@s.Ends.Year, @s.Ends.Month-1, @s.Ends.Day)
    }
</text>

However, without encoding, an apostrophe in the data itself can break the JavaScript code, for example:

title: 'Jiahao's Birthday'

So, we still need to encode it, this time using JavaScriptStringEncode.

<text>
    {
        title: '@Html.Raw(HttpUtility.JavaScriptStringEncode(s.Name))',
        start: new Date(@s.Starts.Year, @s.Starts.Month-1, @s.Starts.Day),
        end: new Date(@s.Ends.Year, @s.Ends.Month-1, @s.Ends.Day)
    }
</text>

Wait… Didn’t we just say we don’t want apostrophes to be encoded as &#39;? Why are we encoding again now?

This is because JavaScriptStringEncode, a method newly introduced in .NET 4, encodes apostrophes not as &#39;, which is not something JavaScript understands inside a string, but as \'. So yup, this solves the problem.

JavaScriptStringEncode is a great feature which helps us encode a C# string into a JavaScript string. It is able to escape not only apostrophes, but also double quotes ("), question marks (?), backslashes (\), and ampersands (&).

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.
