MS SQL on AWS: Amazon RDS

amazon-rds-ms-sql-server

Some startups and SMEs host their databases on AWS. However, most of them choose Amazon EC2 because doing so is similar to running SQL Server on-premises at a data centre, something they are already familiar with from the old days. Doing so, however, actually increases their cost of hosting services on AWS, and the companies also need to hire experts to handle database administration tasks such as database backup and recovery and OS patching.

Hence, if I’m given the opportunity, I usually recommend that small companies with limited resources consider Amazon RDS (or Azure SQL) first. Amazon RDS is a fully managed service which provides cost-efficient and resizable capacity while automating time-consuming database administration tasks.

Multi-AZ Deployments for MS SQL Server

Since May 2014, Amazon RDS has also provided a highly available database solution with synchronous Multi-AZ replication for MS SQL. Multi-AZ deployments for MS SQL database instances use SQL Server Database Mirroring.

Currently, Amazon RDS only supports Multi-AZ deployments for the Standard and Enterprise Editions of SQL Server 2008 R2, 2012, 2014, and 2016. Amazon RDS also does not yet support Multi-AZ with Mirroring in the following regions:

  • US West (N. California);
  • Asia Pacific (Singapore);
  • European Union (Frankfurt);
  • AWS GovCloud (US);
  • Asia Pacific (Sydney): Supported for DB instances in VPCs only;
  • Asia Pacific (Tokyo): Supported for DB instances in VPCs only;
  • South America (São Paulo): Supported for all DB instance classes except m1/m2.

It’s quite unfortunate that Singapore Region is one of them.

use-multi-az-deployment-for-production-sql-server-se.png
In N. Virginia Region, we’re able to specify to use Multi-AZ Deployment in Production SQL Server SE.

DB Instance Class

We can specify the DB Instance Class, which allocates the computational, network, and memory capacity required by the planned workload of the database instance.

available-instance-class-for-ms-sql.png
DB Instance Classes available in MS SQL 2016 on AWS.

Standard (db.m4) instances offer a balance of compute, memory, and network resources, and are a good choice for many applications.

Memory Optimized (db.r3) instances are designed to deliver fast performance for workloads that process large data sets in memory. These instances are well suited for applications such as high-performance relational databases, in-memory analytics, and enterprise applications (for example, Microsoft SharePoint).

Burst Capable (db.t2) instances provide a baseline performance level with the ability to burst to full CPU usage.

Storage Types

Amazon RDS uses Amazon EBS (Elastic Block Store) volumes for database and log storage. There are currently two main Storage Types available when setting up MS SQL database instances, as listed below.

General Purpose (SSD) storage, aka gp2, offers cost-effective storage which is suitable for a broad range of database workloads. Hence, it’s ideal for small to medium-sized databases. It provides a baseline of 3 IOPS/GB and the ability to burst to 3,000 IOPS for extended periods of time. Its volume size can range from 20GB to 4TB for MS SQL database instances. However, provisioning less than 100GB of General Purpose (SSD) storage for high-throughput workloads could result in higher latencies once the initial General Purpose (SSD) I/O Credit balance is exhausted.

Provisioned IOPS (SSD) storage, aka io1, is suitable for I/O-intensive database workloads that need consistent storage performance and predictable random access I/O throughput. It provides the flexibility to provision between 1,000 and 30,000 IOPS. MS SQL can have Provisioned IOPS volumes between 100GB (Express/Web edition) or 200GB (Standard/Enterprise edition) and 4TB.

amazon-ebs-pricing.png
Amazon Elastic Block Store (EBS) Pricing for Singapore region.

Allocated Storage and I/O Credits

General Purpose (SSD) storage performance is controlled by the volume size: larger volumes have higher base performance levels, accumulate I/O Credits faster, and replenish the credit balance more quickly.

For General Purpose (SSD) storage, the DB instance has an initial I/O Credits balance of 5.4 million. When the storage requires more than the base performance I/O level, it uses I/O credits in the credit balance to burst to the required performance level, up to a maximum of 3,000 IOPS. If the storage uses all of its I/O credit balance, its maximum performance will remain at the base performance level until I/O demand drops below the base level and unused credits are added to the I/O credit balance at the baseline performance rate of 3 IOPS/GB of volume size. Hence, we can use the formula below to calculate the Burst Duration.

burst-duration-formula.png

burst-duration-tabular.png
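
As a rough sketch of the calculation the formula and table above capture, here is the arithmetic in C#, assuming the 5.4 million I/O credit balance and 3,000 IOPS burst ceiling mentioned earlier; the 100GB volume size is just an example value:

double creditBalance = 5400000;          // initial I/O credit balance
double burstIops = 3000;                 // maximum burst performance
double storageSizeGb = 100;              // example volume size in GB
double baselineIops = 3 * storageSizeGb; // baseline of 3 IOPS/GB

double burstDurationSeconds = creditBalance / (burstIops - baselineIops);

// For a 100GB volume: 5,400,000 / (3,000 - 300) = 2,000 seconds, i.e. about 33 minutes.
System.Console.WriteLine("Burst duration: " + burstDurationSeconds + " seconds");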

Thus, for production applications that require fast and consistent I/O performance, it’s recommended to use Provisioned IOPS (SSD) storage, which is optimized for I/O-intensive, online transaction processing workloads that have consistent performance requirements. Note that we cannot decrease the storage allocated for a DB instance.

For MS SQL Server, Amazon RDS does not currently support increasing storage, so we need to provision storage based on anticipated future growth. If we predict it wrongly, then we need to increase the storage of an existing SQL Server DB instance by first exporting the data, creating a new database instance with increased storage, and then importing the data into the new database instance.

Specifying Database Instance Specification

After understanding the key concepts above, we can then proceed to set up our database instance.

specifying-db-instance-specifications.png
Although the Free Tier is available, allocating more than 20GB of storage or adding provisioned IOPS will disqualify the database instance from being eligible for the Free Tier.

Network and Security: VPC (Virtual Private Cloud)

Amazon RDS database instances can be hosted on either EC2-VPC platform or the legacy EC2-Classic platform, the original platform used by Amazon RDS. Amazon VPC launches AWS resources, such as database instances, into a virtual private cloud.

Nowadays, if we are creating a database instance in a region that we have not used before, we normally are already on the EC2-VPC platform.

rds-supported-platforms.png
We are already on EC2-VPC platform.

There are many scenarios for accessing a database instance in a VPC. Today, I will only focus on having an EC2 web server to access the database instance in the same VPC.

web-server-and-db-instance-in-the-same-vpc.png
A database instance in a VPC accessed by an EC2 instance in the same VPC (Source: AWS Documentation)

In such a scenario, the Amazon RDS database instance normally needs to be available to the web server, but not to the public Internet. Hence, we can create a VPC with both public and private subnets. The web server is hosted in the public subnet so that it is reachable by the public, while the database instance is hosted in the private subnet so that it is not exposed to the public Internet, providing greater security.

The Security Group used to restrict access to the database instances can have a custom rule that allows TCP access on port 1433 from an IP address we will use to access the database instance for development or other purposes. In addition, we also need to set the Publicly Accessible option to Yes first (it is recommended to set this option to No for production database instances to limit the potential threat, since there will be no public route).
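
Once the rule above is in place, the web application on the EC2 instance can reach the database instance through a normal connection string pointing at the RDS endpoint on port 1433. A minimal sketch follows; the endpoint, database name, and credentials are made-up values, and the real endpoint is shown on the instance’s detail page in the RDS console:

// Hypothetical endpoint and credentials; replace with the values from the RDS console.
var connectionString =
    "Data Source=mydbinstance.abcdefghijkl.ap-southeast-1.rds.amazonaws.com,1433;" +
    "Initial Catalog=MyShopDb;User ID=admin;Password=<password>;";

using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
{
    connection.Open();
    // Query as usual with SqlCommand, Entity Framework, etc.
}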

Encryption of Database Instances using Key Management Service (KMS)

Amazon RDS for MS SQL supports the encryption of database instances with encryption keys managed in AWS KMS. Once the data is encrypted, Amazon RDS handles authentication of access and decryption of the data transparently, without requiring any changes to our database client applications.

enable-database-encryption.png
Currently, encryption of database instances (data-at-rest protection) is not available for those running SQL Server Express Edition.

Backup and Maintenance

Amazon RDS automatically backs up our database instances. It creates a storage volume snapshot of the database instance, backing up the entire instance and not just individual databases. We can set up and modify our preferred Backup Window from time to time. During the automatic backup window, storage I/O might be suspended briefly while the backup process initializes (typically under a few seconds). For SQL Server, I/O activity is suspended briefly during backup for Multi-AZ deployments.

By default, Amazon RDS has a 30-minute backup window randomly selected from an 8-hour block (Singapore region will be 14:00–22:00 UTC).

Periodically, Amazon RDS also automatically performs maintenance work, such as updating the database instance’s or database cluster’s OS. We can choose to apply maintenance manually, or wait for the automatic maintenance process initiated during our preferred maintenance window. One thing to take note of is that the maintenance window determines when pending operations start, but does not limit the total execution time of these operations.

By default, Amazon RDS also has a 30-minute maintenance window randomly selected from an 8-hour block (Singapore region will be 14:00–22:00 UTC).

maintenance-window-collide-with-backup-window.png
We’re not allowed to make the maintenance window and the backup window overlap.

CloudWatch

Amazon RDS sends metrics to CloudWatch for each active database instance every minute. Detailed monitoring is enabled by default.

cloudwatch.png
Amazon RDS Metrics

When setting up the database instance, there is an option for us to specify whether to enable Enhanced Monitoring or not. Enhanced Monitoring is not exactly like CloudWatch. CloudWatch gathers metrics about CPU utilization from the hypervisor for a database instance, and Enhanced Monitoring gathers its metrics from an agent on the instance.

enable-enhanced-monitoring.png
Enhanced monitoring requires permission to act on our behalf to send OS metric information to CloudWatch Logs.

Conclusion

It’s true that AWS allows us to deploy our MS SQL Server database on either Amazon RDS or Amazon EC2. However, it’s crucial to analyze our needs and our application before deciding which one to use. In general, it is still recommended to consider Amazon RDS first so that developers can focus on high-level tasks and business logic implementation.

That’s all for my first trip to Amazon RDS. As a frequent user of Microsoft Azure, I have never hosted MS SQL Server on the AWS platform before. So, if there is any mistake in this article, kindly give me your feedback. Thanks in advance!

Further Reading

Deploying Microsoft SQL Server on Amazon Web Services

Journey to ASP .NET MVC 5 (Episode 2)

ASP .NET MVC - Google Search - Automapper - Excel - Amazon SES

Previous Episode: https://cuteprogramming.wordpress.com/2015/03/01/journey-to-asp-net-mvc-5/

I first said hi to ASP .NET MVC at the beginning of this year. On 28th January, I attended the .NET Developers Singapore meetup and listened to Nguyen Quy Hy’s talk about ASP .NET MVC. Since then, I have been learning ASP .NET MVC and applying this new knowledge in both my work and personal projects.

After 6 months of learning ASP .NET MVC, I decided to again write down some new things that I have learnt so far.

URL in ASP .NET MVC and Google Recommendation

According to Google’s recommendation on URLs, it’s good to keep URLs as simple and human-readable as possible. This can be easily done with the default URL mapping in ASP .NET MVC. For example, the following code allows a human-readable URL such as http://www.example.com/Ticket/Singapore-Airlines.

routes.MapRoute(
    name: "Customized",
    url: "Ticket/{airlineName}",
    defaults: new { controller = "Booking", action = "Details", airlineName = UrlParameter.Optional }
);

In addition, Google also encourages us to use hyphens instead of underscores as punctuation to separate the words in our URLs. However, by default, ASP .NET MVC doesn’t support hyphens. One of the easy solutions is to extend MvcRouteHandler so that hyphens in incoming URLs are automatically converted back to the underscores used in our controller and action names.

public class HyphenatedRouteHandler : MvcRouteHandler
{
    protected override IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        requestContext.RouteData.Values["controller"] =
        requestContext.RouteData.Values["controller"].ToString().Replace("-", "_");

        requestContext.RouteData.Values["action"] =
        requestContext.RouteData.Values["action"].ToString().Replace("-", "_");
 
        return base.GetHttpHandler(requestContext);
    }
}

Then, in RouteConfig.cs, we replace the default route mapping with the following one.

routes.Add(
    new Route("{controller}/{action}/{id}",
    new RouteValueDictionary(
        new { controller = "Home", action = "Index", id = UrlParameter.Optional }),
        new HyphenatedRouteHandler())
);

By doing this, we can name our controllers and actions using underscores and then we set all the hyperlinks and links in sitemap to use hyphens.

There are actually many discussions about this online. I have listed below some of the online discussions that I found to be interesting.

  1. Allow Dashes Within URLs using ASP.NET MVC 4
  2. ASP .NET MVC Support for URL’s with Hyphens
  3. Asp.Net MVC: How Do I Enable Dashes in My URLs?
  4. Automate MVC Routing

MVC-ViewModel

Previously, when I was working on WPF projects, I learnt the MVVM design pattern. So, it confused me when there was also a “View Model” in MVC. I thought that with the use of a View Model in ASP .NET MVC, I would be using MVVM too. It later turned out not to be the case.

In MVC, the View Model is only a class and is still considered part of the M (Model). The reason for having a ViewModel is for the V (View) to have a single object to render. With the help of a ViewModel, there won’t be too much UI logic code in the V, and thus the job of the V is just to render that single object. This also gives a cleaner separation of concerns.

Why is ViewModel able to provide the V a single object? This is because ViewModel can shape multiple entities from different data models into a single object.

public class CartViewModel
{
    ...

    public List<CartItems> items { get; set; }
 
    public UserProfile user { get; set; }
}

Besides, what I like about the ViewModel is that it contains only the fields that are needed in the V. Imagine the following model Song: we need to create a form to edit everything but the lyrics. What should we do?

The Song model.

Wait a minute. Why do we need to care about this? Can’t we just remove the Lyrics field from the edit form? Well, we can. However, generally we do not want to expose domain entities to the V.

If people manage to do a form post directly to your server, then they can add in the Lyrics field themselves and your server will happily accept the new Lyrics value. There will be a bigger problem if we are not talking about Lyrics, but something more critical, for example price, access rights, etc.

You want to control what is being passed into the binder. (Image Credit: Microsoft Virtual Academy)

Please take note that the default model binder in ASP .NET MVC automatically binds all inbound properties.

The first simple solution is to use the bind attribute to indicate which properties to bind.

public ActionResult Edit([Bind(Include = "SongID,Title,Length")] Song song)

I don’t like this approach because it’s just a string. Many mistakes can happen just because of a typo in a string.

So the second solution that I use often is creating a ViewModel which we can use to define only the fields that are needed in the edit form (V).

Same as the M (Model), a ViewModel can also have validation rules defined using data annotations or IDataErrorInfo.
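
For the Song example above, a minimal sketch of such a ViewModel might look like the following. The property names are assumptions based on the Song model; Lyrics is deliberately left out so it can never be bound from the edit form.

using System.ComponentModel.DataAnnotations;

// Hypothetical ViewModel for the edit form; it exposes only the fields the view needs.
public class SongEditViewModel
{
    public int SongID { get; set; }

    [Required]
    [StringLength(100)]
    public string Title { get; set; }

    public int Length { get; set; }
}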

AutoMapper

By using a ViewModel, we need to have mapping code to map between the view model and the domain model. However, writing mapping code is very troublesome, especially when there are many properties involved.

Luckily, there is AutoMapper. AutoMapper performs object-object mapping by transforming an input object of one type into an output object of another type.

Mapper.CreateMap<Location, LocationViewModel>();

AutoMapper has a smart way of mapping properties between the view model and the domain model. If there is a property called “LocationName” in the domain model, AutoMapper will automatically map it to the property with the same name “LocationName” in the view model.
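
After the map is created (typically once at application start), the actual mapping in a controller action becomes a one-liner. Here is a small sketch using the same classic static AutoMapper API as the CreateMap call above; db and id are placeholders for an EF context and a route parameter:

// Configure once, e.g. in Global.asax at application start.
Mapper.CreateMap<Location, LocationViewModel>();

// Then, in a controller action, map the domain entity to the view model.
Location location = db.Locations.Find(id);
LocationViewModel viewModel = Mapper.Map<LocationViewModel>(location);
return View(viewModel);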

Session, ViewData, ViewBag, and TempData

In my first e-commerce project, which uses ASP .NET, Session is widely used. From small things like the referral URL to the huge cart table, everything is stored in Session. Everyone in the team was satisfied with using Session until the day we realized we had to do load balancing.

There is a very interesting discussion on Stack Overflow about the use of Session in ASP .NET web applications. I like how one of them described Session as follows.

Fundamentally, session pollutes HTTP. It makes requests (often containing their own state) dependent on the internal state of the receiving server.

In the e-commerce project, we are using In-Process Session State. That means the session has “affinity” with the server. So, in order to use load balancing in Microsoft Azure, we have to use Source IP Affinity to make sure the connections initiated from the same client computer go to the same datacenter IP endpoint. However, that will cause an imbalanced distribution of traffic load.

Another problem of using In-Process Session State is that once there is a restart on IIS or the server itself, the session variables stored on the server will be gone. Hence, for every deploy to the server, the customers will be automatically logged out from the e-commerce website.

Then you might wonder why we didn’t store session state in a database. Well, this won’t work because we store non-serializable objects, such as HtmlTable, in session variables. Actually, there is another interesting mode for Session State, called StateServer. I will talk more about it in another post about Azure load balancing.

Source IP Affinity

When I was learning ASP .NET MVC in the beginning, I always found creating view models unintuitive, so I used ViewBag and ViewData a lot. However, this caused headaches for code maintenance. Hence, in the end, I started to use ViewModels in MVC projects to provide better Separation of Concerns and more maintainable code. Nevertheless, I still use ViewBag and ViewData to pass extra data from a controller to a view.

So what is ViewData? ViewData is a property that allows data to be passed from a controller to a view using a dictionary API. In MVC 3, a new dynamic property called ViewBag was introduced. ViewBag enables developers to use a simpler syntax to do what ViewData does. For example, instead of writing

ViewData["ErrorMessage"] = "Please enter your name";

we can now write

ViewBag.ErrorMessage = "Please enter your name";

ViewData and ViewBag help to pass data from a controller to a view. What if we want to pass data from one controller action to another, i.e. across a redirection? Both ViewData and ViewBag will contain null values once the controller redirects. However, this is not the case for TempData.

One important feature of TempData is that anything stored in it will be discarded after it is accessed in the subsequent request. So, it is useful for passing data from one controller action to another. Unfortunately, TempData is backed by Session in ASP .NET MVC, so we need to be careful about when to use TempData and how it behaves behind load-balanced servers.
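
A minimal sketch of the typical redirect scenario inside a controller (the action and key names here are made up for illustration):

public ActionResult Create(Song song)
{
    // ... save the song ...

    // Unlike ViewData/ViewBag, this value survives the redirect below.
    TempData["StatusMessage"] = "Song created successfully.";
    return RedirectToAction("Index");
}

public ActionResult Index()
{
    // Reading the value here marks it for removal after this request.
    ViewBag.StatusMessage = TempData["StatusMessage"];
    return View();
}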

JsonResult

Sometimes, I need to return JSON-formatted content in the response. To do so, I use the JsonResult class, for example:

[AllowCrossSiteJson]
public JsonResult GetAllMovies()
{
    Response.CacheControl = "no-cache";
    try
    {
        using (var db = new ApplicationDbContext())
        {
            var availableMovies = db.Movies.Where(m => m.Status).ToList();
            
            return Json(new 
            { 
                success = true, 
                data = availableMovies
            }, 
            JsonRequestBehavior.AllowGet);
        }
    }
    catch (Exception ex)
    {
        return Json(new 
        { 
            success = false, 
            message = ex.Message 
        }, 
        JsonRequestBehavior.AllowGet);
    }
}

There are a few new things here.

(1) [AllowCrossSiteJson]

This is my custom attribute to give access to requests coming from different domains. The following code shows how I define the class.

public class AllowCrossSiteJsonAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        filterContext.RequestContext.HttpContext.Response.AddHeader(
            "Access-Control-Allow-Origin", "*");
       
        base.OnActionExecuting(filterContext);
    }
}

(2) Response.CacheControl = “no-cache”;

This is to prevent caching of the action’s response. There is a great post on Stack Overflow which provides more alternatives for preventing caching.
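
One such alternative (a sketch, not the exact snippet from that post) is to disable output caching declaratively with an attribute on the action instead of setting the header manually:

// Prevents both server-side and client-side caching of this action's response.
[OutputCache(NoStore = true, Duration = 0, VaryByParam = "None")]
public JsonResult GetAllMovies()
{
    // ...
}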

(3) return Json()

This is to return an instance of the JsonResult class.

(4) success

If you are calling GetAllMovies() through AJAX, you can probably do something like the following to check whether any exception or error has been thrown.

$.ajax({
    url: '/GetAllMovies',
    success: function(data) {
        // No problem
    },
    error: function(jqXHR, textStatus, errorThrown) {
        var obj = JSON.parse(jqXHR.responseText);
        alert(obj.error);
    }
});

The error callback above will only be triggered when the server returns a non-200 status code. I thus introduced the additional success field to tell the caller more, for example that an exception was raised in the C# code or an invalid value was passed to the GetAllMovies method through AJAX. Hence, in the AJAX call, we just need to update it as follows.

$.ajax({
    url: '/GetAllMovies',
    success: function(data) {
        if (data.success) {
            // No problem
        } else {
            alert(data.message);
        }
    },
    error: function(jqXHR, textStatus, errorThrown) {
        var obj = JSON.parse(jqXHR.responseText);
        alert(obj.error);
    }
});

(5) JsonRequestBehavior.AllowGet

This gives permission for GET requests to reach the GetAllMovies method. It has something to do with JSON Hijacking, which will be discussed in another post.

ActionResult

Other than JsonResult, there are many other ActionResult classes representing the result of an action method, each with its respective helper method.

Currently, I use the following frequently.

  1. ViewResult and View: Render a view as a web page;
  2. RedirectToRouteResult and RedirectToAction: Redirect to another action (TempData is normally used here);
  3. JsonResult and Json: Explained above;
  4. EmptyResult and null: Allow action method to return null.

Export Report to Excel

Two years ago, I wrote a post about how to export a report to Excel in an ASP .NET Web Forms project. So, how do we export a report to Excel in an MVC project? There are two ways available.

The first one can be done using a normal ViewResult, as suggested in a discussion on Stack Overflow.

public ActionResult ExportToExcel()
{
    var sales = new System.Data.DataTable("Sales Report");
    sales.Columns.Add("col1", typeof(int));
    sales.Columns.Add("col2", typeof(string));

    sales.Rows.Add(1, "Sales 1");
    sales.Rows.Add(2, "Sales 2");
    sales.Rows.Add(3, "Sales 3");
    sales.Rows.Add(4, "Sales 4");

    var grid = new GridView();
    grid.DataSource = sales;
    grid.DataBind();

    Response.ClearContent();
    Response.Buffer = true;
    Response.AddHeader("content-disposition", "attachment; filename=Report.xls");
    Response.ContentType = "application/ms-excel";
    Response.Charset = "";
 
    StringWriter sw = new StringWriter();
    HtmlTextWriter htw = new HtmlTextWriter(sw);
    grid.RenderControl(htw);

    Response.Output.Write(sw.ToString());
    Response.Flush();
    Response.End();

    return View("Index");
}

The second way is to use a FileResult, as suggested in another discussion thread on Stack Overflow. I simplified the code by removing the styling-related code.

public sealed class ExcelFileResult : FileResult
{
    private DataTable dtReport;

    public ExcelFileResult(DataTable dt) : base("application/ms-excel")
    {
        dtReport = dt;
    }

    protected override void  WriteFile(HttpResponseBase response)
    {
        // Create HtmlTextWriter
        StringWriter sw = new StringWriter();
        HtmlTextWriter tw = new HtmlTextWriter(sw);

        tw.RenderBeginTag(HtmlTextWriterTag.Table);

        // Create Header Row
        tw.RenderBeginTag(HtmlTextWriterTag.Tr);
        DataColumn col = null;
        for (int i = 0; i < dtReport.Columns.Count; i++)
        {
            col = dtReport.Columns[i];
            tw.RenderBeginTag(HtmlTextWriterTag.Th);
            tw.RenderBeginTag(HtmlTextWriterTag.Strong);
            tw.WriteLineNoTabs(col.ColumnName);
            tw.RenderEndTag();
            tw.RenderEndTag();
        }
        tw.RenderEndTag();

        // Create Data Rows
        foreach (DataRow row in dtReport.Rows)
        {
            tw.RenderBeginTag(HtmlTextWriterTag.Tr);
            for (int i = 0; i <= dtReport.Columns.Count - 1; i++)
            {
                tw.RenderBeginTag(HtmlTextWriterTag.Td);
                tw.WriteLineNoTabs(HttpUtility.HtmlEncode(row[i]));
                tw.RenderEndTag();
            }
            tw.RenderEndTag();
        }

        tw.RenderEndTag();

        // Write result to output-stream
        Stream outputStream = response.OutputStream;
        byte[] byteArray = Encoding.Default.GetBytes(sw.ToString());
        response.OutputStream.Write(byteArray, 0, byteArray.GetLength(0));
    }
}

To use the code above, we just need to do the following in our controller.

public ExcelFileResult ExportToExcel()
{
    ...
    ExcelFileResult actionResult = new ExcelFileResult(dtSales) 
    { 
        FileDownloadName = "Report.xls" 
    };

    return actionResult;
}

Sending Email

To send email from my MVC project, I have the following code to help me out. It can accept multiple attachments too, so I also use it to send emails with the report generated using the code above attached. =)

In the code below, I am using Amazon Simple Email Service (SES) SMTP.

public Task SendEmail(
    string sentTo, string sentCC, string sentBCC,  string subject, string body, 
    string[] attachments = null) 
{
    // Credentials:
    var credentialUserName = "<username provided by Amazon SES>";
    var sentFrom = "no-reply@mydomain.com";
    var pwd = "<password provided by Amazon SES>";

    // Configure the client:
    System.Net.Mail.SmtpClient client = 
        new System.Net.Mail.SmtpClient("email-smtp.us-west-2.amazonaws.com");
    client.Port = 25;
    client.DeliveryMethod = System.Net.Mail.SmtpDeliveryMethod.Network;
    client.UseDefaultCredentials = false;

    // Create the credentials:
    System.Net.NetworkCredential credentials = 
        new System.Net.NetworkCredential(credentialUserName, pwd);
    client.EnableSsl = true;
    client.Credentials = credentials;

    // Create the message:
    var mail = new System.Net.Mail.MailMessage(sentFrom, sentTo);
    string[] ccAccounts = sentCC.Split(new char[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
 
    foreach (string ccEmail in ccAccounts)
    {
        mail.CC.Add(ccEmail);
    }
    
    string[] bccAccounts = sentBCC.Split(new char[] { ';' }, StringSplitOptions.RemoveEmptyEntries);

    foreach (string bccEmail in bccAccounts)
    {
        mail.Bcc.Add(bccEmail); 
    }
    
    mail.Subject = subject;
    mail.Body = body;
    mail.IsBodyHtml = true;

    if (attachments != null) 
    {
        for (int i = 0; i < attachments.Length; i++)
        {
            mail.Attachments.Add(new System.Net.Mail.Attachment(attachments[i]));
        }
    }

    client.SendCompleted += (s, e) => client.Dispose();
    return client.SendMailAsync(mail);
}

To send an email without attachment, I just need to do the following in action method.

var emailClient = new Email();
await emailClient.SendEmail(
    "to@mydomain.com", "cc1@mydomain.com;cc2@domain.com", "bcc@mydomain.com", 
    "Email Subject", "Email Body");

To send email with attachment, I will then use the following code.

string[] attachmentPaths = new string[1];

var reportServerPath = Server.MapPath("~/report");

attachmentPaths[0] = reportServerPath + "\\Report.xls";

var emailClient = new Email();
await emailClient.SendEmail(
    "admin@mydomain.com", "", "", 
    "Email Subject", "Email Body", attachmentPaths);

Yup, that’s all I have learnt so far in my MVC projects. I know this post is very, very long. However, I am still new to MVC, and I am happy to be able to share with you what I have learnt in these projects. Please correct me if you find anything wrong in the post. Thanks! =)

Summer 2015 Self-Learning Project

This article is part of my Self-Learning in this summer. To read the other topics in this project, please click here to visit the project overview page.

Summer Self-Learning Banner

AWSome Day – Learning AWS from Experts and IAM

AWS + IAM

It’s fortunate to work in a company which encourages employees to attend courses, workshops, and training to expand their skill sets. Last month, when I told my boss about AWSome Day, a training event held by AWS expert technical instructors, my boss immediately gave me one day of leave (without deducting my annual leave) to attend the event. In addition, I’m glad to have awesome teammates who helped me handle my work on that day so that I could concentrate during the event. Thus, I would like to write a series of blog posts to share what I’ve learnt at AWSome Day.

Amazon AWSome Day

This is the second time AWSome Day has been organized in Singapore. A lot of last year’s AWS Summit attendees were looking for more professional training from AWS, and thus AWSome Day once again came to Singapore. This year, the event was at Raffles City Convention Centre, which is just a 5-minute walk from my office. Oh my tian, that is so convenient!

AWSome Day, Awesome Place – Raffles City Convention Centre

The registration started at 8am. After that, Richard Harshman, the Head of AWS ASEAN, gave an opening keynote. He shared with us how AWS has removed the barrier of entry to starting a business online and increased innovation. A friend who works in an MNC once told me that he was given access to powerful servers to do crazy stuff. I am not as lucky as him; I work in a startup which does not have sufficient financial capability for that. Hence, I agreed with Richard that AWS (and other cloud computing services as well) does reduce the cost of innovation and experimentation.

Richard also shared with us a story about how, with the help of AWS, a startup in Malaysia managed to handle a few million visits monthly without an in-house system admin. Yup, our company also does not have a sysadmin. Normally, the work of a sysadmin is done by the developers. Hence, we are always looking for ways to reduce the time spent on sysadmin tasks so that developers have more time to focus on improving the applications to serve our customers better. So, a cloud computing infrastructure with broad and deep services to support online workloads helps high-volume, low-margin businesses like ours.

Currently, our company is using both AWS and Microsoft Azure. So, when Richard shared a graph showing how both AWS and Microsoft are now leaders in cloud computing services, I was glad that we made the right choice to use services from both of them.

After the opening keynote, we had a short coffee break and then began the 6-hour journey of AWS training, delivered by Denny Daniel, Technical Trainer at AWS. Since the training covered many interesting topics, I will not blog about all of them here because most readers would just go tl;dr. I will only write about what I learnt and found useful for my career. So, if you are interested, why not join a future training session offered by AWS Singapore? =)

Episode 01: Who am I? I am, I am… I am Identity and Access Management (IAM)!

One of the main concerns about hosting our applications in the cloud is security. One of the security tools provided by AWS is called Identity and Access Management, or IAM. It enables the system admin to manage users and their access rights in AWS. Hence, each user accessing AWS will have their own security credentials and individual permissions to each AWS service and resource.

Create User

After users have been created, we will be given a one-time opportunity to download and keep the user security credentials (Access Key ID and Secret Access Key). Since the keys are displayed only once, if the secret key is lost, we must delete the access key and then create a new one.

IAM is secure by default. This means that, by default, IAM users do not have permission to create or modify Amazon EC2 resources. Hence, an IAM policy, which is just a JSON document specifying the rules, is needed.

Besides creating users, we are able to create groups. Thus, instead of assigning each similar user the same set of access control policies individually, we can assign the users to a group and then bind the access control policies to the group. This undoubtedly eases user management. In addition, AWS even allows us to customize the permissions based on a given template!

There are many, many permission templates available when creating a user group.

Another thing that I find interesting is how IAM works with tags.

In order to manage Amazon EC2 resources effectively, we can now tag the resources ourselves with a combination of a key and a value. For example, we can tag our EC2 instances by environment, so that one instance is tagged with “Environment=Production” and another with “Environment=Test”. After that, we can grant an IAM user permission to the instances by using the tag with the condition key ec2:ResourceTag/Environment.
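
A minimal sketch of what such a tag-scoped policy document might look like (the allowed actions and the tag value here are my own assumptions for illustration, not a policy shown at the event):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/Environment": "Test" }
      }
    }
  ]
}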

Finally, in the event, Denny also shared with us a YouTube video about the best practices of using IAM. I am not sure if I got the one he was referring to. Anyway, the following video is what I found on YouTube.

The video is a bit long. So for those who say tl;dw, I summarize the 10 tips below.

  1. Create individual users. Do not just use the root credential. Do not have one user account that everybody in the team uses to do everything;
  2. Manage permissions with groups so that only one change is needed to update permissions for multiple users. Even if you only have one user in the team now, it’s encouraged to create a group for that user because at some point there will be new users who need the same permissions;
  3. Grant least privilege. Only grant the permissions that are required by the users to do their jobs. There is less chance of people making mistakes. Avoid assigning the asterisk (*) policy for permissions, which means full access, unless the account is for an admin;
  4. Use a policy to force users to have a strong password;

    Password Policy
  5. Enable Multi-Factor Authentication (MFA) for privileged users;

    Enable MFA.
  6. Use IAM roles for Amazon EC2 instances;
  7. Use IAM roles to share access without the need to share security credentials;
  8. Rotate security credentials regularly. Access keys need to be rotated. Make sure the old access keys have been deleted after the rotation;
  9. Restrict privileged access further with conditions. There are 2 types of conditions. One is AWS-wide common conditions, such as date, time, MFA, secure transport (allow traffic coming over SSL only), source IP, etc. The other is service-specific conditions; some services provide hundreds of conditions that we can control;
  10. Reduce or remove the use of root account.
"What? You are always using root credential?" The best practice of all: Don't use root access.
“What? You are always using root credential?” The best practice of all: Don’t use root access. (Image Credit: Is the Order a Rabbit?)

Next Episode

There are many topics about AWS covered during the event. IAM is just a small part of it. However, with just IAM alone, I already feel that there are too many areas in IAM waiting for me to discover. Hence, I will continue to write more about what I’ve learned in the future blog posts.

Also, due to the fact that I am new to AWS, if you spot anything wrong in my posts, feel free to correct me in the comment section below. =)

Using Amazon SES SMTP to Send Email

In December 2011, Amazon Web Services added a new feature to help send email in an easy and cost-saving way through Amazon SES (Simple Email Service). They provided an SMTP interface to allow users to keep using their existing SMTP-based programs for mass emailing without any changes. So, I decided to try it out.

SES: Email sending service from Amazon

Amazon SES can be found in the AWS Management Console. If this is the first use, there will be a message saying the SES account currently only has “sandbox” access. Although full access to the Amazon SES API is available in sandbox mode, at most 200 emails can be sent out each day. Also, the email addresses of the sender and recipients can only be verified email addresses and domains. Thus, there is a need to request production access to Amazon SES.

To apply for the production access, we need to submit a registration form to Amazon. After that, their team will review the application before approving it. For my previous application, they approved it the day after I submitted the form. Thus, the reviewing process is actually very fast. The registration form is simple. We only need to provide some user information as well as the types of emails that will be sent using Amazon SES, such as marketing, subscription, transactional, and system notifications.

After the application is approved, we have to create SMTP credentials to start sending emails. The credentials will be used when we connect to the Amazon SES SMTP interface later. To do so, just click on the “SMTP Settings” tab located at the left hand side of the web page. The SMTP credentials created can all be found in the AWS Identity and Access Management (IAM) page.

After all these have been done, we just need to use the SMTP credential in our existing programs to send emails.

Create SMTP Credentials

The good thing about Amazon SES is that the sending quota is 10,000 emails per day. In addition, Amazon SES will automatically increase the sending limits as we continue to send greater quantities of email. The maximum message size is 10MB per message, including the attachments in the email. Unfortunately, the maximum number of recipients per email is only 50, unlike Google Apps for Business, which allows up to 99 addresses in the To, Cc, and Bcc fields of a single email. For more details about the sending limits in Amazon SES, please visit its official documentation page.

Finally, there are graphs available to understand the statistics regarding the number of emails that are sent successfully, rejected, bounced, or marked as complaints. One thing to take note of is that if there are too many bounces and complaints, our Amazon SES account could be terminated. Thus, it is necessary to keep monitoring the bounce and complaint rates and keep them as low as possible. Currently, the average bounce rate of one of my SES accounts is around 0.5% and the average complaint rate is less than 0.5%. It should still be fine, I guess?

Bounces and Complaints Graphs

So, why are there bounces and complaints? As stated in the Amazon SES FAQs, bounces are usually caused by attempting to send to a nonexistent recipient. Complaints arise when recipients mark our emails as spam, indicating that they do not want to receive our messages. Normally, a notification email will be sent from Amazon (complaints@email-abuse.amazonses.com) telling us to look into the problem and recommending that we stop emailing those accounts.

Yup, this concludes what I have learnt so far about the Amazon SES. Besides smtp.gmail.com, now there is another option to choose to use as SMTP.

Long Weekend Activity #3: Moving to the Cloud above Amazon

One day before the end of my long weekend, I decided to learn how to set up a Windows Server 2012 instance on Amazon EC2. Also, I noted down the setup steps for my future reference.

After signing up at the Amazon Web Services website, I visited the EC2 Dashboard from the AWS Management Console. Since I’d like to set up an instance in Singapore, I had to choose the region from the drop-down list at the top-right corner of the page.

Choosing region for the instance.

After the region was chosen, I clicked on the blue “Launch Instance” button located at the middle of the web page to launch my first virtual server on EC2. Normally I chose the Classic Wizard so that some configurations could be changed before the setup.

Create a new instance.

The following step would be choosing an Amazon Machine Image (AMI). Somehow the Root Device Size was 0 GB, and I had no idea why. Since I only wanted to try out AWS, I chose the one in the Free Usage Tier, i.e. the Microsoft Windows Server 2012 Base.

Choose an AMI.

In the following steps, there were options for me to set the number of instances required, the instance type (set to Micro to enjoy the Free Usage Tier), the subnet, network interfaces, etc. After all these, there was a section to set the root volume size. By default, it was 0 GB, so the instance wouldn’t launch if the value was left at the default. I set it to 35 GB.

Set the volume size of the root to be 35GB.

After providing the instance details, the next step was creating a key pair, which would be used to decrypt the RDP password at a later stage. Thus, the key pair needed to be downloaded and stored safely on the computer.

Create a key pair.

There was also another section to set which ports would be open/blocked on the instance.

Set up security group to determine whether a network port is open or blocked on the instance.

Finally, after reviewing all the details, I just clicked on the “Launch” button to launch the instance.

Review the information provided earlier before the launch of the instance.

Right after the button was clicked, there was a new record added to the Instances table and its State immediately changed to “running”.

The new instance is successfully added.

By right-clicking on the instance and choosing the item “Get Windows Password”, I received the default Windows Administrator password which would be used to access the instance remotely via RDP.

Retrieve the Windows Administrator password.

Yup, finally I can start playing with Windows Server 2012. =D

Yesh, successfully access the new Windows Server 2012!