Create a Docker Image from CentOS Minimal ISO


When we dockerize an ASP .NET Core application, there is a file called Dockerfile. For example, the Dockerfile in my previous project, Changshi, has the following content.

FROM microsoft/aspnetcore:2.0
ARG source
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "changshi.dll"]

A Dockerfile is basically a set of instructions for Docker to build images automatically. The FROM instruction in the first line initializes a new build stage and sets the Parent Image for subsequent instructions. In the Dockerfile above, it uses microsoft/aspnetcore, the official image for running compiled ASP .NET Core apps, as the Parent Image.

If we need to control the contents of the image, one way to do so is to create a Base Image. So, in this post, I’m going to share my journey of creating a Docker image from the CentOS Minimal ISO.

Step 1: Setting up Virtual Machine on VirtualBox

We can easily get the minimal ISO of CentOS on their official website.


Minimal ISO is available on CentOS Download Page.

After successfully downloading the minimal ISO, we need to proceed to launch the Oracle VM VirtualBox (Download here if you don’t have one).


Switching off Hyper-V.

For Windows users who have Hyper-V enabled because of Docker for Windows, please disable it first. Otherwise, you will either not be able to start a VM with a 64-bit guest OS, even though your host OS is 64-bit Windows 10, or you will simply encounter a BSOD.


Please switch off Hyper-V before running CentOS 64-bit OS on VirtualBox.

The funny thing is that after switching off Hyper-V, Docker for Windows will complain that it needs Hyper-V enabled to work properly. So currently I have to keep switching the Hyper-V feature on and off depending on which tool I’m going to use.


VirtualBox vs. Docker for Windows. Pick one.

There is one important step in running CentOS on the VM: we need to remember to configure the network of the VM to use a network adapter attached to “Bridged Adapter”. This connects the VM, through the host, to whatever default network device allocates IP addresses for our physical network. Doing so will help us retrieve the Docker image tar file via SCP later.
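By the way, for those who prefer the command line to the VirtualBox GUI, the same setting should be achievable with VBoxManage; the VM name and host adapter name below are hypothetical, so adjust them to your setup.

VBoxManage modifyvm "CentOS 7" --nic1 bridged --bridgeadapter1 "Intel(R) Ethernet Connection I219-V"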

Then in the Network & Host Name section of the installation, we shall see the IP address allocated to the VM.


The IP Address should be available when Ethernet is connected.

To verify whether it works, we simply use the following command to check if an IP address has been successfully allocated to the VM. Note that in the minimal installation of CentOS 7, the ifconfig command is no longer available.

# ip a

We can then see the IP address allocated to the VM. Sometimes, I need to wait for about five minutes before it displays the IP address successfully.


The IP address!

Step 2: Installing Docker on VM

After we get the IP address of the VM, we can then SSH into it. On Windows, I use PuTTY, a free SSH client, to easily SSH to the VM.


SSH to the VM with the IP address using PuTTY.

We proceed to install the EPEL repository before we can install Docker on the VM.

Since we are going to use wget to retrieve EPEL, we first need to install wget as follows.

# yum install wget

Then we can use the wget command to download the EPEL repository package to the VM.

# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

The file will be downloaded to the /tmp folder. So, to install it, we do the following.

# cd /tmp
# sudo yum install epel-release-latest-7.noarch.rpm

After the installation is done, there should be a success message like the following showing on the console.

    epel-release.noarch 0:7-11

Now if we head to /etc/yum.repos.d, we will see the following files.

CentOS-Base.repo        CentOS-fasttrack.repo       CentOS-Vault.repo
CentOS-CR.repo          CentOS-Media.repo           epel.repo
CentOS-Debuginfo.repo   CentOS-Sources.repo         epel-testing.repo

In CentOS-Base.repo, we need to enable the CentOS Plus repository, which is disabled by default. To do so, we simply change the value of enabled to 1 under the [centosplus] section.
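After the change, the [centosplus] section in CentOS-Base.repo should look roughly like this (other lines in the section stay as they are).

[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1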

Then we can proceed to install Docker on the VM using yum.

# yum install docker

Step 3: Start Docker

Once Docker is installed, we can start the Docker service with the following command.

# service docker start

So now if we list the images and containers in Docker, the results should show zero images and zero containers, as in the screenshot below.
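The two commands below do the listing: the first lists the local images, while the second lists all containers, including stopped ones.

# docker images
# docker ps -a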


No image and no container.

Step 4: Building First Docker Image

Thanks to the people in the Moby Project, a collaborative project for the container ecosystem to assemble container-based systems, we have a script to create a base CentOS Docker image using yum.

The script, mkimage-yum.sh, is available in the Moby Project Github repository.

We now need to create a folder called scripts in the root and then create a file called mkimage-yum.sh in the folder. This step can be summarized with the following commands.

# mkdir scripts
# cd scripts
# vim mkimage-yum.sh

We then need to copy-and-paste the script from the Moby Project into mkimage-yum.sh.

After that, we need to make mkimage-yum.sh executable with the following command.

# chmod +x mkimage-yum.sh

To run the script, we do the following, where centos7base is the name to give the new image.

# ./mkimage-yum.sh centos7base

After it is done, we will see the centos7base image added to Docker. The image is very, very small, with a size of only 271MB.


First docker image!

Step 5: Add Something (.NET Core SDK) to Container

Now that we have our first Docker image, we can proceed to create a container from it with the following command, where <image> is the name (or ID) of the image we have just built.

# docker run -i -t <image> /bin/bash

We will be brought into the container. Now we can simply add something, such as the .NET Core SDK, to the container by following the .NET Core installation steps for CentOS 7.1 (64-bit), which can be summarized as the following commands.

# sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc

# sudo sh -c 'echo -e "[packages-microsoft-com-prod]\nname=packages-microsoft-com-prod\nbaseurl=https://packages.microsoft.com/yumrepos/microsoft-rhel7.3-prod\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/dotnetdev.repo'

# sudo yum update
# sudo yum install libunwind libicu
# sudo yum install dotnet-sdk-2.0.0

# export PATH=$PATH:$HOME/dotnet

We can then create a new image from the changes we have made to the container using the following command, where centos_netcore is the repository name and 1.0 is its tag.

# docker commit <container id> centos_netcore:1.0

We will then realize that the new image is quite big, with a size of 1.7GB. Thanks to the .NET Core SDK.

Step 6: Moving the New Image to PC

The next step is to export the new image as a .tar file using the following command.

# docker save centos_netcore:1.0 > /tmp/centos_netcore.tar

Now, we can launch WinSCP to retrieve the .tar file via SCP (Secure Copy Protocol) to the local host.
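Alternatively, if an OpenSSH client is available on the local machine, a plain scp command should do the same job; replace the IP address below with the one allocated to your VM.

scp root@192.168.0.108:/tmp/centos_netcore.tar .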


Ready to access the VM via SCP.

Step 7: Load Docker Image

So now we can shut down the VM and enable Hyper-V again because the subsequent steps need Docker for Windows to work.

After restarting our local computer with Hyper-V enabled, we can launch Docker for Windows. After that, we load the image into Docker using the following command, run in the directory where we keep the .tar file on the local host.

docker load < centos_netcore.tar

Step 8: Running ASP .NET Core Web App on the Docker Image

Now, we can change the Dockerfile to use the new image we created.

FROM centos_netcore:1.0
ARG source
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "changshi.dll"]

When we hit F5 to make it run in Docker, yup, we will get back the website.

No, just kidding. We will actually get an error message that says localhost doesn’t send any data.


Localhost did not send any data. Why?

So if we read the messages in the Visual Studio Output Window, we will see one line saying that it is unable to bind to http://localhost:5000 on the IPv6 loopback interface.



According to Cesar Blum Silveira, Software Engineer from the Microsoft ASP .NET Core Team, this problem occurs because “localhost will attempt to bind to both the IPv4 and IPv6 loopback interfaces. If IPv6 is not available or fails to bind for some reason, you will see that warning.”


Explanation of Error -99 EADDRNOTAVAIL by Microsoft engineer. (Link)

Then I switched to view the output from Docker in the Output Window.


Output from Docker

It turns out that the port exposed in Docker is port 80. So I added the following line in Program.cs.

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseUrls("http://*:80") // Added this line
        .Build();
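Alternatively, the same effect should be achievable from the Dockerfile itself by setting the ASPNETCORE_URLS environment variable; here is a sketch assuming the app should listen on port 80 on all interfaces.

ENV ASPNETCORE_URLS http://+:80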

Now, it works again with the beautiful web page.



Containers, Containers Everywhere

The whole concept of Docker images, containers, micro-services are still very new to me. Hence, if you spot any problem in my post, feel free to point out. Thanks in advance!



Load Balancing Azure Web Apps with Nginx


This morning, my friend messaged me a Chinese article about how to do clustering with Linux + .NET Core + Nginx. As we are geeks first, we decided to try it out with different approaches. While my friend went on to set it up on a Raspberry Pi, I, as a developer who loves playing with Microsoft Azure, proceeded to load balance Azure Web Apps in different regions with Nginx.

Setup Two Azure Web Apps

Firstly, I deployed the same ASP .NET Core 2 web app to two different Azure App Services. One of them is deployed at Australia East; the other is deployed at South India (Hurray, Microsoft opened Azure India to the world in April 2017!).

The homepage of my web app, Index.cshtml, is as follows; it displays the information in Request.Headers.



Since WordPress cannot show the HTML code properly, I show the code as an image here.


In the code above, Request.Headers["X-Forwarded-For"] is used to get the actual visitor’s IP address instead of the IP address of the Nginx load balancer. To allow this to work, we need to add the following code in Startup.cs.

app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders =
        ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});

In this article, we will set up a load balancer in Singapore for the websites hosted in India and Australia.

Configure Linux Virtual Machine on Azure

Secondly, as described in the Chinese article mentioned above, Nginx needs to be set up on a Linux server. The OS used in my case is Ubuntu 17.04.


Creating a new Ubuntu server running on Microsoft Azure virtual machine.

The Authentication Type that was chosen is the SSH Public Key option. Hence, we need to create public and private keys. There is a tutorial from Microsoft showing the steps to generate the keys using Git Bash and PuTTY.
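For reference, generating such a key pair in Git Bash is a one-liner with ssh-keygen; the comment string below is just an example label.

ssh-keygen -t rsa -b 2048 -C "azureuser@ubuntu-nginx"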

Installing Nginx

After that, I installed Nginx by using the following command.

sudo apt-get install nginx

After installing it, in order to test whether Nginx was installed properly, I visited the public IP address of the virtual machine. However, it turned out that I couldn’t reach the server because port 80 is not opened on the virtual machine by default.

Hence, the next step was to open the port in the Azure Portal by adding a new inbound security rule for port 80 and then associating it with the subnet of the virtual network of the virtual machine.
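For those who prefer the command line, the Azure CLI has a shortcut that should achieve the same result; the resource group and VM names below are hypothetical.

az vm open-port --resource-group myResourceGroup --name myUbuntuVM --port 80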

Then when I revisited the public IP of the server, I could finally see the “Welcome to Nginx” success page.


Nginx is now successfully running on our Ubuntu server!

Mission: Load Balancing Azure Web Apps with Nginx

As the success page mentioned, further configuration is required. So, we need to edit the configuration file by first opening it up with the following command.

sudo nano /etc/nginx/sites-available/default

The first section that I added is the Cache Configuration.

# Cache configuration
proxy_temp_path /var/www/proxy_tmp;
proxy_cache_path /var/www/proxy_cache levels=1:2 keys_zone=my_cache:20m inactive=60m max_size=500m;

The proxy_temp_path is the path to the directory where temporary files are stored when a response from the upstream server cannot fit into the configured buffers.

The proxy_cache_path specifies the directory where the cache is stored. The levels=1:2 setting means that the cache is stored in single-character directories with two-character subdirectories. The keys_zone parameter defines a cache zone named my_cache which can store up to 20MB of keys, while max_size caps the actual cached data at 500MB. The inactive=60m means that a cached item is removed if it is not accessed within 60 minutes.
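Note that defining the cache zone alone does not switch caching on; a location block also has to opt in with the proxy_cache directive, roughly like this sketch.

location / {
    proxy_cache my_cache;
    proxy_pass http://backend;
}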

Next, the upstream needs to be defined as follows. (The two azurewebsites.net hostnames below are placeholders for the actual URLs of the two web apps.)

# Cluster sites configuration
upstream backend {
    server your-web-app-1.azurewebsites.net fail_timeout=30s;
    server your-web-app-2.azurewebsites.net fail_timeout=30s;
}

For the default server configuration, we need to make a few modifications to it.

# Default server configuration
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name localhost;
    location / {
        proxy_pass http://backend;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        try_files $uri $uri/ =404;
    }
}

Now, we just need to restart the Nginx with the following command.

sudo service nginx restart
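By the way, before restarting, it is a good habit to validate the configuration first with Nginx’s built-in syntax check.

sudo nginx -t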

Then when we visit the Ubuntu server again, we realize that we can sort of reach the Azure Web Apps, but not really, because it says 404!


Oops, the Nginx routes the visitor to 404 land.

Troubleshooting 404 Error

According to another article, written by Issac Lázaro, this is because Azure App Service uses cookies to do ARR (Application Request Routing), hence we need the Ubuntu server to pass the Host header to the web apps by modifying our Nginx configuration as follows. (Again, the azurewebsites.net hostnames below are placeholders for the actual URLs of the two web apps.)

# Cluster sites configuration
upstream backend {
    server localhost:8001 fail_timeout=30s;
    server localhost:8002 fail_timeout=30s;
}

server {
    listen 8001;
    server_name web01;

    location / {
        proxy_set_header Host your-web-app-1.azurewebsites.net;
        proxy_pass https://your-web-app-1.azurewebsites.net;
    }
}

server {
    listen 8002;
    server_name web02;
    location / {
        proxy_set_header Host your-web-app-2.azurewebsites.net;
        proxy_pass https://your-web-app-2.azurewebsites.net;
    }
}

Then when we refresh the page, we shall see the website loaded correctly, with the content delivered from either web01 or web02.


Yay, we make it!

Yup, that’s all about setting up a simple Nginx load balancer for multiple Azure Web Apps. You can refer to the following articles for more information about Nginx and load balancing.


  1. How to open ports to a virtual machine with the Azure portal
  2. Can’t start Nginx – Job for nginx.service failed
  3. Linux+.NetCore+Nginx搭建集群 (Building a cluster with Linux + .NET Core + Nginx)
  4. Understanding Nginx HTTP Proxying, Load Balancing, Buffering, and Caching
  5. Module ngx_http_upstream_module
  6. How To Set Up Nginx Load Balancing with SSL Termination


Burger and Cheese


As a web developer, I don’t have many chances to play with mobile app projects. So rather than limit myself to just one field, I love to explore other technologies, especially mobile app development.

Burger Project: My First Xamarin App

Last month, I attended a Xamarin talk at Microsoft Singapore office with my colleague. The talk was about authentication and authorization with social networks such as Facebook and Twitter via Azure App Service: Mobile App.

Ben Ishiyama-Levy is talking about how Xamarin and Microsoft Azure work together.

The speaker is Ben Ishiyama-Levy, a Xamarin evangelist. His talk inspired me to further explore how I could retrieve user info from social network after authenticating the users.

Because I am geek-first and I really wanted to find out more, I continued to read about this topic. With help from my colleague, I developed a simple Xamarin.Android app to demonstrate the authentication and the retrieval of the logged-in user’s info.

The demo app is called Burger, and it can be found in my Github repository.

Challenges in Burger Project

Retrieving user’s info from social network.

In the Burger project, the first big challenge was to understand how Azure App Service: Mobile App works in Xamarin. Luckily, with the material and tutorial given in the Xamarin talk from Ben, I was able to get a quick start on this.

My colleague also shared another tutorial about getting an authenticated user’s personal details on the Universal Windows Platform (UWP). It helped me a lot in understanding how a mobile app and Azure App Service can work together.

My second challenge in this project was to understand the Facebook Graph API. I still remember spending quite some time finding out why I could not retrieve the friend list of a logged-in Facebook user. With the introduction of Facebook Graph API 2.0, access to a user’s friends list via /me/friends is limited to just the friends who use the same app. Hence, after reading a few other online tutorials, I was finally able to get another subset of a user’s friends via /me/taggable_friends.
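For illustration, such a request against the Graph API looks roughly like this; the access token is a placeholder.

curl "https://graph.facebook.com/v2.0/me/taggable_friends?access_token=<user-access-token>"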

In this project, it is also the first time I have applied Reflection in a personal project. It helps me get the corresponding social network login class easily, with neat and organized code.
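Here is a minimal sketch of the idea; the type and interface names are hypothetical, not the actual ones in Burger.

// Resolve the login class for a given social network by its name,
// e.g. "Facebook" maps to a hypothetical FacebookLogin class.
var typeName = string.Format("Burger.Logins.{0}Login", network);
var loginType = Type.GetType(typeName);
var login = (ISocialLogin)Activator.CreateInstance(loginType);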


Microsoft Developer Day at NUS, Singapore in May 2016

Cheese Project: When Google Speech Meets MS LUIS on Android

A few months ago, I was fortunate to represent my company at Microsoft Developer Day 2016 in the National University of Singapore (NUS).

That day was the first time Microsoft CEO Satya Nadella came to Singapore. It was also the first time I learnt about the powerful Cognitive Services and LUIS (Language Understanding Intelligence Service) in Microsoft Azure, in Riza’s talk.


Riza’s presentation about Microsoft Cognitive APIs during Microsoft Developer Day.

Challenges in Cheese Project

Every day, it takes about one hour for me to reach home from the office. Hence, I only have two to three hours every night to work on personal projects and learning. During weekends, when people are out there having fun, I spend time researching exciting new technologies.

There are many advanced topics in LUIS. I still remember that when I was learning how LUIS works, my friend was playing Rise of the Tomb Raider beside me. So while he was there phew-phew-phew, I was doing data training on the LUIS web interface.


Microsoft LUIS (Language Understanding Intelligence Service) and Intents

Currently, I have only worked on some simple intents, such as returning the current date and time, as well as understanding which language I want to translate to.

My first idea for the Cheese project was to build an Android app such that if I say “Please translate blah-blah to xxx language”, the app will understand and do the translation accordingly. This can be done quite easily with the help of both LUIS and Google Translate.

After showing this app to my colleagues, we realized one problem with it: it’s too troublesome for users to keep saying “Please translate blah-blah to xxx language” every time they need to translate something. Hence, I recently changed it to use a GUI for language selection. This, however, reduces the role played by LUIS in this project.


VoiceText provides a range of speakers and voices with emotions!

To make the project even more fun, I implemented the VoiceText Web API from Japan in the Android app. The cool thing about this TTS (Text-To-Speech) API is that it allows developers to specify the mood and characteristics of the voice. The challenge, of course, is that the API documentation is written in Japanese. =P

Oh ya, the Cheese repository can also be found on my Github. I will continue to work on this project while exploring more about LUIS. Stay tuned.


After-Work Personal Projects

There are still many things in mobile app development left for me to learn. Even though I often feel exhausted after a long day of work, working on new and exciting technologies helps me get energized again in the evening.

I’m not as hardworking as my friends who are willing to sacrifice their sleep for their hobby projects and learning, hence the progress of my personal project development is kind of slow. Oh well, at least now I have my little app to help me talk to people when I travel to Hong Kong and Japan next year!

Export Scheduled Report in an Excel Spreadsheet as an Email Attachment

People love reports. I do not know why, but most of the time, the systems that I work on have a report module. The requirements of the report are usually given by the administrative staff, so the admin will normally give me a sample of an existing report as a reference during the development of the report module.

Previously, the admin was happy with just a report module giving them the ability to view reports using their login ID and password. After that, they wanted a function to export the report to Excel so that they could immediately work on data analysis. Soon, they realized that logging in to the system just to view the report was a bit stupid. Thus, they asked for an email to be sent to them at midnight with the report in Excel format attached.

Admin loves reading reports. (Image credit: Kono Aozora ni Yakusoku o)

Although there is this cool tool called Excel Interactive View, which can generate an Excel table and charts from an HTML table on the fly, I don’t really like it. The generated Excel table looks very complicated, with colourful bars appearing in the background of the cells containing numbers.

Also, as shown in the following screenshot, Excel Interactive View does not do a good job here because the contact numbers and tutorial class numbers are wrongly treated as numerical data and used to generate the charts. Hence, Excel Interactive View is great and convenient, but it does not work for all kinds of reports.

The look-and-feel of Excel Interactive View can be simpler.

The four charts shown at the right are meaningless already.

In ASP.NET, I can have my own “Export to Excel” button by adding the following code to the Page_Load method of a web page.

Response.ContentType = "application/vnd.ms-excel";
Response.AddHeader("Content-Disposition", "attachment; filename=\"Sales_Report.xls\"");
string excelBody = "";

excelBody +=
    "<html>" +
    "<head>" +
    "<meta http-equiv=Content-Type content=\"text/html; charset=utf-8\">" +
    "<style>" +
    "<!--table" +
    "br {mso-data-placement:same-cell;}" +
    "tr {vertical-align:top;}" +
    "-->" +
    "</style>" +
    "</head>";
excelBody += "<body>" + /* the HTML table goes here */ "</body>";
excelBody += "</html>";
// Write the generated HTML so it is downloaded as the .xls attachment.
Response.Write(excelBody);
Response.End();

Then, I just need to redirect the user to this page when the “Export to Excel” button is pressed. This was the code used back when the admin staff were satisfied with just the functionality of exporting their reports to Excel.

Soon after that, I found another great library to help generate Excel spreadsheets in C#. It is called excellibrary, and it can be downloaded from Google Code. Thanks to the library, I was able to build a system which automatically sends out an email with the Excel report attached.

To do that, firstly, I need to generate the report in Excel format with the help of excellibrary.

Workbook workbook = new Workbook();
Worksheet wsReport = new Worksheet("Sales Report");
int startingRow = 0;
wsReport.Cells[startingRow, 0] = new Cell("Column 1 Row 1");
wsReport.Cells[startingRow, 1] = new Cell("Column 2 Row 1");
for (int i = startingRow + 1; i < 200; i++)
    wsReport.Cells[i, 0] = new Cell(" "); // Some dummy empty cells
workbook.Worksheets.Add(wsReport);
workbook.Save("C:\\ExcelOutputs\\Sales_Report.xls");

The reason for adding some dummy empty cells at the end is that Excel will complain that it found unreadable content in Sales_Report.xls when the number of cells containing real data is too small. This is a reported issue in the excellibrary project, and one workaround suggested by the users is to increase the file size by adding more rows and columns containing a space.

Secondly, I send out the email by using the following code.

string sSmtpServer = "";
MailMessage myMessage = new MailMessage();
myMessage.Body = "The report in Excel format is attached in this email.";
myMessage.Subject = "Sales Report in Excel!";
myMessage.To = "...";
// Add the file attachment to this e-mail message.
myMessage.Attachments.Add(new MailAttachment("C:\\ExcelOutputs\\Sales_Report.xls"));
// The CDO configuration fields below are the ones System.Web.Mail understands.
myMessage.Fields["http://schemas.microsoft.com/cdo/configuration/smtpauthenticate"] = 1;
myMessage.Fields["http://schemas.microsoft.com/cdo/configuration/sendusing"] = 2;
myMessage.Fields["http://schemas.microsoft.com/cdo/configuration/sendusername"] = "...";
myMessage.Fields["http://schemas.microsoft.com/cdo/configuration/sendpassword"] = "...";
myMessage.Fields["http://schemas.microsoft.com/cdo/configuration/smtpserverport"] = "465";
myMessage.Fields["http://schemas.microsoft.com/cdo/configuration/smtpusessl"] = "true";
myMessage.From = "...";
myMessage.BodyFormat = MailFormat.Html;
SmtpMail.SmtpServer = sSmtpServer;
SmtpMail.Send(myMessage);

Finally, I just need to create a scheduled task to run this little program daily.
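For example, with the Windows Task Scheduler command line, registering such a daily task could look something like this; the task name and program path are hypothetical.

schtasks /create /tn "DailySalesReportEmail" /tr "C:\Reports\ReportMailer.exe" /sc daily /st 00:00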

Yup, this is how my automatic report exporting and emailing system is done in C#.