Unit Testing with Golang

Continue from the previous topic

Unit testing is a level of automated software testing in which units, the modular parts of a program, are tested individually. Normally, a “unit” refers to a function, but that is not necessarily always the case. A unit typically takes in data and returns an output. Correspondingly, a unit test case passes data into the unit and checks the resultant output to see whether it meets expectations.

Unit Testing Files

In Golang, unit test cases are written in <module>_test.go files, grouped according to their functionality. In our case, when we do unit testing for the videos web services, we will have the unit test cases written in video_test.go. Also, the test files need to be in the same package as the tested functions.

Necessary Packages

To begin, we need to import the “testing” package. Each of our unit test functions takes a parameter t, which is a pointer to the testing.T struct. It is the main struct that we use to report any failure or error.

In our code video_test.go, we use only the Error function of testing.T to log errors and to mark the test function as failed. In fact, Error is a convenience function in the package that calls the Log function followed by the Fail function. The Fail function marks the test case as failed but still allows the rest of the test case to execute. There is another, similar function called FailNow. The FailNow function is stricter and exits the test case as soon as it is called. So, if FailNow is the behaviour you need, you should call the Fatal function, another convenience function that combines Log and FailNow, instead of the Error function.
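For illustration, the difference between Error and Fatal can be seen in a minimal, hypothetical test function like the one below (Add is an imaginary function under test, not part of our project):

func TestAdd(t *testing.T) {
    result := Add(2, 3) // Add is a hypothetical function under test

    if result != 5 {
        // Error logs the failure but lets the rest of the test continue.
        t.Errorf("expected 5, got %d", result)
    }

    if result < 0 {
        // Fatal logs the failure and stops this test function immediately.
        t.Fatal("result should never be negative")
    }
}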

Besides the “testing” package, there is another package that we need in order to do unit testing for Golang web applications: the “net/http/httptest” package. It allows us to send an HTTP request to our handlers using the “net/http” package and to capture the HTTP response for inspection.

Test Doubles, Mock, and Dependency Injection

Before proceeding to write unit test functions, we need to get ready with Test Doubles. Test Double is a generic term for any case where we replace a production object for testing purposes. There are several different types of Test Double, of which a Mock is one. Using Test Doubles helps make the unit test cases more independent.

In video_test.go, we apply Dependency Injection in the design of the Test Doubles. Dependency Injection is a design pattern that decouples the layer dependencies in our program. This is done by passing a dependency into the object, structure, or function that needs it; the dependency is then used to perform the action instead of being created internally.

Currently, the handleVideoAPIRequests handler function uses a global sql.DB struct to open a database connection to our PostgreSQL database to perform the CRUD operations. For unit testing, we should not depend on a real database connection, so the hard-coded dependency on sql.DB should be removed. The sql.DB dependency should instead be injected into the process flow from the main program.

To do so, firstly, we need to introduce a new interface called IVideo.

type IVideo interface {
    GetVideo(userID string, id int) (err error)
    GetAllVideos(userID string) (videos []Video, err error)
    CreateVideo(userID string) (err error)
    UpdateVideo(userID string) (err error)
    DeleteVideo() (err error)
}

Secondly, we make our Video struct implement the new interface and add a field to the Video struct that is a pointer to sql.DB. Unlike in C#, where we have to specify which interface a class is implementing, in Golang, as long as the Video struct implements all the methods that IVideo has (which it already does), the Video struct is implementing the IVideo interface. So now our Video struct looks as follows.

type Video struct {
    Db             *sql.DB
    ID             int    `json:"id"`
    Name           string `json:"videoTitle"`
    URL            string `json:"url"`
    YoutubeVideoID string `json:"youtubeVideoId"`
}

As you can see, we added a new field called Db which is a pointer to sql.DB.
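To give a rough idea of how this injected connection is used, one of the Video methods might query the database through the Db field along the lines of the sketch below (the table name, columns, and query here are assumptions for illustration, not the actual code):

// GetVideo returns one single video record based on id (simplified sketch).
func (video *Video) GetVideo(userID string, id int) (err error) {
    // Use the injected *sql.DB instead of a global connection.
    row := video.Db.QueryRow(
        "SELECT id, name, url, youtube_video_id FROM videos WHERE id = $1 AND created_by = $2",
        id, userID)

    err = row.Scan(&video.ID, &video.Name, &video.URL, &video.YoutubeVideoID)

    return
}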

Now, we can create a Test Double called FakeVideo which implements the IVideo interface and is used in unit testing.

// FakeVideo is a record of favourite video for unit test
type FakeVideo struct {
    ID             int    `json:"id"`
    Name           string `json:"videoTitle"`
    URL            string `json:"url"`
    YoutubeVideoID string `json:"youtubeVideoId"`
    CreatedBy      string `json:"createdBy"`
}


// GetVideo returns one single video record based on id
func (video *FakeVideo) GetVideo(userID string, id int) (err error) {
    jsonFile, err := os.Open("testdata/fake_videos.json")
    if err != nil {
        return
    }

    defer jsonFile.Close()

    jsonData, err := ioutil.ReadAll(jsonFile)
    if err != nil {
        return
    }

    var fakeVideos []FakeVideo
    if err = json.Unmarshal(jsonData, &fakeVideos); err != nil {
        return
    }

    // Look for the record that matches both the video id and the user.
    for _, fakeVideo := range fakeVideos {
        if fakeVideo.ID == id && fakeVideo.CreatedBy == userID {
            video.ID = fakeVideo.ID
            video.Name = fakeVideo.Name
            video.URL = fakeVideo.URL
            video.YoutubeVideoID = fakeVideo.YoutubeVideoID

            return
        }
    }

    err = errors.New("no corresponding video found")

    return
}
...

So, instead of reading the info from the PostgreSQL database, we read mock data from a JSON file stored in the testdata folder. The testdata folder is a special folder that the Go tool ignores when it builds the project. Hence, with this folder, we can easily read our test data from the JSON file fake_videos.json through a relative path from video_test.go.
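The other FakeVideo methods follow the same idea of serving data from the JSON file. As a rough sketch only (the actual implementation in video_test.go may differ), GetAllVideos could look like this:

// GetAllVideos returns all fake video records belonging to the given user (sketch).
func (video *FakeVideo) GetAllVideos(userID string) (videos []Video, err error) {
    jsonData, err := ioutil.ReadFile("testdata/fake_videos.json")
    if err != nil {
        return
    }

    var fakeVideos []FakeVideo
    if err = json.Unmarshal(jsonData, &fakeVideos); err != nil {
        return
    }

    // Collect only the records created by the given user.
    for _, fakeVideo := range fakeVideos {
        if fakeVideo.CreatedBy == userID {
            videos = append(videos, Video{
                ID:             fakeVideo.ID,
                Name:           fakeVideo.Name,
                URL:            fakeVideo.URL,
                YoutubeVideoID: fakeVideo.YoutubeVideoID,
            })
        }
    }

    return
}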

Since the Video struct is now updated, we need to update our handleVideoAPIRequests function as follows.

func handleVideoAPIRequests(video models.IVideo) http.HandlerFunc {
    return func(writer http.ResponseWriter, request *http.Request) {
        var err error

       ...

        switch request.Method {
        case "GET":
            err = handleVideoAPIGet(writer, request, video, user)
        case "POST":
            err = handleVideoAPIPost(writer, request, video, user)
        case "PUT":
            err = handleVideoAPIPut(writer, request, video, user)
        case "DELETE":
            err = handleVideoAPIDelete(writer, request, video, user)
        }

        if err != nil {
            util.CheckError(err)
            return
        }
    }
}

So now we pass a value implementing IVideo directly into handleVideoAPIRequests, and the various Video methods use the sql.DB stored in the struct field instead. At this point, handleVideoAPIRequests no longer follows the ServeHTTP signature and is no longer itself a handler function; instead, it returns one.

Thus, in the main function, instead of attaching handleVideoAPIRequests directly to the URL, we call it to obtain the handler function, as follows.

func main() {
    ...

    mux.HandleFunc("/api/video/",
        handleRequestWithLog(handleVideoAPIRequests(&models.Video{Db: db})))

    ...
}

Writing Unit Test Cases for Web Services

Now we are ready to write unit test cases in video_test.go. Instead of passing a Video struct as in server.go, this time we pass in the FakeVideo struct, as highlighted in one of the test cases below.

func TestHandleGetAllVideos(t *testing.T) {
    mux = http.NewServeMux()
    mux.HandleFunc("/api/video/", handleVideoAPIRequests(&models.FakeVideo{}))
    writer = httptest.NewRecorder()

    request, _ := http.NewRequest("GET", "/api/video/", nil)
    mux.ServeHTTP(writer, request)

    if writer.Code != 200 {
        t.Errorf("Response code is %v", writer.Code)
    }

    var videos []models.Video
    json.Unmarshal(writer.Body.Bytes(), &videos)

    if len(videos) != 2 {
        t.Errorf("The list of videos is retrieved wrongly")
    }
}

By doing this, instead of fetching videos from the PostgreSQL database, the handler will now get them from fake_videos.json in the testdata folder.

Testing with Mock User Info

Now, since we have implemented user authentication, how do we make it work in unit testing as well? To do so, in auth.go, we introduce a flag called isTesting, which defaults to false, as follows.

// This flag is for the use of unit testing to do fake login
var isTesting bool

Then, in the TestMain function, which the testing package provides for setup and teardown, we set this flag to true.
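A minimal TestMain for this purpose could look like the sketch below (assuming the test file lives in the same package as auth.go, so that it can see the isTesting flag, and that the "os" package is imported):

func TestMain(m *testing.M) {
    // Setup: switch on the fake-login flag before any test runs.
    isTesting = true

    // Run all the test cases in this package.
    code := m.Run()

    // Teardown work, if any, would go here.
    os.Exit(code)
}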

So how do we use this information? In auth.go, there is a function called profileFromSession which retrieves the Google user information stored in the session. For unit testing, we won’t have this kind of user information, hence we need to mock this data too, as shown below.

if isTesting {
    return &Profile{
        ID:          "154226945598527500122",
        DisplayName: "Chun Lin",
        ImageURL:    "https://avatars1.githubusercontent.com/u/8535306?s=460&v=4",
    }
}

With this in place, we can then test, for example, whether the functions retrieve the correct videos for the specified user.
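Such a test could be sketched as follows (the URL path and assertion here are illustrative only; the mocked profile above is what profileFromSession returns while isTesting is true):

func TestHandleGetVideo(t *testing.T) {
    mux = http.NewServeMux()
    mux.HandleFunc("/api/video/", handleVideoAPIRequests(&models.FakeVideo{}))
    writer = httptest.NewRecorder()

    // This request is treated as coming from the mocked user above.
    request, _ := http.NewRequest("GET", "/api/video/1", nil)
    mux.ServeHTTP(writer, request)

    if writer.Code != 200 {
        t.Errorf("Response code is %v", writer.Code)
    }
}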

Running Unit Test Locally and on Azure DevOps

Finally, to run the test cases, we simply use the command below.

go test -v
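To run only a specific test function, we can also pass the -run flag with the test name, for example:

go test -run TestHandleGetAllVideos -v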

Alternatively, Visual Studio Code allows us to run a specific test case by clicking on the “Run Test” link above the test case.

Running test on VS Code.

We can then add testing as one of the steps in the Azure DevOps Build pipeline, as shown below.

Added the go test task in Azure DevOps Build pipeline.

By doing this, if any of the test cases fails, no build will be produced, and thus our system becomes more stable.

#azure, #devops, #golang

Deploy Golang App to Azure Web Apps with CI/CD on DevOps

Continue from the previous topic

After we have our code in a GitHub repository, it’s time to automate our builds and deployments so that our Golang application is always updated whenever there is a new change to our code on GitHub.

Sample Golang Web App DevOps Pipelines

To do that, we will use Azure DevOps and its Pipelines module. We can easily create a DevOps project in Azure Portal for our Golang application because there is a template available.

Golang is one of the supported languages in Azure DevOps.

As a start, we will focus on “Windows Web App” instead of containers. After that, we just need to configure basic information about the web app, such as its name, location, resource group, pricing tier, and Application Insights.

We can configure Application Insights while creating the DevOps project.

After that, we shall be able to see a new DevOps project created with the following two folders, Application and ArmTemplates, in Repos. The Application folder contains a sample Golang application.

However, why is there an ArmTemplates folder? This is because, by default, when we create a new Azure DevOps project for a Golang application using the steps above, it also automatically creates a web app for us. This folder holds the ARM (Azure Resource Manager) template Azure uses to do that.

Content of ArmTemplates, which is used to create/update the Azure web app.

With this pipeline set up, we can simply update the default Golang code in the Repos to launch our Golang application on Azure. However, what if we want to link Azure DevOps with the code we already have in our GitHub repo?

Connecting DevOps with Github

To do that, let’s start again by creating a new project on Azure DevOps instead of the Azure portal. Here, I will make the DevOps project Public so that you can access it while reading this article.

Creating a new public DevOps project.

Once the project is created, we can proceed to the Project Settings page of the project to disable some modules that we don’t need, i.e. Boards and Repos.

We need to hide both Boards and Repos because GitHub provides us with similar features.

Setting up Build Pipeline

After this, we can proceed to create our Build pipeline by first connecting to our GitHub repo.

If our code is on neither DevOps nor GitHub, we can click “Use the visual designer” to proceed.

Before continuing to choose the corresponding GitHub repo, we need to have an azure-pipelines.yml. To understand the guidelines for writing proper Azure DevOps Pipelines YAML, we can refer to the official guide. For Golang, there is also specific documentation on how to build and test Golang projects with Azure DevOps Pipelines.

For our case, we will have the following pipeline YAML file.

# Go 
# Build your Go project.

resources:
- repo: self

pool:
  vmImage: 'vs2017-win2016'

steps:
- task: GoTool@0
  inputs:
    version: 1.11.5
  displayName: 'Use Go 1.11.5'
- task: Go@0
  displayName: 'go get'
  inputs:
    arguments: '-d'
    workingDirectory: '$(System.DefaultWorkingDirectory)'
- task: Go@0
  displayName: 'go build'
  inputs:
    command: build
    arguments: '-o "$(System.TeamProject).exe"'
    workingDirectory: '$(System.DefaultWorkingDirectory)'
- task: ArchiveFiles@2
  displayName: 'Archive Files'
  inputs:
    rootFolderOrFile: '$(Build.Repository.LocalPath)'
    includeRootFolder: False
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    artifactName: drop

There are a few virtual machine images available in the Microsoft-hosted agent pool. We choose the “Visual Studio 2017 on Windows Server 2016 (vs2017-win2016)” image because I normally use Visual Studio 2017 for development.

The first task is the Go Tool Installer task. It finds and downloads a specific version of the Go tool into the tool cache and adds it to the PATH. Here we use the latest version of Golang, which is 1.11.5 at the time of writing this article.

The subsequent step runs go get. This command downloads the packages along with their dependencies. Since the -d argument is present, it only downloads them but does not install them.

After that, it runs go build. This step compiles the packages along with their dependencies, but it does not install the results. By default, the build command writes the resulting executable to an output file named after the first source file (or the source code directory). However, the -o flag here forces build to write the resulting executable to the output file named $(System.TeamProject).exe, i.e. GoLab.exe.

Next, we use the Archive Files task to create an archive file from a source folder. Finally, we use the Publish Build Artifacts task to publish the build artifact to DevOps pipelines. The Archive Files task generates a zip file such as D:\a\1\a\54.zip, where 54 is the build ID. The Publish Build Artifacts task then uploads the zip file to a file container called drop.

Details of the Archive Files task.

To find out what is inside the file container drop, we can download it from the Summary page of the build. It is actually a folder containing all the files of our Golang application.

We can download the drop from the Summary page of the build.

Setting up Release Pipeline

Now we can proceed to create our Release pipeline. Luckily, there is already a template available to help us kick-start the Release pipeline.

The “Deploy a Go app to Azure App Service” pipeline is what we need here.

After selecting the template, we need to specify the artifact, as shown below. There is a version that we can choose, for example the latest version from a specific branch with tags. Here we choose Latest so that our latest code change always gets deployed to Azure Web Apps.

Adding artifact.

Next, we need to enable the CD trigger as shown in the following screenshot so that a new release will be created every time a new build is available.

Enabling CD trigger.

Now we are at the Pipeline tab. What we need to do next is move on to the Tasks tab, which now shows a red exclamation mark. We just need to authorize the Release pipeline against our Azure subscription and then connect it to the Azure Web App in the subscription.

Completing tasks.

Now, as you can see, the agent basically does three steps:

  • Stop the Azure Web App;
  • Deploy our code to Web App;
  • Start the Web App.

What interests us here is the second step. The reason we need to generate a zip file in the Build pipeline is that, in this second step, we have to specify the file path of the zip package to deploy.

Default configuration of second step.

Finally, we can just save the pipeline and rename the “New release pipeline” to a friendlier name.

Now we can manually create a Release to test it out.

Create a new release manually.

Since we trigger this release manually, we also need to click in and deploy it manually.

Deploying to Azure App Service in progress.

After the deployment is done, we can view its summary as shown below.

The deployment process of the agent.

Conclusion

That’s all for setting up simple build and release pipelines on Azure DevOps to deploy our Golang web app to Azure Web Apps.

#devops, #github, #golang, #microsoft-azure

TCP Listener on Microsoft Azure with Service Fabric

azure-service-fabric-load-balancer.png

Getting a TCP listener to run on Microsoft Azure is always an interesting topic to work on. Previously, I built an experimental TCP listener on Azure Cloud Services and it worked quite well.

Today, I’d like to share with you another of my experiments, which is hosting a TCP listener on Microsoft Azure with Service Fabric.

Step 0: Installing Service Fabric SDK

Most of the time, it’s better to run Visual Studio 2017 in Administrator mode; otherwise, debugging and deployment of Service Fabric applications may run into errors.

Before we can start a new Service Fabric application project on Visual Studio, we first need to make sure Service Fabric SDK is installed.

Visual Studio will prompt us to install Service Fabric SDK.

The template that I use is Stateless Service under .NET Core 2.0. This project template creates a stateless reliable service with .NET Core.

Step 1: Add TCP Endpoint

In the ServiceManifest.xml under the PackageRoot folder of the application project, we need to specify an endpoint that our TCP listener will listen on. In my case, I am using port 9005, so I need to add an endpoint as shown below in ServiceManifest.xml.

<Endpoint Name="TcpEndpoint" Protocol="tcp" Port="9005"/>

Step 2: Create Listeners

In the StatelessService class, there is a CreateServiceInstanceListeners method that we can override to create TCP listeners with the following code.

protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    var endpoints = Context.CodePackageActivationContext.GetEndpoints()
        .Where(endpoint => endpoint.Protocol == EndpointProtocol.Tcp)
        .Select(endpoint => endpoint.Name);

    return endpoints.Select(endpoint => new ServiceInstanceListener(
        serviceContext => new TcpCommunicationListener(serviceContext, ServiceEventSource.Current, endpoint), endpoint));
}

Then, in the RunAsync method, which is the main entry point for our service instance, we can simply include the code for the TCP listener to receive messages from and send messages to the clients.

Step 3: Create Service Fabric Cluster

 

There are a few simple steps for us to follow in order to create a new Service Fabric cluster on Microsoft Azure.

Firstly, we need to specify some basic information, such as cluster name, OS, and default VM credentials.

Configure basic settings for a new Azure Service Fabric cluster.

Secondly, we need to define Node Types. Node types can be seen as equivalent to the roles in Cloud Service. Node types define the VM sizes, the number of VMs, and their properties. Every node type that is defined in a Service Fabric cluster maps to a virtual machine scale set.

We can start with only one node type. The portal will then prompt us to select a VM size. By default, it only shows three recommended sizes. If you would like to see other specs at lower prices, click on “View All”.

I once used A0 (which cost USD 14.88) for experimental purposes. However, it turned out that the newly created Service Fabric cluster was not connectable at all, with a status saying “Upgrade service unreachable”. The funny thing is that the status only showed up after everything in the resource group had been set up successfully, which strangely took more than an hour to finish. So I wasted about an hour on that. Hence, please use at least the recommended size for the VM.

We need to specify the VM spec for each of the node types.

A very interesting point to note is that there is a checkbox to configure advanced settings for the node type, as shown in the following screenshot. The default values here affect things such as the Service Fabric dashboard URL we use later. It’s fine to leave them at their defaults.

Default values in the advanced settings of node type.

Thirdly, we need to configure the security settings by specifying which Key Vault to use. If you don’t have any suitable key vault, it takes about one minute to create a new key vault for you. After the new key vault is created, you may be prompted with an error stopping you from proceeding, as shown in the following screenshot.

New key vault created here by default is not enabled for deployment.

To fix the error, we first need to visit the Key Vaults page. After that, we need to find the key vault we just created above. Then we tick the corresponding checkbox to enable access to the key vault from Azure Virtual Machines for deployment, as shown in the following screenshot.

Enable it so that Azure VM can retrieve certificates stored as secret from the key vault.

Now, if we go back to Step 3 of the Service Fabric cluster setup, we can get rid of the error message by re-selecting the key vault. After keying in a certificate name, we need to wait about 30 seconds for validation. Then we will be given a link to download our certificate for later use.

Let’s download the cert from here!

This marks the end of our Service Fabric cluster setup. All that is left to do is click on the “Create” button.

The creation process took about 40 minutes to complete. It actually went through many stages which are better described in the article “Azure Service Fabric Cluster – Deployment Issues”, written by Cosmin Muscalu.

Step 4: Publish App from Visual Studio

After the service fabric cluster is done, we can proceed to publish our application to it.

In the Solution Explorer, we simply need to right-click on the Service Fabric project and choose Publish, as shown in the following image.

Solution Explorer

A window will pop up and tell us that the Connection Endpoint is not valid, as shown below.

Failed to connect to server and thus we cannot publish the app to Azure.

Now, according to the article linked under “How to configure secure connection”, we have to install the certificate that we downloaded earlier from the Azure Portal in Step 3.

Since there is no password for the pfx file, we simply need to accept all default settings while importing the certificate.

Now, if we go back to the Publish window, we will see a green tick icon appear beside the Connection Endpoint, and we are good to proceed with the publish. The deployment of a simple TCP listener normally takes less than one minute to finish.

Step 5: Open Port Access

After the deployment is done, we need to open up the 9005 port that we specified in Step 1. To do so, we visit the Load Balancer used by the Service Fabric cluster and add a new rule so that port 9005 is accessible from the public internet.

Add a new load balancing rule for the service fabric.

The process of adding a new rule normally takes about three minutes to complete.

Please also take note that we need to note down the Public IP Address of our load balancer.

The Public IP Address of a load balancer can be found in its Overview panel.

Step 6: Open Up Service Fabric Explorer

Finally, we need to open up the Explorer for our service fabric cluster. To do so, we can retrieve the dashboard URL in the Overview panel of the service fabric cluster.

The Service Fabric Explorer URL is here.

To access the Explorer, we first need to select the certificate that we downloaded earlier to authenticate ourselves to the Explorer, as shown in the screenshot below.

Selecting a certificate on Google Chrome.

Step 7: Communicate with TCP Listener

Now, if we build a simple TCP client to talk to the server at the IP address of the load balancer that we noted down earlier, we can easily send requests to and receive responses from the server, as shown in the screenshot below.

Hooray, we receive the response from the application on Azure Service Fabric!
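If you’d like to try this yourself, a minimal TCP client can be sketched as follows. The sketch is in Go for brevity (the client can be written in any language), the IP address is a placeholder for the load balancer’s public IP noted earlier, and it assumes the listener replies with a newline-terminated message:

package main

import (
    "bufio"
    "fmt"
    "net"
)

func main() {
    // Replace x.x.x.x with the public IP address of the load balancer.
    conn, err := net.Dial("tcp", "x.x.x.x:9005")
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // Send a message to the TCP listener hosted on Service Fabric.
    fmt.Fprintf(conn, "Hello from the TCP client!\n")

    // Read a single newline-terminated reply from the server.
    response, err := bufio.NewReader(conn).ReadString('\n')
    if err != nil {
        panic(err)
    }
    fmt.Println("Server replied:", response)
}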

So yup, that’s all for a very simple TCP Listener which is hosted on Microsoft Azure.

I will continue to research more about this topic with my teammates so that I can find out more about this cool technology.

Setup Ubuntu Server in Tokyo and Transform It into a Desktop with RDP Installed

vultr-ubuntu-xrdp-xfce

While waiting for lunch, it’s nice to do some warmups, and setting up a server overseas seems like a pretty cool warmup for developers, right? Recently, my friend recommended that I try out Vultr, which provides cloud servers. So today, I’m going to share how I deploy an Ubuntu server located in Tokyo, a city far away from where I am now.

Step 1: Choosing Server Location

Vultr is currently available in many cities in popular countries such as Japan, Singapore, Germany, United States, Australia, etc.

server-location.png

Step 2: Choosing Server Type and Size

Subsequently, we will be asked to select the type and size of the server. Here, I choose a 60 GB SSD server with Ubuntu 16.04 x64 installed. I tried Ubuntu 17.10 x64 before, but I couldn’t successfully RDP into it, and I have not yet tried the latest Ubuntu 18.04 x64. So we will stick with Ubuntu 16.04 x64 in this article.

vultr-ubuntu-pricing.png

Step 3: Uploading SSH Key

Vultr kindly provides a tutorial about generating SSH keys on Windows and Linux.

The steps for creating an SSH key on Windows with PuTTYgen are as follows.

Firstly, we need to click on the “Generate” button on PuTTYgen.

generate-key-pair.png

Secondly, once the Public Key is generated, we need to enter a key passphrase for additional security.

Thirdly, we click on the “Save Private Key” button to save the private key on somewhere safe.

Fourthly, we copy all of the text in the Public Key field and paste it to the textbox in Vultr under the “Add SSH Key” section.

adding-ssh-key.png

Step 4: Naming and Deployment

Before we can deploy the server, we need to key in the hostname for the new server.

After we have done that, we can instruct Vultr to deploy the server by clicking on the “Deploy Now” button at the bottom of the page.

Within 5 minutes, the server should finish installing and booting up.

Step 5: Getting IP Address, Username, and Password

In order to get the user credentials to access the server, we need to click on “Server Details” to view the IP address, username, and password.

Step 6: Updating Root Password

The default password is not user-friendly. Hence, once we log in to the server via PuTTY, we should immediately update the root password using the command below, for our own good.

# passwd

Step 7: Installing Ubuntu Desktop

Firstly, let’s do some updating for the packages via the following commands.

# sudo apt-get update
# sudo apt-get upgrade

This will take about 2 minutes to finish.

Then we can proceed to install the default desktop using the following command.

# sudo apt-get install ubuntu-desktop

This will take about 4 minutes to finish. Take note that, at this point, Unity will be the desktop environment.

After that, we update the packages again.

# sudo apt-get update

Step 8: Installing Text Editor

We are going to change some configurations later, so we will need a text editor. Here, I’ll use the Nano text editor, so let’s install it first.

# sudo apt-get install nano

Step 9: Installing xrdp

xrdp is an open source Remote Desktop Protocol (RDP) server which provides a graphical login to remote machines. This helps us to connect to the server using Microsoft Remote Desktop Client.

sudo apt-get install xrdp

Step 10: Changing to Use Xfce Desktop Environment

We will then proceed to install Xfce which is a lightweight desktop environment for UNIX-like operating systems.

sudo apt-get install xfce4

After it is installed successfully, please run the following command. This tells the Ubuntu server that Xfce has been chosen to replace Unity as the desktop environment.

echo xfce4-session >~/.xsession

Step 11: Inspect xrdp Settings

We need to configure the xrdp settings by editing startwm.sh in the Nano text editor.

nano /etc/xrdp/startwm.sh

We need to edit the file by changing the entire file content to be as follows.

if [ -r /etc/default/locale ]; then
  . /etc/default/locale
  export LANG LANGUAGE
fi

startxfce4

Then we need to restart xrdp.

# sudo service xrdp restart

After that, we restart the server.

# reboot now

Step 12: Connecting with Remote Desktop Client

After the server has been restarted, we can access the server with Windows Remote Desktop Client.

rdp

At this point, some of you may encounter an error when logging in via RDP. The error messages look as follows.

Connecting to sesman IP 127.0.0.1 port 3350
sesman connect ok
sending login info to session manager, please wait...
xrdp_mm_process_login_response:login successful for display
started connecting
connecting to 127.0.0.1 5910
error-problem connecting

Problem of connecting via xrdp.

As pointed out in one of the discussion threads on Ask Ubuntu, the problem seems to be that xrdp, vnc4server, and tightvncserver were installed in the wrong order. To fix that, we just need to remove them and re-install them in the correct order with the following set of commands.

# sudo apt-get remove xrdp vnc4server tightvncserver
# sudo apt-get install tightvncserver
# sudo apt-get install xrdp
# sudo service xrdp restart

After the server is restarted, we should have no problem accessing our server via the RDP client on Windows.

success.png


#rdp, #technology, #ubuntu

[KOSD Series] Code Review and VSTS

KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

kosd-vsts-azure.png

Code reviews are a best practice for software development projects, but they are normally ignored in startups and SMEs because

  • the top management doesn’t understand the value of doing so;
  • the developers have no time to do code reviews, or even unit testing.

So, in order to improve our code quality and management standards, we decided to introduce the idea of code review by enforcing pull request creation in our deployment procedure, even though our team is very small and we are working in a startup environment.

Firstly, we set up two websites on Azure App Service, one for UAT and another for Production. We enabled the Continuous Deployment feature for both of them by configuring Azure App Service integration with our Git repository on Visual Studio Team Services (VSTS).

Secondly, we have two branches in the Git repository of the project, i.e. master and development-deployment. Changes pushed to the branches will automatically be deployed to the Production and the UAT websites, respectively.

In order to prevent our code from being deployed, even to the UAT site, without code review, we created a new branch known as the development branch. The development branch allows all the relevant developers (in the example below, we call them Alvin and Bryan) to pull/push their local changes freely from/to it.

git-flow-on-vsts.png

Once any of the developers is confident with his/her changes, he/she can create a new pull request on VSTS.

Creating a new pull request on VSTS.

We then proceed to make use of a new capability on VSTS, which is setting policies for branches. In the policy settings, we checked the option “Require a minimum number of reviewers” to prevent direct pushes to both the master and development-deployment branches.

Enabled the code review requirement in each pull request to protect the branch.

So, for every deployment to our UAT and Production websites, a checking step is in place to make sure that the deployments are all properly reviewed and approved. This is not just to protect the system but also to protect the developers, by having a standardized quality check across the development team.

This is the end of this episode of KOSD series. If you have any comment or suggestion about this article, please shout out. Hope you enjoy this cup of electronic Kopi-O Siew Dai. =)

#devops, #git, #kopi-o-siew-dai, #microsoft-azure, #visual-studio-online