Make the Most of Life in the Haulio System Development Team

Haulio is a technology startup founded in 2017 which aims to provide the simplest and most reliable way for businesses to get their containers moved. Yes, we are talking about physical containers here, not Docker containers.

At Haulio, we ask our employees to give their best at work, and we’re committed to doing the same. That’s why we offer great opportunities to empower everyone on the team.

Maximizing Engineering Velocity

As a startup, it’s crucial for us to focus on getting a high-quality product to market quickly. Hence, we have put a lot of effort into organizing our System Development Team at Haulio.

Developing products for a B2B business like Haulio is challenging in the sense that fatal technology decisions must be avoided at all costs. Hence, in Haulio’s early stage, we hired senior engineers who are very self-directed, which allows me, as the team lead, to spend time on other work.

Since Haulio is still very young, engineers here are able to contribute their ideas to product development, and their decisions all have a profound long-term impact on the company. Every week, after our CPO’s discussion with product designers and UX researchers, he approaches our engineers and spends many hours with them discussing our data model, architecture, timeline, and implementation approach.

Our CPO, Sebastian Shen, and the engineers are having a great chat.

Organized Product Development Stages

With Azure DevOps, the System Development Team is able to work on rapid product development and deployment in a well-managed manner. The DevOps process in the cloud lets us release and iterate quickly, even several times a day, with the help of continuous deployment.

Sebastian is invited to give a talk at Microsoft about how Haulio uses Azure DevOps.

In the early stage of a user story or bug, we discuss the story points and areas involved before assigning the tasks to the engineers. After the feature is done or the bug is fixed, a pull request is created, and it then goes through a code review process conducted by our senior engineers. Once the pull request is approved, the staging server is automatically updated with the latest changes. Our Product Team and QA Team then join the testing process before the changes are deployed.

We host all our projects on Azure DevOps so that all our engineers can easily contribute to them. With just a team of seven full-time engineers and one intern across two countries, last year we celebrated our 1,000th pull request. This shows the successful cumulative effort of a cross-national team of engineers.

Applying New Knowledge

In June 2018, Microsoft announced that .NET Core 2.0 would reach its end of life in October 2018. As Haulio products are built on the .NET Core 2.0 framework, our System Development Team reacted to the issue quickly by first learning the newer frameworks, .NET Core 2.1 and 2.2. Then, after a few months, we successfully migrated all our code to .NET Core 2.1 with almost no downtime to our online systems.

Our engineers are discussing the plan to upgrade our code from .NET Core 2.0 to 2.1.

In December 2018, our team also made use of Pusher and Handsontable to build a cloud-connected spreadsheet right in our system to enable our accountants to easily collaborate with each other. After the project was done, we were excited that we had actually built a small Google-Spreadsheet-like product.

One of the ways we keep our team members up to date is through what we call our Continuous Learning Culture.

Continuous Learning Culture and Buddy System

We are firm believers in self-learning and knowledge sharing. Hence, within the team, we have a continuous learning culture in which we frequently share knowledge with each other through many channels.

On Microsoft Teams, we have a channel called “System Development Knowledge Sharing” dedicated to this continuous learning culture.

Our CEO, Alvin Ea, is participating in the sharing as well to talk about cloud architecture.

Besides this, we also have a Buddy System where we help each other, especially seniors helping juniors and university interns catch up by guiding them in learning new technology skills. Hence, unlike at most companies, our interns actually have the chance to get their hands dirty building something real for our business.

Code review feedback that our interns receive on a weekly basis.

Networking and Talks

In the System Development Team of Haulio, we can attend or help organize technical sharing sessions. As a startup supported by Microsoft BizSpark, we make use of Microsoft technologies to drive our business. Thus, the technical talks we organize are mainly related to .NET and Azure technology.

During the sessions, we also encourage our fellow engineers to network with other technical professionals by exchanging ideas during the events. This is one of my favorite perks because the topics discussed are always interesting, compelling, and eye-opening.

Marvin is sharing about how he builds a solution with Microsoft Custom Vision API.

Contributing to Development of World-Class System

As a startup, Haulio is a place where each of us carries a lot of responsibility and gets to work on many different types of projects. This means opportunities for learning and growth abound. Founders and employees work together; there’s no middle management, so we learn from the best.

The whole team shares in the birth, growth, and success of the company. Every one of us wants to belong to something big, something special, and most importantly, something useful to society. Hence, we all take pride in our work.

The developers are learning how to drive a prime mover.

Caring and Love

Since most of us are young, with an average age of only 29, we not only work closely with each other but also help each other outside of work.

To promote a healthy lifestyle, almost every month we have outings in Singapore, Malaysia, Indonesia, and even Myanmar! So, we work hard and we play hard.

Together, we eat better.

Join Us!

If you would like to find out more about our System Development Team, please pay us a visit at PSA Unboxed or visit our homepage at haulio.io.


Handwritten Text Recognition, OCR, and Key Vault

Recently, I was glad to have Marvin Heng, a Microsoft MVP in the Artificial Intelligence category, work with me on building an experimental tool, FutureNow, to recognize handwritten text and apply OCR technology to automate form processing.

In January 2019, we successfully presented our solution during the Singapore .NET Developers Community meetup. Taking the opportunity, I also presented how Azure Key Vault is used in our project to centralize our key and secret management.

Marvin is sharing with the audience about Custom Vision during the meetup.

Hence, in this article, I’d like to share about this project in terms of how we use Cognitive Services and Key Vault.

Code Repository

The code of our project is available on both Azure DevOps and Github. I will keep both repositories updated.

The reason I have the code in both places is that the project was originally collaborated on in Azure DevOps. However, during the meetup, I realized that the majority of the audience still prefer us to have our code on Github. Well…

Azure DevOps: https://dev.azure.com/gohchunlin/JobCreationAutomation
Github: https://github.com/sg-dotnet/text-recognition-ocr

Our “FutureNow” tool, which users can use to analyze text in images.

Custom Vision

What Marvin contributed is a function that detects and identifies the handwritten text in the uploaded image.

To do so, he first created a project in Custom Vision to train the model. In the project, he uploaded many images of paper documents and then labelled the handwritten text found on the paper.

The part where the system analyzes the uploaded image and finds the handwriting is in TagAndAnalyzeService.cs.

In the AnalyzeImageAsync method, we first use the Custom Vision API, which is linked to Marvin’s project, to identify which parts of the image are “probably” handwritten.

At this point, the system still cannot be one-hundred-percent sure that the parts it identifies as handwritten text really contain handwriting. Hence, the result returned from the API contains a probability value. That’s why we have a percentage bar on our front end to control the threshold for this probability value, so that only results with a higher probability are accepted.
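As an illustration, the filtering could look like the sketch below. The RegionPrediction type and the threshold value are assumptions made for this example, not the exact code in TagAndAnalyzeService.cs.

using System.Collections.Generic;
using System.Linq;

// Hypothetical shape of one region returned by the Custom Vision prediction API.
public class RegionPrediction
{
    public double Probability { get; set; } // confidence that the region is handwritten
    public double Left { get; set; }        // bounding box in relative coordinates
    public double Top { get; set; }
    public double Width { get; set; }
    public double Height { get; set; }
}

public static class PredictionFilter
{
    // Keep only the regions whose probability meets the threshold chosen
    // on the front-end percentage bar.
    public static List<RegionPrediction> FilterByThreshold(
        IEnumerable<RegionPrediction> predictions, double threshold)
    {
        return predictions
            .Where(p => p.Probability >= threshold)
            .OrderByDescending(p => p.Probability)
            .ToList();
    }
}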

Handwritten Text Extraction with Computer Vision

After the previous step is done, we crop the filtered sections out of the uploaded image and send each of the smaller images to the text recognition API in Cognitive Services to process the image and extract the text.

Hence, in the code, the HandwrittenRecognitionService is called to perform the image processing with the Computer Vision API version 1.0 recognizeText method.
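For context, the initial request that produces the response used below could look like this sketch; the region, subscription key, and image path are placeholder assumptions. The API replies with 202 Accepted and an Operation-Location header pointing to the result.

using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class HandwritingClient
{
    // Submit an image to the v1.0 recognizeText endpoint; handwriting=true
    // switches the API to handwritten text mode.
    public static async Task<HttpResponseMessage> SubmitHandwritingAsync(
        HttpClient client, string imagePath)
    {
        const string uri =
            "https://southeastasia.api.cognitive.microsoft.com/vision/v1.0/recognizeText?handwriting=true";

        client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key");
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<subscription-key>");

        var content = new ByteArrayContent(File.ReadAllBytes(imagePath));
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        // A successful call returns 202 Accepted with an empty body; the
        // Operation-Location response header points to the analysis result.
        return await client.PostAsync(uri, content);
    }
}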

There is an interesting do…while loop in the method. The loop is used to wait for the API to return the image processing results. It turns out that most of the time, the API will not return the result directly. Instead, it returns a JSON object telling us that it is still processing the image. Only when it returns the JSON object with the status set to “Succeeded” do we know that the analysis result is included in the JSON object.

bool isAnalyzing = true;

do
{
    // The Operation-Location header tells us where to poll for the result.
    var textOperation = response.Headers.GetValues("Operation-Location").FirstOrDefault();

    var result = await client.GetAsync(textOperation);

    string jsonResponse = await result.Content.ReadAsStringAsync();

    var handwrittenAnalyzeResult =
        JsonConvert.DeserializeObject<HandwrittenAnalyzeResult>(jsonResponse);

    // The API keeps reporting a non-final status until the analysis completes.
    isAnalyzing = handwrittenAnalyzeResult.Status != "Succeeded";

    if (!isAnalyzing)
    {
        return handwrittenAnalyzeResult;
    }
} while (isAnalyzing);

To display the results to the user on the front end, we store the cropped images in Azure Blob Storage and then display both the images and their corresponding extracted text on the web page.
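As an illustration, uploading one cropped image with the WindowsAzure.Storage package could look like the sketch below; the container name and connection string are placeholders, not the exact code in our project.

using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class CroppedImageStore
{
    public static async Task<string> UploadCroppedImageAsync(
        string connectionString, string fileName, Stream imageStream)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var blobClient = account.CreateCloudBlobClient();

        // Container for the cropped handwriting regions; created on first use.
        var container = blobClient.GetContainerReference("cropped-images");
        await container.CreateIfNotExistsAsync();

        var blob = container.GetBlockBlobReference(fileName);
        await blob.UploadFromStreamAsync(imageStream);

        // The blob URI is what the web page uses to display the image.
        return blob.Uri.ToString();
    }
}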

Unfortunately, reading handwritten text from images is a technology that is still in preview and is only available for English text. Hence, we need to wait a while before we can deploy it for production use.

OCR with Computer Vision

Using Computer Vision to perform OCR can better detect and extract text in an image, especially when the image is a screenshot of a computer-generated PDF file.

In OpticalCharacterRecognitionService, we simply call the Computer Vision API OCR method with the uploaded image and the language set to English by default; we then easily get the OCR result back in JSON format.
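A minimal sketch of such a call is shown below; the region and subscription key are placeholders rather than the exact code in OpticalCharacterRecognitionService. Unlike recognizeText, the OCR method returns the result in a single round trip.

using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class OcrClient
{
    public static async Task<string> RecognizePrintedTextAsync(
        HttpClient client, string imagePath)
    {
        // language=en matches our default; detectOrientation helps with rotated scans.
        const string uri =
            "https://southeastasia.api.cognitive.microsoft.com/vision/v1.0/ocr?language=en&detectOrientation=true";

        client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key");
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<subscription-key>");

        var content = new ByteArrayContent(File.ReadAllBytes(imagePath));
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        var response = await client.PostAsync(uri, content);

        // The body is a JSON document describing regions, lines, and words.
        return await response.Content.ReadAsStringAsync();
    }
}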

Key Vault

Key Vault in this project is mainly used for managing the keys and the connection string to Azure Blob Storage.

Secrets of the FutureNow project in the Azure Key Vault.

To retrieve any of the secrets, we simply make use of the Microsoft.Azure.KeyVault NuGet package (together with Microsoft.Azure.Services.AppAuthentication, which provides the token provider), as shown below.

// Acquire a token for Key Vault using the app's identity (Managed Service
// Identity on Azure, or developer credentials locally).
var azureServiceTokenProvider = new AzureServiceTokenProvider();

var keyVaultClient = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

var secret = await keyVaultClient
    .GetSecretAsync($"https://futurenow.vault.azure.net/secrets/{secretName}")
    .ConfigureAwait(false);

According to the Microsoft Azure documentation, there are service limits in Key Vault to ensure the quality of service provided. Hence, when a service threshold is exceeded, further requests from the client will not get a successful response from Key Vault. Instead, HTTP status code 429 (Too Many Requests) will be returned.

There is official guidance on handling Key Vault throttling. Currently, however, the code sample provided in that guidance is incorrect because the retry and waitTime variables are not used.

Incorrect sample code provided in Microsoft Docs.

Regarding this problem, I have raised issues (#22859 and #22860) and submitted a pull request to Microsoft on Github. Currently, the PR is not yet approved, but both Bryan Lamos and Prashanth Yerramilli have agreed that the code is indeed incorrect. Anyway, in our KeyVaultService class, the code has already been corrected.
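For reference, here is a minimal sketch of the corrected pattern, with the wait time actually used and doubled on each retry. The names and retry limit are illustrative, not the exact code in our KeyVaultService.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.KeyVault.Models;

public static class KeyVaultRetry
{
    public static async Task<SecretBundle> GetSecretWithRetryAsync(
        KeyVaultClient client, string secretUrl)
    {
        var retries = 0;
        var waitTime = TimeSpan.FromSeconds(1);

        while (true)
        {
            try
            {
                return await client.GetSecretAsync(secretUrl).ConfigureAwait(false);
            }
            catch (KeyVaultErrorException ex)
                when ((int)ex.Response.StatusCode == 429 && retries < 5)
            {
                // Throttled by Key Vault: wait, then retry with exponential back-off.
                retries++;
                await Task.Delay(waitTime);
                waitTime = TimeSpan.FromSeconds(waitTime.TotalSeconds * 2);
            }
        }
    }
}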

EDIT (26 January 2019): The pull request has been approved. =)

Conclusion

Even though this is just an experimental project for us to understand more about the power of Custom Vision and Computer Vision, I am glad that through it I managed to learn more about Blob Storage, Azure DevOps, Key Vault, etc., and later share that knowledge with the Singapore .NET Developers Community members.

Special thanks to Marvin for helping me in this project.

Authenticate an Azure Function with Azure Active Directory

Today is the first working day of the new year. I thus decided to work on a question raised by our senior developer on the eve of the new year: how do we authenticate an Azure Function?

The authentication tool that I am using is Azure Active Directory (Azure AD). Azure AD provides an identity platform with enhanced security, access management, scalability, and reliability for connecting users with all our apps.

The Azure Function that I’m discussing here is an HTTP-triggered function with the .NET runtime stack and Windows as the OS.

Webhook + API: Creating a function which runs when it receives an HTTP request.

Once the trigger is created, we can proceed to set up authentication for the Function.

Firstly, under the Integrate section, we need to change the Authorization level of the Function to “anonymous”, as shown in the screenshot below. This is because both the “function” and “admin” levels use keys. What we need here is user-based authentication, hence we choose “anonymous” instead.

By default, new Functions have “function” as their authorization level.
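As a side note, for Functions developed in a class library rather than in the portal editor, the same level is set on the trigger attribute; a minimal sketch:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class SecuredFunction
{
    [FunctionName("SecuredFunction")]
    public static IActionResult Run(
        // AuthorizationLevel.Anonymous: no function key required, so the
        // user-based App Service Authentication can take over instead.
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req)
    {
        return new OkObjectResult("Hello from a function protected by Azure AD.");
    }
}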

After that, we need to turn on App Service Authentication under Platform Features of the Function.

Configuring platform features of the Function.

Besides that, we also need to specify “Log in with Azure Active Directory” as the action to take when a request is not authenticated, as illustrated below.

Turning on App Service Authentication.

Then, by clicking on Azure AD, which is listed as one of the Authentication Providers lower on the page, we can proceed to configure it. Here, we choose Express as the management mode. Then we can proceed to create a new Azure AD app.

After this, Azure AD will be labelled as “Configure (Express Mode: Create)”. We can then proceed to save the changes.

After the setting is saved, we can refresh the page and see that Azure AD is now labelled as “Configure (Express: Existing App)”. That means the Azure AD app has been created successfully.

The status of Azure AD for the Function is updated.

Now, click into Azure AD under the Authentication Providers list again. We will be brought to the section where we specified the management mode earlier. Instead of choosing Express mode, we can now proceed to choose the Advanced mode.

We will then be shown the Client ID, Issuer Url, and Client Secret. Following Ben Martens’ advice, we have to add one more record, the domain URL of the Function, to the “Allowed Token Audiences” list to make Azure AD work with this Function, as shown in the following screenshot.

After making the change, we can proceed to save it.

What we need to do next is create an Azure AD app for our client. To do so, we leave the Azure Function page and head to the Azure Active Directory page to add a New Application Registration, as shown below.

Adding new application registration.

We then need to fill in a simple form, specifying “Web app / API” as our Application Type and our Function URL as the Sign-on URL. Then we can click on the “Create” button and be brought to the page of the new app.

Next, we need to give delegated permission for the client to access our Function, following the steps highlighted in the following screenshot. Take note of the Application ID; we are going to use it later (Step A).

Giving permission.
Searching for the Function.

Then, in the next step, we need to explicitly grant access to the Function, as shown below.

Check the box to give delegated permission to access our Function.

After saving the changes, we also need to generate a key under the Azure AD app, as shown in the following screenshot. Please note down the key somewhere secure because it will only appear once (Step B).

Creating a key that never expires.

Finally, on the Azure AD home page, we need to search for the app registration that was created earlier in Express mode for the Function. Then we can retrieve the App ID URI, which is a unique URL that Microsoft Azure AD can use for the app, as shown below (Step C).

Looking for the App ID URI.

Before we proceed to test our setup on Postman, we need to make sure we have the following:

Item 01 – tenantId: This is actually the Directory ID of our Azure AD.

Locating the Directory ID. (Image source: https://stackoverflow.com/a/41028320/1177328)

Item 02 – clientId: This is basically the Application ID that we got in Step A.

Item 03 – clientSecret: This is the key we generated in Step B.

Item 04 – resource: This is the App ID URI we found in Step C.

With this information, we can then proceed to create a new environment in Postman to try it out.

Setting up the variables to be used in the environment.

After that, we make a POST request to the following URL: https://login.microsoftonline.com/{{tenantId}}/oauth2/token to retrieve the access token, as shown in the following screenshot.

This step is to get the access token.
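Outside of Postman, the same token request can be made in code. Below is a minimal C# sketch using the four items above; the parameter values are placeholders.

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenClient
{
    public static async Task<string> GetAccessTokenAsync(
        HttpClient client, string tenantId, string clientId,
        string clientSecret, string resource)
    {
        var tokenEndpoint = $"https://login.microsoftonline.com/{tenantId}/oauth2/token";

        // Client credentials flow: the client app authenticates as itself.
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = clientId,         // Application ID from Step A
            ["client_secret"] = clientSecret, // key from Step B
            ["resource"] = resource           // App ID URI from Step C
        });

        var response = await client.PostAsync(tokenEndpoint, form);

        // The JSON body contains access_token; parse it with your
        // preferred JSON library before use.
        return await response.Content.ReadAsStringAsync();
    }
}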

With the access token now available, we can then make a request to our Function, as shown below.

Yay, we can call the Function successfully.
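With the token, calling the Function is an ordinary HTTP request carrying a bearer token; a minimal sketch is shown below (the Function URL is a placeholder).

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class FunctionCaller
{
    public static async Task<string> CallFunctionAsync(
        HttpClient client, string accessToken)
    {
        var request = new HttpRequestMessage(
            HttpMethod.Get,
            "https://<your-function-app>.azurewebsites.net/api/<function-name>");

        // Without this header, App Service Authentication rejects the request.
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        var response = await client.SendAsync(request);
        return await response.Content.ReadAsStringAsync();
    }
}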

If we don’t provide the bearer token, we will not be able to access the Function. There will be an error message saying “You do not have permission to view this directory or page.”

Users with no correct credentials are blocked.

That’s all it takes to set up simple authentication for an Azure Function with Azure AD. If you find anything wrong above, feel free to correct me by leaving a message in the comment section. Thanks!


#azure, #azure-functions, #postman, #serverless-architecture

When WordPress Meets Azure SSL

In the afternoon, I received a message from my colleague in the Marketing Team asking whether we could purchase an SSL certificate for the company blog, which is powered by WordPress on Azure. There is almost no complete online tutorial on how to do this, hence I decided to write one.

Purchasing SSL Certificate and Binding it to Azure Web App

We can now easily purchase an SSL certificate from the Azure Portal for less than USD 70 and enjoy auto-renewal by default. By following the steps I documented on my Github page, we can easily bind the certificate to the WordPress site, which is running as an Azure Web App.

After that, we need to set the HTTPS Only option to “On” so that all HTTP traffic is redirected to HTTPS.

Updating WordPress Address and Site Address

After that, we proceed to wp-admin to update the addresses. By default, for WordPress sites running as Azure Web Apps, the two fields, i.e. WordPress Address and Site Address, are greyed out, as shown in the following screenshot.

We have no choice but to change HTTP to HTTPS in the URLs in wp-config.php in the wwwroot directory, which we can download via FTP. The two lines that we need to update to use HTTPS are shown below.

// Relative URLs for swapping across app service deployment slots
define('WP_HOME', 'https://' . filter_input(INPUT_SERVER, 'HTTP_HOST', FILTER_SANITIZE_STRING));
define('WP_SITEURL', 'https://' . filter_input(INPUT_SERVER, 'HTTP_HOST', FILTER_SANITIZE_STRING));

Updating wp-config.php

At this point, we will realize we can no longer enter the wp-admin web page. The browser will say our site redirected too many times, or that there is a redirect loop, as shown in the following image.

Oh no…

What we need to do, as recommended by thaevok on WordPress StackExchange, is to still add $_SERVER['HTTPS'] = 'on', as shown in the following code.

define('FORCE_SSL_ADMIN', true);

// In some setups HTTP_X_FORWARDED_PROTO might contain
// a comma-separated list, e.g. http,https,
// so check for the existence of https.
if (strpos($_SERVER['HTTP_X_FORWARDED_PROTO'], 'https') !== false) {
    $_SERVER['HTTPS'] = 'on';
}

Yup, after doing all this, we have our blog secured.

Haulio blog is up!

#https, #php, #ssl, #wordpress

First Step into Orchard Core

This afternoon, I decided to take a look at Orchard Core, an open-source CMS (Content Management System) built on top of the ASP.NET Core application framework.

Since it is open-source, I easily forked its repository from Github and then checked out its dev branch.

After waiting for less than one minute for all the NuGet packages in the project to be restored, I set OrchardCore.Cms.Web as the default project. Then I tried to run it, but it failed with tons of errors. One of the major errors was “Assembly location for Razor SDK Tasks was not specified”. According to online discussion, it turns out that .NET Core 2.2 is needed.

After downloading the correct SDK, the projects built successfully, and the following web page popped up as a result.

Take note that, as shown in the screenshot above, when I filled in the Table Prefix, it threw an exception saying “SqlException: Invalid object name ‘OrchardroadDocument’” during the setup stage, as shown in the following screenshot.

Hence, the best way to proceed is to not enter anything in the Table Prefix textbox. Then we will be able to set up our CMS successfully. Once we log in to the system as the Super User, we can proceed to configure the CMS.

Yup, this concludes my first attempt with the new Orchard Core CMS. =)

#cms, #open-source, #orchard, #technology

[KOSD Series] Increase Memory Allocated to PHP in WordPress hosted on Microsoft Azure App Services on Linux


“It became clear that we needed to support the Linux operating system, and we had already taken some rudimentary steps towards that with Azure.”

This is what Satya Nadella, Microsoft’s CEO, said in his book Hit Refresh. With the change he announced, today we can host a WordPress site easily on Microsoft Azure with the App Service on Linux option. My team currently makes use of this capability on Azure to host our WordPress sites.


Satya Nadella announcing the partnership. (Image Credit: The Verge)

This morning, I received a message from a teammate with the following screenshot, asking how to get rid of the red messages.


Memory issues on WordPress!

This only happened after we installed a new theme from G5Theme for our WordPress site. The theme that we are using is called G5Plus Mowasalat.

So how do we approach this problem? Even though the three red lines all link to the same “Increasing memory allocated to PHP” article, there are fundamentally two places that we need to change.

Firstly, we need to add the following line to wp-config.php to increase WP_MEMORY_LIMIT to 128M.

define('WP_MEMORY_LIMIT', '128M');

Released with WordPress 2.5, the WP_MEMORY_LIMIT option allows us to specify the maximum amount of memory that can be consumed by PHP. The file is located under the /site/wwwroot directory, as shown in the FTP screenshot below.


This is where wp-config.php is located.

Changing this will only remove the first two red lines.

For the issue highlighted by the third red line, we need to update the max_input_vars value in the .htaccess file, which is located in the same directory, with the following line.

php_value max_input_vars 3000

This max_input_vars is one of the PHP runtime configurations, introduced in PHP 5.3.9 with a default value of 1,000. It simply sets the maximum number of input variables that can be accepted, for example in $_GET and $_POST.

Adding this will remove the final red line, and everything will be shown in green.


Hola! All are green.

KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

Connecting Azure VM with Singtel Meg@POP


Singtel Meg@POP IP VPN is a new service provided by Singtel, the largest mobile network operator in Singapore. According to its official website, it is designed for retail businesses with multiple sites, and it can provide a more secure network compared to an Internet VPN. It leverages MPLS (Multi-Protocol Label Switching) technology, which bypasses the Internet and reduces exposure to cyberthreats.

One thing that I’d like to highlight here is that Singtel Meg@POP also offers connections to major cloud providers, such as Alibaba Cloud, Amazon Web Services, and Microsoft Azure, via its Cloud Gateway. Hence, if we have our services hosted on the cloud and our systems would like to talk to applications running behind Singtel Meg@POP, we need to understand how to configure our cloud infrastructure to connect to Singtel Meg@POP.


How Meg@POP works with the public clouds. (Source: Singtel Meg@POP)

In this article, I will be sharing my journey of setting up our VM on Microsoft Azure to link with Singtel Meg@POP via ExpressRoute.

Step 1: Subscribing to the ExpressRoute Service

Azure ExpressRoute lets us create private connections between Azure datacentres and on-premises infrastructure. One good thing about ExpressRoute is that the connection does not go over the public Internet, and thus it is able to offer more reliable and faster connections.

Hence, to connect with Singtel Meg@POP, Singtel staff recommended that we subscribe to ExpressRoute on Microsoft Azure before they could provision the Meg@POP service.

It is better to consult with Singtel before proceeding to subscribe to ExpressRoute. In the first step of subscribing, we need to provide information such as the Provider and Peering Location. After discussing with the friendly Singtel sales manager from the Business Segment, we managed to get the correct values to set up the ExpressRoute circuit.


Creating new ExpressRoute circuit on Azure Portal to connect to Singtel Meg@POP.

Step 2: Provisioning Meg@POP

Once the circuit is created successfully, we need to provide the Service Key of the circuit to Singtel staff. The Service Key can be found in the Overview section of the circuit, as shown in the screenshot below.


Service Key of ExpressRoute circuit.

After we emailed the Service Key to Singtel, we needed to wait for them to provision Meg@POP. The whole process took about 21 days in our case. Finally, we received a confirmation email from them saying that Singtel had commissioned the service and we could proceed to link our virtual network in Microsoft Azure to the ExpressRoute circuit.

Now, under the Peerings section of the ExpressRoute circuit, we should see something like the following.


Primary and secondary subnets are provisioned for Azure private peering.

Step 3: Creating Virtual Network on Azure

This is a step where we need to be careful. Before we proceed to create the VNet, we need to find out from the service provider we are connecting to whether they have only provisioned a certain subnet for us to use for the connection.

In our case, the service provider we are connecting to told us to use the 10.10.1.0/24 subnet. Hence, when creating the VNet, we need to use that as the Address Space.

Also, please take note that the Address Range of the subnet that we are going to put our virtual machine in later needs to be smaller than the Address Space of the VNet specified above; otherwise, we will not have any addresses left for the Virtual Network Gateway later. Hence, in our case, I chose 10.10.1.0/25, which takes up only half of the 256 addresses in the /24 Address Space.


Creating VNet with a subnet having only 128 addresses.

Step 4: Creating Virtual Machine

Next, we proceed to create a new VM. In the Networking tab, we are required to configure the VNet for the VM.

In this step, we need to choose the VNet and Subnet that we just created in Step 3. After that, for the convenience of RDP-ing directly into the VM, we also need to set a Public IP and make sure the Public inbound ports include the RDP port 3389.


Configuring the network interface of a VM.

Step 5: Opening Inbound and Outbound Ports

After the VM is set up successfully, we need to configure the inbound and outbound port rules for the VM. This step is only necessary if we are asked to use certain ports to communicate with services hosted behind Meg@POP.

This step can be easily done in the Network Security Group of the VM.


Inbound and outbound security rules applied for a VM.

Step 6: Creating Virtual Network Gateway

We now need to create the Virtual Network Gateway, together with its subnet, in one go.

A Virtual Network Gateway consists of two or more VMs which are deployed to the Gateway Subnet. The VMs are configured to contain routing tables and gateway services specific to the gateway. Thus, we are not allowed to configure these VMs directly, and we are advised never to deploy additional resources to the Gateway Subnet.

There is one important step where we need to make sure we choose “ExpressRoute” as the Gateway Type, as shown in the screenshot below.


Remember to choose ExpressRoute as the Gateway Type!

For the Gateway SKU, we are given three options: Standard, High Performance, and Ultra Performance. As a start, we chose the Standard SKU, which costs the least among the three.


Estimated performances by different gateway SKUs. (Source: Azure ExpressRoute)

Finally, after choosing the VNet for the gateway, we will be prompted to specify the Address Range for the Gateway Subnet. In our case, I made it a bit smaller: 10.10.1.0/28.

Step 7: Creating Connection between ExpressRoute and VNet

Finally, we have to link up our VNet with the ExpressRoute.

To do so, we simply head to the Connections section of the ExpressRoute circuit to add the Virtual Network Gateway to it.


The table shows one connection successfully added to the circuit.

Conclusion


End results.

Yes, that’s all. This learning process took me about two weeks. Hence, if you spot any mistakes in my article, please let me know. Thank you in advance.

If you would like to learn more about this, there is a very good tutorial video on Channel 9 in which they talk about Hybrid Networking! I learnt most of this knowledge from that tutorial video, so I hope you find it useful as well. =)

Together, we learn faster!