Connecting Android App with IdentityServer4

android-identity-server-appauth.png

For ASP .NET web developers, IdentityServer should be quite familiar, especially to those looking for an SSO solution.

After successfully integrating Identity Server in our ASP .NET Core MVC web applications, it is now time for us to research how our mobile app can integrate with IdentityServer4 too.

Background

We have two types of users. The admin will be logging in to the system via our web application. The normal staff will log in to the system via mobile app. Different sets of features are provided for both web and mobile apps.

Setting up Client on Identity Server

To begin, we need to add a new client to the client store of Identity Server.

Following the sample code done by Hadi Dbouk, we set up the new client as shown in the following code.

using IdentityServer4.Models;
using static IdentityServer4.IdentityServerConstants;
...
public class ClientStore : IClientStore {
    ...
    var availableClients = new List<Client>();
    ...
    availableClients.Add(new Client 
    {
        ClientId = "my-awesome-app",
        ClientName = "My Awesome App",
        AllowedGrantTypes = GrantTypes.Code,
        RequirePkce = true,
        RequireConsent = false,
        ClientSecrets = 
        {
            new Secret("my-secret".Sha256())
        },
        RefreshTokenUsage = TokenUsage.ReUse,
        RedirectUris = { "gclprojects.chunlin.myapp:/oauth2callback" },
        AllowedScopes = 
        {
            StandardScopes.OpenId,
            StandardScopes.Profile,
            StandardScopes.Email,
            StandardScopes.OfflineAccess
        },
        AllowOfflineAccess = true
    }
    );
}

For mobile apps, there are two recommended grant types, i.e. Authorization Code and Hybrid. However, as of when this post was written, the support for Hybrid was still not mature in AppAuth for Android, so we decided to use GrantTypes.Code instead.

However, OAuth 2.0 clients using authorization codes can be attacked. In the attack, the authorization code returned from an authorization endpoint is intercepted within a communication path that is not protected by TLS. To mitigate the attack, PKCE (Proof Key for Code Exchange) is required.
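
To see what PKCE actually does, here is a conceptual C# sketch of the S256 derivation defined in RFC 7636. PkceSketch is a hypothetical helper of mine; AppAuth for Android generates and submits these values for us (see CodeVerifierUtil later in this post), and IdentityServer is the party that verifies them.

using System;
using System.Security.Cryptography;
using System.Text;

public static class PkceSketch
{
    // The client generates a high-entropy random code verifier
    // before starting the authorization request.
    public static string GenerateCodeVerifier()
    {
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        return Base64UrlEncode(bytes);
    }

    // Only the derived challenge travels with the authorization request.
    // The verifier itself goes directly to the token endpoint later, so an
    // intercepted authorization code alone is useless to an attacker.
    public static string DeriveCodeChallenge(string codeVerifier)
    {
        using (var sha256 = SHA256.Create())
        {
            var hash = sha256.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier));
            return Base64UrlEncode(hash);
        }
    }

    private static string Base64UrlEncode(byte[] data) =>
        Convert.ToBase64String(data).TrimEnd('=').Replace('+', '-').Replace('/', '_');
}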

We don’t have a consent screen for our apps, so we set RequireConsent to false.

For the RefreshTokenUsage, there are two possible values, i.e. ReUse and OneTime. The only difference is that ReUse keeps the refresh token handle the same when refreshing tokens, while OneTime updates the refresh token handle once the tokens are refreshed.

Once the authorization flow is completed, users will be redirected to a URI. As documented in the AppAuth for Android README, a custom scheme based redirect URI (i.e. one of the form “my.scheme:/path”) should be used for the authorization redirect, because it is the most widely supported across Android versions.

By setting AllowOfflineAccess to true and giving the client access to the offline_access scope, we allow it to request refresh tokens for long-lived API access.

Android Setup: Installation of AppAuth

The version of AppAuth for Android is v0.7.0 at the time of writing. To install it for our app, we first need to declare it in build.gradle (Module: app).

apply plugin: 'com.android.application'

android {
    ...
    defaultConfig {
        ...
        minSdkVersion 21
        targetSdkVersion 26
        ...
        manifestPlaceholders = [
            'appAuthRedirectScheme': 'gclprojects.chunlin.myapp'
        ]
    }
    ...
}

dependencies {
    ...
    compile 'com.android.support:appcompat-v7:26.+'
    compile 'com.android.support:design:26.+'
    compile "com.android.support:customtabs:26.0.0-alpha1"
    compile 'net.openid:appauth:0.7.0'
    ...
}

 

appauth-code-flow.png

AppAuth for Android authorization code flow. (Reference: The proper way to use OAuth in a native app.)

Android Setup: Updating Manifest

In AndroidManifest.xml, we need to register the redirect scheme with the RedirectUriReceiverActivity, as shown in the following code.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="gclprojects.chunlin.myapp">
    ...
    <application ...>
        <activity
            android:name="net.openid.appauth.RedirectUriReceiverActivity"
            android:theme="@style/Theme.AppCompat.NoActionBar">
            <intent-filter>
                <action android:name="android.intent.action.VIEW"/>

                <category android:name="android.intent.category.DEFAULT"/>
                <category android:name="android.intent.category.BROWSABLE"/>

                <data android:scheme="gclprojects.chunlin.myapp"/>
            </intent-filter>
        </activity>
    </application>
    ...
</manifest>

Android Setup: Authorizing Users

On the Android app, we will have one “Login” button.

<Button
    android:onClick="Login"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Login"
    android:layout_centerInParent="true"/>

By clicking on it, the authorization steps will begin.

public void Login(View view) {
    AuthManager authManager = AuthManager.getInstance(this);
    AuthorizationService authService = authManager.getAuthService();

    // Build the authorization request with our client ID, the "code"
    // response type, the redirect URI registered on Identity Server,
    // and the scopes we are allowed to request.
    AuthorizationRequest.Builder authRequestBuilder = new AuthorizationRequest
            .Builder(
            authManager.getAuthConfig(),
            "my-awesome-app",
            "code",
            Uri.parse("gclprojects.chunlin.myapp:/oauth2callback"))
            .setScope("openid profile email offline_access");

    // Generate a PKCE code verifier and keep it so that the token
    // exchange later can prove we initiated this request.
    String codeVerifier = CodeVerifierUtil.generateRandomCodeVerifier();
    SharedPreferencesRepository sharedPreferencesRepository = new SharedPreferencesRepository(this);
    sharedPreferencesRepository.saveCodeVerifier(codeVerifier);

    authRequestBuilder.setCodeVerifier(codeVerifier);

    AuthorizationRequest authRequest = authRequestBuilder.build();

    // Hand over to the browser; LoginAuthActivity will be launched
    // once the authorization response comes back.
    Intent authIntent = new Intent(this, LoginAuthActivity.class);
    PendingIntent pendingIntent = PendingIntent.getActivity(this, authRequest.hashCode(), authIntent, 0);

    authService.performAuthorizationRequest(
            authRequest,
            pendingIntent);
}

The code above uses some other classes and interacts with another activity. I won’t talk about them here because the code can be found in my GitHub repository, which is forked from Hadi Dbouk’s.

Android Setup: Post Authorization and Refresh Token

According to the code in LoginAuthActivity.java, if the login fails, the user will be brought back to the Login activity. However, if it succeeds, the user can then reach other activities in the app which require the user to log in first. We can also then get the Access Token, Refresh Token, and ID Token from authManager. With the Access Token, we can then access our backend APIs.

Since access tokens have finite lifetimes, refresh tokens allow requesting new access tokens without user interaction. In order for the client to be allowed to request a Refresh Token, we need to authorize it by setting AllowOfflineAccess to true. When we make a request to our APIs, we need to check whether the Access Token has expired; if so, we need to make a new request with the Refresh Token to the IdentityServer to get a new Access Token.

How we can retrieve a new Access Token with a Refresh Token in AppAuth is shown in the TokenTimer class in TokenService.java, using createTokenRefreshRequest.

private class TokenTimer extends TimerTask {
    ...

    @Override
    public void run() {

        if (MyApp.Token == null)
            return;

        final AuthManager authManager = AuthManager.getInstance(TokenService.this);
        final AuthState authState = authManager.getAuthState();

        if (authState.getNeedsTokenRefresh()) {
            // Get new tokens using the refresh token. The client secret
            // must match the one configured on Identity Server.
            ClientSecretPost clientSecretPost = new ClientSecretPost("my-secret");
            final TokenRequest request = authState.createTokenRefreshRequest();
            final AuthorizationService authService = authManager.getAuthService();

            authService.performTokenRequest(request, clientSecretPost, new AuthorizationService.TokenResponseCallback() {
                @Override
                public void onTokenRequestCompleted(@Nullable TokenResponse response, @Nullable AuthorizationException ex) {
                    if (ex != null) {
                        ex.printStackTrace();
                        return;
                    }
                    authManager.updateAuthState(response, ex);
                    // The sample keeps the latest ID Token in a static field.
                    MyApp.Token = authState.getIdToken();
                }
            });
        }
    }
}

Conclusion

Yup, that’s all for integrating IdentityServer with an Android app to provide a seamless login experience to our users. If you find any mistake in this article, kindly let me know in the comment section. Thanks in advance!


Setup Ubuntu Server at Tokyo and Transform it to Desktop with RDP Installed

vultr-ubuntu-xrdp-xfce

While waiting for lunch, it’s nice to do some warmups. Setting up a server overseas seems like a pretty cool warmup for developers, right? Recently, my friend recommended that I try out Vultr, which provides cloud servers. So today, I’m going to share how I deploy an Ubuntu server located in Tokyo, a city far away from where I am now.

Step 1: Choosing Server Location

Vultr is currently available in many cities in popular countries such as Japan, Singapore, Germany, the United States, and Australia.

server-location.png

Step 2: Choosing Server Type and Size

Subsequently, we will be asked to select the type and size of the server. Here, I choose a 60 GB SSD server with Ubuntu 16.04 x64 installed. I tried Ubuntu 17.10 x64 before, but I couldn’t successfully RDP into it, and I haven’t yet tried the latest Ubuntu 18.04 x64. So ya, we will stick to Ubuntu 16.04 x64 in this article.

vultr-ubuntu-pricing.png

Step 3: Uploading SSH Key

Vultr kindly provides a tutorial about generating SSH keys on Windows and Linux.

The steps for creating an SSH key on Windows with PuTTYgen are as follows.

Firstly, we need to click on the “Generate” button on PuTTYgen.

generate-key-pair.png

Secondly, once the Public Key is generated, we need to enter a key passphrase for additional security.

Thirdly, we click on the “Save Private Key” button to save the private key somewhere safe.

Fourthly, we copy all of the text in the Public Key field and paste it to the textbox in Vultr under the “Add SSH Key” section.

adding-ssh-key.png

Step 4: Naming and Deployment

Before we can deploy the server, we need to key in the hostname for the new server.

After we have done that, we can instruct Vultr to deploy the server by clicking on the “Deploy Now” button at the bottom of the page.

Within 5 minutes, the server should finish installing and booting up.

Step 5: Getting IP Address, Username, and Password

In order to get the user credentials to access the server, we need to click on “Server Details” to view the IP address, username, and password.

Step 6: Updating Root Password

The default password is not user-friendly. Hence, once we log in to the server via PuTTY, we should immediately update the root password using the command below, for our own good.

# passwd

Step 7: Installing Ubuntu Desktop

Firstly, let’s do some updating for the packages via the following commands.

# sudo apt-get update
# sudo apt-get upgrade

This will take about 2 minutes to finish.

Then we can proceed to install the default desktop using the following command.

# sudo apt-get install ubuntu-desktop

This will take about 4 minutes to finish. Take note that at this stage, Unity will be the desktop environment.

After that, we update the packages again.

# sudo apt-get update

Step 8: Installing Text Editor

We are going to change some configurations later, so we will need a text editor. Here, I’ll use the Nano text editor, so let’s install it first.

# sudo apt-get install nano

Step 9: Installing xrdp

xrdp is an open source Remote Desktop Protocol (RDP) server which provides a graphical login to remote machines. This helps us to connect to the server using Microsoft Remote Desktop Client.

# sudo apt-get install xrdp

Step 10: Changing to Use Xfce Desktop Environment

We will then proceed to install Xfce which is a lightweight desktop environment for UNIX-like operating systems.

# sudo apt-get install xfce4

After it is installed successfully, please run the following command. This tells the Ubuntu server that Xfce has been chosen to replace Unity as the desktop environment.

# echo xfce4-session >~/.xsession

Step 11: Configuring xrdp Settings

We need to configure the xrdp settings by editing the startwm.sh in Nano Text Editor.

# nano /etc/xrdp/startwm.sh

We need to edit the file by replacing its entire content with the following.

#!/bin/sh

if [ -r /etc/default/locale ]; then
  . /etc/default/locale
  export LANG LANGUAGE
fi

startxfce4

Then we need to restart xrdp.

# sudo service xrdp restart

After that, we restart the server.

# reboot now

Step 12: Connecting with Remote Desktop Client

After the server has been restarted, we can access the server with Windows Remote Desktop Client.

rdp

At this point, some of you may encounter an error when logging in via RDP. The error messages look as follows.

Connecting to sesman IP 127.0.0.1 port 3350
sesman connect ok
sending login info to session manager, please wait...
xrdp_mm_process_login_response:login successful for display
started connecting
connecting to 127.0.0.1 5910
error-problem connecting

problem-connecting.png

Problem of connecting via xrdp.

As pointed out in one of the discussion threads on Ask Ubuntu, the problem seems to be that xrdp, vnc4server, and tightvncserver were installed in the wrong order. To fix that, we just need to remove them and re-install them in the correct order with the following set of commands.

# sudo apt-get remove xrdp vnc4server tightvncserver
# sudo apt-get install tightvncserver
# sudo apt-get install xrdp
# sudo service xrdp restart

After the server is restarted, we should have no problem accessing our server via RDP client on Windows.

success.png


[KOSD Series] Ready ML Tutorial One

kosd-azure-machine-learning.png

During the Labour Day holiday, I had a great evening chat with Marvin, my friend who has researched a lot about Artificial Intelligence and Machine Learning (ML). He guided me through the steps of setting up a simple ML experiment. Hence, I decided to note down what I learned that day.

The tool that we’re using is Azure Machine Learning Studio. What I learned from Marvin is basically how to create an ML experiment by drag-and-dropping modules and connecting them together. It may sound simple, but for a beginner like me, it is still important to understand some key concepts and steps before going further into the ML field.

Azure ML Studio

Azure ML Studio is a tool for us to build, test, and deploy predictive analytics on our data. There is a detailed diagram about the capabilities of the tool, which can be downloaded here.

ml_studio_overview_v1.1.png

Capability of Azure ML Studio (Credits: Microsoft Azure Docs)

Step 0: Defining Problem

Before we begin, we need to understand what we are using ML for.

Here, I’m helping a watermelon stall to predict how many watermelons they can sell this year based on last year’s sales data.

Step 1: Preparing Data

As shown in the diagram above, the first step is to import the data into the experiment. So, before we can even start, we need to make sure that we have at least a handful of data points.

data.png

Daily sales of the watermelon stall and the weather of the day.

Step 2: Importing Data to ML Studio

With the data points we now have, we then can import them to ML Studio as a Dataset.

datasets.png

Datasets available in Azure ML Studio.

Step 3: Preprocessing Data

Firstly, we need to perform a cleaning operation so that missing data can be handled properly without affecting our results later.

Secondly, we need to “Select Columns in Dataset” so that only selected columns will be used in the subsequent operations.

Step 4: Splitting Data

This step helps us separate the data into training and testing sets.

Step 5: Choosing Learning Algorithm

Since we are using the model to predict the number of watermelons the stall can sell, which is a number, we’ll use the Linear Regression algorithm, as recommended. There is a cheat sheet from Microsoft telling us which algorithm to choose in different scenarios. You can also download it here.

machine-learning-algorithm-cheat-sheet-small_v_0_6-01.png

Learning algorithm cheat sheet. (Image Credits: Microsoft Docs)

Step 6: Partitioning and Sampling

Sampling is an important tool in machine learning because it reduces the size of a dataset while maintaining the same ratio of values. If we have a lot of data, we might want to use only the first n rows while setting up the experiment, and then switch to using the full dataset when we build our model.

Step 7: Training

After choosing the learning algorithm, it’s time for us to train the model.

Since we are going to predict the number of watermelons sold, we will select that column as the label, as shown in the following screenshot.

train.png

Select the column that we need to predict in the Train Model module.

Step 8: Scoring

Do you still remember that we split our data into two sets in Step 4 above? Now, we need to connect the outputs from the Split Data module and the Train Model module to the Score Model module as inputs. This step scores predictions for our regression model.

Step 9: Evaluating

Finally, we generate scores over the data and evaluate the model based on those scores.

Step 10: Deploying

Now that we’ve completed the experiment setup, we can deploy it as a predictive web service.

predictive-experiment.png

Generated predictive experiment.

With that deployed, we can then easily predict how many watermelons can be sold on a future date, as shown in the screenshot below.

testing.png

Yes, we can sell 25 watermelons on 7th May if the temperature is 32 degrees!

Conclusion

 

This is just the very beginning of setting up an ML experiment on Azure ML Studio. I am still very new to this AI and ML stuff. If you spot any problem in my notes above, please let me know. Thanks in advance!


KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

[KOSD Series] Azure App Service Diagnostics

kosd-app-service-diagnose.png

Last week, one of our web apps seemed to be running slow. Thus, we decided to diagnose the web app, which was hosted on Azure App Service. Fortunately, there is a smart chatbot in Azure App Service that helps us troubleshoot our web app.

The diagnostics chatbot can be found under the “Diagnose and solve problems” page of the web app. The chatbot suggests that it can help us check the following issues.

  1. Web App Down;
  2. Web App Slow;
  3. High CPU Usage;
  4. High Memory Usage;
  5. Web App Restarted;
  6. TCP Connections.

diagnose-and-solve-problems

“Diagnose and solve problems” option is available under each Azure App Service.

In addition, it also provides a set of Diagnostic Tools for popular software stacks on Azure App Service, such as ASP .NET Core, ASP .NET, Java, and PHP.

diagnostic-tools.png

Available Diagnostic Tools for each software stack of the web app.

Health Checkup

The chatbot says quite a lot in the first run. If we continue to scroll down to the bottom, we will realize that the chatbot also recommends running a Health Checkup on our web app first, which gives us a summary of its requests, errors, performance, CPU usage, and memory usage.

For example, the following graph shows that my web app was experiencing HTTP server errors, with a report about the errors attached. The report lists the errors that happened during that period of time together with the affected URLs.

Sometimes, if there is a common solution available to the problems, troubleshooting steps will also be listed to help us fix the errors.

requests-and-errors

The “App Performance” diagram basically shows us how long the server took to respond over the period of time. If the web app is performing slowly, it will sometimes recommend collecting a memory dump to identify the root cause of the issue.

The “CPU Usage” tab has a diagram showing the overall CPU usage per instance. If any high CPU usage was detected in the last 24 hours, a warning will be displayed too.

The “Memory Usage” tab provides diagrams showing the following numbers.

  • Page Operations: The rate at which the disk was read to resolve hard page faults, per second.
  • Overall Percent Physical Memory Usage: The overall percent memory in use by both system and the application on each instance.
  • Application Percent Physical Memory Usage: The percent physical memory usage of each application on an instance.
  • Committed Memory Usage: Amount of committed memory, in MB. Committed memory is the physical memory which has space reserved on the disk paging file(s).

TCP Connections Analysis

TCP Connections Analysis is one of the analyses that is not part of the Health Checkup. We can find it under “Availability & Performance” in the chatbot. It basically provides charts showing the number of outbound TCP connections per instance over the period of time.

Is My App Restarted?

If we would like to find out whether our web app was restarted in a period of time, we can click on the “Web App Restarted” button in the chatbot to find out when and why our web app was restarted.

reasons-for-your-web-app-restart.png

So yes, changing application settings will cause the web app to restart.

Connection Strings Checking

 

One more feature that I’d like to highlight is the diagnostic tool that helps validate all the connection strings configured in our web app. It helps us to identify successful versus failing connections from the instance.

Conclusion

There are still more features available in the Azure App Service Diagnostics chatbot. I have only listed the features and tools that I use most in my daily development life. So, if you are also on your way to DevOps, feel free to discover more yourself!


KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

[KOSD Series] Read-only Users for Azure SQL Databases

kosd-azure-sql-ms-sql-server-management-studio.png

It’s quite common for business analysts to ask for permission to access the databases of our systems to do data analysis. However, most of the time we will only give them read-only access. With an on-premise MS SQL Server and SQL Management Studio, this is quite easily done. But what about databases hosted on Azure SQL?

Login as Server Admin

To make things simple, we will first log in to the Azure SQL Server as the Server admin in SQL Management Studio. The Server admin name can be found easily on the Azure Portal, as shown in the screenshot below. Its password will be the password we used when we created the SQL Server.

sql-server-admin.png

Identifying the Server Admin of an Azure SQL Server. (Source: Microsoft Azure Docs)

Create New Login

By default, master will be the default database of the Azure SQL Server. So, once we have logged in, we simply create the read-only login using the following command.

CREATE LOGIN <new-login-id-here>
    WITH PASSWORD = '<password-for-the-new-login>' 
GO

Alternatively, we can also right-click on the “Logins” folder under “Security” then choose “New Login…”, as shown in the screenshot below. The same CREATE LOGIN command will be displayed.

new-login.png

Adding new login to the Azure SQL Server.

Create User

After the new login is created, we need to create a new user which is associated with it. The user needs to be created and granted read-only permission in each of the databases that the new login is allowed to access.

Firstly, we need to expand “Databases” in the Object Explorer and then look for the database that we would like to grant the new login access to. After that, we right-click on the database and choose “New Query”. This opens up a new blank query window, as shown in the screenshot below.

new-query-to-create-user.png

Opening new query window for one of our databases.

Then we simply need to run the following query for the selected database in the query window.

CREATE USER <new-user-name-here> FROM LOGIN <new-login-id-here>;

Please remember to run this for the master database too. Otherwise, we will not be able to log in via SQL Management Studio at all with the new login, because master is the default database.

Grant Read-only Permission

Now, for this new user in the database, we need to give it read-only permission. This can be done with the following command.

EXEC sp_addrolemember 'db_datareader', '<new-user-name-here>';

Conclusion

Repeat the two steps above for the remaining databases that we want the new login to have access to. Finally, we will have a new login that can read from only selected databases on the Azure SQL Server.


KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

[KOSD Series] Certificate for Signing JWT on IdentityServer

KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

kosd-identity-server-dotnet-core-openssl-app-service

Last year, Riza shared a very interesting topic twice during Singapore .NET Developers Community meetups at the Microsoft office. For those who attended the meetups, do you still remember? Yes, it’s about IdentityServer.

IdentityServer4 is middleware, an OpenID Connect provider built to spec, which provides user identity and access control in ASP .NET Core applications.

In my example, I will start with the simplest setup, where there will be one Authenticate Server and one Application Server. Both will be using ASP .NET Core.

jwt-with-signin-and-verification-flow.png

How an application uses JWT to authenticate a user.

In the Authenticate Server, I register the minimum required dependencies in the ConfigureServices method of its Startup.cs as follows.

services.AddIdentityServer()
     .AddDeveloperSigningCredential()
     .AddInMemoryIdentityResources(...)
     .AddInMemoryApiResources(...)
     .AddInMemoryClients(...)
     .AddAspNetIdentity<ApplicationUser>();

I won’t be talking about how IdentityServer works here. Instead, I will be focusing on the “AddDeveloperSigningCredential” method here.

JSON Web Token (JWT)

By default, IdentityServer issues access tokens in the JWT format. According to the abstract definition in RFC 7519 from the Internet Engineering Task Force (IETF), JWT is a compact, URL-safe means of representing claims between two parties, where claims are encoded as JSON objects which can be digitally signed or encrypted.

In the diagram above, the Application Server receives from the Authentication Server the key needed to verify the JWT signature when the app sets up its authentication process; with asymmetric signing, this is the public key of the signing key pair. Hence, the app can verify whether the JWT comes from an authentic source.
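
To make this concrete, here is a minimal sketch (my own illustration, not code from the original post) of how the Application Server side might be configured in ASP .NET Core 2.x with the Microsoft.AspNetCore.Authentication.JwtBearer package. The authority URL and audience name below are placeholders.

using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication("Bearer")
        .AddJwtBearer("Bearer", options =>
        {
            // The middleware downloads the signing (public) keys from the
            // Authenticate Server's OpenID Connect discovery document and
            // uses them to verify the signature of incoming JWTs.
            options.Authority = "https://localhost:5000"; // Authenticate Server
            options.Audience = "my-api";                  // placeholder API name
        });

    services.AddMvc();
}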

AddDeveloperSigningCredential

IdentityServer uses an asymmetric key pair to sign and validate JWTs. We can use AddDeveloperSigningCredential to set this up. In previous versions of IdentityServer4, this method was called AddTemporarySigningCredential.

During development, we normally don’t have a cert prepared yet. Hence, AddTemporarySigningCredential could be used to auto-generate a certificate to sign JWTs. However, this method had a disadvantage: every time the IdentityServer was restarted, the certificate changed, so all tokens that had been signed with the previous certificate would fail to validate.

This situation was fixed when AddDeveloperSigningCredential was introduced to replace the AddTemporarySigningCredential method. This new method still creates a temporary certificate at startup time, but it is now able to persist the key to the file system so that it stays stable between IdentityServer restarts.
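
As a small illustration (my own snippet, assuming IdentityServer4 2.x, rather than code from the original sample), the persistence behaviour is controlled by the method’s optional parameters:

// Persist the auto-generated developer signing key to a file so that it
// stays stable across restarts; both arguments are optional and the file
// name shown is just the conventional default. Development use only.
services.AddIdentityServer()
    .AddDeveloperSigningCredential(persistKey: true, filename: "tempkey.rsa");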

Anyway, as documented, we are only allowed to use AddDeveloperSigningCredential in development environments. In addition, it can only be used when we host IdentityServer on a single machine. What should we do when we are going to deploy our code to the production environment? We need a signing key service that will provide the specified certificate to the various token creation and validation services. For that, we need to switch to the AddSigningCredential method.

Production Code

For production, we need to change the code earlier to be as follows.

X509Certificate2 cert = null;
using (X509Store certStore = new X509Store(StoreName.My, StoreLocation.CurrentUser))
{
    certStore.Open(OpenFlags.ReadOnly);
    var certCollection = certStore.Certificates.Find(
        X509FindType.FindByThumbprint,
        Configuration["AppSettings:IdentityServerCertificateThumbprint"],
        false);
 
    // Get the first cert with the thumbprint
    if (certCollection.Count > 0)
    {
        cert = certCollection[0];
    }
}

services.AddIdentityServer()
     .AddSigningCredential(cert)
     .AddInMemoryIdentityResources(...)
     .AddInMemoryApiResources(...)
     .AddInMemoryClients(...)
     .AddAspNetIdentity<ApplicationUser>();

We use AddSigningCredential to replace the AddDeveloperSigningCredential method. AddSigningCredential requires an X509Certificate2 cert as its parameter.

Creation of Certificate with OpenSSL on Windows

It’s quite challenging to install OpenSSL on Windows. Luckily, Ben Cull, solution architect from Belgium, has shared a tutorial on how to do this easily with a tool called Win32 OpenSSL.

His tutorial can be summarized into 5 steps as follows.

  1. Install the Win32 OpenSSL and add its binaries to PATH;
  2. Create a new certificate and private key;
    openssl req -x509 -newkey rsa:4096 -sha256 -nodes -keyout cuteprogramming.key -out cuteprogramming.crt -subj "/CN=cuteprogramming.com" -days 3650
  3. Convert the certificate and private key into .pfx;
    openssl pkcs12 -export -out cuteprogramming.pfx -inkey cuteprogramming.key -in cuteprogramming.crt -certfile cuteprogramming.crt
  4. Key-in and remember the password for the private key;
  5. Import the certificate to the Current User Certificate Store on developer’s local machine by double-clicking on the newly generated .pfx file. We will be asked to key in the password used in Step 4 above again.

certificate-import-wizard.png

Importing certificate.

Now, we need to find out its Thumbprint, because in our production code above we use the Thumbprint to look for the cert.

Thumbprint and Microsoft Management Console (MMC)

To retrieve the Thumbprint of a certificate, we need help from a tool called MMC.

adding-certificate-snap-ins.png

Using MMC to view certificates in the local machine store for current user account.

We will then be able to find the new certificate that we have just created and imported. To retrieve its Thumbprint, we first need to open it, as shown in the screenshot below.

open-new-cert.png

Open the new cert in MMC.

A popup window called Certificate will appear. Simply copy the value of the Thumbprint under the Details tab.

thumbprint.png

Thumbprint!

After storing the value of the cert thumbprint in the appsettings.Development.json of the IdentityServer project, we can now build and run the project on localhost without any problem.

Deployment to Microsoft Azure Web App

Before we talk about how to deploy the IdentityServer project to a Microsoft Azure Web App, did you notice that in the code above we look for the cert only in the My/Personal store of the CurrentUser location, i.e. StoreName.My, StoreLocation.CurrentUser? This is because that is where Azure will load the certificate from.

So now, we will first upload the self-signed certificate above to the Azure Web App as a Private Certificate. After selecting the .pfx file generated above and keying in the password, the cert will appear as one of the Private Certificates of the Web App.

uploading-certificate-to-azure.png

To upload the cert, we can do it in the “SSL certificates” settings of our Web App on the Azure Portal.

Last but not least, in order to make the cert available to the app, we need to add the following setting under the “Application settings” of the Web App.

application-settings-for-cert.png

The WEBSITE_LOAD_CERTIFICATES setting is needed to make the cert available to the app.

As shown in the screenshot above, we set WEBSITE_LOAD_CERTIFICATES to have * as its value. This makes all the certificates in the Web App load into the personal certificate store of the app. Alternatively, we can load selected certificates only by keying in comma-separated thumbprints of the certificates.

Two Certificates

There is an interesting discussion in the IdentityServer3 issue tracker about the certificates used in IdentityServer projects. IdentityServer requires two certificates: one for SSL and another for signing JWTs.

In the discussion, according to Brock Allen, the co-author of the IdentityServer framework, we should never use the same cert for both purposes, and it is okay to use a self-signed cert as the signing cert.

Brock also provided a link in the discussion to his blog post on how to create a signing cert using makecert instead of OpenSSL as discussed earlier. In fact, during Riza’s presentation, he used makecert to self-sign his cert too. Hence, if you are interested in how to use makecert to do that, please read his post here: https://brockallen.com/2015/06/01/makecert-and-creating-ssl-or-signing-certificates/.

Conclusion

This episode of the KOSD series is a bit long, such that drinking a large cup of hot KOSD while reading this post seems to be a better idea. Anyway, I think this post will help me and other beginners who are using IdentityServer in their projects to understand the framework bit by bit.

There are many more things we can learn in the IdentityServer project, and I hope to share what I’ve learnt about this fantastic framework in future posts. Stay tuned.


[KOSD Series] Discussion about Cosmos DB Performance

KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

kosd-cosmos-db.png

During a late dinner with my friend on 12 January last month, he commented that he had encountered a very serious performance problem in retrieving data from Cosmos DB (pka DocumentDB). It’s quite strange because, in our IoT project, which also stores millions of records in Cosmos DB, we never had this problem.

Two weeks later, on 27 January, he happily showed me his improved version of the code which could query the data in about one to two seconds.

Yesterday, after having a discussion, we further improved the code. Hence, I’d like to write down this learning experience here.

Preparation

Since we couldn’t demonstrate using the real project code, I created a sample project that gets data from a database and collection on my personal Azure Cosmos DB account. The database contains one collection which holds 23,967 records of Student data.

The Student class and the BaseEntity class that it inherits from are as follows.

using System;
using Newtonsoft.Json;

public class Student : BaseEntity
{
    public string Name { get; set; }

    public int Age { get; set; }

    public string Description { get; set; }
}

public abstract class BaseEntity
{
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    public string Type { get; set; }

    public DateTime CreatedAt { get; set; } = DateTime.Now;
}

You may wonder why I have Type defined.

Type and Cost Saving

The reason for having Type is that, before DocumentDB was rebranded as Cosmos DB in May 2017, DocumentDB pricing was based on collections. Hence, the more collections we had in the database, the more we needed to pay.

confused-about-documentdb-pricing.png

DocumentDB was billed per collection in the past. (Source: Stack Overflow)

To overcome that, we squeeze the different types of entities into the same collection. So, in the example above, let’s say we have three classes, Student, Classroom, and Teacher, that inherit from BaseEntity; then we will put the data of all three classes in the same collection.

Then here comes a problem: how do we know which document in the collection is a Student, Classroom, or Teacher? This is where the property Type helps us. In our example above, the possible values for Type will be Student, Classroom, and Teacher.

Hence, when we add a new document through the repository design pattern, we have the following method.

public async Task<T> AddAsync(T entity)
{
    ...

    entity.Type = typeof(T).Name;

    var resourceResponse = await _documentDbClient.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(_databaseId, _collectionId), entity);

    return resourceResponse.StatusCode == HttpStatusCode.Created ? (dynamic)resourceResponse.Resource : null;
}

Original Version of Query

We used the following code to retrieve data of a class from the collection.

public async Task<IEnumerable<T>> GetAllAsync(Expression<Func<T, bool>> predicate = null)
{
    var query = _documentDbClient.CreateDocumentQuery<T>(UriFactory.CreateDocumentCollectionUri(_databaseId, _collectionId));

    var documentQuery = (predicate != null) ?
        (query.Where(predicate)).AsDocumentQuery():
        query.AsDocumentQuery();

    var results = new List<T>();
    while (documentQuery.HasMoreResults)
    {
        results.AddRange(await documentQuery.ExecuteNextAsync<T>());
    }

    return results.Where(x => x.Type == typeof(T).Name).ToList();
}

This query runs very slowly because the line that filters by class comes after querying the data from the collection. Hence, the documentQuery may already contain the data of all three classes (Student, Classroom, and Teacher), which is downloaded in full and only then filtered in memory.

Improved Version of Query

So one obvious fix is to move the filtering by Type into the query itself. The improved version of the code looks as follows.

public async Task<IEnumerable<T>> GetAllAsync(Expression<Func<T, bool>> predicate = null)
{
    var query = _documentDbClient
        .CreateDocumentQuery<T>(UriFactory.CreateDocumentCollectionUri(_databaseId, _collectionId))
        .Where(x => x.Type == typeof(T).Name);

    var documentQuery = (predicate != null) ?
        (query.Where(predicate)).AsDocumentQuery():
        query.AsDocumentQuery();

    var results = new List<T>();
    while (documentQuery.HasMoreResults)
    {
        results.AddRange(await documentQuery.ExecuteNextAsync<T>());
    }

    return results;
}

By doing so, we managed to reduce the query time significantly because all the actual filtering is now done on the Cosmos DB side. For example, there was one query whose time I managed to reduce from 1.38 minutes to 3.42 seconds using the 23,967 records of Student data.

Multiple Predicates

The code above, however, has a disadvantage: it cannot accept multiple predicates.

I thus changed it to be as follows so that it returns IQueryable.

public IQueryable<T> GetAll()
{
    return _documentDbClient
        .CreateDocumentQuery<T>(UriFactory.CreateDocumentCollectionUri(_databaseId, _collectionId))
        .Where(x => x.Type == typeof(T).Name);
}

This has another inconvenience: whenever I call GetAll, I need to remember to load the data with HasMoreResults, as shown in the code below.

var studentDocuments = _repoDocumentDb.GetAll()
    .Where(s => s.Age == 8)
    .Where(s => s.Name.Contains("Ahmad"))
    .AsDocumentQuery();

var results = new List<Student>();
while (studentDocuments.HasMoreResults)
{
    results.AddRange(await studentDocuments.ExecuteNextAsync<Student>());
}
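
To hide this boilerplate, one idea (my own sketch, not part of the original discussion) is to wrap the HasMoreResults loop in a small extension method, assuming the same Microsoft.Azure.Documents SDK used throughout this post.

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Linq;

public static class DocumentQueryExtensions
{
    // Drains the query page by page into a single list so that callers
    // no longer need to write the HasMoreResults loop themselves.
    public static async Task<List<T>> ToListAsync<T>(this IQueryable<T> query)
    {
        var documentQuery = query.AsDocumentQuery();
        var results = new List<T>();
        while (documentQuery.HasMoreResults)
        {
            results.AddRange(await documentQuery.ExecuteNextAsync<T>());
        }
        return results;
    }
}

With that, the earlier call site shrinks to a single expression:

var results = await _repoDocumentDb.GetAll()
    .Where(s => s.Age == 8)
    .Where(s => s.Name.Contains("Ahmad"))
    .ToListAsync();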

Conclusion

This is just an after-dinner discussion about Cosmos DB between my friend and me. If you have a better idea of designing a repository for Cosmos DB (pka DocumentDB), please let us know. =)