Emerging Technologies

CosmosDb – Connection Policy – Setting Connection Mode and Connection Protocol

May 13, 2018 .NET, Azure, CosmosDB, Microsoft, PaaS, VisualStudio, Windows, Windows Azure Development

Recently I have been trying multiple ways to optimize Cosmos DB SQL .NET SDK calls from my web application, which sits within a VNET.

After carefully analyzing the different options available within the Cosmos DB SQL API, I realized there are several aspects we can optimize to achieve minimal turnaround time. In this article I discuss one such useful find: using the Cosmos DB SQL SDK connection policy to choose different networking options and improve the latency between the web application and Cosmos DB API calls.

Connection Policy:

The performance of a client application depends heavily on how the SQL .NET SDK connects to Azure Cosmos DB, because of the client-side latency introduced by networking conditions. There are two key settings available for configuring the client ConnectionPolicy: the connection mode and the connection protocol.

There are two connection mode options provided by the Cosmos DB SQL .NET SDK:

  • Gateway Mode (default): This is the default option and works with all Cosmos DB SDK versions. Because it communicates over HTTPS using standard ports only, it is the most firewall-friendly choice and the best fit for applications that run on a constrained, secure corporate network. If you are using the .NET Framework version of the Cosmos DB SQL .NET SDK, this is probably the only connection mode that will work for you.
    • Connection Protocol – HTTPS: Gateway mode uses HTTPS only; 443 is the Cosmos DB port and 10255 is the MongoDB API port.
  • Direct Mode: This is a newer mode that works only on .NET Standard 2.0 onwards and gives you the ability to choose between TCP and HTTPS. The only caveat is that your client application must target .NET Standard 2.0.
    • Connection Protocol – TCP: TCP is faster when the client and the database are in the same VNET, and you may be amazed by the latency improvements in your client application; it will respond noticeably faster to your Cosmos DB requests. NB: in TCP mode, apart from ports 443 and 10255 mentioned under Gateway mode, you also need to ensure the port range 10000–20000 is open in your firewall configuration, because Azure Cosmos DB uses dynamic TCP ports.
    • Connection Protocol – HTTPS: When the client application and Cosmos DB are within the same network limits, HTTPS is also a reliable, secure, and fast access channel, though not as performant as TCP.

A simplified diagram below:

[Diagram: Gateway mode vs. Direct mode connectivity between the client application and Cosmos DB]

    Sample Code:

    using System;
    using Microsoft.Azure.Documents.Client;

    // The endpoint must be a Uri; the auth key is the Cosmos DB account key.
    Uri cosmosDbEndpoint = new Uri("https://mycosmosDbinstance.documents.net");
    string authKey = "cosmosDb-apiKey";

    // Direct mode over TCP for the lowest latency within the same VNET.
    DocumentClient client = new DocumentClient(cosmosDbEndpoint, authKey,
        new ConnectionPolicy
        {
            ConnectionMode = ConnectionMode.Direct,
            ConnectionProtocol = Protocol.Tcp
        });
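
For comparison, a minimal sketch of the Gateway mode configuration (the default, firewall-friendly option discussed above) could look like the following; it reuses the endpoint and key variables from the sample:

    // Gateway mode over HTTPS -- suitable for constrained corporate networks.
    DocumentClient gatewayClient = new DocumentClient(cosmosDbEndpoint, authKey,
        new ConnectionPolicy
        {
            ConnectionMode = ConnectionMode.Gateway,
            ConnectionProtocol = Protocol.Https
        });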
     

Refer more:

    You can find the completed sample here: AzureContrib/CosmosDB-DotNet-Quickstart-With-ConnectionPolicy

Blazor – The new experimental web framework from Microsoft

May 2, 2018 .NET, .NET Core, .NET Core 2.0, C#.NET, Emerging Technologies, Microsoft, Razor

In a world of many web frameworks, Microsoft is not going to stop experimenting with new frameworks for web development. Innovation is key for Microsoft; it does not matter that it started later than React (Facebook) and Angular (Google), because Microsoft has proven time and again that it is good at developing cutting-edge frameworks. That is how Blazor was born.

Blazor = Browser + Razor

As an ASP.NET MVC developer, I have always loved the Razor syntax that shipped with ASP.NET MVC 3.0. Since then Microsoft has kept improving Razor with async/await patterns, fluent syntax, and more.

The concept is simple: use .NET for building browser-based apps. Your familiar C# and Razor syntax can greatly improve the way you build browser apps as a modern-day web developer.
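
For a rough idea of what this looks like in practice, here is a minimal Blazor component sketch (approximately the counter page from the default project template; the exact syntax varies between Blazor versions). Markup and C# live together in a single .cshtml file:

    @page "/counter"

    <h1>Counter</h1>

    <p>Current count: @currentCount</p>

    <button onclick="@IncrementCount">Click me</button>

    @functions {
        int currentCount = 0;

        void IncrementCount()
        {
            currentCount++;
        }
    }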

    Why use .NET?

To simplify this question, here is an excerpt quoted from the Microsoft ASP.NET team blog: “Web development has improved in many ways over the years, but building modern web applications still poses challenges. Using .NET in the browser offers many advantages that can help make web development easier and more productive:

    • Stable and consistent: .NET offers standard APIs, tools, and build infrastructure across all .NET platforms that are stable, feature rich, and easy to use.
    • Modern innovative languages: .NET languages like C# and F# make programming a joy and keep getting better with innovative new language features.
    • Industry leading tools: The Visual Studio product family provides a great .NET development experience on Windows, Linux, and macOS.
• Fast and scalable: .NET has a long history of performance, reliability, and security for web development on the server. Using .NET as a full-stack solution makes it easier to build fast, reliable and secure applications.”

    Blazor will have all the features of a modern web framework including:

    • A component model for building composable UI
    • Routing
    • Layouts
    • Forms and validation
    • Dependency injection
    • JavaScript interop
    • Live reloading in the browser during development
    • Server-side rendering
    • Full .NET debugging both in browsers and in the IDE
    • Rich IntelliSense and tooling
    • Ability to run on older (non-WebAssembly) browsers via asm.js
    • Publishing and app size trimming

Now the usual question arises: how is that possible? Running .NET in a browser?

It all started with WebAssembly, a new web standard for a “portable, size- and load-time-efficient format suitable for compilation to the web.”

    • WebAssembly enables fundamentally new ways to write web apps. Code compiled to WebAssembly can run in any browser at native speeds.
    • WebAssembly is the foundational framework needed to build a .NET runtime that can run in the browser.
    • No plugins or extensions required.

Getting Started with Blazor:

The latest version of the Blazor framework available is 0.3.0, released on May 2, 2018.

Steps to set up Blazor 0.3.0:

    1. Install the .NET Core 2.1 SDK (2.1.300-preview2-008533 or later).
    2. Install Visual Studio 2017 (15.7 Preview 5 or later) with the ASP.NET and web development workload selected.
    3. Install the latest Blazor Language Services extension from the Visual Studio Marketplace.

Install the Blazor templates using the command line:

    dotnet new -i Microsoft.AspNetCore.Blazor.Templates
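
Once the templates are installed, creating and running a new standalone Blazor app from the command line should look roughly like this (the project name is just an example):

    dotnet new blazor -o BlazorApp1
    cd BlazorApp1
    dotnet run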
    

    Additional References:

Setting up a Local NPM Repository to Speed Up Dev/CI Builds

April 29, 2018 Emerging Technologies, JavaScript, Modern Web Development, TypeScript, Web

As a modern-day JavaScript developer working with Node.js and npm, you have probably had to clean up your local node modules when a local build breaks. It is a tedious task to clean up %appData%\npm-cache and do a fresh install of all the modules again. Depending on the number of modules in your project and your Internet bandwidth, you can be stuck for anywhere from a few minutes to hours waiting for npm module installation to complete.

Another scenario is a build or CI server, where the modules are cleaned up during each build process; every ‘npm install’ is then a fresh start and makes the build take longer to complete.

What if we had a simple way of caching these packages locally, so that we do not have to download them from the Internet every time? Here is a simple solution that, once set up, resolves some of these problems effectively.

    Introducing Local-NPM


    local-npm is a Node server that acts as a local npm registry. It serves modules, caches them, and updates them whenever they change. Basically it’s a local mirror, but without having to replicate the entire npm registry.

This allows your npm install commands to (mostly) work offline. Your npm installs also get faster and faster over time, as commonly installed modules are aggressively cached.

    local-npm acts as a proxy between you and the main npm registry. You run npm install commands like normal, but under the hood, all requests are sent through the local server.

     

    Getting Started with Local-NPM:

    Step 1: Install the module ‘local-npm’

$ npm install -g local-npm

Step 2: Launch local-npm; this will start the local npm server

    $ local-npm

    This will start the local npm server at localhost:5080.

    http://127.0.0.1:5080

PS: Please note that this step can take some time, as the module replicates the npm registry metadata (the remote skimdb) to a local database for efficient caching. It will not eat up your disk space, though: module contents are cached based on usage only, so the entire npm repository is never replicated locally.

    Step 3: Validate the local-NPM registry

There is a basic npmjs-like UI for browsing local packages, which can be accessed at:

http://localhost:5080/_browse

    Step 4: Then set npm to point to the local server:

    $ npm set registry http://127.0.0.1:5080

Step 5: Run “npm install” for your modules, and you will see that local-npm caches the modules you regularly use.

In case you want to switch back to the default npmjs registry, you can run:

    $ npm set registry https://registry.npmjs.org
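
Either way, you can quickly verify which registry npm is currently pointing at, or scope the local registry to a single project with a per-project .npmrc file instead of a global setting (a sketch assuming the default local-npm port of 5080):

    $ npm config get registry

    # Per-project override: keep this .npmrc next to package.json
    $ echo "registry=http://127.0.0.1:5080/" > .npmrc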

How does it work?

    npm is built on top of Apache CouchDB (a No-SQL db), so local-npm works by replicating the full “skimdb” database to a local PouchDB Server.

    You can inspect the running database at http://127.0.0.1:16984/_utils.

    References

To learn more about local-npm and its documentation, visit the module repository on GitHub: https://github.com/local-npm/local-npm

    Introduction to Kubernetes

April 22, 2018 Cloud Computing, Cloud Native Computing Foundation, Computing, Emerging Technologies, Google Cloud, IaaS, OpenSource, PaaS, Platforms

    What is Kubernetes?

Kubernetes (a.k.a. K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

What can Kubernetes do?
Kubernetes plays a number of roles in the cloud computing world; it can be thought of as:

    • A container platform
    • A microservices platform
    • A portable cloud platform and a lot more

    Kubernetes defines a set of building blocks (“primitives”) which collectively provide mechanisms for deploying, maintaining, and scaling applications. The components which make up Kubernetes are designed to be loosely coupled and extensible so that it can meet a wide variety of different workloads. The extensibility is provided in large part by the Kubernetes API, which is used by internal components as well as extensions and containers running on Kubernetes.
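
As a small, hypothetical illustration of those building blocks in action (the deployment name and container image below are placeholders, not from the original post), deploying and scaling a containerized application through the Kubernetes API with the kubectl CLI looks roughly like this:

    # Create a Deployment running an example container image
    $ kubectl create deployment hello-web --image=nginx:1.14

    # Expose it as a Service and scale it out to three replicas
    $ kubectl expose deployment hello-web --type=LoadBalancer --port=80
    $ kubectl scale deployment hello-web --replicas=3

    # Inspect the primitives Kubernetes created
    $ kubectl get deployments,pods,services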

If you are interested in learning more, explore Kubernetes through the official tutorials:

Some useful online training:

    Azure Cosmos DB name changes

April 17, 2018 Azure, CosmosDB, Document DB, Emerging Technologies, Microsoft, Windows Azure Development

An update from Microsoft Azure says that, as part of the transition from Azure DocumentDB to Azure Cosmos DB, the service and resource names are changing from “Azure DocumentDB” to “Azure Cosmos DB” on June 1, 2018.

How does that impact you?

When Microsoft introduced Cosmos DB, they ensured a smooth transition and migration of existing DocumentDB customers/tenants to Cosmos DB. This was achieved by not changing the underlying service and resource names from “DocumentDB” to “Cosmos DB”.

So, if you were an existing DocumentDB customer, all you noticed was the disappearance of the DocumentDB name, with the old service simply showing up as Cosmos DB. You did not feel much difference apart from some additional configuration options that came with the multi-model data source configuration.

    Your ARM deployment templates might need some changes in resource sizing, resource location, and some other configuration aspects.

There is no pricing impact from this change, but you will have to update any billing parameters that rely on the resource names. With this deadline, Microsoft intends to deprecate the old DocumentDB naming and migrate all customers/tenants to the new naming for resource billing/sizing purposes.

    To read more about the naming changes: https://azure.microsoft.com/en-us/updates/name-changes-cosmos-db/

    Kubernetes vs Service Fabric

April 13, 2018 Application Virtualization, Azure, Emerging Technologies, Kubernetes, Orchestrator, OS Virtualization, PaaS, Service Fabric, Virtual Machines, Virtualization

What is the difference between Kubernetes and Service Fabric?

It is a common question today among business stakeholders, infrastructure specialists, and information technology architects.

To answer in simpler words, quoting from a Reddit thread:

• Kubernetes manages/orchestrates containers and the applications within them.
• Service Fabric is a framework for microservices based on one of three models: stateful, stateless, or actor. Service Fabric provides a framework for creating microservices, a runtime for managing distributed instances, and the ‘fabric’ that holds everything together (a minimal stateless-service sketch follows below).
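
To make the Service Fabric side of that summary concrete, here is a rough, hypothetical sketch of the stateless model using the Service Fabric .NET SDK (the service and type names are illustrative, not from the original post): the service class overrides RunAsync, and the host process registers it with the Service Fabric runtime.

    using System;
    using System.Fabric;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Services.Runtime;

    // A stateless service: the runtime calls RunAsync once an instance is
    // placed on a node; the token signals when the instance should shut down.
    internal sealed class HelloService : StatelessService
    {
        public HelloService(StatelessServiceContext context)
            : base(context) { }

        protected override async Task RunAsync(CancellationToken cancellationToken)
        {
            while (!cancellationToken.IsCancellationRequested)
            {
                // The service's actual work goes here.
                await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
            }
        }
    }

    internal static class Program
    {
        private static void Main()
        {
            // "HelloServiceType" must match the service type declared in ServiceManifest.xml.
            ServiceRuntime.RegisterServiceAsync(
                "HelloServiceType",
                context => new HelloService(context)).GetAwaiter().GetResult();

            Thread.Sleep(Timeout.Infinite);
        }
    }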

    A detailed comparison quoting from an MSDN blog  from here:

Azure Container Service: If you are looking to deploy your application in a Linux environment and are comfortable with an orchestrator such as Swarm, Kubernetes, or DC/OS, use ACS. A typical three-tier application (such as a web front end, a caching layer, an API layer, and a database layer) can easily be containerized with a single dockerfile (or docker-compose file), and it can then be gradually decomposed into smaller services. This approach provides the immediate benefit of portability for such an application. Containers are an open technology, and there is great community support around them.

Azure Service Fabric: If an application must have its state saved locally, then use Service Fabric. It is also a good choice if you are looking to deploy the application in the Windows Server ecosystem (Linux support is in the works as well!). Refer to common workloads on Service Fabric for more discussion of applications that can benefit from it. The biggest benefit is that Service Fabric applications can run on-premises, on Azure, or even on other cloud platforms.