ASP.NET Web API / OWIN authenticated integration tests without authorization server

Integration testing of OWIN Web API services is super easy with the Microsoft.Owin.Testing.TestServer component. It is basically an in-memory OWIN host that works directly with HttpClient, without any network calls.

Authentication

Aaron Powell has written an extensive post about how to test Web API services that require OAuth token authentication. With the method described in his post, tokens are requested from the /token endpoint (provided by the OWIN OAuth Authorization Server middleware) before calling the actual API. This works great when the Web API service under test also contains the authorization server. But sometimes it doesn’t, and authentication tokens have to be requested from an external authorization server. That complicates the integration tests considerably, because the external server has to be set up for the tests. It would be great if we could get an authorization token for our tests without needing an external authorization server.

Generate an OAuth token without authorization server

To generate and use a token in the integration tests, we create a base class (BaseAuthenticatedApiTestFixture) for our authenticated test fixtures that borrows some logic from the OWIN OAuthAuthorizationServerMiddleware internals. This base class in turn inherits from BaseApiTestFixture, which contains all the logic for creating the OWIN TestServer and calling the API, and is very much inspired by the BaseServerTest class in Aaron Powell’s post.

/// <summary>
/// Base class for integration tests that require authentication.
/// </summary>
public abstract class BaseAuthenticatedApiTestFixture : BaseApiTestFixture
{
    private string _token;

    /// <summary>
    /// Token for authenticated requests.
    /// </summary>
    protected virtual string Token
    {
        get { return _token ?? (_token = GenerateToken()); }
    }

    protected override HttpRequestMessage CreateRequest(HttpMethod method, object data)
    {
        var request = base.CreateRequest(method, data);
        if (!String.IsNullOrEmpty(this.Token))
        {
            request.Headers.Add("Authorization", "Bearer " + this.Token);
        }
        return request;
    }

    private string GenerateToken()
    {
        // Generate an OAuth bearer token for ASP.NET/Owin Web Api service that uses the default OAuthBearer token middleware.
        var claims = new[]
        {
            new Claim(ClaimTypes.Name, "WebApiUser"),
            new Claim(ClaimTypes.Role, "User"),
            new Claim(ClaimTypes.Role, "PowerUser"),
        };
        var identity = new ClaimsIdentity(claims, "Test");

        // Use the same token generation logic as the OAuthBearer Owin middleware. 
        var tdf = new TicketDataFormat(this.DataProtector);
        var ticket = new AuthenticationTicket(identity, new AuthenticationProperties { ExpiresUtc = DateTime.UtcNow.AddHours(1) });
        var accessToken = tdf.Protect(ticket);

        return accessToken;
    }
}

The GenerateToken() method in the code above creates the token in three steps:

  1. Create a ClaimsIdentity that contains the username and claims (like roles);
  2. Create an AuthenticationTicket based on the ClaimsIdentity;
  3. Convert the AuthenticationTicket into a token with the TicketDataFormat class that uses a DataProtector to encrypt the ticket.

To make sure the token is accepted by the OWIN OAuthBearer middleware, the DataProtector in step 3 needs to be the same as the one that is used for decrypting the token. Luckily we can create one during initialization of the OWIN TestServer. It is stored in a protected property of BaseApiTestFixture so we can access it in the BaseAuthenticatedApiTestFixture subclass:

protected BaseApiTestFixture()
{
    // Normally you'd create the server with:
    //
    //    Server = TestServer.Create<Startup>();
    //
    // but in this case we need to get hold of a DataProtector that can be 
    // used to generate compatible OAuth tokens.

    Server = TestServer.Create(app =>
    {
        var apiStartup = new Startup();
        apiStartup.Configuration(app);
        DataProtector = app.CreateDataProtector(typeof(OAuthAuthorizationServerMiddleware).Namespace, "Access_Token", "v1");
    });
    AfterServerSetup();
}

Testing

To execute authenticated tests, just inherit from BaseAuthenticatedApiTestFixture and call the test methods in the base class. This is the controller we’re testing:

public class AuthenticatedController : ApiController
{
    [HttpGet]
    [Authorize]
    [Route("userinfo")]
    public IHttpActionResult GetUserInfo()
    {
        var currentPrincipal = Request.GetOwinContext().Authentication.User;

        var userInfo = new UserInfoDto
        {
            Name = currentPrincipal.Identity.Name,
            Roles = currentPrincipal.Claims.Where(c => c.Type == ClaimTypes.Role).Select(c => c.Value).ToArray()
        };
        return Ok(userInfo);
    }

    [HttpGet]
    [Authorize(Roles = "PowerUser")]
    [Route("poweruserhello")]
    public IHttpActionResult GetPowerUserHello()
    {
        return Ok("hello poweruser");
    }
}

As you can see, the Authorize attributes require authentication (and for the second action, the PowerUser role). The actual test code (using xUnit) is super simple:

public class AuthenticatedApiTests : BaseAuthenticatedApiTestFixture
{
    private string _uri;

    protected override string Uri
    {
        get { return _uri; }
    }

    [Fact]
    public async Task Get_UserInfo_Returns_200_And_UserInfo()
    {
        // Arrange
        _uri = "userinfo";

        // Act
        var response = await GetAsync();
        var result = await response.Content.ReadAsAsync<UserInfoDto>();

        // Assert
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        Assert.Equal("WebApiUser", result.Name);
        Assert.Equal(2, result.Roles.Length);
    }

    [Fact]
    public async Task Get_PowerUserHello_Returns_200_And_Message()
    {
        // Arrange
        _uri = "poweruserhello";

        // Act
        var response = await GetAsync();
        var result = await response.Content.ReadAsAsync<string>();

        // Assert
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        Assert.Equal("hello poweruser", result);
    }
}

Example solution

Check out the complete example solution at https://github.com/martijnboland/AuthenticatedOwinIntegrationTests

Is webpack like fast food?

Webpack is a module bundling tool for packaging apps. I am using it more and more, to the point where it has almost entirely replaced my previous Grunt and Gulp build environments.

But here’s the thing: I still don’t feel entirely comfortable using webpack. I wonder if it’s only me, or are there more people feeling the same?
To me, it’s still this magic box with a gazillion configuration options (most of which, in fact, you don’t need), awful documentation and a single core developer. You know, things that usually make you feel uncomfortable.

But man, you can do crazy things with it so fast! Almost like fast food. Instant gratification with perhaps a little bad taste afterwards.


Develop ReactJS + ASP.NET Web API apps in Visual Studio 2015

Update 2015-12-11

At about the same time this post was originally written, Mads Kristensen released the WebPack Task Runner extension. This post has been modified to reflect the new situation.

tl;dr

Developing ReactJS apps with an ASP.NET Web API back-end is not entirely straightforward. Most ReactJS apps use webpack to build and bundle the app and it’s also very convenient to leverage the hot module replacement abilities of the Webpack dev server. Visual Studio has no native support for webpack, but with the WebPack Task Runner or NPM Script Task Runner extensions it’s possible to integrate webpack with Visual Studio without the need to use the command-line.
An example solution can be found at
https://github.com/martijnboland/VSReact.

 

These days, I mostly do front-end development with text editors and command-line tools. However, in some situations, especially when front-end and back-end are relatively tied together and the back-end is built with ASP.NET Web API, I find it more convenient to also use Visual Studio for front-end development because constantly switching between editors makes my head hurt.

Visual Studio 2015 and the front-end

With Visual Studio 2015, Microsoft has made a shift towards more open-source tooling for front-end development. JavaScript packages are now managed via Bower instead of NuGet, and server-side bundling (System.Web.Optimization) is replaced with Gulp or Grunt. Perfect for front-end JavaScript, wouldn’t you say?

ReactJS and its ecosystem

When developing apps with ReactJS and Visual Studio, you could try to keep using Bower and Gulp/Grunt, but it’s hard to find good guidance. Almost the entire ReactJS ecosystem uses NPM instead of Bower and webpack instead of Gulp/Grunt. Do yourself a favor and go with the flow. NPM works fine and is probably already the number one package manager for the browser. Webpack is a bundler for scripts and static assets, whereas Grunt/Gulp are task runners that can be configured with plugins to do the bundling. You can’t say that one is better than the others because they serve slightly different purposes. Let’s just use NPM and webpack.

One issue though: how do we integrate webpack and NPM in Visual Studio? Luckily, NPM is already supported in Visual Studio 2015. There’s no visual tooling like ‘Manage Bower Packages’, but editing the package.json file is easy, with autocomplete for package names and versions, and Visual Studio automatically installs packages that are newly added or missing. That still leaves webpack.

Running webpack from Visual Studio

We want to use webpack to bundle our JavaScript files and static assets, but webpack also comes with a very nice embedded webserver for development that serves the bundles and watches for changes. After saving a file, bundles are automatically recreated and served. On top of that, webpack has hot module replacement support that replaces code in a running app in your browser.
Normally you’d execute webpack from the command line and this is still possible, but wouldn’t it be great if opening the Visual Studio solution also starts the webpack development server and you can start everything with a single F5?

Since version 2015, Visual Studio has the ‘Task Runner Explorer’. By default it comes with support for Grunt and Gulp, and you can bind tasks to events like ‘Project Open’ and ‘After Build’.
The Task Runner Explorer can be extended and this is great because we now have two extension options to integrate webpack in Visual Studio.

WebPack Task Runner extension

This extension makes webpack integration super easy. The webpack configuration file (webpack.config.js) is automatically detected after opening the Task Runner Explorer (and it even detects alternative config files for different environments):

[Screenshot: Task Runner Explorer showing the detected webpack tasks]

In the above example, the ‘Serve Hot’ task is bound to ‘Project Open’, in other words, when opening the Visual Studio Solution, the webpack dev server is started with hot module replacement enabled.

NPM Scripts Task Runner extension

An alternative for the WebPack Task Runner extension is the NPM Scripts Task Runner extension. Compared to the WebPack Task Runner extension, this extension is probably a little bit more flexible because you have to specify the webpack command line parameters yourself.

The extension allows us to execute the commands from the scripts section of the NPM package.json file:

"scripts": {
  "start": "webpack-dev-server --port 3000 --config webpack.config.js --hot --content-base build --progress --colors",
  "build": "webpack --config webpack.config.js --progress --profile --colors",
  "prod": "webpack -p --config webpack.prod.config.js --progress --colors"

start: run the webpack dev server with hot module replacement enabled
build: run webpack to create bundles with the default configuration
prod: run webpack to create optimized production bundles

The Visual Studio Task Runner Explorer:

[Screenshot: Task Runner Explorer showing the NPM scripts and their bindings]

You can see that the ‘start’ script is executed when the project is opened, which in our case means starting the webpack dev server with hot module replacement.

A complete solution

In the intro I mentioned ASP.NET Web API. Often, a Visual Studio solution consists of at least one web project for the client application and static assets and another ASP.NET Web API project as the back-end.

[Screenshot: solution structure with the client and Web API projects]

On GitHub, you can find an example solution (https://github.com/martijnboland/VSReact) that has these projects. It’s the well-known TodoMVC app with an ASP.NET Web API back-end.

VSReact.Api is the ASP.NET Web API project and VSReact.Web is the ReactJS app. Normally you would set both projects as startup projects, but in this case, VSReact.Web should not be started because the webpack dev server is already running and serving the client files.

A special note: VSReact.Web is still based on an empty ASP.NET project, although there is zero ASP.NET in it. This is mainly to keep Visual Studio happy and to enable the Task Runner Explorer.

For a true ‘hit F5 and run’ experience we have to do one last thing: point the start URL of the VSReact.Api project to the webpack dev server, in our case http://localhost:3000/index.html. Then, when you hit F5 or CTRL+F5, the API project builds and runs, and the browser opens nicely with the client app:

[Screenshot: the browser showing the client app served by the webpack dev server]

To run the example solution, you’ll need Visual Studio 2015 with the WebPack Task Runner or NPM Scripts Task Runner extension, plus a recent Node.js/NPM installation.

Credits

The VSReact.Web project is a copy of the example React + Redux + DevTools project and the VSReact.Api project is copied from https://github.com/mforman/todo-backend-webapi. Both with small modifications for the integration. Thanks to the original authors for the examples.

JavaScript for .NET Developers – March 2015 edition

The last couple of years, many .NET developers have been making a gradual shift from developing Windows and/or server-side ASP.NET applications to client-side JavaScript apps, myself included. That is not really specific to .NET developers, it happens elsewhere too. What is specific, are the tools and frameworks that .NET developers tend to use to build these JavaScript apps.
I’m suspecting that the vast majority of the .NET community went through the following stages:

  • 2005-2007: ASP.NET AJAX
    This new ‘AJAX’ thing is all the rage, but we really don’t want to write JavaScript. Love those UpdatePanels!
  • 2007-2011: jQuery
    We still don’t want to write JavaScript, but ASP.NET AJAX is way too clunky and jQuery is supported by Microsoft, so hey, it must be good!
  • 2011-2012: KnockoutJS
    jQuery spaghetti becomes unmanageable but luckily, Steve Sanderson ports the MVVM concept from WPF and Silverlight and we can build proper client-side apps. At one time, KnockoutJS was even part of the standard Visual Studio templates.
  • 2012-2013: Durandal
    Based on KnockoutJS and RequireJS, Rob Eisenberg (of Caliburn fame) creates Durandal, which is a complete framework. Now we have a one-stop shop for all our needs. No hassle with 3rd party libs for routing etc. anymore.
  • 2014: AngularJS
    Some influential .NET people have discovered Angular. Two-way binding without those pesky observables and it’s backed by a real big company. Hurray! Even Durandal creator Rob Eisenberg joins the Angular team, so that must really be the best thing since bread came sliced.
  • 2015: Unknown
    Wait, whut?!
    In October 2014, AngularJS 2.0 was announced as being incompatible with the current 1.x version, and a month later Rob Eisenberg announced that he had left the Angular team. BOOM!!! Suddenly, AngularJS wasn’t the coolest kid in town anymore and the .NET/JavaScript community is kind of left in the dark about where to go next.

The Framework Vacuum


After the Angular 2.0 announcement, some of us went desperately looking for alternatives like React, Ember or even VanillaJS, some went back to their trusty Durandal or Knockout codebases to wait for Aurelia to mature, and perhaps most of us just continued on the Angular 1.x stack. But there is no clear leading framework or library anymore.

And that is good.

Not having a dominant framework or library forces us developers to think about what we really need, instead of the cargo cult mentality that has been so prevalent the last few years. Take the time to do a proper analysis of the problem you need to solve.

And that is good too.

When starting a new project, just pick the framework that is the best fit for the project at that time, taking into account the particular skills and knowledge of your co-workers.
It’s impossible to predict which one will survive the coming years. Just don’t go all in with a single framework. Keep your options open. Also keep considering full server-side apps. They’re terribly out of fashion but unbeatable for some scenarios.

Some people advocate the use of no framework at all, but only use small, very focused libraries (microjs) or write everything yourself. Personally, I think that only a very, very skilled team can get away with this because you’ll end up building a framework anyway.

Meanwhile today

Having some kind of framework vacuum these days gives us a nice opportunity to focus on other important aspects of modern day JavaScript development.

JavaScript build tools

Invaluable. Automate everything from building and minifying JavaScript to deployment and everything in between. You could use ASP.NET bundling for this, but in my experience, dedicated build tools are much more powerful and flexible.

Grunt and Gulp are the main players. Grunt is widely used and has a huge number of plugins, but isn’t very fast and can be somewhat complex to configure. Gulp is faster, requires less configuration than Grunt, and is generally preferred these days.

Webpack might be an interesting alternative. It’s not a general-purpose build system, but focuses on bundling. A unique feature of webpack is that it can bundle all parts of your application into a single bundle, including images and stylesheets.
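To illustrate that, here is a rough sketch of a webpack 1.x-era loader configuration (the loader names and test patterns are generic examples, not from any particular project); with this in place you can require() stylesheets and images directly from your JavaScript modules:

```javascript
// Sketch of a webpack 1.x config with loaders for CSS and images.
// The loaders themselves (style-loader, css-loader, url-loader)
// would have to be installed via NPM.
var config = {
  entry: './src/index.js',
  output: { filename: 'bundle.js' },
  module: {
    loaders: [
      // require('./app.css') inlines the stylesheet into the bundle
      { test: /\.css$/, loader: 'style-loader!css-loader' },
      // small images become data URIs, larger ones are emitted as files
      { test: /\.(png|jpg)$/, loader: 'url-loader?limit=8192' }
    ]
  }
};

module.exports = config;
```

Grunt and Gulp can achieve the same with plugins, but in webpack this follows directly from the module graph: whatever your code requires ends up in the bundle.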

JavaScript modules

You should really put your JavaScript code in modules. It allows for better encapsulation and modularity, and prevents polluting the global scope (the window object in browsers).
There are multiple ways to define modules in JavaScript: AMD with RequireJS and CommonJS with Browserify are widely used, but with ECMAScript 6, native modules are coming as well. AMD allows for dynamic module loading in the browser, whereas CommonJS/Browserify always requires a build step. On the other hand, I personally find the AMD module syntax intrusive, RequireJS requires way too much configuration, and a build step is needed for your production builds anyway. I’d say use CommonJS/Browserify for the time being and reconsider when ES6 is ready.

JavaScript package managers

Still using NuGet to pull in JavaScript libraries and frameworks? Please don’t. Use Bower, NPM or JSPM (which can use both Bower and NPM). All major JavaScript libraries are released for Bower and NPM, while NuGet comes as an afterthought (if at all).

ECMAScript 6 (ES6)

ECMAScript 6 is the next incarnation of the JavaScript language. Although native browser support is not there yet, there are so many interesting features that you might consider using it today. Transpilers like Babel or Traceur can turn your ES6 code into ES5 code that runs in all modern browsers. With the JavaScript build systems it’s super easy to add a transpile step so there are no real obstacles.
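A few of those features in one generic snippet (nothing project-specific); Babel or Traceur compiles all of this down to plain ES5:

```javascript
// ES6: const/let, arrow functions, default parameters,
// template literals and class syntax in one small example.
const greet = (name = 'world') => `Hello, ${name}!`;

class TodoList {
  constructor() { this.items = []; }
  add(item) { this.items.push(item); return this; }   // chainable
  get count() { return this.items.length; }
}

let list = new TodoList().add('write post').add('review draft');
// greet() -> 'Hello, world!', list.count -> 2
```

None of this changes what your code does; it just removes a lot of boilerplate (prototype wiring, string concatenation, self-executing functions) that ES5 forces on you.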

Visual Studio 2015

While it can be refreshing to step out of our trusty Visual Studio Environment, it’s good to know that Visual Studio 2015 will get support for Gulp, Grunt, Bower and NPM.

Angular is the new uncool?

This week, more details about the future of Angular 2.0 were announced at the ng-europe conference (see https://www.youtube.com/watch?v=gNmWybAyBHI for the video). After the first announcements in March 2014, more details have now been disclosed: Angular 2.0 will not be backwards compatible with the current 1.x version.

It backfired. People are truly upset about the breaking changes (see this reddit thread, for example).

The interesting thing is: why is everyone suddenly so upset? From the start of Angular 2.0, it has always been clearly stated that the new version won’t be compatible with the current one, in order to make drastic improvements. Also, the current 1.x version will be supported for quite a while (1.5 to 2 years after the release of 2.0, which will probably be late 2015).

No problem or…?

I think the Angular team underestimated the impact of their message: by making such drastic changes they are giving the impression that the current 1.x version kind of sucks. To make things worse, in the presentation, they used the R.I.P. metaphor for the concepts of Angular 1.x that are going to be removed. Quick unfounded conclusion: Angular 1.x is dead.

People are upset because this message makes them feel that the cool technology they loved yesterday has suddenly become uncool and legacy. The new cool is still at least a year away, and in the meantime we have to work with uncool technology and envy other frameworks that have managed to stay cool.