Software by Steven

Sometimes a square peg is a round hole

Async Enumerable Test Sources in NUnit

In my previous post I showed how to use awaitable TestCase or Value sources in NUnit 3.14. NUnit 4 continues the story of adding async support by also allowing TestCaseSource, ValueSource, or TestFixtureSource to return an IAsyncEnumerable.

using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using NUnit.Framework;

public class AsyncTestSourcesTests
{
    [TestCaseSource(nameof(MyMethodAsync))]
    public void Test1Async(MyClass item)
    {
    }

    // Each element streamed out of the JSON file becomes its own test case.
    public static async IAsyncEnumerable<MyClass> MyMethodAsync()
    {
        using var file = File.OpenRead("Path/To/data.json");
        await foreach (var item in JsonSerializer.DeserializeAsyncEnumerable<MyClass>(file))
        {
            yield return item;
        }
    }

    public class MyClass
    {
        public int Foo { get; set; }
        public int Bar { get; set; }
    }
}

As with IEnumerable-backed sources, NUnit lazily enumerates the collection to avoid bringing all the objects into memory at once, generating the test cases or values only as they are needed.

Many async enumerable operations also require async disposal of the underlying resource after the collection has been enumerated. NUnit takes care of calling the dispose method as well, to ensure everything is cleaned up properly. When both DisposeAsync and Dispose methods are present, NUnit will call only the asynchronous DisposeAsync method, not the Dispose method.
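
As a purely illustrative sketch (all of the type and member names below are my own invention, not part of NUnit's API), here is a hand-rolled source whose enumerator implements both interfaces. Based on the behavior described above, only the DisposeAsync message should appear after the cases are enumerated:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using NUnit.Framework;

// Hypothetical demo source; names invented for illustration only.
public sealed class TrackedAsyncSource : IAsyncEnumerable<int>
{
    public IAsyncEnumerator<int> GetAsyncEnumerator(CancellationToken ct = default)
        => new Enumerator();

    // IAsyncEnumerator<T> already extends IAsyncDisposable; IDisposable is
    // implemented here as well only to show which of the two gets called.
    private sealed class Enumerator : IAsyncEnumerator<int>, IDisposable
    {
        private int _i;

        public int Current => _i;

        public ValueTask<bool> MoveNextAsync() => new(++_i <= 3);

        // Expected to run once enumeration completes...
        public ValueTask DisposeAsync()
        {
            Console.WriteLine("DisposeAsync called");
            return ValueTask.CompletedTask;
        }

        // ...while this synchronous counterpart should be skipped.
        public void Dispose() => Console.WriteLine("Dispose called");
    }
}

public class DisposalDemoTests
{
    public static IAsyncEnumerable<int> Source() => new TrackedAsyncSource();

    [TestCaseSource(nameof(Source))]
    public void EachCase(int n) => Assert.That(n, Is.InRange(1, 3));
}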

Async Test Sources in NUnit

NUnit has long supported defining test cases in numerous forms, from inline primitive data via the TestCaseAttribute to potentially more complex data returned at runtime from a method, property, or other source via the TestCaseSourceAttribute. The latter typically supported only synchronous methods, which complicated defining data-driven test cases where an internal operation required calling a Task-based API. A common example is a test case which reads from a JSON source file or other stream using a method like JsonSerializer.DeserializeAsync().

In the past this would mean an awkward and unnatural call using something like .GetAwaiter().GetResult():

using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using NUnit.Framework;

public class Tests
{
    [TestCaseSource(nameof(MyMethod))]
    public void Test1(MyClass item)
    {
    }

    public static IEnumerable<MyClass> MyMethod()
    {
        using var file = File.OpenRead("Path/To/data.json");
        var t = JsonSerializer.DeserializeAsync<IEnumerable<MyClass>>(file).AsTask();

        // Blocking synchronously on the async call: awkward, and a deadlock
        // risk wherever a synchronization context is in play.
        return t.GetAwaiter().GetResult();
    }
}

NUnit 3.14 was released a few months ago and included support for “async” or task-based test case sources. Now a TestCaseSource can target a Task-returning method to allow for much more natural code:

using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;
using NUnit.Framework;

public class Tests
{
    [TestCaseSource(nameof(MyMethodAsync))]
    public void Test1Async(MyClass item)
    {
    }

    public static async Task<IEnumerable<MyClass>> MyMethodAsync()
    {
        using var file = File.OpenRead("Path/To/data.json");
        return await JsonSerializer.DeserializeAsync<IEnumerable<MyClass>>(file);
    }
}

The above example focuses on Task, but any awaitable type such as ValueTask or a custom awaitable also works. Other “source” attributes such as TestFixtureSource or ValueSource are supported as well.
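
As a hypothetical sketch of that (the names below are mine, not from NUnit's documentation), a ValueTask-returning method can back a ValueSource in the same way:

using System.Collections.Generic;
using System.Threading.Tasks;
using NUnit.Framework;

// Hypothetical example; names are illustrative only.
public class ValueTaskSourceTests
{
    // Any awaitable shape works; here the source returns a ValueTask.
    public static async ValueTask<IEnumerable<int>> NumbersAsync()
    {
        await Task.Yield(); // stand-in for real asynchronous work
        return new[] { 1, 2, 3 };
    }

    [Test]
    public void EachNumber([ValueSource(nameof(NumbersAsync))] int n)
        => Assert.That(n, Is.GreaterThan(0));
}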

Creating a Grunt plugin on Windows in MINGW

I’ve been working with grunt a bit lately, and found myself needing a task that doesn’t seem to exist in the expansive list of existing plugins. So I thought I’d create my own. Fortunately, grunt has a great and simple page on doing this (http://gruntjs.com/creating-plugins).

Unfortunately, I ran into some issues.

I usually like to work in the Git Bash shell that comes with MINGW. Trouble is, this was causing some pathing issues. Specifically, with these two commands:

  1. Install the gruntplugin template with git clone git://github.com/gruntjs/grunt-init-gruntplugin.git ~/.grunt-init/gruntplugin (%USERPROFILE%\.grunt-init\gruntplugin on Windows).
  2. Run grunt-init gruntplugin in an empty directory.

Apparently MINGW, or at least my version, has some issues resolving %USERPROFILE%. So I ended up with a cloned git repo in my local directory, literally named %USERPROFILE%.grunt-initgruntplugin. After fixing that and moving it to my root user profile, I kept getting an “EINVAL” error on the next command. I figured this had to be a pathing issue too, so I dropped out of MINGW into a cmd shell by typing cmd. Except that didn’t quite do it. Maybe MINGW was intercepting input and filtering it in an unexpected way, but my problems became even worse. My fix:

  1. Add git to your OS Path variable (C:\Program Files\Git\bin)
  2. Run a regular command shell (cmd outside of MINGW)

With those two small changes, everything worked flawlessly.

Nod of the hat to integrating Popcorn.js and BBB (Big Blue Button)

It looks like a few people have been hitting my blog trying to find information on integrating Popcorn.js and Big Blue Button. I thought I’d take the opportunity to give a nod of the hat to a colleague, dseif, for his recent contribution towards making this possible at Hackanooga.

THIS LINK has all the cool details.

Subtitle Parsing: Planning for Maintainability

While other commitments have kept me from contributing a lot of code to Popcorn.js in recent months, I’ve had plenty of time to think about how to tackle a group of related tickets assigned to me. Approximately a year ago, for Popcorn versions 0.3 and 0.4, I was responsible for incorporating some earlier subtitle parsing into Popcorn. That grew into Popcorn supporting text display for seven standardized subtitle formats. Five of these formats also include their own in-source formatting, each with its own syntax. These five fall into two main classifications:

XML-based

  • TTXT Format
  • TTML Format

Text-based

  • WebVTT
  • Sub-Station Alpha (SSA)
  • Advanced Sub-Station (ASS)

With five different ways to represent the same information, I plan to avoid duplication as much as possible. Done properly, this not only makes the initial development easier, but also simplifies future maintenance and improvements. A real example of why this matters: in May/June 2011 I had a nearly-complete initial version of a WebVTT parser, with just a few CSS-related quirks to work out. After I had put it down for a while, the standard evolved such that much of what I had done was obsolete. Since it wasn’t modular, most of it has been discarded into a file on my desktop.

Looking forward, I see two independent cases for code maintenance: format evolution and CSS evolution. Similar domain problems have been solved by translating the source (raw subtitles) into a universal intermediary language, or machine interpretation, which is then processed and output. Human language translation has been approached like this, as have cross-platform programming languages. Both Java and .NET compile to an intermediate language (called bytecode and MSIL, respectively), which is then translated into the desired, platform-specific output at runtime. While the second translation adds a very small overhead, it has been said by a Microsoft engineer that maintainability increases drastically.

Essentially, given n input formats and m output formats, an intermediate representation means there are only n + m translators (plus the one intermediate format itself) rather than n × m. For the five input formats here, that is the difference between 5 + m and 5 × m translation paths.

[Figure: Translation Count]

Strengthening the argument for this approach is that certain common display functionality (CSS class lookups, creation, etc.) will be required by multiple parsers, and changes must be reflected in all parsers. It is with this knowledge that I plan to stand on the shoulders of giants.
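
To make the shape of that design concrete, here is a small hypothetical sketch, written in C# for brevity even though Popcorn.js itself is JavaScript (all names below are my own invention): n parsers each emit one shared cue model, and m output paths consume only that model.

using System;
using System.Collections.Generic;

// Hypothetical sketch of the intermediate-representation approach.
// The single shared intermediate model that every parser targets.
public record Cue(TimeSpan Start, TimeSpan End, string Text, string? CssClass);

// n implementations: TTXT, TTML, WebVTT, SSA, ASS, ...
public interface ISubtitleParser
{
    IEnumerable<Cue> Parse(string rawSubtitles);
}

// m implementations: DOM overlay, plain-text track, ...
public interface ICueRenderer
{
    void Render(IEnumerable<Cue> cues);
}

// Adding a new format means writing one parser; changing the display
// logic means touching only the renderers, never both at once.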

InnerHTML vs. DOM Manipulation Part 2 (IE9 Edition)

A while ago I researched some of the differences in modifying the structure of an HTML document using innerHTML or the DOM. The summary at the time was that, while using innerHTML means less code, most modern browsers show negligible differences in execution time, provided, of course, that you minimize modifications made directly to the active document (web page), as each one can potentially cause a redraw operation. The bottom line was that it came down to personal preference, with a tradeoff between less code and more standards-adherence and reliability.

There is an epilogue to this story:

My personal preference is to use standards for cross-browser reliability. Unless file size is a concern (low-bandwidth networks) or large-scale modifications are needed (re-creating large portions of a document programmatically), working with the DOM directly is easier in the long run. It seems the output of innerHTML can vary across browsers, depending on the node structure underneath. This can make comparing DOM structures (in unit tests, for example) by innerHTML difficult to get right cross-browser. Whereas most browsers will concatenate nodes together without a space (<div><span></span></div>), IE seems, in some or most instances, to insert spaces between nodes in its output: (<div> <span> </span> </div>).

While this may seem an edge case, it’s one more argument for being mindful about using DOM-standard tools and APIs.

Test Driving Popcorn on IE10

I’ve been hearing a bit of buzz about IE10 increasing its support for HTML5 and CSS3, so I thought I’d take Popcorn.js for a whirl on it. VirtualBox made the install process pretty painless; I only had to think for myself once. Since IE10 is built upon Windows 8, I had to download the developer preview. There were quite a few tutorials on the web outlining the install process (some decrying that certain VMs didn’t work at the time), but VirtualBox seemed the most tried and true. In fact, between the time those articles were written and the time I tried this, newer VirtualBox versions had been released with a special “Windows 8” configuration setup. I first followed this PCWorld one, but eventually felt comfortable winging it.

The only hang-up I had was with hardware-level virtualization. I would consistently receive VM errors when starting up, and ignoring them prompted me with “Windows install error, please reboot” and HAL_INITIALIZATION_FAILED dialogs. Apparently my processor supported hardware virtualization, and VirtualBox/Windows was trying to use it, but it was disabled in the BIOS. Changing that one setting allowed for a clean and easy install.

Things are quite slow on both my host and guest OSes right now, but I guess that can be expected with only 3 GB of RAM to run both, plus VirtualBox, Apache, Firefox, and IE. Popcorn itself performed quite well, especially considering the pre-beta nature of Win8 and IE10 and the aforementioned memory constraints.

I was able to test with minimal effort, as my host OS has Apache set up on it, and Win8 is configured to connect to the network through it (the host OS) using NAT. All I had to do on Win8 was enter the IP address of my host OS, followed by the port I run Apache on, and I could run everything I needed remotely.

Building jQuery UI

First off, jQuery UI is great. It makes creating a responsive UI simple, and the tabs capability is excellent. However, I recently came across a use case where I wanted to dynamically alter the URL of a tab (specifically, the query string), something it seemed the library didn’t allow. The load-time href of the tab was wrapped and inaccessible at run-time. The fix would be simple, but the source file I was working with was the minified output of their customizable web-based build system, making modification impossible. Downloading fresh, unminified source was equally impractical. So I looked into building it myself. It turned out to be surprisingly easy.

Obtaining the source

First, I downloaded the latest stable source (1.8.16 at this time) and took a look through it. All the modules had their individual files. In the build directory, I noticed “build.xml”. This is an Ant build configuration file, so I then proceeded to set up Ant.

Setting up Ant

It took reading some documentation, but this was equally simple. I already had Java, so I jumped right to getting the application:

  1. Download the Ant binaries from here
  2. After unzipping it, I configured the following environment variables:
  • %ANT_HOME% = Installation path for Ant
  • %JAVA_HOME% = Path to current Java install on my system (not technically necessary, since this was already in my %PATH%)
  • Add %ANT_HOME%\bin to %PATH% so I wouldn’t need to fully qualify the exe when executing it

Building the Source

Again, this was simple.

  1. Open up a command prompt
  2. cd to the build folder inside the jquery.ui source directory (where build.xml is)
  3. “ant build deploy-release” to build it to build/dist

There were a few other build options I could’ve chosen. “Minify” will just output the minified source, without license info, documentation, or zipping it.

I also experimented with building jQuery UI 1.9 beta, but ran into some issues. Getting the source from GitHub was simple, but they’ve switched their minification engine from the Google Closure Compiler to uglify.js, since it “saves 4 minutes per build, and actually produces slightly smaller files”. This posed an issue, since the current version of the code executes uglify.js from a shell script. I converted the Linux shell script to a Windows batch file, with a little help to get the directory name. This all went fine until it came to actually executing the JS from the command line. There is no console JS engine by default on Windows, and the js.exe output from when I built Firefox seemed to error when it ran the script (an issue referencing “global”). Since using 1.8.16 was alright for my immediate purposes, I didn’t look into further solutions.

I could’ve tried installing Rhino to get around this, but it’s something to try in the future. Another solution is obviously to try running it on Linux. I spent a bit of time updating my Ubuntu VM install after getting everything working. Until I try and venture into building 1.9, I’m just fine developing off of 1.8.16. After all, it works great.

Software teaching life lessons

More often than not, solving problems through software is simple. The issue/requirements are there in front of you, clear and needing to be solved. Every once in a while though, something stands up and teaches me a lesson.

Every system relies on external components. Whether it’s a third-party library to help with a specific purpose (jQuery, ComponentOne), a runtime to compile applications against (.NET, Java), or a standard library for simple operations (iostream, math.h), all systems have dependencies. They’re everywhere.

99 times out of 100, when something goes wrong with the code, it’s my fault. These are the cases I mentioned earlier: clear issues, simple solutions. It’s the tricky ones, the ones that exist outside my program’s black box, where creative coding and problem solving come into the picture. These are the ones I like. They teach me to think outside the box, to take nothing for granted, and that, above all, we’re all human.

It’s what makes software fun to write.

Schrödinger’s bug

Today I learned a valuable lesson in debugging. Today I learned what it can be like to chase your own tail. I had accomplished the rare feat of introducing a bug through the very act of debugging.

A common feature for software is to behave differently depending on the rights of the user. Super users should see an entire list, while normal users have less power, seeing only the list items pertaining to them. Simple to code, tricky to test. All it takes is an “if” statement, like this:

if(superuser) {
   // Run special code
} else {
   // Run not-so special code
}

And then you write the code. Of course, with my default rights, I can only exercise one half of this conditional (I am a superuser). Rather than modifying the code and recompiling, or modifying my rights, I thought it would be great to use a debugging tool to bypass this check entirely and hop right into the non-privileged branch. Big mistake. The first time I tried this, I got a null reference error while calling a function. Not out of the ordinary: the function called a web service; maybe I wasn’t disposing of something properly. Twenty fruitless minutes later, I’d rewritten that function several ways, trying every way of calling, disposing, and scoping I knew. When that didn’t work, I looked higher up.

The erroring function was called in one other place, in a non-static context. No luck when I tried changing that. Using an intermediary variable for the result? Still it crashed. After a lot of simplification, I managed to make the null error happen one line further up in the code:

MyClass c = new MyClass();

Now this didn’t make sense. MyClass had a default, empty constructor and no class-level fields, yet running this line gave me a Null Reference Exception. Getting tired of stepping the debugger forward each time, I decided to just comment out the check, recompile, and continue debugging. Then the error stopped. After a lot of hunting through disassembly, I found the rub: using the debugger to bypass a line of code changed (or rather, didn’t change) some values in memory that the instructions below expected. Eventually, when it tried to issue a “call” to a memory address, there would be random garbage data there. A segfault may have been happening behind the scenes, which .NET happened to present as a… Null Reference Exception.

My attempt to observe program state altered the very state I was trying to observe. Schrödinger’s bug.