Hello Scene — What’s in a Window?

Yes, what is a Window? How do I draw, how do I handle the user’s mouse/keyboard/joystick/touch/gestures?

As a commenter pointed out on my last post, I’ve actually covered these topics before. Back then, the operative framework was ‘drawproc’. The design center for drawproc was being able to create ‘modules’, which were .dll files, and then load them dynamically with drawproc at runtime. I was essentially showing people how something like an internet browser might work.

Things have evolved since then, and what I’m presenting here goes more into the design choices I’ve made along the way. So, what’s in a “Window”?

It’s about a couple of things. Surely it’s about displaying things on the screen. In most applications, the keyboard, mouse, and other ‘events’ are also handled by the Window, or at least strongly related to it. This has been true in the Windows environment from day one, and it persists to this day. For my demo scene apps, I want to make things as easy as possible. Simply, I want a pointer to some kind of frame buffer, where I can just manipulate the values of individual pixels. That’s first and foremost.

How to put random pixels on the screen? In minwe, there are some global variables created as part of the runtime construction. So, let’s look at a fairly simple application and walk through the bits and pieces available.

Just a bunch of random pixels on the screen. Let’s look at the code.

#include "apphost.h"
 
void drawRandomPoints()
{
    for (size_t i = 0; i < 200000; i++)
    {
        size_t x = random_int(canvasWidth-1);
        size_t y = random_int(canvasHeight-1);
        uint32_t gray = random_int(255);
 
        canvasPixels[(y * canvasWidth) + x] = PixelRGBA(gray, gray, gray);
    }
}
 
void onLoad()
{
    setCanvasSize(800, 600);
    drawRandomPoints();
}

That’s about as simple a program as you can write and put something on the screen. In the ‘onLoad()’, we set the size of the ‘canvas’. The canvas is important as it’s the area of the window upon which drawing will occur. Along with this canvas comes a pointer to the actual pixel data that is behind the canvas. A ‘pixel’ is this data structure.

struct PixelRGBA 
{
    uint32_t value;
};

That looks pretty simplistic, and it really is. Pixel values are one of those things in computing that have changed multiple times over the years, and there are tons of representations. If you want to see all the little tidbits of how to manipulate the pixel values, you can check out the source code: pixeltypes.h
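
To give a flavor of what lives in there, here’s a rough sketch of the kind of channel helpers you’d expect to find. The exact names in pixeltypes.h may differ, and the packing order assumed here (alpha in the top byte, then red, green, blue) is explained a bit further down in this post.

#include <cstdint>

// A sketch only; the real helpers live in pixeltypes.h and may be named
// differently.  These assume the ARGB packing described later in this post.
struct PixelRGBA
{
    uint32_t value;

    constexpr uint32_t alpha() const { return (value >> 24) & 0xff; }
    constexpr uint32_t red()   const { return (value >> 16) & 0xff; }
    constexpr uint32_t green() const { return (value >> 8)  & 0xff; }
    constexpr uint32_t blue()  const { return  value        & 0xff; }
};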

In this case, the structure is the simplest possible, tailored to the Windows environment and to putting something on the screen quickly with the least amount of fuss. How this actually gets displayed on screen is by calling the ancient GDI API ‘StretchDIBits’:

int pResult = StretchDIBits(hdc,
    xDest,yDest,
    DestWidth,DestHeight,
    xSrc,ySrc,
    SrcWidth, SrcHeight,
    gAppSurface->getData(),&info,
    DIB_RGB_COLORS,
    SRCCOPY);

The fact that I’m using something from the GDI interface is a bit of a throwback, and current day Windows developers will scoff, smack their foreheads in disgust, and just change the channel. But, I’ll tell you what, for the past 30 years this API has existed and worked reliably, and counter to any deprecation rumors you may have heard, it seems stable for the foreseeable future. So, why not DirectXXX something or other? Well, even DirectX still deals with a “DeviceContext”, which will show up soon, and I find the DirectXXX interfaces to be a lot of overkill for a very simple demo scene, so here I stick with the old.

There are lots of bits and pieces in that call to StretchDIBits. What we’re primarily interested in here is the ‘gAppSurface->getData()’. This will return the same pointer as ‘canvasPixels’. The other stuff is boilerplate. The best part is, I’ve encapsulated it in the framework, such that I’ll never actually call this function directly. The closest I’ll come to this is calling ‘refreshScreen()’, which will then make this call, or other necessary calls to put whatever is in the canvasPixels onto the actual display.
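
Putting those pieces together, drawing from application code looks like the sketch below. The explicit refreshScreen() call is just to make the hand-off visible; as we’ll see later, the framework also calls it once on our behalf after onLoad() returns.

#include "apphost.h"

void onLoad()
{
    setCanvasSize(800, 600);

    // scribble directly into the frame buffer...
    for (size_t x = 0; x < canvasWidth; x++)
        canvasPixels[(100 * canvasWidth) + x] = PixelRGBA(0xffff0000);

    // ...then ask the framework to present it.  Under the covers this
    // ends up in the StretchDIBits call shown above.
    refreshScreen();
}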

And where does this pixel pointer come from in the first place? The design considerations here are about creating something that interacts well with the Windows APIs, as well as something I have ready access to. The choice I make here is to use a DIBSection. The primary thing we need in order to interact with the various drawing APIs (even DirectX) is a DeviceContext. This is basically a pointer to a data structure that Windows can deal with. There are all kinds of DeviceContexts, from ones that show up on a screen, to ones associated with printers, to ones that are just in memory. We want the latter. There are lots of words to describe this, but the essential code can be found in User32PixelMap.h, and the real working end of that is here:

bool init(int awidth, int aheight)
{
    fFrame = { 0,0,awidth,aheight };
 
    fBytesPerRow = winme::GetAlignedByteCount(awidth, bitsPerPixel, alignment);
 
    fBMInfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    fBMInfo.bmiHeader.biWidth = awidth;
    fBMInfo.bmiHeader.biHeight = -(LONG)aheight;    // top-down DIB Section
    fBMInfo.bmiHeader.biPlanes = 1;
    fBMInfo.bmiHeader.biBitCount = bitsPerPixel;
    fBMInfo.bmiHeader.biSizeImage = fBytesPerRow * aheight;
    fBMInfo.bmiHeader.biClrImportant = 0;
    fBMInfo.bmiHeader.biClrUsed = 0;
    fBMInfo.bmiHeader.biCompression = BI_RGB;
    fDataSize = fBMInfo.bmiHeader.biSizeImage;
 
    // We'll create a DIBSection so we have an actual backing
    // storage for the context to draw into
    // BUGBUG - check for nullptr and fail if found
    fDIBHandle = ::CreateDIBSection(nullptr, &fBMInfo, DIB_RGB_COLORS, &fData, nullptr, 0);
 
 
    // Create a GDI Device Context
    fBitmapDC = ::CreateCompatibleDC(nullptr);
 
    // select the DIBSection into the memory context so we can 
    // perform operations with it
    fOriginDIBHandle = ::SelectObject(fBitmapDC, fDIBHandle);
 
    // Do some setup to the DC to make it suitable
    // for drawing with GDI if we choose to do that
    ::SetBkMode(fBitmapDC, TRANSPARENT);
    ::SetGraphicsMode(fBitmapDC, GM_ADVANCED);
 
    return true;
}

That’s a lot to digest, but there are only a couple of pieces that really matter. First, in the call to CreateDIBSection, I pass in fData. This will be filled in with a pointer to the actual pixel data. We want to retain that, as it’s what we use for the canvasPixels pointer. There’s really no other place to get it.

Further down, we see the creation of a DeviceContext, and the magic incantation of ‘SelectObject’. This essentially associates the bitmap with that device context. Now everything is set up both for Windows to make graphics library calls against the context, and for us to do whatever we want with the pixel pointer. This same trick makes it possible to use other libraries, such as freetype, or blend2d, pretty much anything that just needs a pointer to a pixel buffer. So, this is one of the most important design choices to make: small, lightweight, and it supports multiple different ways of working.
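
As a tiny illustration of that point, the pointer the surface hands out and the canvasPixels global refer to the same DIBSection memory, so writing through either one shows up on the screen. This only uses calls already shown above:

#include "apphost.h"

void onLoad()
{
    setCanvasSize(320, 240);

    // gAppSurface->getData() and canvasPixels both point at the memory
    // that CreateDIBSection handed back as fData
    PixelRGBA* pixels = (PixelRGBA*)gAppSurface->getData();

    // paint the top-left pixel red through the surface pointer;
    // canvasPixels[0] = PixelRGBA(0xffff0000) would do the same thing
    pixels[0] = PixelRGBA(0xffff0000);
}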

I have made some other simplifying assumptions while pursuing this path. One is in the pixel representation. I chose rgba-32bit, and not 15-, 16-, 24-, or 8-bit, which are all valid and useful pixel formats. That is basically in recognition that when it comes to actually putting pixels on the screen, 32-bit is by far the most common, so using it as the native format introduces the fewest transformations and thus speeds up the process of getting things onto the screen.

There is a bit of an implied choice here as well, which needs to be resolved one way or another when switching between architectures. This code was designed for the x64 (Intel/AMD) environment, where “little-endian” is how integers are represented in memory. If you’re not familiar with this, here’s a brief tutorial.

This concerns how integer values are actually laid out in memory. Let’s look at a hexadecimal representation of a number for easy viewing: 0xAABBCCDD (2,864,434,397)

The ‘endianness’ of a machine determines which part of the number sits at the lowest memory address. On a Big Endian machine, the value is laid out in memory just as it reads:

AA BB CC DD

On a Little Endian machine, this would be laid out in memory as:

DD CC BB AA
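
You can see this for yourself with a few lines of standalone code:

#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t number = 0xAABBCCDD;
    const uint8_t* bytes = (const uint8_t*)&number;

    // On a little-endian x64 machine this prints: DD CC BB AA
    // On a big-endian machine it would print:     AA BB CC DD
    for (int i = 0; i < 4; i++)
        printf("%02X ", bytes[i]);
    printf("\n");

    return 0;
}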

So, how do we create our pixels?

PixelRGBA(uint32_t r, uint32_t g, uint32_t b, uint32_t a) : value((r << 16) | (g << 8) | b | (a << 24)) {}

A bunch of bit shifty stuff leaves us with:

AARRGGBB

  • AA – Alpha

  • RR – Red

  • GG – Green

  • BB – Blue

On a Little Endian machine, this will be represented in memory (0’th offset first) as:

BB GG RR AA

This layout might be called ‘bgra32’ in various places, and it’s effectively the native format for Windows. Since this has been a well-worked topic over the years, and there’s hardware in the graphics card to deal with it one way or the other, it doesn’t really matter which way round things go. Still, it’s good to know what’s happening under the covers: if you want to use convenient APIs, you can, and if you want the most raw speed, you can forgo such APIs and roll your own.

Just a couple of examples.

  • 0xffff0000 – Red

  • 0xff00ff00 – Green

  • 0xff0000ff – Blue

  • 0xff00ffff – Turquoise

  • 0xffffff00 – Yellow

  • 0xff000000 – Black

  • 0xffffffff – White

Notice that in every case the alpha ‘AA’ byte is ‘ff’. By convention, this means these pixels are fully opaque, not transparent. For now, we’ll just take it as necessary, and later we’ll see how to deal with transparency.
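
Plugging channel values into the constructor shown earlier reproduces exactly those packed numbers, with alpha passed last and landing in the top byte:

// The constructor packs (r, g, b, a) into the same values as the table above
PixelRGBA red(0xff, 0x00, 0x00, 0xff);      // red.value    == 0xffff0000
PixelRGBA yellow(0xff, 0xff, 0x00, 0xff);   // yellow.value == 0xffffff00
PixelRGBA black(0x00, 0x00, 0x00, 0xff);    // black.value  == 0xff000000
PixelRGBA white(0xff, 0xff, 0xff, 0xff);    // white.value  == 0xffffffff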

Well, this has been a handful, but now we know how to manipulate pixels on the screen (using canvasPixels), we know where those pixels came from, and we know how the values get presented in the window. With a little more work, we can have some building blocks for simple graphics.

One of the fundamentals of drawing most primitives in 2D, is the horizontal line span. If we can draw horizontal lines quickly, then we can build up to other primitives, such as rectangles, triangles, and polygons. So, here’s some code to do those basics.

#include "apphost.h"
 
// Some easy pixel values
#define black   PixelRGBA(0xff000000)
#define white   PixelRGBA(0xffffffff)
#define red     PixelRGBA(0xffff0000)
#define green   PixelRGBA(0xff00ff00)
#define blue    PixelRGBA(0xff0000ff)
#define yellow  PixelRGBA(0xffffff00)
 
// Return a pointer to a specific pixel in the array of
// canvasPixels
INLINE PixelRGBA* getPixelPointer(const int x, const int y) 
{ 
    return &((PixelRGBA*)canvasPixels)[(y * canvasWidth) + x]; 
}
 
// 
// Copy a pixel run as fast as we can
// to create horizontal lines.
// We do not check boundaries here.
// Boundary checks should be done elsewhere before
// calling this routine.  If you don't, you run the risk
// of running off the end of memory.
// The benefit is faster code possibly.
// This is the workhorse function for most other
// drawing primitives
INLINE void copyHLine(const size_t x, const size_t y, const size_t len, const PixelRGBA& c)
{
    unsigned long * dataPtr = (unsigned long*)getPixelPointer(x, y);
    __stosd(dataPtr, c.value, len);
}
 
// Draw a vertical line
// done as quickly as possible, only requiring an add
// between each pixel
// not as fast as HLine, because the pixels are not contiguous
// but pretty fast nonetheless.
INLINE void copyVLine(const size_t x, const size_t y, const size_t len, const PixelRGBA& c)
{
    size_t rowStride = canvasBytesPerRow;
    uint8_t * dataPtr = (uint8_t *)getPixelPointer(x, y);
 
    for (size_t counter = 0; counter < len; counter++)
    {
        *((PixelRGBA*)dataPtr) = c;
        dataPtr += rowStride;
    }
}
 
//
// create a rectangle by using copyHLine spans
// here we do clipping
INLINE void copyRectangle(const int x, const int y, const int w, const int h, const PixelRGBA &c)
{
    // We calculate clip area up front
    // so we don't have to do clipLine for every single line
    PixelRect dstRect = gAppSurface->frame().intersection({ x,y,w,h });
 
    // If the rectangle is outside the frame of the pixel map
    // there's nothing to be drawn
    if (dstRect.isEmpty())
        return;
 
    // Do a line by line draw
    for (int row = dstRect.y; row < dstRect.y + dstRect.height; row++)
    {
        copyHLine(dstRect.x, row, dstRect.width, c);
    }
}
 
// This gets called before the main application event loop
// gets going.
// The application framework calls refreshScreen() at least
// once after this, so we can do some drawing here to begin.
void onLoad()
{
    setCanvasSize(320, 240);
 
    // clear screen to white
    gAppSurface->setAllPixels(white);
 
    copyRectangle(5, 5, 205, 205, yellow);
 
    copyHLine(5, 10, 205, red);
 
    copyHLine(5, 200, 205, blue);
 
    copyVLine(10, 5, 205, green);
    copyVLine(205, 5, 205, green);
 
}

The function ‘getPixelPointer()’ is pure convenience. It just gives you a pointer to a particular pixel in the canvasPixels array, a jumping-off point. The function copyHLine() is the workhorse, which will be used time and again in many situations. In this particular case, there is no boundary checking going on; that’s a design choice. Leaving off boundary checking makes the routine faster, by a tiny bit, but it adds up when you’re potentially drawing millions of lines at a time.

The implementation of the copyHLine() function contains a bit of something you don’t see every day.

__stosd(dataPtr, c.value, len);

This is a compiler intrinsic specific to the Microsoft compiler on Windows. It operates somewhat like memset(), but instead of filling a memory range with a single byte value, it fills that range with a 32-bit value. This is perfect for rapidly filling a span in the canvasPixels array with our 32-bit pixel value. Being a compiler intrinsic, we can assume it’s implemented about as optimally as this operation can be. Of course, you can only know for sure if you do some measurements. For now, we’ll stick with it, as it does what we want.
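
If you’re building with a compiler that doesn’t provide __stosd, a plain std::fill_n over the same 32-bit values expresses the identical operation, and optimizers generally turn it into comparable code. Here’s a sketch of such a stand-in (not part of minwe):

#include <algorithm>    // std::fill_n

// Portable stand-in for the __stosd call: store the 32-bit pixel value
// 'len' times starting at (x, y).  Same contract as copyHLine above,
// including the lack of boundary checking.
INLINE void copyHLinePortable(const size_t x, const size_t y, const size_t len, const PixelRGBA& c)
{
    uint32_t* dataPtr = (uint32_t*)getPixelPointer(x, y);
    std::fill_n(dataPtr, len, c.value);
}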

The copyRectangle() function simply calls the copyHLine() function the required number of times. Notice here that we do clipping of the rectangle up front (the intersection call). Since we decided copyHLine() would not do any clipping, we do the clipping in the higher-level primitives. Done here, the clipping occurs only once, and then we can feed known-valid coordinates and lengths to the copyHLine() routine without having to check anything in the inner loop.

Deciding when to clip or range-check is a key aspect of the framework. Delaying such decisions to the highest level possible is a good design strategy, and of course you can change these choices to match whatever you want to do; that flexibility is part of the framework’s design as well.
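
For instance, if you’d rather have the span routine protect itself, a checked wrapper over copyHLine() might look like the following sketch (not part of minwe, just an illustration of moving the range check down a level):

// A sketch of clipping at the lowest level instead: clamp the span to the
// canvas, then hand the known-valid range to the fast, unchecked copyHLine()
INLINE void copyHLineChecked(int x, int y, int len, const PixelRGBA& c)
{
    if (y < 0 || y >= (int)canvasHeight)
        return;

    if (x < 0) { len += x; x = 0; }         // clip against the left edge
    if (x + len > (int)canvasWidth)         // clip against the right edge
        len = (int)canvasWidth - x;

    if (len <= 0)
        return;

    copyHLine((size_t)x, (size_t)y, (size_t)len, c);
}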

The framework will always try to be lightweight and composable. It tries to keep the opinionated API as minimal as possible, not forcing particular design philosophies at the exclusion of others.

With that, we’re at a good stopping point. We’ve got a window up on the screen. We know how to draw everything from individual pixels to straight lines and rectangles, and our executable is only 39K in size. That in and of itself is interesting, and over the next couple of articles we’ll see whether we can maintain that small size while increasing capability. Remember, the Commodore 64 of old, a mainstay of the demo scene, had only 64K of RAM to play with. Let’s see what we can do with the same constraint.

Next time around, some input and animation timers.
