Vectored Exception Handling and OutputDebugString

    Try me

    If you write high-performance game code in C++, there is an important language feature that is usually simply off the table: structured exception handling, in the form of try/catch statement blocks:

    try
    {
      /* code goes here that throws an exception, like so: */
      throw MyException(/*arguments*/);
    }
    catch (const MyException& e) /* catch by const reference to avoid copies and slicing */
    {
      /* handle exception e that was thrown in the try block */
    }
    
    

    A quarrel of stances

    Firstly, try/catch is a bit broken in C++ because the language ‘forgot’ to add ‘finally’, which is present in Java and C# to handle the situation where all code paths converge before leaving the function block. In those languages, ‘finally’ blocks are very explicit about releasing resources like file handles or memory buffers after the exceptions have been handled. Stroustrup defends this omission by stating that the problem can be solved using RAII (Resource Acquisition Is Initialization), which is a valid, more object-oriented alternative.
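
    Since RAII carries the weight of the argument here, a minimal sketch of the idea (the FileGuard class is my own illustration, not a standard type): the destructor plays the role of ‘finally’ and runs on every exit path, exception or not.

        #include <cstdio>

        class FileGuard
        {
            FILE* m_file;
        public:
            explicit FileGuard(const char* path) : m_file(std::fopen(path, "rb")) {}
            ~FileGuard() { if (m_file) std::fclose(m_file); } // the implicit 'finally'
            FILE* get() const { return m_file; }
        private:
            FileGuard(const FileGuard&);            // non-copyable (pre-C++11 style)
            FileGuard& operator=(const FileGuard&);
        };

        void Parse()
        {
            FileGuard file("data.bin");
            /* code that may throw; the file is closed no matter how we leave */
        }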

    While that is true, it forces people to generate class-structure overhead, which you have to keep under control if you’re writing game code. Secondly, Microsoft went so far as to actually add ‘finally’ syntax as a Microsoft-specific compiler extension. Thirdly, if you look at Java, C# and Ruby, they all provide a finally statement block.

    But apart from language syntax issues, these are the real reasons to stay away from try/catch blocks in game code:

    1. try/catch can lead to exceptions being propagated from one call up to the parent call and so on. The instruction pointer typically jumps around the stack, meaning that your code cache gets utterly trashed. Performance-wise, this is by far the worst way of handling exceptional conditions.
    2. The definition of what exactly counts as an ‘exception’ is not always clear to all programmers, and can become quite a philosophical debate. I’ll argue here that handling certain use cases as exceptions betrays a certain bias in your code as to what is an exceptional operational condition and what is not. For game code, this is weird, since you know all conditional code paths at all times. The last thing that should happen is to have control yanked from under your feet by some 3rd-party library call that bubbles an unknown exception up to the surface at totally the wrong place. And to make matters even more complex: there are 2 types of exceptions: system exceptions and standard exceptions.
    3. Stack unwinding during exception catching does not travel across threads, so exceptions have to be explicitly handed over to the proper thread in multi-threaded applications. In the end, programmers wind up with a bit of a garbage bin of catch clauses at the top of the application threads. It’s obviously bad, but it happens more than you think.
    4. try/catch usage implies that you can trace back to the object type that generated the exception. This requires Run-Time Type Information (RTTI) to be enabled during compilation, so that you can use the typeid operator and dynamic_cast. Obviously the performance penalty of dynamic_cast is vicious, and these operators trash the address registers and cache lines, but the increased footprint of the executable and the associated memory cost per object also add to the wrong side of the equation. And since we’re talking exceptions, most of your code shouldn’t even need it.

    Uh, wait..

    Ok, so basically game code is totally in love with RAII, though it might not always be implemented in a ‘nice’ (if such a thing exists) object-oriented way. But wait! You said there are still system exceptions. What about those?

    Indeed. The system sometimes throws exceptions that relate to OS operations that fail, for example failing to load a DLL. In such cases, it can be interesting to trap the exception somehow, without having to resort to the whole RTTI/try/catch overhead. On Windows, this can be done using Microsoft’s Vectored Exception Handling.

    Basically, it allows the application to install an additional custom exception handler for each exception error code that the system can throw, using the following functions:

    LONG WINAPI VectoredExceptionHandler(PEXCEPTION_POINTERS pExceptionInfo)
    {
        /* inspect pExceptionInfo->ExceptionRecord->ExceptionCode here */
        return EXCEPTION_CONTINUE_SEARCH;
    }

    PVOID handle = AddVectoredExceptionHandler(1, VectoredExceptionHandler); /* returns a handle */
    RemoveVectoredExceptionHandler(handle); /* takes that handle, not the function pointer */

    Check the MSDN for details.

    Deja vu

    The point I wanted to make in this post is that while you’re inside your VectoredExceptionHandler function, there is a problem with using OutputDebugString(): it throws an exception itself when no debugger is attached.

    It does, however, seem to work when the debugger is attached. Suspicious as this is, at first I blamed my own code, and it had me immediately looking for uninitialized values or other memory corruption. But nothing came up, and the same code runs fine in other code paths (frequently). After isolating it in a test program, it turns out it actually re-enters the VEH exception handler, and thus causes a stack overflow.

    To conclude, this is of course not just an OutputDebugString() issue; it can happen at any point in your exception handler. As soon as you make a library call that depends on standard or system libraries, the VEH is vulnerable to re-entrancy. So guard your VEHs against re-entrant code paths, or you may end up never leaving the VEH at all :)
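
    A minimal sketch of such a guard, assuming MSVC (the g_inVEH flag is my own naming; __declspec(thread) gives each thread its own copy, so the check stays thread-safe):

        static __declspec(thread) bool g_inVEH = false;

        LONG WINAPI VectoredExceptionHandler(PEXCEPTION_POINTERS pExceptionInfo)
        {
            if (g_inVEH)
                return EXCEPTION_CONTINUE_SEARCH; /* re-entered: bail out immediately */
            g_inVEH = true;
            /* logging or other library calls that might raise a fresh exception */
            g_inVEH = false;
            return EXCEPTION_CONTINUE_SEARCH;
        }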

    Cheers!

    Natvis Matrix visualizer

    Long story short,

    I moved to Visual Studio 2012 and then realized that all that tweaky autoexp.dat scripting that sports those fancy instance viewers during debugging was for naught. Since I haven’t moved up to VS2013 yet, I’m a bit stuck with the “new and improved” Natvis system. Natvis stands for Native Visualizer, because your visualizer can be compiled into a DLL (e.g. from C# code), which is a performance move compared to the purely scripted approach (using autoexp.dat) that was in use before. The scripting was arguably hairy, and after a while, hair-pulling, so it’s only reasonable that Microsoft made an effort to improve things. So, out with the old, in with the new!

    I encountered 3 problems with Natvis. Firstly, the original feature set somehow got trashed, and only a thin set of features remains. This means, for example, that some information can no longer be displayed, or that you can’t format it correctly. Secondly, there are no conversion tools; everything that once worked is gone (although someone claims that autoexp.dat can be re-enabled for native edit-and-continue debugging in VS2013). Thirdly, if you don’t want to jump into and out of C# projects to change your debugger, you can use…. wait for it… XML-based .natvis scripts instead.

    That’s right. I used XML and scripting in the same sentence. Off with my head!

    A Matrix Class

    Suppose you have a matrix class, say:

     template<typename T, int rows, int cols>
     struct MyMatrix
     {
         // your matrix members and methods here..
     };

    That’s all great, but how do we visualize it? It seems the .natvis system can only list array items (or other containers) if the count is known. But in the case above, MyMatrix has template arguments for the row and column dimensions and is of rank 2. In fact, the matrix structure can be a recursive template type definition (and to be fair, that’s what I am actually using at the moment, but I omitted that for clarity). The template arguments can be captured using $T1, $T2, $T3, etc. But here’s where things get interesting: you have to put them in curly brackets (e.g. {$T1}) in the DisplayString element to fetch their values. Of course, when expanding the elements, the curly brackets should not be used! That took me a while to find out.

    Second point of interest, and perhaps the gist of this post: all examples out there refer to internal members of the structs, but what if you have a recursive type? Well, it was mentioned in passing in the official documentation, but you can use the this pointer just as well, and index-cast it any way you fancy, even using template types. This gives you access to anything that might be defined in the class/struct. The old format specifiers to print the values in a particular format are gone (apart from ,su for character strings, it seems). Attempts to refer to other scoped types (i.e. (othertype*)this) have failed so far, but maybe I made a typo.
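
    To illustrate both points with a hedged fragment (the full file follows below): curly brackets fetch a value inside DisplayString, bare expressions go everywhere else, and the this pointer gets cast to the element type $T1:

        <DisplayString>[{$T2}x{$T3}] first = {*(($T1*)this)}</DisplayString>
        <Expand>
          <Item Name="first">*(($T1*)this)</Item>
        </Expand>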


    A solution

    I came up with the following. It’s far from perfect, and it tends to clutter the debug view with all the floats (because “,g” no longer works), but it does show how to expand elements in a 2-dimensional ‘array’ through the this pointer.

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Place file into My Documents/Visual Studio 2012/Visualizers/ -->
    <AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
      <Type Name="MyMatrix&lt;*,*,*&gt;">
        <DisplayString>[{$T2}x{$T3}]({(*(($T1*)this))}, {*((($T1*)this)+1)}, {*((($T1*)this)+2)}, {*((($T1*)this)+3)}),({*((($T1*)this)+4)},{*((($T1*)this)+5)},{*((($T1*)this)+6)},{*((($T1*)this)+7)}),({*((($T1*)this)+8)},{*((($T1*)this)+9)},{*((($T1*)this)+10)},{*((($T1*)this)+11)}),({*((($T1*)this)+12)},{*((($T1*)this)+13)},{*((($T1*)this)+14)},{*((($T1*)this)+15)})</DisplayString>
        <Expand>
          <ArrayItems>
            <Direction>Forward</Direction>
            <Rank>2</Rank>
            <Size>$T2</Size>
            <ValuePointer>(($T1*)this)</ValuePointer>
          </ArrayItems>
        </Expand>
      </Type>
    </AutoVisualizer>


    Look, html-ified < and > brackets!


    You typically put a *.natvis file containing such scripts under the My Documents/Visual Studio 2012/Visualizers folder. You don’t have to restart Visual Studio; just restart your debugging session and it should work. If it does not (i.e. you get the standard visualizer for a typed instance when you hover over it during debugging), you have to dig into the XML to find the problem.

    Room for Improvement

    The DisplayString is hard-coded for a 4×4 matrix. The obvious approach is to put conditional clauses for every possible combination of {$T2}x{$T3}, but that sucks just as much as what I have now. Better would be to have some sort of iteration going on in the DisplayString, so that its length depends on the actual type, but that does not seem to be supported. Also, being able to split strings across multiple lines would be nice.
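
    For reference, the conditional-clause approach I’m dismissing would look something like this sketch (Natvis does accept a Condition attribute on DisplayString):

        <DisplayString Condition="$T2 == 2 &amp;&amp; $T3 == 2">[2x2]({*(($T1*)this)}, ...)</DisplayString>
        <DisplayString Condition="$T2 == 3 &amp;&amp; $T3 == 3">[3x3]({*(($T1*)this)}, ...)</DisplayString>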

    I must say that during debugging, I found peculiar recursive behavior when improperly scoping the this pointer in brackets, so watch out for that. I expected 16 repetitions at some point, but only 9 displayed in the visualizer. This could be investigated further, possibly leading to a recursive debug visualizer, so that larger types can be viewed as a collation of smaller types (i.e. a 4×4 = 3×3+7 = 2×2 + 5 + 7 or some such).

    I tested this on 4×4 matrices. For 2×2 or 3×3 matrices to work, you obviously need to remove or shorten the list of elements in the DisplayString (or make the list size-dependent). For non-square matrices, I’m not sure whether the correct size is automatically inferred from a given dimension (row or col) and the size of the struct. If not, you have to drop the Rank element and set the size to row*col.
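
    That fallback would look something like this (a hedged sketch of my own suggestion above, untested on non-square types):

        <ArrayItems>
          <Size>$T2 * $T3</Size>
          <ValuePointer>(($T1*)this)</ValuePointer>
        </ArrayItems>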

    Conclusion

    This demonstrates how to make .natvis display a matrix array of values (or any kind of list) with the limited feature set, regardless of the type structure that is underneath it.

    Some of the reasoning for switching from the autoexp.dat syntax to something more modern makes sense. You can write visual debugging tools for your data, which is fantastic, and the performance improvement is undoubtedly beneficial in any debugging session. But I think most people are fine with using simple tools, and being able to script them is something that has been downplayed a bit too much here. That the ‘old’ autoexp.dat system still functions under certain conditions makes forcing people to rewrite their visualizers in XML ill-founded and unreasonable.

    The bad, bad, bad idea of trying to frame scripts in XML expressions has bitten every seasoned programmer at least once (and then hopefully we remember the pain long enough), but seeing even Microsoft fall into that trap reminds us to be thoughtful and repeat “I shall not script in XML” to ourselves every once in a blue moon. Ruby or Python would have made more sense.

    To be fair, the Natvis feature set was extended in Visual Studio 2013 – still XML though – and this article is written from a VS2012 perspective. So it might be useful for people like me who can’t find a good reason to update to a new VS version every half-moon. It would be nice if VS2012 could be patched up to 2013 levels regarding these debugger issues, but that’s another story.

    If it’s useful, let me know how it fares, and of course all comments are welcome.

    Achievements!


    I’ve been coding again for a while now, and yesterday (that means half a year ago, since this post has been sitting in my drafts section that long) I managed to tick another ancient to-do off my list. Time for me to give back a few things that I learned the hard way.

    The brief of it: I wrote a little effect intro containing a number of objects rendered in 2 passes, one with a shadow map and percentage-closer filtering (single light source) and one with a spotlight-lit normal map using hardware. It uses render-to-texture and alpha testing. Nothing truly incredible or earth-shattering, but I’m quite proud of it because I feel I finally crossed a line. Actually 2 lines: the technical challenge, and the fact that I finally bulldozed through my own ignorance.

    The vertex buffer trickery earlier in this blog is still totally there, so I guess that works well.

    Here’s my list:

    • The creation of vertex buffers creates and destroys a separate worker thread in DirectX9. It is best to minimize the creation of buffers per frame by caching/reusing them. The performance gain in debug mode is dramatic.
    • FVF cannot be mapped to something meaningful if we use tangent, normal and bi-normal formats. Use D3DVERTEXELEMENT9 declarations instead (see the sketch after this list).
    • A small issue I learned: if you apply the template construction for tangent-space coordinates (normal map), the order in the vertex definition matters, and you must make sure the tangent template is mentioned after all the fixed-pipeline definitions (position, normal, color, texcoord).
    • I used a D3DFMT_R32F-type surface for the depth buffer / shadow map. It took me a while to realize I could only read the red component. It also supports alpha. D3DFMT_A32B32G32R32F works equally well, but I could not manage to store/read from the g and b channels (like e.g. w).
    • Unless I overlooked the obvious, it seems you can’t use the POSITION input in a pixel shader, since the GPU will have consumed it. If you need it, you have to pass it twice: once as POSITION and once as a TEXCOORD (which will be interpolated). That’s quite ridiculous.
    • HLSL matrices are column-major by default. It usually means a transpose of your matrices.
    • There are 2 ways to upload the TBN values. A smooth surface (curves, spheres) needs a new tangent each fragment, whereas a non-smooth surface (polygonal) needs a tangent per triangle. In the latter case, because we interpolate between vertices, we still have to duplicate them in each vertex, and you can’t share the vertices because they may belong to neighboring non-smooth triangles with different normals. In the first case, computation happens in the vertex shader (re-normalize in the fragment shader!), whereas the latter case can use pre-computed normals.
    • Normal maps behave badly in typical cases like spheres that converge in zenith points. Use appropriate topologies to get around this (e.g. a geosphere).
    • Arrays are only supported locally in shaders? Passing in values from outside did not seem to work. I came across this issue when setting multiple light sources for a fragment shader.
    • It seems bind/render/unbind is mandatory, even if you render multiple objects with the same shaders. I guess one possible way to optimize draw-call performance is to bundle objects together using vertex list tunnels.
    • I wasted a bit of time trying to combine the depth output of my shaders with forward-rendered lines with z-test clipping. I solved it by using a vertex shader for the lines as well. For now I was a bit too focused on getting the shadows to work to really dig into this, but I’m pretty sure this isn’t necessary.
    • The tex2DProj function only works from profile ps_4_0 onwards :/. Implementing PCF on ps_3_0 hardware meant cutting the filter back to 9 samples instead of 16, with noticeable loss of quality.
    • Filtering a 4×4 kernel in 2 for-loops did not work for me on ps_3_0. Not quite sure why; maybe the compiler ran out of registers? Unrolling the loops worked fine.
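
    Here is the kind of declaration I mean, as a hedged sketch (needs d3d9.h; the offsets assume tightly packed float3 position/normal and float2 texcoord, with the tangent last, per the ordering note above):

        D3DVERTEXELEMENT9 decl[] =
        {
            { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
            { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
            { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
            { 0, 32, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TANGENT,  0 }, /* after the fixed-pipeline parts */
            D3DDECL_END()
        };

        IDirect3DVertexDeclaration9* pDecl = NULL;
        pDevice->CreateVertexDeclaration(decl, &pDecl); /* pDevice: your IDirect3DDevice9* */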

    If you have any remarks, ideas, questions or answers, I’d be glad to hear about them!

    That said, this achievement has only kicked my enthusiasm up a few more notches and kept my coding skills brimming with new ideas. Here’s what happened in the second half of the year: after this little PC shadow-mapping try-out, I added a couple of other tricks to my proverbial bag:

    • I redid the whole DirectX setup, but now with the OpenGL shader model!
    • I built my own little demo rendering environment.
    • I gave a crash course on C++ AMP, yay!
    • I have the Oculus Rift working on OpenGL, also yay. Might post some code, although it’s pretty easy to port from the tutorial/SDK code.
    • I literally went all-out and turned my knowledge of GPGPU computing upside-down writing all sorts of stuff in CUDA and OpenCL, including direct integration with DirectX and OpenGL respectively. CUDA is blazingly fast and rather well designed, while OpenCL is just as clean as you would expect from OpenGL’s cousin, and actually a little friendlier. I’m curious where that leaves us with AMD’s new Mantle spec. My experiences with C++ AMP were also OK. I quite liked the fact that it’s totally integrated into your compilation, but debugging this shit remains a drag.
    • I now have this ‘R’ language on my radar for some reason.
    • Implemented a full testing bench for thousands of tests (including the publicly available bg, doa, sc1, wc3 etc. maps) and ran it on various kinds of path-finding algorithms. Lots of data to read and lots of tweaking ahead. I also did work on path-finding and path smoothing, and I’m totally digging this almost-theoretical shit! I should be able to publish something out of this soon.
    • The whole testing bench also works for 3D models (PLY), and includes a voxelizer, path finder and ray-caster. All of this works pretty fast, but there’s still room for improvement. The indexing scheme and octile storage structures are blazingly fast.
    • For DragonCommander, after refactoring the hell out of some parts, I finally got around to finishing the path-reuse scheme and adding orientation-based position selection, which works remarkably well. I’m anxious to see how close I can get to good group behavior, but I’m rather short on time. I kept a full diary of my progress, so I should be able to write something on this too.
    • Devised a little tracking routine to track the motion of n unrelated and unsorted objects in k-space, by estimating frame coherence based on path direction, velocity and acceleration. This routine was used in an IPEM project and will probably be published soon.

    I might start posting code fragments or at least some nifty demo material here if there’s a need for it, but at the moment I’m knee-deep in project work and paper-publishing deadlines, and I’ll stay in that zone for a little while longer.

    My wish list for next year: something with robotics? Drones perhaps?

    Anyway, I wish you all a warm and safe X-mas, or at least a happy new year! Cheers!

    South of here


    In July we got ourselves a well-deserved holiday. Well, I was forced to take it at work, to be more precise, but you don’t hear me complaining. I’m actually enjoying it fully with my wife and kid, since we’re currently in the one single spot in Europe that has amazing sun, blue sky and water, and bearable heat, combined with a great outdoors, good food and friendly people. I guess if you belong to my ‘friends’ category, the envious types are already going ballistic over this, since it’s all rain and downpour in the north. :)

    Some thoughts about the time passed. Last year, after headaches and financial worry, we wrestled ourselves through our house reconstruction. That went extremely well, actually. I’m now praying that the financial woes raging in the south of Europe are not going to affect our situation, but deep down, I can’t believe that won’t be the case. I’ve read far too much ZeroHedge to keep an ignorant stance on the matter. Time will tell how this develops.

    In May of this year (2012) we got ourselves a lovely son, Arthur, who is 2 months old at the time of writing. He’s “growing like cabbage” – a valid Dutch expression which I hope translates into correct English – and we are very lucky that he eats and sleeps like a pro. Even in this fairly hot summer at the Côte d’Azur, he’s managing unbelievably well, laughs a lot, and apart from not exposing him too much to the burning sun from noon to 4 pm, we don’t have to do anything special to keep him happy. A totally adorable darling he is. The women we encounter – without exception – have all fallen victim to his charms, overloading us with compliments. The kid’s got more fans than a rock star!

    Having such a happy little child really changed a few things in my life. It puts more focus on my relationship, makes me work harder to get things ready and happening, and there is of course the feeding routine, the nappies and diapers, the bathing ritual, singing songs and acting like an all-round idiot. Some of these things are easier to do than others. I used to be convinced that breaking into my sacred 8 hours of sleep would wreck me, and while some nights can definitely be rough, this has proven untrue for the most part. Isabelle takes a lot of the work out of my hands, so of course that skews the picture a bit, but the gist of it is still true. She often tells me of her admiration for single mothers who have to cope on their own, and I can totally relate to that. The total net effect of the 3 of us surviving through all of the days and nights – and this totally surprised me – is the realization that a human is much stronger than he thinks he is, and can bear much more than he thought possible. I generalized the statement because I can’t believe I’m that special at all.

    Let me explain: up until recently, I’d had a pretty minimalistic view of my personal capacities. I’m quite a sensitive person, and when people repeatedly tell you how hellish it will be to care for a small baby, well, you start to accept a mantra if it is repeated to you often enough. I sympathized and nodded, smiling politely as they told their stories, and reserved an ever-diminishing amount of hope that just maybe things would be different for me if I ever became fortunate enough to have kids. This was tough. And I mean really tough. For more than 10 years on end, people gave me lectures on what it means to have kids. I hold no grudge against them, of course, but it eats at your confidence, and when you are looking forward to the experience of fatherhood, it also eats away part of your idealistic world view. After a certain amount of time, the sincere hopes and longings I started out with were gradually buried under layers of fearful goo, leaving me with a mechanical hunch to pursue a faint shadow of what once were pristine motives, only adding to the doubts. Add to this a few biological fertility problems, and you may start to ‘get’ the overall picture of what we went through.

    And then Isabelle did get pregnant, and Arthur was born, and all of a sudden, you get to experience everything first-hand. This was nothing short of an emotional wall collapsing. Suddenly the hopes and yearnings that were buried in our deepest caves and canyons resonated firmly through our fibers. All the reasons and ideas literally came back to life, and one by one I started to revisit all the things people had told us about. And one by one, those absolute truths crumbled to pieces. Yes, it’s an added responsibility, and yes, you have to adjust your daily life and give up things, and yes, things smell bad sometimes, but it is by no means hard. Maybe it’s because I’ve heard everything there is to hear about having babies for the last 15 years – I had plenty of time to imagine what it would be like. Or maybe it’s because I’m older than most parents are when they get kids and can, to some extent, let go of my favorite occupations more easily. (It still stings sometimes.) Or maybe it is because enduring all the psychological hardship is finally paying dividends. Or maybe it’s simply because Arthur is just a super kid. Whatever the reason, what I can say is that I tremendously enjoy being a father. It has made me stronger and a bit more confident. I’m giving it my very best, and I know it’s working out when I hear the little guy laugh out loud, content, like he does every day. That’s in fact really all the feedback I need to regain my composure. It wipes out all that effort of storing my hopes away in that private place full of dusty cobwebs. It slowly dissolves the feeling of driving with the handbrake on that I’ve lived with for the last few years.

    In the process of all this, and next to my increased interest in following the financial developments around the world, I also discovered that I’ve grown fond of writing things down. For a lot of people (including me), structuring thoughts into strings of words realigns our minds to whatever meaning speaks from them. For those people, writing can be a means to grow, to build on top of what is already written. Mind you, there is absolutely no importance tied to the number of people reading what I write here, nor do I care whether people ‘like’ what I’ve written. It’s nice to hear that, of course. But if that expels me from the facebook generation, so be it. The important thing for me has already happened. I wrote down this text. And realizing the metaphysical importance of that process is another leap. As you write, you tend to reform sentences, replace words, restructure the content, elaborate, or cut out parts that have no added value. But the operations on the text also reflect a mental transformation. If you practice this a lot, you automatically evolve your brain patterns, too. It’s totally unsharable, but probably the most important aspect. I guess you could compare it to sitting at a bar. The first few times, you feel a bit uneasy starting conversations, but as you repeat the act, you grow more proficient at talking to people. I just never was the type to sit in a bar much, and with writing, there’s the added benefit that no one has to go through the various drafts of your musings. At university, besides “nerding” around in the demo-scene and playing chess online, the poetry mailing list was one of my favorite time-sinks. If people read its contents now – I secretly hope that nothing of it survived the test of time, though in this digital era you never know – they would probably have a good laugh. I suspect the unbridled creative nature of that occupation helped me develop a taste for writing.

    So yeah. Confidence. Not an easy topic to write about. It’s now been 2 months since I wrote the previous parts, and I’m still gathering bits and pieces every day, while at the same time learning to blend into this city of surrealism with increasing success. The process teaches me that protective environments have to be temporary, or they do more harm than good. I know I still have a lot of work to do, but I’m out there pushing for it. So here it is. My honest account, shared and published. I win this war. All that remain are the battles to survive, and I know these. Here I come again.

    More good news reached us today. One of my best friends welcomed a new baby! A little girl who already has a big sister and 2 great parents, whom we don’t get to see often these days, which is a shame. Isabelle and I wish the whole family all the best!

    Vertex format templates

    While my pregnant girlfriend is counting the weeks until D-day, I’ve managed to code a bit again. I may be more of an all-round game coder, but I know quite a bit about the math involved in lighting models and post-processing techniques. Still, I have to admit, I’m old-school. The last lighting function I wrote was in software, in an unrolled loop of a triangle span-pooled sub-texel renderer (in assembly). Hardware shaders – not to mention the various kinds of languages and hardware specifics – always seemed a bit of a drag to get into. Over the years, the field has matured quite a bit, and tapping into the huge amount of fancy papers on shading made the itch to try this myself only stronger. So I bit the bullet and dived into the NVidia shader tutorials.

    I immediately ran into a quite obvious fact: the vertex format is pretty much a defining factor in what you can do in your shader. The first shader stage – the vertex shader – obviously works on the vertex format. The second stage – the fragment shader – is directly based on the first one; it samples/interpolates from that stage. This may be less of an issue if you have an editor + compiler – which we happen to have in development at Larian at the moment. But for the sake of learning the trade, I’m sticking to hand-coding them at home for now.

    So why do I bring this up? Well, if you’re setting up a scene to render, then obviously you need to fill an object with data. The vertex format that you need to use for this is usually fixed. But as you go through the shader tutorials and the shaders gradually become more complex, the vertex format starts to change too. One can just start over in a new project and go from there. That’s fine; it works. However, I want to keep all my little scenes in the same app, so that I can browse through all the shader samples without hassle. The difficulty is that every sample may need a unique vertex format composition to feed its associated shaders. So how to specify those formats? I didn’t like the idea of putting in #ifdef guards, I didn’t like the idea of having a gazillion typedefs, so after a bit of googling I stumbled upon this.

    The idea is to use type lists, such that each vertex part (position, normal, color, tex coord, etc.) is included in a type list. Here’s some code:

        template <vertex_use use, typename Next = void>
        struct vertex_info
        {
            struct Vertex
                : public vertex_part<use>
                , public Next::Vertex
            {
                typedef vertex_info <use, Next> Type;
                void Lerp(Vertex& from, Vertex& to, float t)
                {
                    Next::Vertex::Lerp(from, to, t);
                    vertex_part<use>::Lerp(from, to, t);
                }
            };
    
            static const DWORD GetFlags()
            {
                return vertex_part<use>::GetFlag() | Next::GetFlags();
            }
    
            static void build_format(vertex_format & fmt)
            {
                Vertex v;
                vertex_node n;
                n.usage = use;
            n.offset = reinterpret_cast<byte *>(
                static_cast<typename Next::Vertex *>(&v)) /* typename: dependent type */
                - reinterpret_cast<byte *>(&v);
                fmt.use.push_back(n);
                Next::build_format(fmt);
            }
        };
    
        template <vertex_use use>
        struct vertex_info <use, void>
        {
            struct Vertex : public vertex_part<use>
            {
                typedef vertex_info <use, void> Type;
                void Lerp(Vertex& from, Vertex& to, float t)
                {
                    vertex_part<use>::Lerp(from, to, t);
                }
            };
    
            static const DWORD GetFlags()
            {
                return vertex_part<use>::GetFlag();
            }
    
        static void build_format(vertex_format & fmt)
        {
            Vertex v;
            vertex_node n;
            n.usage = use;
            n.offset = 0; /* terminal part: no Next base to offset against */
            fmt.use.push_back(n);
        }
        };

    For a detailed explanation of how to apply a type list to a vertex declaration, please visit the web-link above. The neat thing is that it can also be used to return the correct stride of your Vertex format, and even the DirectX9 flags used to communicate with the shader API. Nothing but goodies!

    But that is not all. If you implement something like Lerp for all your parts, you can instantly lerp between any sort of Vertex, like so:

        template <vertex_use use>
        struct vertex_part
        {
            const vertex_part<use> Lerp(const vertex_part<use>& a, const vertex_part<use>& b, float t) { return *this; }
        };
    
        template < >
        struct vertex_part<v_position>
        {
            float3 position;
            static DWORD GetFlag() { return D3DFVF_XYZ; }
            const vertex_part<v_position> Lerp(const vertex_part<v_position>& a,
                                               const vertex_part<v_position>& b,
                                               float t)
            {
                position = a.position + (b.position-a.position)*t;
                return *this;
            }
        };
    
        template < >
        struct vertex_part<v_normal>
        {
            float3 normal;
            static DWORD GetFlag() { return D3DFVF_NORMAL; }
            const vertex_part<v_normal> Lerp(const vertex_part<v_normal>& a,
                                             const vertex_part<v_normal>& b,
                                             float t)
            {
                 normal = a.normal + (b.normal-a.normal)*t;
                 D3DXVec3Normalize(&normal, &normal);
                 return *this;
            }
        };
    
        template < >
        struct vertex_part<v_color>
        {
            DWORD color;
            static DWORD GetFlag() { return D3DFVF_DIFFUSE; }
            const vertex_part<v_color> Lerp(const vertex_part<v_color>& a,
                                            const vertex_part<v_color>& b,
                                            float t)
            {
                float a_alpha = float((a.color & 0xFF000000) >> 24);
                float a_red   = float((a.color & 0x00FF0000) >> 16);
                float a_green = float((a.color & 0x0000FF00) >> 8);
                float a_blue  = float((a.color & 0x000000FF));
                float b_alpha = float((b.color & 0xFF000000) >> 24);
                float b_red   = float((b.color & 0x00FF0000) >> 16);
                float b_green = float((b.color & 0x0000FF00) >> 8);
                float b_blue  = float((b.color & 0x000000FF));
                float alpha = a_alpha + (b_alpha-a_alpha) * t;
                float red   = a_red   + (b_red-a_red)     * t;
                float green = a_green + (b_green-a_green) * t;
                float blue  = a_blue  + (b_blue-a_blue)   * t;
                color =    (((int)alpha << 24) & 0xFF000000) +
                           (((int)red << 16)   & 0x00FF0000) +
                           (((int)green << 8)  & 0x0000FF00) +
                           (((int)blue)        & 0x000000FF);
                return *this;
            }
        };


    Isn’t template specialization cute? Obviously you can do the same for slerp and other interpolation schemes.
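
    For completeness, a hedged usage sketch (it assumes the vertex_use enum declares v_position, v_normal and v_color, as the specializations above suggest):

        typedef vertex_info<v_position,
                vertex_info<v_normal,
                vertex_info<v_color> > > MyFormat;  // the type list
        typedef MyFormat::Vertex MyVertex;          // the composed vertex type

        MyVertex a, b, mid;
        /* ...fill a and b... */
        mid.Lerp(a, b, 0.5f);                 // lerps every part in the list
        DWORD fvf = MyFormat::GetFlags();     // D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE
        size_t stride = sizeof(MyVertex);     // the stride mentioned earlier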

    I’m still toying around with this, but I haven’t run into any problems yet. This setup particularly shone when I wanted to subdivide a model (any model, using any sort of vertex format) just by telling it how to split up, as long as the splitter function calls into the vertex format for specific vertex operations (such as the Lerp mentioned above).

    Another thing it sped up was setting up a ‘universal’ TetraHedra structure, which is constructed using an initializer type. The initializer supplies the rough vertex locations and the face mapping based on tetrahedra formulas (more or less as if they were read from a data file), and the object is constructed using whatever vertex format the shader example requires. Ah, sweet instant gratification. Nice. :)

    Well, nothing to get extremely excited about, but I thought it was worth mentioning.

    Hope you like it and see you next time!
    a0a

    A word from our sponsor

    If you’re lucky, you live in a part of this world where news is ubiquitous, where information flows abundantly, and where you have something to eat 3 times a day, and more if you should wish to. It is in that same part of that same world, curiously, that headlines scream bad news into our faces from whichever angle you take it, day after day, relentlessly: massive job losses, tax increases, child molestation, uprisings, failing healthcare, traffic disasters, government corruption, credit default swaps..

    I’m sure it is a sign of the times, but right now the perception lives that everything we know, all the food we need, the water we use to wash ourselves, and the warmth we need to protect our kids and family has increasingly become part of a cataclysmic function, a dependency on an obscure technical stock-market index of sorts. That’s very natural, you say, because we live in a capitalistic world after all. The free market drives profit and loss, demand and supply, all by its own rules of greed and happiness. So yes, that’s actually a natural thing. Why, then, is the world seemingly going up in flames about this? What has changed? Why is natural food either dirt cheap and therefore worthless to produce, yet at the same time increasingly expensive when it needs processing? Why why why..

    Big banker names in the financial world are crumbling, but it is not they who are to blame. Yes, their attitudes have changed, and they bring out facts that turn out not to be true at all, and they promise things that they can’t possibly promise. But the real driver of mass psychology is the media. Their global mantra of one financial disaster after another, minutely analyzed and tastefully illustrated with thousands of economists’ doctoral-thesis models and estimates, is building us a self-fulfilling prophecy in 16.4 million colors, a fearful world vision that is out to debase whatever value there ever was in any investment, made by no matter whom. The only winner in this epic tale seems to be a guy called Warren Buffett. No, sorry, that was just a joke.

    The media claim a self-proclaimed moral duty to inform Warren and Joe Sixpack about their stocks and bets. To keep doing that profitably, news outlets and networks have collectively turned themselves into fiction publishers. It is no longer in their primary interest to bring real news or tell stale truths. One look at the dynamic headlines and today seems so different from yesterday, and yet.. doesn’t everyone agree that most days will still very much be like their yesterdays? It’s in news agencies’ first interest to sell believable stories that have a profitable ‘wowie’ factor. They’re weeeee close to selling action figures to go with that. The war on terror taught them how to wage fear in a populace by twisting stories to whatever extent was needed to drive a message home. And everything else, as they say, is just history.

    If you take another step back, beyond the increased media attention on and coverage of these domino events, you also see something else: the once-communist China has turned into a giant rat-race pony on steroids. The once-communist Soviet Russia has discovered the money game and is simply torturing neighboring states with oil and gas cuts as it wishes. The once honorable gentlemen of the British isles have all signed up to play Onslow in a series called Keeping Up Appearances in the Commonwealth (how ironic a name). The once, well, New Yorkers are still dreaming of an America where everything is possible, and they all act just as selfishly as the next guy. The old continent is slowly grinding to a rusty halt while feeling incredibly intelligent and important. The third world is struggling to get into – but is firmly kept out of – class. And the Middle Eastern countries are sitting on their oil, more or less watching the rest of the world play tag, you’re it!

    Ok. Stop. Let’s not believe all of that for a minute. Look at the real capital in this world again: hi, yes, I mean you. We can’t have the financials run our society the way they do today; we can’t have mass psychology dictate how valuable we are. In that sense, I agree with the protesters on Wall Street. What I hate is their destructive nature, because they provide no alternatives.

    And yet, here is a very simplistic idea: make all intended trade actions globally public and accessible at least 3 days before they are allowed to proceed. Remember, yesterday will be a whole different day. No more death marches, no more rallies, no more volume-trading spikes. It would take a lot of fuel out of the psychosis that is currently in the markets, wipe out rumors on rumors of murmurs, bring back common sense, and square off speculation against a currency, let alone a country. Trading would become a lot more strategic and less impulse-driven, and I think that is exactly what this world needs today.

    So, 2 cents from me, all reactions welcome.

    Premature optimization myth debunked

    Anyone who ever got slammed with the phrase “Hey, don’t go optimizing this prematurely! It’s the root of all evil! Knuth wrote it in his book! You idiot!” by some lunatic manager of sorts should follow their gut feeling and pick up the following to fight back against the idiomatic dogmas:

    http://my.opera.com/Vorlath/blog/2011/08/14/optimizations

    It basically says: Knuth was an old fart who made love to inner loops. We’re way beyond that now. Either don’t give a fuck about optimization at all, or start caring from day 1. This is well known in the gaming industry, since here even the slightest difference in approach or implementation can bring a system to its knees if you’re not carefully considering all the details, and preferably testing them out (i.e. trying all the options and comparing).

    Note to our future

    No big statements here; I just wanted to keep a quote at hand for future reference, something I just read in a book I keep getting back into. I started reading it before the unfortunate events occurred in peaceful Norway. Though the events described in the book were totally different, it was as if what I had just read in that book was happening live on TV at the same time. The parallels were making me dizzy. I’m still reading this book today. I have a tendency to read the books I like very slowly.

    “If you lose your ego, you lose the thread of that narrative you call your Self. Humans, however, can’t live very long without some sense of a continuing story. Such stories go beyond the limited rational system (or the systematic rationality) with which you surround yourself; they are crucial keys to sharing time-experience with others.”

    “Now a narrative is a story, not logic, nor ethics, nor philosophy. It is a dream you keep having, whether you realize it or not. Just as surely as you breathe, you go on ceaselessly dreaming your story. And in these stories you wear two faces. You are simultaneously subject and object. You are the whole and you are a part. You are real and you are shadow. “Storyteller” and at the same time “character”. It is through such multilayering of roles in our stories that we heal the loneliness of being an isolated individual in the world.”

    [..]

    “Because it seems to me that these discrepancies and contradictions [in stories told by people that experienced the very same scene, ed] say something in themselves. Sometimes, in this multifaceted world of ours, inconsistency can be more eloquent than consistency.”

    - H. Murakami, Underground

    The old new thing

    Hi everyone,

    This week has been marked in history as the week when someone thought it necessary to kill 80 people, most of them teenagers and youngsters, on an island in a fjord in Norway. I’m tempted to give this character no forum whatsoever – I deem it inhuman to consider any idea reason enough to carry out his actions. Nonetheless, I feel it is important to refute his so-called ‘logical framework of reasoning’, simply because he’s dragging a good deal of common sense with him into the abyss of non-ethical and dehumanized mass murder.

    This may sound strange, but I feel it is important to regard the ‘accused’ (no sentence has been pronounced as of yet, though at this time of writing he has admitted to having committed these horrible acts) as ‘one of us’, because his weaknesses (solitude, antisocial behavior, a struggling sense of justice that is tested over and over again in current society) do not sound all that uncommon at all!

    So, instead of portraying this man as a mythical (religious?) anti-good figure (demon, devil, monster), is it not more interesting and a lot healthier for our society to find out how things went so terribly wrong for him, and how our social model must evolve to prevent such a thing in the future? Is it a good idea to be able to file everyone in the same box? Is it a good idea to kill those who differ in opinion and mindset? Is it even a good idea to believe in perfection and purity? If one idea exists, isn’t that precisely because the inverse idea also exists? In other words: it’s not constructive to eradicate your intellectual opponents, because in essence it is suicide.

    Less shocking, hopefully: it is also the week in which the whole house (the one we started rebuilding last year) has sort of come together. It’s amazing how much time frees up now that we can do the dishes, clean the floor, cook dinner properly, send an email on this old computer, and wash our clothes properly – even though the weather has been miserable and nothing seems to dry. We don’t have heating yet, but all in all the house feels nice and comfy, with lots of stuff brimming and at the same time bringing a sense of relaxation. A big thank-you must go out to Jessy & Ludo for letting us rent their place for a year and take a peek at all the fantastic pictures they made on their truly epic biking trip. I also have to pay tribute to my fantastic architect and all of the construction workers for pulling off an amazing job in a little over a year. The amount of stress, sweat, tears, blood and phone calls that went into this project is undoubtedly high, but it also taught us a thing or two about people, and perseverance. There’s still plenty of odds and ends to fix, but we’re under roof and we’ll manage just fine from here.

    Lastly, I would like to end with a note to everyone who feels deserted in their feelings. You’re not alone. There’s a huge amount of human experience out there that can lend an ear and bring comfort. Life is totally indifferent to justice or human longings, but with every misfortune come other opportunities. When Darwin lost his treasured Annie at a young age to the same illness he himself suffered from, it triggered his already questioning mind to dive deeper into ‘why’ things are the way they are, regardless of his beliefs and emotions. His theory throws us right into the middle of the very human question ‘why?’, and though the answer turned out to be very technical and not very ‘warm’ or comforting, it reminds us of the kind of determination and courage the man had throughout his whole life. And with him, his wife and friends.

    Take care.

    Lighter wearing socks

    Time for a bit of an update rant on life.

    I’m currently running on some sort of autopilot. Project E’s complexity is increasing fast and needs more attention. In May we’re going to try to put together the Japanese version of DKS, and there’s a myriad of other things left and right that need to be fixed in both projects D and E. The company is still a bit understaffed, but overall the vibe is quite good. I’ll miss Ava when he leaves, and will probably end up assuming some of his responsibilities. B and Eddy are doing great on project E, and it is sometimes frustrating to be tied up in my current personal situation.

    Our house is slowly starting to look like a house again. Still a few months to go and a number of headaches to fix, but the progress has been amazing! Delays happen, of course. Frustrating. Biting nails helps some. But we’ve come a long way since a full 3-ton crane wheeled through the facade of our ‘pancake house’, as Isabelle calls it. Hopefully we can move in soon. We’ve done some cool things with light, and with a streak of mad luck I may find some time to upload some pictures.

    Relationship-wise, our wish for a child has pulled us a lot closer together over the years, but at this point it is still pretty much a wish. We count ourselves lucky to have the best medical care in the world. The trajectory has been a rough emotional ride for both of us. It’s difficult to explain everything to everyone, and my hope is that someday, either way, we’ll be able to join friends and relatives again and lift the invisible cloud that poisons every conversation with some sort of pity, induced or donned, consciously or not. We’ll be going for all our options, but first, the house..

    Well, it’s off to bed early again; tomorrow is going to be a busy weekend!

    Cheers all,
    a0a