DIY Binning

     

    Binning results

    The binning operation comes into play when you have a lot of test results. Say you’ve measured the rendering times of your renderer for different scenes and different parameter settings (e.g. several resolutions). Perhaps you also measured the number of rendered triangles, the number of lit pixels, and the amount of overdraw per pixel.

    If you brute-force all those settings and keep log files that contain your results, you end up with quite a bit of data. To make this data meaningful, you have to group it into arbitrary subdivisions again, and then let loose all your statistical magic on each set (means, standard deviations, etc.). Suppose you are only interested in the most expensive frames, or all the data for model A, or all the measurements for resolution XYZ. While the interesting bit is in the statistics, getting those statistics on the right data is actually the hardest part, especially when the data set is large.

    Now, you can argue that a few measurements are enough, and that you can extrapolate to other scenarios. This is probably true for simple linear, quadratic and cubic relationships between statistics, but in some cases the performance landscape of your function can have surprising outliers, especially if you are measuring functions of systems that you do not yet fully understand.

    You can drop all that data in database tables and rely on SQL to deliver the right answers for you. While that may be a viable approach for a number of good reasons, sometimes dumping massive amounts of data points into a database is just not feasible. The next best option is to bin the data yourself. There are a number of approaches you can take. One is to use commercial software, which will often break down on large data sets. Another is to use R, the statistical scripting language. And one is to write it yourself, which is what I did.

    Ground truth

    First you have to set up the bins you are interested in. This means that for every ‘set’ (say: resolution 130 x 251), you aggregate all the samples for every specific feature that is relevant. This simply means: storing them in memory per ‘set-id’. When you’ve gone through all your data for this bin, you process all the relevant stats on it, clear everything, and move to the next bin, until you’ve exhausted all the bins.

    The reason you have to store so much data is that we want to compute the sample and population variance in order to compute the standard deviation. To compute the sample and population variance, the summed squared error (relative to the mean of the bin) is divided by the total number of measurements for that bin (minus 1 for the sample variance). The bin’s mean and the total number of measurements are only known after all measurements have been added to it, so they all really need to be stored in memory, which is kinda sucky.
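    To make that a bit more concrete, here is a minimal C# sketch of the idea. The class and member names (BinStats, Add, Report) are made up for illustration and are not the actual tool: samples are kept in memory per set-id, and only once a bin is complete are the mean, the two variances and the standard deviation computed from the stored values.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Minimal sketch: aggregate samples per set-id, then compute the stats per bin.
    class BinStats
    {
        private readonly Dictionary<string, List<double>> bins = new Dictionary<string, List<double>>();

        public void Add(string setId, double sample)
        {
            List<double> bin;
            if (!bins.TryGetValue(setId, out bin))
                bins[setId] = bin = new List<double>();
            bin.Add(sample); // everything stays in memory until the bin is closed
        }

        public void Report(string setId)
        {
            List<double> bin = bins[setId];
            int n = bin.Count;
            double mean = bin.Average();
            double sumSq = bin.Sum(x => (x - mean) * (x - mean)); // summed squared error vs. the bin mean
            double popVar = sumSq / n;                            // population variance
            double sampleVar = n > 1 ? sumSq / (n - 1) : 0.0;     // sample variance
            Console.WriteLine("{0}: n={1} mean={2} var(pop)={3} stddev={4}",
                              setId, n, mean, popVar, Math.Sqrt(sampleVar));
            bins.Remove(setId); // clear the bin and move on to the next one
        }
    }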

     

    Optimizing in C#

    There are 2 factors which can wildly run out of hand: memory and performance. I first concentrated mostly on performance, then gradually also started looking at memory consumption. The more memory that is touched and tracked by the garbage collector, the more time it takes for the application to context-switch and page-fault its way through the relevant memory.

    Performance was greatly improved by doing a number of things: profiling quickly indicated which parts were slow, and with a bit of restructuring and taking my specific scenario into account, a number of conditionals could be removed and for loops tightened so that the total amount of work dropped.

    Parallel = faster?

    Next, I started playing around with Parallel.ForEach, so that all CPU cores were busy doing part of the work. Parallel.ForEach is great, but it can even worsen performance if you keep copying data around. It also brings up the issue of keeping the GUI up to date (using delegate Invokes – if you launch one from every thread, you generate a massive event overhead which ultimately results in UI stalls and a serious performance drain). At first, it was difficult to rewrite the code and get good CPU coverage. All cores would be busy, but application processing would consume about 20% of the CPU time, the rest of it going to stalling and overhead. The initial changes are surprisingly easy, but getting good performance out of this is quite another ballgame. Once again, profiling helped fine-tune the application so that the cores were maximally loaded and the scheduling overhead was reduced to a minimum. This brought the processing time down to 2 days, from over half a week.
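    For reference, here is a sketch of the pattern that avoids the copying and the per-item locking: the Parallel.ForEach overload with localInit/localFinally gives each worker thread its own private accumulator, and the shared merge only happens once per thread. The log file layout ("setId;value" per line) is invented for the example; adapt it to whatever your logs actually contain.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Threading.Tasks;

    // Sketch only: assumes one "setId;value" pair per log line.
    var merged = new Dictionary<string, List<double>>();
    string[] logFiles = Directory.GetFiles("logs", "*.log");

    Parallel.ForEach(
        logFiles,
        // localInit: one private accumulator per worker thread, no sharing, no locking
        () => new Dictionary<string, List<double>>(),
        (file, loopState, localBins) =>
        {
            foreach (var line in File.ReadLines(file))
            {
                var parts = line.Split(';');
                List<double> bin;
                if (!localBins.TryGetValue(parts[0], out bin))
                    localBins[parts[0]] = bin = new List<double>();
                bin.Add(double.Parse(parts[1]));
            }
            return localBins; // handed back to this thread's next iteration
        },
        // localFinally: runs once per thread, so the shared lock is only taken a handful of times
        localBins =>
        {
            lock (merged)
            {
                foreach (var kv in localBins)
                {
                    List<double> bin;
                    if (!merged.TryGetValue(kv.Key, out bin))
                        merged[kv.Key] = bin = new List<double>();
                    bin.AddRange(kv.Value);
                }
            }
        });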

    Concurrent = better?

    After more twiddling I discovered that “yield return x” allows an iterator function to delay the computation necessary to return an element until the iterator is actually queried for it. But this proved to be tricky, since simply counting the enumerated elements already cancels out that laziness. Additionally, it meant that Dispose() methods (file handles and streams) were now only run much later in the application, leading to serious memory consumption. Still, the yield return meant 1 day of crunching, instead of 2.
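    A small illustration of the trap (not the actual code): the iterator below defers all the work, but anything that needs the element count, such as Count(), walks the whole sequence anyway, and the reader’s Dispose() only runs once the enumeration finishes or is abandoned.

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    static IEnumerable<string> ReadSamples(string path)
    {
        // Nothing happens here until the caller actually iterates.
        using (var reader = new StreamReader(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
                yield return line; // the work (and the open file handle) is deferred
        }
        // The reader's Dispose() only runs when iteration completes or the enumerator is disposed.
    }

    // Counting forces full enumeration and cancels the laziness you were counting on:
    // int total = ReadSamples("results.log").Count();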

    Taking a better look at code coverage, I discovered that most of the overhead was due to memory copying and stalling on behalf of the UI, as well as a few conditionals that could possibly be removed. I started using concurrent containers such as ConcurrentBag and once again rewrote the code. A warning though: you have to be very careful when you use such constructs, because contention can be extremely expensive. ConcurrentBag keeps a bag per thread, and locks at least 2 times per operation (Add/Take). In my case, I was just adding items, and reading happened after all items were added, and on a single thread. If you have high contention, with many different threads reading and writing at the same time, lock-free containers will probably be a better choice. CPU consumption shot up to the 85-95% range.
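    Roughly what that looks like (a sketch, not the actual code; MeasureFrame is a made-up placeholder): many tasks only ever Add() into the ConcurrentBag, and a single thread drains it afterwards. In this write-only phase the per-thread local bags keep contention low; with many threads reading and writing simultaneously the picture changes and a different container is likely the better fit.

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    var samples = new ConcurrentBag<double>();

    // Writers: ConcurrentBag keeps a local bag per thread, so pure Add() traffic rarely collides.
    Parallel.For(0, 1000, i =>
    {
        samples.Add(MeasureFrame(i)); // MeasureFrame is a hypothetical measurement function
    });

    // Reading happens afterwards, on a single thread.
    var all = new List<double>(samples);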

    At this point something amazing happened: the profiler started showing String.Contains() as one of the prime contenders for spin-overhead. I was partly surprised to find it so openly up for grabs, and wondered if something could be done here. Of course, you can shorten the strings, but since it was based on actual data on disk, I didn’t want to touch that.

    Now String.Contains is fast..

    .. but somewhat surprisingly, this is even faster:

     

    // counts occurrences of searchText via the length difference; > 0 means the line contains it
    return ((line.Length - line.Replace(searchText, String.Empty).Length) / searchText.Length) > 0;
    
    

     

    I came across this gem on this excellent benchmarking site. I had Contains() up on top in Intel’s VTune performance analyzer in the spin-overhead section.

    After applying this, the spin-overhead on a test scenario was reduced from 240s to 40s for the very same run. Instead of a full day, the results now took under one hour to compute. And the data set had actually grown by 40% since the previous tests, as new cases had to be evaluated.

    Obviously stripping out any stored members in your measurements is going to win you space and time. After the initial cut, the deferred de-allocation started playing a role. I played around with the GC but did not really find that it helped much. Instead I came across another setting that you can use: server-based garbage collection:

     

    <gcServer enabled="true"/>
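    For completeness, that element lives in the application’s .exe.config under the runtime section; trimmed down to just that switch, the file looks roughly like this:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <runtime>
        <gcServer enabled="true"/>
      </runtime>
    </configuration>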

     

    Afterwards, I switched to a 64-bit build to avoid any other memory problems altogether.

     

    The media desert

    I am not a journalist or reporter

    I am a journal-ismist. And probably not a very good one, but I leave that judgment up to you.

    Before we dive into the question of what is good and what is bad, I’ll first clear up the terminology. Obviously, journalism is what journalists do. It’s a profession. It’s reporting on the truth of the day. Or at least, covering a story that could plausibly be the truth.

     
    TLDNR; how about facts instead of news?

    Truth

    But truth is a very intangible thing. One even wonders if it really exists, yet a lot of people claim to be looking after it, or searching for it, depending on the angle you take. Seeing the truth sometimes occurs due to random stupidity instead of being a combo of genius insight, apt and abundant availability of information, or the power to filter and process it. Breathe here.

    The Wikipedia page on journalism has probably the longest short introduction on the word on the whole site, precisely because it is hard to draw lines around that concept. Sometimes it is hard to separate the satire from the serious, the culture from the context, the fact from perception, the ideology from the profession. There are social and moral ethics involved, an embedding cultural environment, a promise to report (only?) relevant new occurrences, insights, facts, etc, and another promise to ban all other political or economic influences. People like to make promises. And people are not failure-free. Oh and one other thing: it has to earn a living, too.

    Ok, so that is journalism. Should we believe everything the news tells us? You know the answer. Should we dismiss it because it’s just news? Same answer. But then, if news is both unbelievable and valuable at the same time, how can our daily intake of information, a.k.a. news, be used to make good choices in our daily lives? Read the next paragraph, then ask yourself this question again.

    It’s up to you or anyone else to fill in his or her personal answer (free as in beer). My answer is to regard “news” at best as ‘possibly true’, and at worst as ‘an indication of corruption of context’ – call it proof that there is information manipulation happening, and that there’s a high chance there is purpose behind it. So in that sense, news, even when untrue, is also useful. It’s just really hard to tell the two apart, and I feel that has only become more difficult. It’s easy to call this paranoia, and such ambivalent and impractical stances never get you very far in this efficient world.

    Here’s a thought experiment: suppose you are sipping on a long-drink cocktail in Aruba, looking at the waves while reading this.. do you feel the sand in your hair and the salt on your skin? Perfect. In that truth of sand, air and water, these rambling musings on the value of truth, the believability of journalism and the uselessness of the blur you taste after showering yourself with daily news are undoubtedly going to come off as theoretical, unimportant. Most probably, though, you sit behind a computer screen like I do, found time to read this, and while asking yourself if you are spending your time right, you also wonder how to interpret the world, like everyone does. People around you, and possibly you too, (have to) make decisions. They do that based on ideals, emotions, reasons, and some degree of power or authority, and those decisions will always have a cost and benefit effect at some point. The power element is the dangerous factor, because it skews visions, it skews arguments, it skews decisions, and finally it also skews context, and thus it is also bound to skew journalism, which drives the general perception of truth to a large extent. This is not new(s), of course, but I bring it up to underline that the belief structures we use daily are based on information that is subject to power shifts. And emotions play a big part.

    And so.

    In marched the social media, where everyone can suddenly take on the role of reporter. Where the power to publish is democratized (to some extent) and where money is no longer a driving factor (to some extent). This is the 2.0 economy, where everything is free, including your personal data, and with new rules and ethics, new lobbyists and group mantras, new do’s and.. well, more do’s.. , but also with secret deals between Youtubers and other parties, and new ways to convert money into power. The recent Arab Spring was initially a very positive example of this ‘emotionalism’, but also proved how fragile and easily hijacked that power really is. There is still much to learn from this. Bottom line: power shifts occur much more globally and profoundly than ever before.

    One way to detect these power shifts is by analyzing both the frequency and the type of language that is used in communication (textual, graphic, ..). It is perfectly possible to bring the same message from different points of view, with different agendas, and drive each of those points home by wrapping the information in a suitable linguistic format. The difficulty is often to maintain said format after the power shift, at which point it usually becomes apparent to the masses, but then of course the power shift has already taken place. It’s much harder to detect the changes early in the public discourse. When those formats start to change, when the choice of words changes, when multiple people suddenly use the same sentences and structures, or even when the meaning of words is actively being attacked, changed or ridiculed, there is a high chance that other ideals and interest groups are entering the equation and are actively aiming to increase their influence on the topic. As in: skewing news, skewing perception, and most importantly: skewing the target group’s common trains of thought.

    So let me be clear about one thing

    This blog is journal-ism. It’s not truth. It’s purely my rendition of my truth. And in my rendition, I observe an increasing loss of signal and much more random noise in all sorts of directions. Here’s an example: an automatic shutdown of a nuclear plant is sold as proof that said nuclear tech is no longer hip (and actually that nuclear plant was originally planned to be dismantled 10 years ago) and at the same time sold as proof that we can’t function without said plant (since it is summer and we’re lucky that the energy demand is currently low). Both points of view are true. Both have economic consequences for energy lobbyist groups; a decision in either direction holds important military, strategic, ecological and political as well as financial considerations. That’s just one example, but there are many more. Double-speak has been part and parcel of the ruling elite, and especially today, training yourself to detect it is almost life-saving. But to come back to the point of this paragraph: while such reporting has educational and intellectual merit, decision-wise it gets us nowhere. The loss of signal is due to the bulk of information and implicated consequences that are or are not included in the analysis. I believe those are important to tell, but so much remains mis- or un-communicated (actively or passively), misnomered, misplaced, misinterpreted, “knee-jerked” and “gun-jumped” that the important bits are sometimes “blatantly” (now there’s a word) missing from the report, or totally “snowed under”.

    Now, I gave you my opinion, just like all the other blogs, and that is journal-ism (hy-phun intended). And you can read the newspaper instead or watch TV, and you’ll read or hear pieces written – supposedly totally uninfluenced by lobbyists, policymakers, networks, channels, etc.. And that’s reporting, or straight-up journalism. I feel that neither of those is getting us anywhere, having us care about what-ifs and coulds and shoulds, without even knowing the full truth.

    Factology?

    I could not think of a better name, but I think it should go without the ‘ism’ postfix, since that merely tells us the personal influence of somebody is involved. And obviously, there’s a search or quest to write down factual data, hence the name. Then again, it almost sounds like Scientology, the infamous science-turned-religion movement that is little more than a pyramidal get-rich-quick scheme. Factology is certainly not a religion; rather, it is a way to filter news and extract only true events and data, without the stories woven on top of them. So while History studies the more or less integrated processes behind the political, economic and social evolution of a certain subset in space and time, Factology strips current news down to its bare bones, such that only verifiable facts remain. It’s more or less what Reuters and Tass are supposed to do, but cleaner, and more scientifically structured.

    The best example of Factology is how we report on wars. On one side, you’ll find numbers of casualties, ages of persons blown up, and places. A UN school building with kids under 6? The act is going to cost someone’s head. A depot of insurgent fanatics that unfortunately also happened to have a blackboard and some children in it, and a few fellow countrymen on that dangerous mission? Not ok, but at least the 3,5 rockets are destroyed. For example, I have gripes with the terms ‘minor’ and ‘young adult’ in reporting. Why not simply report the true age and what the implications are in the juridical system of that locality? Let people draw their own god damn conclusions instead of pre-chewing your vision for them. There are numerous other such examples. Sometimes suspects are named, sometimes they are white males, sometimes they are dedicated husbands that unfortunately have lost control over their lives. Get it out or just don’t.

    In all fairness^H^H^H^H^H^H^H^H^H truthfulness

    The objective sort of reporting is hard and possibly too slow in this world. But news used to be slow. It used to be well-written, and it took time, and a few braincells, to read and process. Most importantly, it was checked and verified against a number of different sources before it was published, and the editor-in-chief made it his chief concern that it had social value. It was also the kind of information you could start relying on, or at least have a feel for when it was written in paper X by journalist Y, and come to agree with the style, the tone, the sort of passion it took to report it the way it was reported. I’m not going to claim it used to be better, but it was different. Today I regard the New York Times as a great news source. I don’t want to generalize, but a lot of other stuff out there is ‘incredible’, ‘look what he did’ and jingles.

    Technological advancements, web2.0 and the social media revolution may unknowingly have obsoleted an important vein in human progress and evolution. If we want to continue to make sane, medium-term and long-term choices and be smart about our future, we need that vein back. It’s nice to play the split-second stock-market, but let’s just not fool ourselves until it crashes. The slow, factual information we need is the sort of daily education a nation needs in order not to revert into tribalisms and cultural insecurities and other short-sighted or purely economic reflexes. Web2.0 has gotten to a point where the Internet is being eclipsed by corporate sandboxes such as Facebook and Google, and the real content is increasingly un-public. Few people realize it, and just click through their free entertainment stream that is ever so slightly skewed towards a specific (shifting) center of mass. On a positive note, there is hope: social media are social after all, and nothing prevents them from aiming higher and advocating higher ethical and moral grounds. Wikipedia, wikileaks, and other agencies are still in their infancy when it comes to negotiating the terms on what is fact and what is fiction. The last thing we should do is institutionalize ‘truth-sayers’, but this will probably happen too, at some point.

    In any case, food for thought, be it on the slow side and a bit in rant-mode style.

    Vectored Exception Handling and OutputDebugString

    Try me

    If you write high-performance game code in c++, there is an important language feature that is usually simply not available: exception handling, in the form of try / catch statement blocks:

    try
    {
      /* code goes here that throws an exception, like so: */
      throw MyException(/*arguments*/);
    }
    catch (const MyException& e) /* catch by (const) reference to avoid slicing and copying */
    {
      /* handle exception e that was thrown in the try block */
    }
    
    

    A quarrel of stances

    Firstly, try/catch is a bit broken in c++ because the language ‘forgot’ to add ‘finally’, which is present in Java and c# to handle situations where all code paths converge at the end of the function block. In those languages, ‘finally’ statement blocks are very explicit about releasing resources like file handles or memory buffers after handling the exceptions. Stroustrup defends this design omission by stating that the problem can be solved using RAII (Resource Acquisition Is Initialization), which is a valid, more object-oriented alternative.

    While that is true, it forces people to generate class structure overhead, which you have to keep under control if you’re writing game code. Secondly, Microsoft went so far as to actually add a ‘finally’ syntax as a Microsoft-specific compiler extension. Thirdly, if you look at Java, C# and Ruby, they all provide a finally statement block syntax.

    But apart from language syntax issues, these are the real reasons to stay away from try/catch blocks in game code:

    1. try/catch can lead to exceptions being propagated from one call up to the parent call, and so on. The instruction pointer typically jumps around in the stack, meaning that your code cache gets utterly trashed. Performance-wise, this is by far the worst way of handling exceptional conditions.
    2. The definition of what exactly can be understood by the notion of an ‘exception‘ is not always clear to all programmers, and can become quite a philosophical debate. I’ll argue here that having certain use cases handled as exceptions betrays that your code reserves a certain bias as to what is an exceptional operational condition and what is not. For game code, this is weird, since you know all conditional code paths at all times. The last thing that should happen is to have control yanked from under your feet by some 3rd party library call that bubbles an unknown exception up to the surface at totally the wrong place. And to make matters even more complex: there are 2 types of exceptions: system exceptions and standard exceptions.
    3. Stack unwinding during exception catching does not travel across threads, so exceptions have to be explicitly handed over to the proper thread in multi-threaded applications. In the end, programmers end up with a bit of a garbage bin of catch clauses at the top of the application threads. It’s obviously bad, but it happens more than you think.
    4. try/catch usage implies that you can trace back to the object type that generated the exception. This requires Run-Time Type Information (RTTI) to be enabled during compilation, so that you are able to use the typeid operator and maybe also some dynamic_cast. Obviously the performance penalty of dynamic_cast is vicious and the operators trash the address registers and cache lines, but the increased footprint of the executable and the associated memory cost per object also add to the wrong side of the equation. And since we’re talking exceptions, most of your code should not even need it.

    Uh, wait..

    Ok, so basically game code is totally in love with RAII, though it might not always be implemented in a ‘nice’ (if such a thing should exist) Object Oriented approach. But wait! You said there still are system exceptions. What about those?

    Indeed. The system sometimes throws exceptions that relate to OS operations that fail, for example failing to load a DLL. In such cases, it can be interesting to trap the exception somehow, without having to resort to the whole RTTI/try/catch overhead. This can be done on Windows using Microsoft’s Vectored Exception Handling.

    Basically it allows the application to install an additional custom exception handler for the exception codes that the system is able to throw, using the following functions:

    LONG WINAPI VectoredExceptionHandler(PEXCEPTION_POINTERS pExceptionInfo)
    {
        /* inspect pExceptionInfo->ExceptionRecord here */
        return EXCEPTION_CONTINUE_SEARCH; /* hand the exception to the next handler */
    }
    
    PVOID handle = AddVectoredExceptionHandler(1 /* call first */, VectoredExceptionHandler);
    RemoveVectoredExceptionHandler(handle); /* removal takes the handle returned by Add */

    Check the MSDN for details.

    Deja vu

    The point I wanted to make in this post is that while you’re in your VectoredExceptionHandler function, there is a problem with using OutputDebugString(): it raises an exception of its own when no debugger is attached.

    It does, however, seem to work when the debugger is attached. Suspicious as this is, at first I blamed my own code, and it had me immediately looking for uninitialized values or other memory corruption, but nothing came up, and the same code runs fine in other code paths (frequently). After isolating it into a test program, it turns out it is actually re-entering the VEH exception handler, and thus causes a stack overflow.

    To conclude, this is not just an OutputDebugString() issue of course; it can happen at any point in your exception handler. As soon as you make a library call that depends on standard or system libraries, the VEH is vulnerable to re-entrant code. So guard your VEHs against re-entrant code paths, or you may end up never leaving the VEH at all :)

    Cheers!

    Natvis Matrix visualizer

    Long story short,

    I moved to Visual Studio 2012 and then realized that all that tweaky autoexp.dat scripting that sports those fancy instance viewers during debugging was for naught. Since I didn’t move up to VS2013 yet, I’m a bit stuck with the “new and improved” Natvis system. Natvis stands for Native Visualizer, because your visualizer can be compiled into a DLL (e.g. from c# code), which is a performance move away from the purely scripted approach (using autoexp.dat) that was in use before. The scripting was arguably hairy, and after a while, hairpulling, so it’s only reasonable that Microsoft made an effort to improve things. So, out with the old, in with the new!

    I encountered 3 problems with Natvis. Firstly, the original feature set somehow got trashed, and only a thin set of features remains. This means, for example, that some information can no longer be displayed, or that you can’t format it correctly. Secondly, there are no conversion tools; everything that once worked is gone (although someone claims that autoexp.dat can be re-enabled for native edit-and-continue debugging in VS2013). Thirdly, if you don’t want to jump into and out of c# projects to change your debugger, you can use…. wait for it… XML-based .natvis scripts instead.

    That’s right. I used XML and scripting in the same sentence. Off with my head!

    A Matrix Class

    Suppose you have a matrix class, say:

     template<typename T, int rows, int cols>
     struct MyMatrix
     {
         // your matrix members and methods here..
     };

    That’s all great, but how do we visualize it? It seems the .natvis system can only list array items (or other containers) if the count is known. But in the case above, MyMatrix has template arguments for rows and columns dimensions and is of rank 2. In fact, the matrix structure can be a recursive template type definition (and to be fair, that’s what I am actually using at the moment, but I omitted that for clarity). The template arguments can be captured using $T1, $T2, $T3, etc.. but here’s where things get interesting: you have to put them in curly brackets (e.g. {$T1}) in the DisplayString element to fetch their values. Of course, when expanding the elements, the curly brackets should not be used! That took me a while to find out.

    Second point of interest, and perhaps the gist of this post: all examples out there refer to internal members of the structs, but what if you have a recursive type? Well, it is mentioned in passing in the official documentation, but you can use the this pointer just as well, and index-cast it any way you fancy, even using template types. This gives you access to anything that might be defined in the class/struct. The former type specifiers to print the values in a particular format are gone (apart from ,su for character strings, it seems). Attempts to refer to other scoped types (i.e. (othertype*)this) have failed so far, but maybe I made a typo.

     

    A solution

    I came up with the following. It’s far from perfect, and it tends to clutter the debug view with all the floats (because “,g” no longer works), but it does show how to expand elements in a two-dimensional ‘array’ behind the this pointer.

    <?xml version="1.0" encoding="utf-8"?>
     <!-- Place file into My Documents/Visual Studio 2012/Visualizers/ -->
    <AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
     <Type Name="MyMatrix&lt;*,*,*&gt;">
     <DisplayString >[{$T2}x{$T3}]({(*(($T1*)this))}, {*((($T1*)this)+1)}, {*((($T1*)this)+2)}, {*((($T1*)this)+3)}),({*((($T1*)this)+4)},{*((($T1*)this)+5)},{*((($T1*)this)+6)},{*((($T1*)this)+7)}),({*((($T1*)this)+8)},{*((($T1*)this)+9)},{*((($T1*)this)+10)},{*((($T1*)this)+11)}),({*((($T1*)this)+12)},{*((($T1*)this)+13)},{*((($T1*)this)+14)},{*((($T1*)this)+15)})</DisplayString>
       <Expand>
         <ArrayItems>
           <Direction>Forward</Direction>
           <Rank>2</Rank>
           <Size>$T2</Size>
           <ValuePointer>(($T1*)this)</ValuePointer>
         </ArrayItems>
       </Expand>
     </Type>
    </AutoVisualizer>

     

    Look, html-ified < and > brackets!

     

    You typically put a *.natvis file containing such scripts under the My Documents/Visual Studio 2012/Visualizers folder. You don’t have to restart Visual Studio, just restart your debugging session and it should work. If it does not work (i.e. you get the standard visualizer for a typed instance when you hover above it during debugging), you have to dig into the XML to find the problem.

    Room for Improvement

    The DisplayString is hard-coded for a 4×4 matrix. The obvious approach is to put conditional clauses for every possible combination of

    {$T2}x{$T3}

    but that just sucks as much as what I have now. Better would be to have some sort of iteration going on in the DisplayString, so that the length of it is dependent on the actual type, but that does not seem to be supported. Also, being able to split strings across multiple lines would be nice.
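    For the record, a sketch of what those conditional clauses could look like, using the Condition attribute on DisplayString (one clause per dimension combination, which is exactly why it does not scale; the 3x3 body is abbreviated here):

    <Type Name="MyMatrix&lt;*,*,*&gt;">
      <DisplayString Condition="$T2==2&amp;&amp;$T3==2">[2x2]({*(($T1*)this)},{*((($T1*)this)+1)}),({*((($T1*)this)+2)},{*((($T1*)this)+3)})</DisplayString>
      <DisplayString Condition="$T2==3&amp;&amp;$T3==3">[3x3](...)</DisplayString>
      <!-- ...and so on, one DisplayString per {$T2}x{$T3} combination -->
    </Type>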

    I must say that during debugging, I found peculiar recursive behavior when improperly scoping the this pointer in brackets, so watch out for that. I expected 16 repetitions at some point, but only 9 displayed in the visualizer. This can be investigated further, possibly leading to a recursive debug visualizer, so that larger types can be viewed as a collation of smaller types (i.e. a 4×4 = 3×3+7 = 2×2 + 5 + 7 or somesuch).

    I tested this on 4×4 matrices. For 2×2, or 3×3 to work, you obviously need to remove or shorten the list of elements from the DisplayString (or make the list size dependent). For non-square matrices I’m not sure if the correct size is automatically inferred from a given dimensional size (row or col) and the size of the struct. If not, you have to drop the Rank element and set the size to row*col.

    Conclusion

    This demonstrates how to make .natvis display a matrix array of values (or any kind of list) with the limited feature set, regardless of the type structure that is underneath it.

    Some of the reasons for switching from the autoexp.dat syntax to something more modern make sense. You can write visual debugging tools for your data, which is fantastic, and the performance improvement is undoubtedly beneficial in any debugging session. But I think most people are fine with using simple tools, and being able to script them is something that has been downplayed a bit too much here. That the ‘old’ autoexp.dat system still functions under certain conditions makes forcing people to rewrite their visualizers in XML ill-founded and unreasonable.

    The bad bad bad idea of trying to frame scripts in XML expressions has bitten every seasoned programmer at least once (and then hopefully we remember the pain long enough), but seeing even Microsoft fall into that trap reminds us to be thoughtful and repeat “I shall not script in XML” to ourselves every once in a blue moon. Ruby or Python would have made more sense.

    To be fair, the natvis feature set was extended in Visual Studio 2013 – still XML though – and this article is written from a VS2012 perspective. So it might be useful for people like me who can’t find a good reason to update every half-moon to a new VS version. Would be nice if VS2012 could be patched up to 2013 levels regarding these debugger issues, but that’s another story.

    If it’s useful, let me know how it fares, and of course all comments are welcome.

    Achievements!

     

    I’ve been coding again for a while now, and yesterday (that means half a year ago, since this post has been sitting in my drafts section that long) I managed to tick off another ancient to-do on my list. Time for me to give back a few things that I learned the hard way.

    The brief of it: I wrote a little effect intro containing a number of objects rendered in 2 passes, one with a shadow map and percentage closer filtering (single light source) and one with a spotlight-lit normal map, all in hardware. It’s using render-to-texture and alpha testing. Nothing truly incredible or earth-shattering, but I’m quite proud of it because I feel I finally crossed a line. Actually 2 lines: the technical challenge, and the fact that I finally bulldozed through my own ignorance.

    The vertex buffer trickery earlier in this blog is still totally there, so I guess that works well.

    Here’s my list:

    • The creation of vertex buffers creates and destroys a separate worker thread in DirectX9. It is best to minimize the creation of buffers per frame by caching / reusing them. The performance gain in debug mode is dramatic.
    • FVF cannot be mapped to something meaningful if we use tangent, normal and binormal formats. Use D3DVERTEXELEMENT9 instead for declarations.
    • A small issue I learned is that if you apply the template construction for tangent space coordinates (normal map), the order in the vertex definition matters and you must make sure the tangent template is mentioned after all the fixed pipeline definitions (position, normal, color, texcoord).
    • I used a D3DFMT_R32F type surface for the depth buffer / shadow map. It took me a while to realize I could only read the red component. It also supports alpha. D3DFMT_A32B32G32R32F works equally well, but I could not manage to store/read from the channels g and b (like e.g. w)
    • Unless I overlooked the obvious, it seems you can’t use the POSITION input in a pixel shader, since the GPU will have consumed it. If you need it, you have to pass it twice, once as POSITION and once as a TEXCOORD (which will interpolate it). That’s quite ridiculous.
    • HLSL matrices are column major by default. It usually means a transpose of your matrices.
    • There are 2 ways to upload the TBN values. A smooth surface (curves, spheres) needs a new tangent per fragment, whereas a non-smooth surface (polygonal) needs a tangent per tri. In the latter case, because we interpolate between vertices, we still have to duplicate them in each vertex, and you can’t share the vertices because they may belong to neighboring non-smooth tris with different normals. In the first case, computation happens in the vertex shader (re-normalize at the fragment!), whereas the latter case can use pre-computed normals.
    • Normal maps behave badly in typical cases like e.g. spheres that converge in zenith points. Use appropriate topologies to get around this (e.g. geosphere)
    • Arrays are only supported locally in shaders? Passing them values from outside did not seem to work. I came across this issue when setting multiple light sources for a fragment shader.
    • It seems bind / render / unbind is mandatory, even if you render multiple objects with the same shaders. I guess one possible way to optimize draw-call performance is to bundle objects together using vertex list tunnels.
    • I wasted a bit of time trying to combine the depth output of my shaders with forward rendered lines with z-test clipping. I solved it by using a vertex shader for the lines as well. For now I was a bit too focused on getting the shadows to work to really dig into this, but I’m pretty sure this isn’t necessary.
    • The tex2DProj function only works from profile ps_4_0 onwards :/. Implementing PCF on ps_3_0 hardware meant cutting the filter back to 9 samples instead of 16, with a noticeable loss of quality.
    • Filtering a 4×4 kernel in 2 for loops did not work for me on ps_3_0. Not quite sure why, maybe the compiler ran out of registers? Unrolling the loops worked fine.

    If you have any remarks, ideas, questions or answers, I’d be glad to hear about them!

    That said, this achievement has only kicked my enthusiasm up a few more notches and kept my coding skills brimming with new ideas. Here’s what happened in the second half of the year: After this little PC shadow mapping try-out I added another couple of other tricks to my proverbial bag:

    • I redid the whole DirectX setup, but now with OpenGL and its shader model!
    • I built my own little demo rendering environment.
    • I gave a crash course on C++ AMP, yay!
    • I have Oculus Rift working on OpenGL, also yay. Might post some code although it’s pretty easy to port from the tutorial /SDK code.
    • I literally went all-out and turned my knowledge of GPGPU computing upside-down writing all sorts of stuff in CUDA and OpenCL, including direct integration with DirectX and OpenGL respectively. CUDA is blazingly fast and rather well designed, while OpenCL is just as clean as you would expect from OpenGL’s cousin, and actually a little more friendly. I’m curious where that leaves us with AMD’s new Mantle spec. My experiences with C++ AMP were also OK. I quite liked the fact that it’s totally integrated into your compilation, but debugging this shit remains a drag.
    • I now have this ‘R’ language on my radar for some reason.
    • Implemented a full testing bench for thousands of tests (including the publicly available bg, doa, sc1, wc3 etc.. maps) and ran it on various kinds of path finding algorithms. Lots of data to read and lots of tweaking ahead. I also did work on path-finding and path smoothing, and I’m totally digging this almost theoretical shit! I should be able to publish something out of this soon.
    • The whole testing bench is also working for 3D models (PLY), includes a voxelizer and path finder and ray-caster. All of this is working pretty fast, but there’s still room for improvement. The indexing scheme and octile storage structures are blazingly fast.
    • For DragonCommander, after refactoring the hell out of some parts, I finally got around to finish the path-re-use scheme and add orientation-based position selection, which works remarkably well.  I’m anxious to see how close I can get to good group behavior, but I’m rather short on time. Kept a full diary of my progress, so I should be able to write something on this too.
    • Devised a little tracking routine to track the motion of n unrelated and unsorted objects in k-space, by estimating frame coherence based on path direction, velocity and acceleration. This routine was used in an IPEM project and will probably be published soon.

    I might start posting code fragments or at least some nifty demo material here, if there’s a need for it, but at the moment I’m knee-deep in project work and paper-publishing deadlines, and I’ll stay in that zone for a little while longer.

    My wish list for next year: something with robotics? Drones perhaps?

    Anyway, I wish you all a warm and safe X-mas, or at least a happy new year! Cheers!

    South of here

     

    In July we got ourselves a well-deserved holiday. Well, I was forced to take it at work, to be more precise, but you don’t hear me complaining. I’m actually enjoying it fully with my wife and kid, since we’re currently in the one single spot in Europe that has amazing sun, blue sky and water and bearable heat, combined with a great outdoors, good food and friendly people. I guess if you belong to my ‘friends’ category, the envy types are already going ballistic over this, since it’s all rain and downpour in the north. :)

    Some thoughts about the time passed. Last year, after headaches and financial worry, we wrestled ourselves through our house-reconstruction. That went extremely well, actually. I’m now praying that the financial woes that are raging in the south of Europe are not going to affect our situation, but deep down, I can’t believe that won’t be the case. I’ve read far too much of ZeroHedge to keep an ignorant stance on the matter. Time will tell how this will develop.

    In May of this year (2012) we got ourselves a lovely son, Arthur, who is 2 months old at the time of writing. He’s “growing like cabbage” – a valid Dutch expression which I’m hoping translates into correct English – and we are very lucky that he eats and sleeps like a pro. Even in this fairly hot summer at the Côte d’Azur, he’s managing unbelievably well, laughs a lot, and apart from not exposing him too much to the burning sun between noon and 4 pm, we don’t have to do anything special to keep him happy. A totally adorable darling he is. The women we encounter – without exception – all fall victim to his charms, overloading us with compliments. The kid’s more fad than a space rock star!

    Having such a happy little child really changed a few things in my life. It puts more focus on my relationship, makes me work harder to get things ready and happening, and there is of course the feeding routine, the nappies and diapers, the bathing ritual, singing songs and acting like an all-round idiot. Some of these things are easier to do than others.. I used to be convinced that breaking into my sacred 8 hours of sleep would wreck me, and while some nights can definitely be rough, this proves to be untrue for the most part. Isabelle takes a lot of the work out of my hands, so of course that skews the picture a bit, but the gist of it is still true. She often tells me of her admiration for single mothers who have to cope on their own, and I can totally relate to that. The total net effect of the 3 of us surviving through all of the days and nights – and this totally surprised me – is the realization that a human is much stronger than he thinks he is, and can bear much more than he thought possible. I generalized the statement because I can’t believe I’m that special at all.

    Let me explain: up until recently, I’d been having a pretty minimalistic view on my personal capacities. I’m quite a sensitive person, and when people repeatedly tell me how hellish it would be to care for a small baby, well, you start to accept a mantra if people repeat it to you enough. I sympathized and nodded, smiling politely as they told their stories, and reserved an ever diminishing amount of hope that just maybe things would be different for me if I ever became fortunate enough to have kids. This was tough. And I mean really tough. For more than 10 years on end, people have been giving me lectures on what it means to have kids. I hold no grudge against them of course, but it eats at your confidence, and when you are looking forward to the experience of fatherhood, it also eats part of your idealistic world view. After a certain amount of time, the sincere hopes and longings I started out with gradually were buried under layers of fearful goo, leaving me a mechanical hunch to pursue a faint shadow of what once were pristine motives, only adding to the doubts. Add to this a few biological fertility problems, and you may start to ‘get’ the overall picture of what we went through.

    And then Isabelle did get pregnant, and Arthur did get born, and all of a sudden, you get to experience everything first-hand. This was nothing short of an emotional wall collapsing. Suddenly the hopes and yearnings that were buried in our deepest caves and canyons resonated firmly through our fibers. All the reasons and ideas literally came back to life, and one by one I started to revisit all the things people had told us about. And one by one, those absolute truths crumbled to pieces. Yes, it’s an added responsibility, and yes, you have to adjust your daily life and give up things, and yes, things smell bad sometimes, but it is by no means hard. Maybe it’s because I’ve heard everything that there is to hear about having babies for the last 15 years – I had plenty of time imagining what that would be like. Or maybe it’s because I’m older than most parents are when they get kids and can, to some extent, let go of my favorite occupations more easily. (It still stings sometimes.) Or maybe it is because enduring all the psychological hardship is finally paying dividends. Or maybe it’s simply because Arthur is just a super kid. Whatever the reason, what I can say is that I tremendously enjoy being a father. It has made me stronger and a bit more confident. I’m giving it my very best and I know it’s working out when I hear the little guy laugh out, content, like he does every day. That’s in fact really all the feedback I need to regain my composure. It wipes out all that effort to store my hopes away in that private place full of dusty cobwebs. It slowly dissolves the feeling of driving with the handbrake on that I’ve lived with for the last few years.

    In the process of all this, and next to my increased interest in following the financial developments around the world, I also discovered that I grew fond of writing things down. For a lot of people (including me) structuring thoughts into strings of words realigns our minds to whatever meaning speaks from them. For those people, writing can be a means to grow, to build on top of what is already written. Mind you, there is absolutely no importance tied to the amount of people reading what I write here, nor do I care if people ‘like’ what I’ve written. It’s nice to hear that, of course. But if that expels me from the facebook generation, so be it. The important thing for me has already happened. I wrote down this text. And realizing the metaphysical importance of that process is another leap. As you write, you tend to reform sentences, replace words, restructure the content, elaborate or cut out parts that have no added value. But the operations on the text also reflect a mental transformation. If you practice this a lot, you automatically evolve your brain patterns, too. It’s totally unsharable, but probably the most important aspect. I guess you could compare it to sitting at a bar. The first few times, you feel a bit uneasy starting conversations, but as you tend to repeat the act, you grow more proficient at talking to people. I just never was the type to sit in a bar much, and with writing, there’s the added benefit that no one has to go through the various drafts of your musings. At university, besides “nerding” around in the demo-scene and playing chess on-line, the poetry mailing list was one of my favorite time-sinks. If people would read the contents of it now – I secretly hope that nothing of it survived the test of time, though in this digital era you never know – they would probably have a good laugh. I suspect the unbridled creative nature of that occupation helped me to develop a taste for writing.

    So yeah. Confidence. Not an easy topic to write about. It’s now been 2 months since I wrote the previous parts, and I’m still gathering bits and pieces every day, and at the same time I learn to blend in into this city of surrealism with increasing success. The process teaches me that protective environments have to be temporary, or they do more harm than good. I know I still have a lot of work to do, but I’m out there pushing for it. So here it is. My honest account shared and published. I win this war. All that remains are the battles to survive, and I know these. Here I come again.

    More good news reached us today. One of my best friends has had a new baby! A little girl who already has a big sister and 2 great parents, whom we don’t get to see often these days, which is a shame. Isabelle and I wish the whole family all the best!

    Vertex format templates

    While my pregnant girlfriend is counting the weeks until D-day, I managed to code a bit again. I may be more of an all-round game coder, but I know quite a bit about the math involved in lighting models and post processing techniques. Still I have to admit, I’m old-school. The last lighting function I wrote was in software in an unrolled loop of a triangle span-pooled sub-texel renderer (in assembly). Hardware shaders – not to mention the various kinds of languages and hardware specifics – always seemed a bit of a drag to get into. Over the years, the field has matured quite a bit, and tapping into the huge amount of fancy papers on shading made the itch to try this myself only stronger. So I bit the bullet and dived into the NVidia shader tutorials.

    I immediately ran into a quite obvious fact: the vertex format is pretty much a defining factor as to what you can do in your shader. The first shader stage – called vertex shader – obviously works on the vertex format. The second stage – called fragment shader – is directly based on the first one; it samples/interpolates from that stage. This may be less of an issue if you have an editor + compiler – which we happen to have in development at Larian a.t.m. But for the sake of learning the trade, I’m sticking to hand-coding them at home for now.

    So why do I bring this up? Well, if you’re setting up a scene to render, then obviously you need to fill an object with data. The vertex format that you need to use for this is usually fixed. But as you go through the shader tutorials and the shaders become gradually more complex, the vertex format starts to change too. One can just start over in a new project and go from there. That’s fine, it works. However, I want to keep all my little scenes in the same app, so that I can just browse through all the shader samples without hassle. The difficulty is that every sample may need a unique vertex format composition to feed its associated shaders. So how to specify those formats? I didn’t like the idea of putting in #ifdef guards, I didn’t like the idea of having a gazillion typedefs, so after a bit of googling I stumbled upon this.

    The idea is to use type lists, such that each vertex part (position, normal, color, tex coord., etc.. ) is included in a type list. Here’s some code:

        template <vertex_use use, typename Next = void>
        struct vertex_info
        {
            struct Vertex
                : public vertex_part<use>
                , public Next::Vertex
            {
                typedef vertex_info <use, Next> Type;
                void Lerp(Vertex& from, Vertex& to, float t)
                {
                    Next::Vertex::Lerp(from, to, t);
                    vertex_part<use>::Lerp(from, to, t);
                }
            };
    
            static const DWORD GetFlags()
            {
                return vertex_part<use>::GetFlag() | Next::GetFlags();
            }
    
            static void build_format(vertex_format & fmt)
            {
                Vertex v;
                vertex_node n;
                n.usage = use;
                n.offset = reinterpret_cast<byte *>(
                    static_cast<Next::Vertex *>(&v))
                    - reinterpret_cast<byte *>(&v);
                fmt.use.push_back(n);
                Next::build_format(fmt);
            }
        };
    
        template <vertex_use use>
        struct vertex_info <use, void>
        {
            struct Vertex : public vertex_part<use>
            {
                typedef vertex_info <use, void> Type;
                void Lerp(Vertex& from, Vertex& to, float t)
                {
                    vertex_part<use>::Lerp(from, to, t);
                }
            };
    
            static const DWORD GetFlags()
            {
                return vertex_part<use>::GetFlag();
            }
    
            static void build_format(vertex_format & fmt)
            {
                Vertex v;
                vertex_node n;
                n.usage = use;
                fmt.use.push_back(n);
            }
        };

    For a detailed explanation on how to apply a type list to a vertex declaration, please visit the previous web-link. The neat thing is that it can also be used to return the correct stride of your Vertex format, and even the DirectX9 flags used to communicate with the shader API. Nothing but goodies!

    But that is not all. If you implement something like lerp for all your parts, you can instantly lerp between any sort of Vertex, like so:

        template <vertex_use use>
        struct vertex_part
        {
            const vertex_part<use> Lerp(const vertex_part<use>& a, const vertex_part<use>& b, float t) { return *this; }
        };
    
        template < >
        struct vertex_part<v_position>
        {
            float3 position;
            static DWORD GetFlag() { return D3DFVF_XYZ; }
            const vertex_part<v_position> Lerp(const vertex_part<v_position>& a,
                                               const vertex_part<v_position>& b,
                                               float t)
            {
                position = a.position + (b.position-a.position)*t;
                return *this;
            }
        };
    
        template < >
        struct vertex_part<v_normal>
        {
            float3 normal;
            static DWORD GetFlag() { return D3DFVF_NORMAL; }
            const vertex_part<v_normal> Lerp(const vertex_part<v_normal>& a,
                                             const vertex_part<v_normal>& b,
                                             float t)
            {
                 normal = a.normal + (b.normal-a.normal)*t;
                 D3DXVec3Normalize(&normal, &normal);
                 return *this;
            }
        };
    
        template < >
        struct vertex_part<v_color>
        {
            DWORD color;
            static DWORD GetFlag() { return D3DFVF_DIFFUSE; }
            const vertex_part<v_color> Lerp(const vertex_part<v_color>& a,
                                            const vertex_part<v_color>& b,
                                            float t)
            {
                float a_alpha = float((a.color & 0xFF000000) >> 24);
                float a_red   = float((a.color & 0x00FF0000) >> 16);
                float a_green = float((a.color & 0x0000FF00) >> 8);
                float a_blue  = float((a.color & 0x000000FF));
                float b_alpha = float((b.color & 0xFF000000) >> 24);
                float b_red   = float((b.color & 0x00FF0000) >> 16);
                float b_green = float((b.color & 0x0000FF00) >> 8);
                float b_blue  = float((b.color & 0x000000FF));
                float alpha = a_alpha + (b_alpha-a_alpha) * t;
                float red   = a_red   + (b_red-a_red)     * t;
                float green = a_green + (b_green-a_green) * t;
                float blue  = a_blue  + (b_blue-a_blue)   * t;
                color =    (((int)alpha << 24) & 0xFF000000) +
                           (((int)red << 16)   & 0x00FF0000) +
                           (((int)green << 8)  & 0x0000FF00) +
                           (((int)blue)        & 0x000000FF);
                return *this;
            }
        };


    Isn’t template specialization cute? Obviously you can do the same for slerp and other interpolation schemes.

    I’m still toying around with this, but I haven’t run into any sort of problems yet. This setup particularly shone when I wanted to subdivide a model (any model) using any sort of vertex format, just by telling it how to split up, as long as the splitter function calls into the vertex format for specific vertex operations (such as the lerp mentioned above).

    Another thing it sped up was setting up a ‘universal’ TetraHedra structure, which is constructed using an initializer type. The initializer supplies the rough vertex locations and the face mapping based on tetrahedron formulas (more or less as if they were read from a data file), and the object is constructed using whatever vertex format is required by the shader example. Ah, sweet instant gratification. Nice. :)

    Well, nothing to get extremely excited about but I thought it was worth mentioning.

    Hope you like it and see you next time!

    A word from our sponsor

    If you’re lucky, you live in a part of this world where news is ubiquitous, where information flows abundantly, and where you have something to eat 3 times a day and more if you should wish to. It is in that same part of that same world, curiously, that headlines are screaming bad news into our faces from whichever angle you take it, day after day, relentlessly: massive job losses, tax increases, child molestation, uprisings, failing healthcare, traffic disasters, government corruption, credit default swaps..

    I’m sure it is a sign of the times, but right now the perception lives that everything we know, all the food we need, the water we use to wash ourselves, and the warmth we need to protect our kids and family has increasingly become part of a cataclysmic function, a dependency on an obscure technical stock-market index of sorts. That’s very natural, you say, because we live in a capitalistic world after all. The free market drives profit and loss, demand and supply, all by its own rules of greed and happiness. So yes, that’s actually a natural thing. Why, then, is the world seemingly going up in flames about this? What has changed? Why is natural food dirt cheap and therefore worthless to produce, yet increasingly expensive as soon as it needs processing? Why, why, why..

    Big banker names in the financial world are crumbling, but it is not they who are to blame. Yes, their attitudes have changed, they present facts that turn out not to be true at all, and they promise things they can’t possibly promise. But the real driver of mass psychology is the media. Their global mantra of one financial disaster after another, minutely analyzed and tastefully illustrated with thousands of economists’ doctoral-thesis models and estimates, is building us a self-fulfilling prophecy in 16.7 million colors, a fearful world vision that is out to debase whatever value there should have been in any investment ever made, by no matter whom. The only winner in this epic tale seems to be a guy called Warren Buffett. No, sorry, that was just a joke.

    The media claim a self-proclaimed moral duty to inform Warren and Joe Sixpack about their stocks and bets. To keep doing that profitably, news outlets and networks have collectively turned themselves into fiction publishers. It is no longer in their primary interest to bring real news or tell stale truths. One look at the dynamic headlines and today seems so different from yesterday, and yet.. doesn’t everyone agree that most days will still very much be like their yesterdays? It’s in news agencies’ first interest to sell believable stories with a profitable ‘wowie’ factor. They’re this close to selling action figures to go with them. The war on terror taught them how to wage fear in a populace by twisting stories to whatever extent was needed to drive a message home. And everything else, as they say, is just history.

    If you take another step back, beyond the increased media attention and coverage of these domino events, you also see something else: the once-communist China has turned into a giant rat-race pony on steroids. The once-communist Soviet Russia discovered the money game and is simply torturing neighboring states with oil and gas cuts as it pleases. The once honorable gentlemen of the British Isles have all signed up to play Onslow in a series called Keeping Up Appearances in the Commonwealth (how ironic a name). The once, well, New Yorkers are still dreaming of an America where everything is possible, and they all act just as selfishly as the next guy. The old continent is slowly grinding to a rusty halt while feeling incredibly intelligent and important. The third world is struggling to get into -but is firmly kept out of- class. And the Middle Eastern countries are sitting on their oil, more or less watching the rest of the world play tag, you’re it!

    Ok. Stop. Let’s not believe all of that for a minute. Look at the real capital in this world again: hi, yes, I mean you. We can’t have the financial markets run our society the way they do today, and we can’t have mass psychology dictate how valuable we are. In that sense, I agree with the protesters on Wall Street. What I hate is their destructive nature, because they provide no alternatives.

    And yet, here is a very simplistic idea: make all intended trade actions globally public and accessible at least 3 days before they are allowed to proceed. Remember, yesterday will be a whole different day. No more death marches, no more rallies, no more volume-trading spikes. It would take a lot of fuel out of the psychosis that currently grips the markets, wipe out rumors on rumors of murmurs, bring back common sense, and head off speculation against a currency, let alone a country. Trading would become a lot more strategic and less impulse-driven, and I think that is exactly what this world needs today.

    So, 2 cents from me, all reactions welcome.

    Premature optimization myth debunked

    Anyone who ever got slammed with the phrase “Hey, don’t go optimizing this prematurely! It’s the root of all evil! Knuth wrote it in his book! You idiot!” by some lunatic manager of sorts should follow their gut feeling and pick up the following to fight back against the idiomatic dogma:

    http://my.opera.com/Vorlath/blog/2011/08/14/optimizations

    It basically says: Knuth was an old fart who made love to inner loops. We’re way beyond that now. Either don’t give a fuck about optimization at all, or start caring from day 1. That is something well known in the gaming industry, where even the slightest difference in approach or implementation can bring a system to its knees if you’re not carefully considering all the details and, preferably, testing them out (i.e. trying all the options and comparing).

    Note to our future

    No big statements here, I just wanted to keep a quote at hand for future reference, something I just read in a book I keep coming back to. I started reading it before the unfortunate events occurred in peaceful Norway. Though the events described in the book were totally different, it was as if what I had just read was happening live on TV at the same time. The parallels were making me dizzy. I’m still reading this book today. I have a tendency to read the books I like very slowly.

    “If you lose your ego, you lose the thread of that narrative you call your Self. Humans, however, can’t live very long without some sense of a continuing story. Such stories go beyond the limited rational system (or the systematic rationality) with which you surround yourself; they are crucial keys to sharing time-experience with others.”

    “Now a narrative is a story, not logic, nor ethics, nor philosophy. It is a dream you keep having, whether you realize it or not. Just as surely as you breathe, you go on ceaselessly dreaming your story. And in these stories you wear two faces. You are simultaneously subject and object. You are the whole and you are a part. You are real and you are shadow. “Storyteller” and at the same time “character”. It is through such multilayering of roles in our stories that we heal the loneliness of being an isolated individual in the world.”

    [..]

    “Because it seems to me that these discrepancies and contradictions [in stories told by people that experienced the very same scene, ed] say something in themselves. Sometimes, in this multifaceted world of ours, inconsistency can be more eloquent than consistency.”

    - H. Murakami, Underground