Into the many-fold


    In loving memory

Earlier this week, sad news reached us that an old friend, Dimitri Smits (a.k.a. ‘Gongo’ or ‘Discordis’), had died at age 38. He died at home on the morning of 17 November 2014. Today, together with lots of other people, we said goodbye one final time.

Although we slowly lost each other to other activities and passions, it still came as a complete shock. It is absolutely incomprehensible that someone so young would leave us without any warning, without any good reason. I'm struggling to accept that it really happened. Something inside is aching.

People who knew Dimitri will remember him as a lively character, always looking to kick up some fun, but also a warm and gentle guy. I had the honor and pleasure of sharing one of his passions – computer graphics programming. I vividly remember him in some of our demoscene moments, our time at the university, and occasionally when we met elsewhere. I remember the endless FidoNet and IRC sessions, the BBS days, the coding tutorials, discussions, and pages and pages of emails and forum posts. Dimi was active in the presidium of the student association Winak, where he built his closest circle of friends, who would stay around for the rest of his career. In hindsight, those university days were endless in potential and possibilities. It's a remarkable sensation to realize that now that he is no longer with us.

I admired him, and sometimes envied the certainty with which he defended his ideas and positions. I remember him as a connecting figure, able to bind strong-minded people together and make them act as a team, often absorbing the shocks and playing the glue in between when needed. It seemed to come so naturally to him.

There are so many great and funny stories to tell, it would take a good couple of years to recount them all. If you have something to share, please do. There are so many things he left behind, unfinished. I consider this digital account a starting point, to keep his memory alive, and to find a way to say goodbye, in a way that he would understand. Dimitri, your code ran blazingly fast, optimized with all the tricks in the book. That one last trick, I'll never really understand.

    My condolences and thoughts are with his family and friends.

    Farewell my friend, hope you’re safe. We’ll miss you. Sleep tight.

    Politics revisited

     

    Analysis

In the last few years, right-wing parties have gathered more and more voters, to the point that much of mainland Europe now has ‘nationalist’ parties in office and in government. After decades of political exclusion and unbridled tenacity, the constantly repackaged and restyled message is gaining more and more traction across most generations. The message itself is often one of anti-: a disdain for certain common themes, situations or problems, and a profound wish to change the system. Almost all of them frame the neo-liberal/socialist tandems that have ruled most European countries – the (self-proclaimed) ‘democratic’ parties – as the cause of all problems, and specifically target the socialist leg.

The need for change is not in itself the point of discussion. All situations are dynamic; it is only normal to constantly re-evaluate and take action in case of serious derailment or problems. And the problems are abundant, the challenges huge. No surprises there either.

So why is the public warming up to nationalist ideas today? It is of course partly the ideology, but three other factors are also at work:

• The change in style, language and, in some cases, the tactics used to acquire power is remarkable. One of the catalysts in this process is the exceedingly fast media and the ability to infuse it with new half-truths, opinions and documents from academic sources that are hijacked to sell the message. The traditional parties are often too slow and poorly prepared for the constant assaults, and their politicians are not prepared for the incredibly fast avalanches that only aim to push them out. In many cases, it is remarkable to see the party program of a right-wing party – assumed to be based on ideological grounds – change overnight, because strategic and tactical analysis of how to gain power dictates the change. This may even fade out the nationalist undertone of its discourse entirely, even though the same people with strong nationalist roots remain in the same party positions.
• The current economic malaise is another factor that underlines the alleged necessity of their reform plans. But while the antidote to the financial worries is usually austerity and budget cuts – a tune that rings well with other liberal and neo-conservative parties – it is often unclear how this will in fact benefit the population of the country rather than the wealthy. In this regard, the acquisition of power is based on the age-old Romeo and Juliet scene, where Juliet in this case is simply the capitalist market powers that be. Though the jury is still out, many have already drawn the parallel with the 1930s, the period before the Second World War.
• That said, the actual rise of nationalist themes has only moderately reinforced the nationalist base. Most of the gains rest predominantly on key figures who dare to challenge the established scene: Marine Le Pen, Nigel Farage, Bart De Wever and Geert Wilders all have something clownish about them, but manage to win part of the populace with their wordplay and provocative discourse. This sits well with younger generations that often lack the background information and historical context, and as such see no harm in playing with new cards. To them, the ideology is often just as bad as all the others, but it has to get a fair chance of proving that, too.

Most people understand the environment around them. Their personal as well as sociological problems are usually known and understood, and citizens are aware of the dangers and fears that they bring. In fact, the constant stream of news makes our daily life a constant uphill battle to be happy, despite the permanent buckets of fear poured over us night and day.

The solutions proposed by right-wing parties are usually simplistic and straightforward, easy to identify with, understand and reproduce. They usually aim to disband and break down existing structures and ideas, contain some promise of a better future, but fail to explain how they will deliver. Behind the simplistic communication, they often conceal strategic choices that are simply not explained; they further empower the strong, and bring almost nothing to the poor; they stigmatize and tax parts of society out of a perceived need to build a new identity, a new moral ground, a new invented legacy to believe in.

But perhaps worst of all, there is no direct link between the proposed solution and an inclusive vision or project for the future. There is no real creativity, only a call for enhanced “responsibility”, which amounts to promising: “if you play by our rules, we might be nice to you”. In other words: it puts the responsibility for organizing society on the people themselves, not on the system or government, to the extent that they can be made to suffer the consequences of a defunct society precisely because it was their own fault. This is a totally different type of world view from what Europe has experienced in the last 60 years.

Three other processes have invariably built a case for a shift to the right:

• The re-definition of identity. This is by far the most obvious change: a new promotional newspaper (“nationality today”), the change of language in certain public functions, flag-waving traditions on ceremonial and sporting occasions (“cycling”), quotas on external cultural influences, budget allocation control for certain types of critical assessments, the allocation of slots of ‘local’ culture, protective measures for the ‘local’ economy and social life (“we need x% local-language music on the radio and tv”). Attacks on existing social structures and networks (“removal of certain cultural budgets and lifelines”), and their replacement with alternative structures based on the younger generations (“make restaurants more child-friendly and force less sophisticated food options”).
• The re-definition of history. The change of names, places and events. Names that have existed for many years are rejected and new names are suggested, offering an alternative take on an invented past. Even though this is often quickly exposed as fraudulent to history itself, a small percentage of people will see rejection of the proposal as a rejection of their ideology. Places that have led to some great achievements are linked to practices which may not always have had the moral high ground. Events that were assumed to have had a positive outcome are minimized and their effects ridiculed. In each case, the essence of the matter is bullied out of context and the historic value attacked, so that it can be replaced.
• The re-definition of language itself. That is to say: words are used in different contexts, so that previous notions of them become stained and stigmatized, and further use is denounced, so that people who would normally accept such language suddenly distrust their original sources, and hence maybe change their voting behavior. One typical example is how anti-social measures are sold as social: if it helps the economy, the economy will then help the socially weakest. Another example is how solutions to enhance mobility are actually solutions to enhance a single kind of mobility, and force other types out. Or how people are forced to move closer to where they work, yet have to be flexible when their company decides to move. Or how certain creative models (for example for solar power) are turned against themselves by simply changing the context or the conditions that made them work in the first place, thus bringing nuclear power back into the picture. Or by changing the definition of discrimination. Or by changing the understanding of what a ‘federation’ actually means.

     

    Anti

If there is such a strong rising opposition, that must mean there is also a strong opposing side. But the lines are sometimes blurry. Multi-party governments are not a given, and in many countries they are not even possible. Where they are possible, parties previously associated with the left may now be tolerating, working together with, or supporting right-wing or nationalist parties. In that case the socialist base is quite hard to delimit. Where they are not possible, the socialist party will have to take a quite liberal stance, since capitalism makes it almost impossible to wield power without at least following the same rules and reasoning. In any case, the left is losing ground and losing voters.

Of course the changes proposed by the right will not simply go through; no half-sacred fortress was ever taken without at least some serious fight. In this case it means that unions and other targeted organizations will undoubtedly make their voices heard. I can only hope that people really understand what is at stake today, which is our modern-day Western social model. The “anti” movement is gaining traction, and the only antidote to that movement is to remain positive in projects, actions and opposition. Only by showing positive and creative ideas and solutions by example can the left win back enough potential to outrun the nationalist ideology and build an inclusive rather than partitioned society. Only by rising above the mocking and the ridicule can the left find legitimate grounds to defend a better world.

     

    What is at stake

The fears I had in previous years were mainly that, by the time I grow old, pension plans will simply have disappeared. A strong cause for worry, yet somehow the notion of working until I die never really bothered me. But let us suppose for a minute that the nationalist right-wing parties are actually concerned with the financial state of their territory (which in almost all cases is what has granted them power and airtime in media outlets across Europe today). In that case, the current generation of rulers is trying to save its own pension plans, and maybe (but that is not even in the fine print) those of future generations. In fact, it comes down to a sort of sell-out. Each new power shift is based on a promise to sell out faster and hotter than whoever is currently in power. So while the rising nationalists may be promising all kinds of greatness, they will in fact sell out parts of the wealth of the country to get that power, possibly empty the bags one more time, and turn off the light when closing the door.

To be fair, this also happened under previous (left) governments. What is new now is the totally different take that the right has on a social model of society. In order to empty that bag, the values and regulations that the right deems acceptable are quite different. The moral base on which they act is selectively blind and deaf to parts of society and their problems. They thrive on a conflict model rather than an inclusive model of potential. They are much more intolerant and passive-aggressive towards everyone who does not adhere to their ideology. And it is precisely this underground ideology, coupled with surface-level market liberalism, that worries me. Nationalist parties with historic roots reaching back beyond the Second World War have specific ideas about how societies should be organized. The Berlin Wall may have fallen, but the very nationalist ideology that wrecked so many lives just 20 years ago still haunts the tidy salons and clean conference rooms today. Of course, the soup may not be eaten as hot as it is served, and some nuance and a healthy dose of sanity and self-respect can go a long way, but it can't hurt to write it down lest anyone forget any of this happened.

     

    Suggestion

Read “Animal Farm”. Read “1984”. Read “Brave New World”. These are books written by previous generations and in previous times, yet it is simply appalling to find exactly the same themes, the same problems, the same alleged solutions, and in many cases, a strong warning to future generations to avoid the same traps.

     

    Why I write

I hope that, someday, my children can read this and smile in a world that has a brighter future than the one we currently face. My generation grew up in the aftermath of the '60s and '70s, and never felt the woes and hardship that must have reigned in wartime. The new generations today will find themselves in a very confused world, one that is tearing down instead of building up. One that is full of internal and external conflict, in which language is both the poison and the cure. I am a positive guy; every day I wake up and hope for the best. When I look at the future, I see that there is much to lose, and a lot to fight over. Let me be wrong, so that my children can develop themselves rather than be enslaved in someone else's conflict or war.

    DIY Binning

     

    Binning results

The binning operation comes into play when you have a lot of test results. Say you've measured the rendering times of your renderer for different scenes and different parameter settings (e.g. several resolutions). You may also have measured the number of rendered triangles, the number of lit pixels, or the amount of overdraw per pixel.

If you brute-force all those settings and keep log files that contain your results, you end up with quite a bit of data. To make this data meaningful, you have to group it into arbitrary subdivisions again, and then let loose all your statistical magic on each set (means, standard deviations, etc.). Suppose you are only interested in the most expensive frames, or all the data for model A, or all the measurements for resolution XYZ. While the interesting bit is in the statistics, getting those statistics on the right data is actually the hardest part, especially when the data set is large.

Now, you can argue that a few measurements are enough, and that you can extrapolate to other scenarios. This is probably true for simple linear, quadratic and cubic relationships between statistics, but in some cases the performance landscape of your function can have surprising outliers, especially if you are measuring functions of systems that you do not yet fully understand.

You can drop all that data in database tables and rely on SQL to deliver the right answers for you. While that may be a viable approach for a number of good reasons, sometimes dumping massive amounts of data points into a database is just not feasible. The next best option is to bin the data yourself. There are a number of approaches you can take. One is to use commercial software, which will often break down on large data sets. Another is to use R, the statistical scripting language. And another is to write it yourself, which is what I did.

    Ground truth

    First you have to set up the bins you are interested in. This means that for every ‘set’ (say: resolution 130 x 251), you aggregate all the samples for every specific feature that is relevant. This simply means: storing them in memory per ‘set-id’. When you’ve gone through all your data for this bin, you process all the relevant stats on it, clear everything, and move to the next bin, until you’ve exhausted all the bins.

The reason you have to store so much data is that we want to compute the sample and population variance in order to compute the standard deviation. To compute the sample variance and population variance, the summed squared error (relative to the mean of the bin) is divided by the total number of measurements for that bin (minus 1 for the sample variance). The bin's mean and the total number of measurements are only known after all measurements have been added to it, so they all really need to be stored in memory, which is kinda sucky.
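
As a rough sketch of that bookkeeping (the class and member names are made up, not the actual tool code), a bin can simply collect its samples and produce the statistics on demand; the grouping itself then boils down to something like a Dictionary<string, BinStats> keyed on the set-id:

using System;
using System.Collections.Generic;
using System.Linq;

class BinStats
{
    private readonly List<double> samples = new List<double>();

    public void Add(double value) { samples.Add(value); }

    public int Count { get { return samples.Count; } }
    public double Mean { get { return Count > 0 ? samples.Average() : 0.0; } }

    // Sample variance: summed squared error relative to the bin mean, divided by (n - 1).
    // The population variance would divide by n instead.
    public double SampleVariance
    {
        get
        {
            double mean = Mean;
            double sse = samples.Sum(s => (s - mean) * (s - mean));
            return Count > 1 ? sse / (Count - 1) : 0.0;
        }
    }

    public double StdDev { get { return Math.Sqrt(SampleVariance); } }
}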

     

    Optimizing in C#

There are two factors which can wildly run out of hand: memory and performance. I first concentrated mostly on performance, then gradually also started looking at memory consumption. The more memory that is touched and tracked by the garbage collector, the more time it takes for the application to context-switch and page-fault its way through the relevant memory.

    Performance was greatly improved by doing a number of things: profiling quickly indicated which parts were slow, and with a bit of restructuring and taking my specific scenario into account, a number of conditionals could be removed and for loops tightened so that the total amount of work dropped.

    Parallel = faster?

Next, I started playing around with Parallel.ForEach, so that all CPU cores were busy doing part of the work. Parallel.ForEach is great, but it can even worsen performance if you keep copying data around. It also brings up the issue of keeping the GUI up to date (using delegate Invokes – if you launch them from every thread, you generate a massive event overhead which ultimately results in UI stalls and a serious performance drain). At first, it was difficult to rewrite the code and get good CPU coverage. All cores would be busy, but actual application processing would consume only about 20% of the CPU time, the rest going to stalling and overhead. The initial changes are surprisingly easy, but getting good performance out of this is quite another ballgame. Once again, profiling helped fine-tune the application so that the cores were maximally loaded and the scheduling overhead was reduced to a minimum. This brought the processing time down from over half a week to two days.
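
For illustration, here is a minimal sketch (the file list and the parser are made up) of a pattern that reduces the copying: the thread-local overload of Parallel.ForEach lets each core aggregate into its own partial list, and the shared result is only touched once per thread.

using System.Collections.Generic;
using System.Threading.Tasks;

// ParseMeasurements(string) is a hypothetical parser that yields doubles from one log file.
static List<double> GatherMeasurements(IEnumerable<string> logFiles)
{
    var results = new List<double>();
    var gate = new object();

    Parallel.ForEach(
        logFiles,
        () => new List<double>(),                     // per-thread local state
        (file, loopState, local) =>
        {
            local.AddRange(ParseMeasurements(file));  // hypothetical parser
            return local;
        },
        local => { lock (gate) results.AddRange(local); });  // merge once per thread

    // Update the GUI once here (or throttled), not from inside the loop body.
    return results;
}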

    Concurrent = better?

After more twiddling I discovered that “yield return x” allows an iterator function to delay the computation needed to produce an element until the iterator is actually queried for it. But this proved to be tricky, since simply counting the enumerated elements already forces a full evaluation and cancels out that benefit. Additionally, it meant that Dispose() methods (file handles and streams) were now only run much later in the application, leading to serious memory consumption. Still, yield return meant one day of crunching instead of two.
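
A minimal sketch of such an iterator (the helper name is hypothetical): the file is only opened when iteration actually starts, and the reader is only disposed when enumeration completes or the enumerator itself is disposed, which is exactly why the Dispose() calls moved so much later. Anything like Count() or ToList() on the result forces a full pass; .NET 4's File.ReadLines does essentially the same thing as this helper.

using System.Collections.Generic;
using System.IO;

static IEnumerable<string> ReadLines(string path)
{
    using (var reader = new StreamReader(path))   // disposed when enumeration finishes
    {
        string line;
        while ((line = reader.ReadLine()) != null)
            yield return line;                    // produced one line at a time, on demand
    }
}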

Taking a better look at code coverage, I discovered that most of the overhead was due to memory copying and stalling on behalf of the UI, as well as a few conditionals that could possibly be removed. I started using concurrent containers such as ConcurrentBag and once again rewrote the code. A warning though: you have to be very careful when you use such constructs, because contention can be extremely expensive. ConcurrentBag keeps a bag per thread, and locks it at least twice per operation (Add/Take). In my case, I was just adding items, and reading happened after all items were added, and on a single thread. If you have high contention, with many different threads reading and writing at the same time, lock-free containers will probably be a better choice. CPU consumption shot up to the 85-95% range.
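
For contrast with the thread-local pattern above, a sketch of that write-only usage (again with made-up names): the parallel loop only ever calls Add(), and a single thread reads everything afterwards, so there is no Add/Take contention while the statistics are computed.

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static double AverageMeasurement(IEnumerable<string> logFiles)
{
    var bag = new ConcurrentBag<double>();

    // Producers: many threads, but only Add() is called here.
    Parallel.ForEach(logFiles, file =>
    {
        foreach (var value in ParseMeasurements(file))  // hypothetical parser
            bag.Add(value);
    });

    // Consumer: single-threaded read once the loop has completed.
    return bag.Average();
}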

    At this point something amazing happened: the profiler started showing String.Contains() as one of the prime contenders for spin-overhead. I was partly surprised to find it so openly up for grabs, and wondered if something could be done here. Of course, you can shorten the strings, but since it was based on actual data on disk, I didn’t want to touch that.

    Now String.Contains is fast..

    .. but somewhat surprisingly, this is even faster:

     

    return ((line.Length - line.Replace(searchText, String.Empty).Length) / searchText.Length) > 0;
    
    

     

    I came across this gem on this excellent benchmarking site. I had Contains() up on top in Intel’s VTune performance analyzer in the spin-overhead section.

After applying this, the spin-overhead on a test scenario was reduced from 240s to 40s for the very same run. Instead of a full day, the results now took under one hour to compute. And that was with a data set that had actually grown by 40% since the previous tests, as new cases had to be evaluated.

    Obviously stripping out any stored members in your measurements is going to win you space and time. After the initial cut, the deferred de-allocation started playing a role. I played around with the GC but did not really find that it helped much. Instead I came across another setting that you can use: server-based garbage collection:

     

    <gcServer enabled="true"/>
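
For reference, that element lives under the runtime section of the application's app.config; a minimal sketch of the whole file looks roughly like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <gcServer enabled="true"/>
  </runtime>
</configuration>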

     

Afterwards, I switched to the 64-bit memory model to avoid any other memory problems altogether.

     

    The media desert

    I am not a journalist or reporter

    I am a journal-ismist. And probably not a very good one, but I leave that judgment up to you.

Before we dive into the question of what is good and what is bad, I'll first clear up the terminology. Obviously, journalism is what journalists do. It's a profession. It's reporting on the truth of the day. Or at least, covering a story that could plausibly be the truth.
TL;DR: how about facts instead of news?

    Truth

But truth is a very intangible thing. One even wonders if it really exists, yet a lot of people claim to be looking after it, or searching for it, depending on the angle you take. Seeing the truth sometimes occurs through random stupidity instead of being a combination of genius insight, apt and abundant availability of information, or the power to filter and process it. Breathe here.

The Wikipedia page on journalism has probably the longest short introduction of any word on the whole site, precisely because it is hard to draw lines around the concept. Sometimes it is hard to separate the satire from the serious, the culture from the context, the fact from the perception, the ideology from the profession. There are social and moral ethics involved, an embedding cultural environment, a promise to report (only?) relevant new occurrences, insights, facts, etc., and another promise to ban all other political or economic influences. People like to make promises. And people are not failure-free. Oh, and one other thing: it has to earn a living, too.

Ok, so that is journalism. Should we believe everything the news tells us? You know the answer. Should we dismiss it because it's just news? Same answer. But then, if news is both unbelievable and valuable at the same time, how can our daily intake of information, a.k.a. news, be used to make good choices in our daily lives? Read the next paragraph, then ask yourself this question again.

It's up to you or anyone else to fill in his or her personal answer (free as in beer). My answer is to regard “news” at best as ‘possibly true’, and at worst as ‘an indication of corruption of context’ – call it proof that information manipulation is happening, and that there is a high chance there is purpose behind it. So in that sense, news, even when untrue, is also useful. It's just really hard to tell the two apart, and I feel that has only become more difficult. It's easy to call this paranoia, and such ambivalent and impractical stances never get you very far in this efficient world.

Here's a thought experiment: suppose you are sipping a long-drink cocktail in Aruba, looking at the waves while reading this. Do you feel the sand in your hair and the salt on your skin? Perfect. In that truth of sand, air and water, these rambling musings on the value of truth, the believability of journalism and the uselessness of the blur you taste after showering yourself with daily news are undoubtedly going to come off as theoretical, unimportant. Most probably, though, you sit behind a computer screen like I do, found time to read this, and while asking yourself if you are spending your time right, you also wonder how to interpret the world, like everyone does. People around you, and possibly you too, (have to) make decisions. They do that based on ideals, emotions, reasons, and some degree of power or authority, and those decisions will always have a cost and benefit effect at some point. The power element is the dangerous factor, because it skews visions, it skews arguments, it skews decisions; finally, it also skews context, and thus it is also bound to skew journalism, which drives the general perception of truth to a large extent. This is not new(s), of course, but I bring it up to underline that the belief structures we use daily are based on information that is subject to power shifts. And emotions play a big part.

    And so.

In marched the social media, where everyone can suddenly take on the role of reporter. Where the power to publish is democratized (to some extent) and where money is no longer a driving factor (to some extent). This is the 2.0 economy, where everything is free, including your personal data, and with new rules and ethics, new lobbyists and group mantras, new do's and.. well, more do's.., but also with secret deals between YouTubers and other parties, and new ways to convert money into power. The recent Arab Spring was initially a very positive example of this ‘emotionalism’, but it also proved how fragile and easily hijacked that power really is. There is still much to learn from this. Bottom line: power shifts occur much more globally and profoundly than ever before.

One way to detect these power shifts is by analyzing both the frequency and the type of language that is used in communication (textual, graphic, ..). It is perfectly possible to bring the same message from different points of view, with different agendas, and drive each of those points home by wrapping the information in a suitable linguistic format. The difficulty is often to maintain said format after the power shift, at which point it usually becomes apparent to the masses, but by then of course the power shift has already taken place. It is much harder to detect the changes early in the public discourse. When those formats start to change, when the choice of words changes, when multiple people suddenly use the same sentences and structures, or even when the meaning of words is actively being attacked and changed or ridiculed, there is a high chance that other ideals and interest groups are entering the equation and actively aiming to increase their influence on the topic. As in: skewing news, skewing perception, and most importantly: skewing the target group's common trains of thought.

So let me be clear about one thing

This blog is journal-ism. It's not truth. It's purely my rendition of my truth. And in my rendition, I observe an increased loss of signal and much more random noise in all sorts of directions. Here's an example: an automatic shutdown of a nuclear plant is sold as proof that said nuclear tech is no longer hip (and that plant was actually scheduled for dismantling 10 years ago), and at the same time sold as proof that we can't function without said plant (since it is summer and we're lucky that the energy demand is currently low). Both points of view are true. Both have economic consequences for energy lobby groups, and a decision in either direction holds important military, strategic, ecological and political as well as financial considerations. That's just one example, but there are many more. Double-speak has been part and parcel of the ruling elite, and especially today, training yourself to detect it is almost life-saving. But to come back to the point of this paragraph: while such reporting has educational and intellectual merit, decision-wise it gets us nowhere. The loss of signal is due to the bulk of information and implied consequences that are or are not included in the analysis. I believe those are important to tell, but so much remains mis- or un-communicated (actively or passively), misnomered, misplaced, misinterpreted, “knee-jerked” and “gun-jumped” that the important bits are sometimes “blatantly” (now there's a word) missing from the report, or totally “snowed under”.

Now, I gave you my opinion, just like all the other blogs, and that is journal-ism (hy-phun intended). And you can read the newspaper instead or watch TV, and you'll read or hear pieces written – supposedly totally uninfluenced by lobbyists, policymakers, networks, channels, etc. – and that's reporting, or straight-up journalism. I feel that neither of those is getting us anywhere, having us care about what-ifs and coulds and shoulds, without even knowing the full truth.

    Factology?

I could not think of a better name, but I think it should go without the ‘ism’ postfix, since that suffix merely tells us that somebody's personal influence is involved. And obviously, there is a search or quest to write down factual data, hence the name. Then again, it almost sounds like Scientology, the infamous science-turned-religion movement that is little more than a pyramidal get-rich-quick scheme. Factology is certainly not a religion; rather, it is a way to filter news and extract only true events and data, without the stories woven on top of them. So while History studies the more or less integrated processes behind the political, economic and social evolution of a certain subset in space and time, Factology strips current news down to its bare bones, such that only verifiable facts remain. It's more or less what Reuters and TASS are supposed to do, but cleaner, and more scientifically structured.

The best example of Factology is how we report on wars. On one side, you'll find numbers of casualties, ages of persons blown up, and places. A UN school building with kids under 6? The act is going to cost someone's head. A depot of insurgent fanatics that unfortunately also happened to have a blackboard and some children in it, and a few fellow countrymen on that dangerous mission? Not OK, but at least the 3.5 rockets are destroyed. For example, I have gripes with the terms ‘minor’ or ‘young adult’ in reporting, because first of all they generalize the issue to a whole section of the population, and secondly, those terms mean different things in different regions of the world. Why not simply report the true age and what the implications are in the juridical system of that locality? Let people draw their own goddamn conclusions instead of pre-chewing your vision for them. There are numerous other such examples. Sometimes crime suspects are named and opinions formed in the media months before the person has even had a fair trial. Sometimes the reporting is literally colorized, as if mentioning someone's skin color, social situation, religion or sexual orientation helps to make the news more understandable, because one culture's tags are already dumped all over it. The last thing I want to do is check the background of the reporting agency to see what kind of ‘implied’ content is hidden behind those ‘tags’.

    In all fairness^H^H^H^H^H^H^H^H^H truthfulness

The objective sort of reporting is hard and possibly too slow for this world. But news used to be slow. It used to be well written, it took time, and it took a few brain cells to read and process. Most importantly, it was checked and verified against a number of different sources before it was published, and the editor-in-chief made it his chief concern that it had social value. It was also the kind of information you could start relying on, or at least have a feel for when it was written in paper X by journalist Y, and come to agree with the style, the tone, the sort of passion it took to report it the way it was reported. I'm not going to claim it used to be better, but it was different. Today I regard the New York Times as a great news source. I don't want to generalize, but a lot of other stuff out there is ‘incredible’, ‘look what he did’ and jingles.

    Technological advancements, web2.0 and the social media revolution may unknowingly have obsoleted an important vein in human progress and evolution. If we want to continue to make sane, medium-term and long-term choices and be smart about our future, we need that vein back. It’s nice to play the split-second stock-market, but let’s just not fool ourselves until it crashes. The slow, factual information we need is the sort of daily education a nation needs in order not to revert into tribalisms and cultural insecurities and other short-sighted or purely economic reflexes. Web2.0 has gotten to a point where the Internet is being eclipsed by corporate sandboxes such as Facebook and Google, and the real content is increasingly un-public. Few people realize it, and just click through their free entertainment stream that is ever so slightly skewed towards a specific (shifting) center of mass. On a positive note, there is hope: social media are social after all, and nothing prevents them from aiming higher and advocating higher ethical and moral grounds. Wikipedia, wikileaks, and other agencies are still in their infancy when it comes to negotiating the terms on what is fact and what is fiction. The last thing we should do is institutionalize ‘truth-sayers’, but this will probably happen too, at some point.

    In any case, food for thought, be it on the slow side and a bit in rant-mode style.

     

    ps: I’m going to list a few examples here of what I call hollowisation of language:

* National Geographic, once a symbol of quality reporting on our planet's diversity and natural complexity and wonder, now daily reports on “drug network busts”, “police crime cases”, “air crashes”, “trucking problems”, “oil drilling challenges”, etc. These episodes are merely half an hour long, re-run the same footage three times, and take a full hour due to all the advertising breaks in between. And none of them has even remote ‘natural’, ‘geographic’ or even ‘scientific’ relevance. Pure shock-and-awe commercials. I guess by now no one thinks of polar bears and dolphins when they see the yellow “buy-me” box icon.

     

    Vectored Exception Handling and OutputDebugString

    Try me

If you write high-performance game code in c++, there is an important language feature that is usually simply not available: exception handling, in the form of try/catch statement blocks:

try
{
  /* code goes here that throws an exception, like so: */
  throw MyException(/*arguments*/);
}
catch (const MyException& e)  /* catch by (const) reference to avoid an extra copy and slicing */
{
  /* handle exception e that was thrown in the try block */
}
    
    

    A quarrel of stances

Firstly, try/catch is a bit broken in c++ because the language ‘forgot’ to add ‘finally’, which is present in Java and c# to handle situations where all code paths converge at the end of the function block. In those languages, ‘finally’ statement blocks are very explicit about releasing resources like file handles or memory buffers after handling the exceptions. Stroustrup defends this design omission by stating that the problem can be solved using RAII (Resource Acquisition Is Initialization), which is a valid, more object-oriented alternative.

While that is true, it forces people to generate class structure overhead, which you have to keep under control if you're writing game code. Secondly, Microsoft went so far as to actually add the ‘finally’ syntax as a Microsoft-specific compiler extension. Thirdly, if you look at Java, C# and Ruby, they all provide a finally-style statement block syntax.

    But apart from language syntax issues, these are the real reasons to stay away from try/catch blocks in game code:

1. try/catch can lead to exceptions being thrown from one call up to the parent call and so on. The instruction pointer typically jumps around the stack, meaning that your code cache gets utterly trashed. Performance-wise, this is by far the worst way of handling exceptional conditions.
2. The definition of what exactly can be understood by the notion of an ‘exception’ is not always clear to all programmers, and can become quite a philosophical debate. I'll argue here that having certain use cases handled as exceptions betrays that your code holds a certain bias as to what is an exceptional operational condition and what is not. For game code, this is weird, since you know all conditional code paths at all times. The last thing that should happen is to have control yanked out from under your feet by some 3rd party library call that bubbles an unknown exception up to the surface at totally the wrong place. And to make matters even more complex: there are two types of exceptions: system exceptions and standard exceptions.
3. Stack unwinding during exception catching does not travel across threads, and exceptions have to be explicitly handed over to the proper thread in multi-threaded applications. So in the end, programmers end up with a bit of a garbage bin of catch clauses at the top of the application threads. It's obviously bad, but it happens more than you think.
4. try/catch usage implies that you can trace back to the object type that generated the exception. This requires Run-Time Type Information (RTTI) to be enabled during compilation, so that you are able to use the typeid operator and maybe also some dynamic_cast. Obviously the performance penalty of dynamic_cast is nasty and the operators trash the address registers and cache lines, but the increased footprint of the executable and the associated memory cost per object also add to the wrong side of the equation. And since we're talking exceptions, most of your code should not even need it.

    Uh, wait..

    Ok, so basically game code is totally in love with RAII, though it might not always be implemented in a ‘nice’ (if such a thing should exist) Object Oriented approach. But wait! You said there still are system exceptions. What about those?

Indeed. The system sometimes throws exceptions that relate to OS operations that fail, for example when loading a DLL. In such cases, it can be interesting to trap the exception somehow, without having to resort to the whole RTTI/try/catch overhead. On Windows this can be done using Microsoft's Vectored Exception Handling.

    Basically it allows the application to install an additional custom exception handler for each exception error code that the system is able to throw, using the functions:

LONG WINAPI VectoredExceptionHandler(PEXCEPTION_POINTERS /*pExceptionInfo*/){ /* ... */ return EXCEPTION_CONTINUE_SEARCH; }

PVOID handle = AddVectoredExceptionHandler(1, VectoredExceptionHandler);
RemoveVectoredExceptionHandler(handle);  /* takes the handle returned by Add, not the function pointer */

    Check the MSDN for details.

    Deja vu

The point I wanted to make in this post is that while you're in your VectoredExceptionHandler function, there is a problem with using OutputDebugString(), as it again throws an exception when the debugger is not attached.

It does, however, seem to work when the debugger is attached. Suspicious as this is, at first I blamed my own code and immediately went looking for uninitialized values or other memory corruption, but nothing came up, and the same code runs fine in other code paths (frequently). After isolating it into a test program, it turns out it actually re-enters the VEH exception handler, and thus causes a stack overflow.

To conclude, this is not just an OutputDebugString() issue of course; it can happen at any point in your exception handler. As soon as you make a library call that depends on standard or system libraries, the VEH is vulnerable to re-entrant code. So guard your VEHs against re-entrant code paths, or you may end up never leaving the VEH at all :)

    Cheers!

    Natvis Matrix visualizer

    Long story short,

I moved to Visual Studio 2012 and then realized that all that tweaky autoexp.dat scripting that sports those fancy instance viewers during debugging was for naught. Since I didn't move up to VS2013 yet, I'm a bit stuck with the “new and improved” Natvis system. Natvis stands for Native Visualizer, because your visualizer can be compiled into a DLL (e.g. from c# code), which is a performance move away from the purely scripted approach (using autoexp.dat) that was in use before. The scripting was arguably hairy, and after a while, hair-pulling, so it's only reasonable that Microsoft made an effort to improve things. So, out with the old, in with the new!

I encountered three problems with Natvis. First, the original feature set somehow got trashed, and only a thin set of features remains. This means, for example, that some information can no longer be displayed, or that you can't format it correctly. Secondly, there are no conversion tools; everything that once worked is gone (although someone claims that autoexp.dat can be re-enabled for native edit-and-continue debugging in VS2013). Thirdly, if you don't want to jump into and out of c# projects to change your debugger, you can use…. wait for it… XML-based .natvis scripts instead.

    That’s right. I used XML and scripting in the same sentence. Off with my head!

    A Matrix Class

    Suppose you have a matrix class, say:

     template<typename T, int rows, int cols>
     struct MyMatrix
     {// your matrix members and methods here..
     };

That's all great, but how do we visualize it? It seems the .natvis system can only list array items (or other containers) if the count is known. But in the case above, MyMatrix has template arguments for the row and column dimensions and is of rank 2. In fact, the matrix structure can be a recursive template type definition (and to be fair, that's what I am actually using at the moment, but I omitted that for clarity). The template arguments can be captured using $T1, $T2, $T3, etc., but here's where things get interesting: you have to put them in curly brackets (e.g. {$T1}) in the DisplayString element to fetch their values. Of course, when expanding the elements, the curly brackets should not be used! That took me a while to find out.

Second point of interest, and perhaps the gist of this post: all examples out there refer to internal members of the structs, but what if you have a recursive type? Well, it is mentioned in passing in the official documentation, but you can use the this pointer just as well, and index-cast it any way you fancy, even using template types. This gives you access to anything that might be defined in the class/struct. The former type specifiers to print the values in a particular format are gone (apart from ,su for character strings, it seems). Attempts to refer to other scoped types (i.e. (othertype*)this) have failed so far, but maybe I made a typo.

     

    A solution

I came up with the following. It's far from perfect, and it tends to clutter the debug view with all the floats (because “,g” no longer works), but it does show how to expand elements in a two-dimensional ‘array’ behind the this pointer.

    <?xml version="1.0" encoding="utf-8"?>
     <!-- Place file into My Documents/Visual Studio 2012/Visualizers/ -->
    <AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
     <Type Name="MyMatrix&lt;*,*,*&gt;">
     <DisplayString >[{$T2}x{$T3}]({(*(($T1*)this))}, {*((($T1*)this)+1)}, {*((($T1*)this)+2)}, {*((($T1*)this)+3)}),({*((($T1*)this)+4)},{*((($T1*)this)+5)},{*((($T1*)this)+6)},{*((($T1*)this)+7)}),({*((($T1*)this)+8)},{*((($T1*)this)+9)},{*((($T1*)this)+10)},{*((($T1*)this)+11)}),({*((($T1*)this)+12)},{*((($T1*)this)+13)},{*((($T1*)this)+14)},{*((($T1*)this)+15)})</DisplayString>
       <Expand>
         <ArrayItems>
           <Direction>Forward</Direction>
           <Rank>2</Rank>
           <Size>$T2</Size>
           <ValuePointer>(($T1*)this)</ValuePointer>
         </ArrayItems>
       </Expand>
     </Type>
    </AutoVisualizer>

     

    Look, html-ified < and > brackets!

     

    You typically put a *.natvis file containing such scripts under the My Documents/Visual Studio 2012/Visualizers folder. You don’t have to restart Visual Studio, just restart your debugging session and it should work. If it does not work (i.e. you get the standard visualizer for a typed instance when you hover above it during debugging), you have to dig into the XML to find the problem.

    Room for Improvement

    The DisplayString is hard-coded for a 4×4 matrix. The obvious approach is to put conditional clauses for every possible combination of

    {$T2}x{$T3}

    but that just sucks as much as what I have now. Better would be to have some sort of iteration going on in the DisplayString, so that the length of it is dependent on the actual type, but that does not seem to be supported. Also, being able to split strings across multiple lines would be nice.

    I must say that during debugging, I found peculiar recursive behavior when improperly scoping the this pointer in brackets, so watch out for that. I expected 16 repetitions at some point, but only 9 displayed in the visualizer. This can be investigated further, possibly leading to a recursive debug visualizer, so that larger types can be viewed as a collation of smaller types (i.e. a 4×4 = 3×3+7 = 2×2 + 5 + 7 or somesuch).

I tested this on 4×4 matrices. For 2×2 or 3×3 matrices to work, you obviously need to remove or shorten the list of elements in the DisplayString (or make the list size-dependent). For non-square matrices I'm not sure if the correct size is automatically inferred from a given dimension (row or col) and the size of the struct. If not, you have to drop the Rank element and set the size to row*col.

    Conclusion

This demonstrates how to make .natvis display a matrix array of values (or any kind of list) with the limited feature set, regardless of the type structure underneath it.

Some of the reasoning for switching from the autoexp.dat syntax to something more modern makes sense. You can write visual debugging tools for your data, which is fantastic, and the performance improvement is undoubtedly beneficial in any debugging session. But I think most people are fine with simple tools, and being able to script them is something that has been downplayed a bit too much here. That the ‘old’ autoexp.dat system still functions under certain conditions makes forcing people to rewrite their visualizers in XML ill-founded and unreasonable.

The bad, bad, bad idea of trying to frame scripts in XML expressions has bitten every seasoned programmer at least once (and then hopefully we remember the pain long enough), but seeing even Microsoft fall into that trap reminds us to be thoughtful and repeat “I shall not script in XML” to ourselves every once in a blue moon. Ruby or Python would have made more sense.

    To be fair, the natvis feature set was extended in Visual Studio 2013 – still XML though – and this article is written from a VS2012 perspective. So it might be useful for people like me who can’t find a good reason to update every half-moon to a new VS version. Would be nice if VS2012 could be patched up to 2013 levels regarding these debugger issues, but that’s another story.

    If it’s useful, let me know how it fares, and of course all comments are welcome.

    Achievements!

     

I've been coding again for a while now, but yesterday (that means half a year ago, since this post has been sitting in my drafts section that long) I managed to tick off another ancient to-do on my list. Time for me to give back a few things that I learned the hard way.

The brief of it: I wrote a little effect intro containing a number of objects rendered in two passes, one with a shadow map and percentage-closer filtering (single light source), and one with a spotlight-lit normal map using hardware. It uses render-to-texture and alpha testing. Nothing truly incredible and earth-shattering, but I'm quite proud of it because I feel I finally crossed a line. Actually two lines: the technical challenge, and the fact that I finally bulldozed through my own ignorance.

    The vertex buffer trickery earlier in this blog is still totally there, so I guess that works well.

    Here’s my list:

    • The creation of vertex buffers creates and destroys a separate worker thread in DirectX9. It is best to minimize the creation of buffers per frame by caching / reusing them. The performance gain in debug mode is dramatic.
• FVF cannot be mapped to something meaningful if we use tangent, normal and binormal formats. Use D3DVERTEXELEMENT9 declarations instead.
• A small issue I learned is that if you apply the template construction for tangent-space coordinates (normal map), the order in the vertex definition matters, and you must make sure the tangent template is mentioned after all the fixed-pipeline definitions (position, normal, color, texcoord).
    • I used a D3DFMT_R32F type surface for the depth buffer / shadow map. It took me a while to realize I could only read the red component. It also supports alpha. D3DFMT_A32B32G32R32F works equally well, but I could not manage to store/read from the channels g and b (like e.g. w)
• Unless I overlooked the obvious, it seems you can't use the POSITION input in a pixel shader, since the GPU will have consumed it. If you need it, you have to pass it twice, once as the position and once as a TEXCOORD (which will be interpolated). That's quite ridiculous.
    • HLSL matrices are column major by default. It usually means a transpose of your matrices.
• There are two ways to upload the TBN values. A smooth surface (curves, spheres) needs a new tangent per fragment, whereas a non-smooth surface (polygonal) needs a tangent per tri. In the latter case, because we interpolate between vertices, we still have to duplicate them in each vertex, and you can't share the vertices because they may belong to neighboring non-smooth tris with different normals. In the first case, computation happens in the vertex shader (re-normalize in the fragment shader!), whereas the latter case can use pre-computed normals.
    • Normal maps behave badly in typical cases like e.g. spheres that converge in zenith points. Use appropriate topologies to get around this (e.g. geosphere)
    • Arrays are only supported locally in shaders? Passing them values from outside did not seem to work. I came across this issue when setting multiple light sources for a fragment shader.
• It seems bind / render / unbind is mandatory, even if you render multiple objects with the same shaders. I guess one possible way to optimize draw-call performance is to bundle objects together using vertex list tunnels.
    • I wasted a bit of time trying to combine the depth output of my shaders with forward rendered lines with z-test clipping. I solved it by using a vertex shader for the lines as well. For now I was a bit too focused on getting the shadows to work to really dig into this, but I’m pretty sure this isn’t necessary.
• The tex2DProj function only works from profile ps_4_0 onwards :/. Implementing PCF on ps_3_0 hardware means cutting the filter back to 9 samples instead of 16, with a noticeable loss of quality.
• Filtering a 4×4 kernel in two for loops did not work for me on ps_3_0. Not quite sure why; maybe the compiler runs out of registers? Unrolling the loops worked fine.

If you have any remarks, ideas, questions or answers, I'd be glad to hear about them!

That said, this achievement has only kicked my enthusiasm up a few more notches and kept my coding skills brimming with new ideas. Here's what happened in the second half of the year: after this little PC shadow-mapping try-out I added a couple of other tricks to my proverbial bag:

• I redid the whole DirectX setup, but now with the OpenGL shader model!
    • I built my own little demo rendering environment.
    • I gave a crash course on C++ AMP, yay!
    • I have Oculus Rift working on OpenGL, also yay. Might post some code although it’s pretty easy to port from the tutorial /SDK code.
• I literally went all out and turned my knowledge of GPGPU computing upside down, writing all sorts of stuff in CUDA and OpenCL, including direct integration with DirectX and OpenGL respectively. CUDA is blazingly fast and rather well designed, while OpenCL is just as clean as you would expect from OpenGL's cousin, and actually a little friendlier. I'm curious where that leaves us with AMD's new Mantle spec. My experiences with C++ AMP were also OK. I quite liked the fact that it's totally integrated into your compilation, but debugging this shit remains a drag.
    • I now have this ‘R’ language on my radar for some reason.
    • Implemented a full testing bench for thousands of tests (including the publicly available bg, doa, sc1, wc3, etc. maps) and ran it on various kinds of path-finding algorithms. Lots of data to read and lots of tweaking ahead. I also did work on path finding and path smoothing, and I’m totally digging this almost theoretical shit! I should be able to publish something out of this soon.
    • The whole testing bench also works for 3D models (PLY) and includes a voxelizer, a path finder and a ray-caster. All of this runs pretty fast, but there’s still room for improvement. The indexing scheme and octile storage structures are blazingly fast.
    • For DragonCommander, after refactoring the hell out of some parts, I finally got around to finishing the path-reuse scheme and adding orientation-based position selection, which works remarkably well. I’m anxious to see how close I can get to good group behavior, but I’m rather short on time. I kept a full diary of my progress, so I should be able to write something on this too.
    • Devised a little tracking routine to track the motion of n unrelated and unsorted objects in k-space, by estimating frame coherence based on path direction, velocity and acceleration (a small sketch of the idea follows after this list). This routine was used in an IPEM project and will probably be published soon.
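    The tracking routine itself isn’t shown here, so the following is only a minimal sketch of how I read the idea – in 2D instead of k-space for brevity, with a naive greedy matching, and with hypothetical names throughout:

        // Predict each tracked object's position from its previous position, velocity and
        // acceleration, then match every unsorted observation to the closest prediction.
        #include <cstddef>
        #include <utility>
        #include <vector>

        struct Track
        {
            float px, py;  // last known position
            float vx, vy;  // estimated velocity
            float ax, ay;  // estimated acceleration
        };

        // For every track, return the index of the observation closest to its
        // constant-acceleration prediction (-1 if there are no observations).
        std::vector<int> MatchObservations(const std::vector<Track>& tracks,
                                           const std::vector<std::pair<float, float> >& observed,
                                           float dt)
        {
            std::vector<int> match(tracks.size(), -1);
            for (std::size_t i = 0; i < tracks.size(); ++i)
            {
                const Track& t = tracks[i];
                // p' = p + v*dt + 0.5*a*dt^2
                const float predX = t.px + t.vx * dt + 0.5f * t.ax * dt * dt;
                const float predY = t.py + t.vy * dt + 0.5f * t.ay * dt * dt;

                float best = 1e30f;
                for (std::size_t j = 0; j < observed.size(); ++j)
                {
                    const float dx = observed[j].first  - predX;
                    const float dy = observed[j].second - predY;
                    const float d2 = dx * dx + dy * dy;
                    if (d2 < best) { best = d2; match[i] = static_cast<int>(j); }
                }
            }
            return match;
        }

    A real implementation would additionally prevent two tracks from claiming the same observation and refresh the velocity and acceleration estimates from the matched positions each frame.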

    I might start posting code fragments, or at least some nifty demo material, here if there’s a need for it, but at the moment I’m knee-deep in project work and paper deadlines, and I’ll stay in that zone for a little while longer.

    My wish list for next year: something with robotics? Drones perhaps?

    Anyway, I wish you all a warm and safe X-mas, or at least a happy new year! Cheers!

    South of here

     

    In July we got ourselves a well-deserved holiday. Well, to be more precise, I was forced to take it at work, but you don’t hear me complaining. I’m fully enjoying it with my wife and kid, since we’re currently in the one spot in Europe that has amazing sun, blue sky and water, and bearable heat, combined with a great outdoors, good food and friendly people. I guess if you belong to my ‘friends’ category, the envious types are already going ballistic over this, since it’s all rain and downpour up north. :)

    Some thoughts on the time that has passed. Last year, after headaches and financial worry, we wrestled ourselves through our house reconstruction. That actually went extremely well. I’m now praying that the financial woes raging in the south of Europe are not going to affect our situation, but deep down, I can’t believe that won’t be the case. I’ve read far too much ZeroHedge to keep an ignorant stance on the matter. Time will tell how this develops.

    In May of this year (2012) we got ourselves a lovely son, Arthur, who is two months old at the time of writing. He’s “growing like cabbage” – a Dutch expression which I hope translates into correct English – and we are very lucky that he eats and sleeps like a pro. Even in this fairly hot summer at the Côte d’Azur he’s managing unbelievably well and laughs a lot, and apart from not exposing him too much to the burning sun between noon and 4 pm, we don’t have to do anything special to keep him happy. A totally adorable darling he is. The women we encounter – without exception – all fall victim to his charms, overloading us with compliments. The kid’s a bigger fad than a space rock star!

    Having such a happy little child really changed a few things in my life. It puts more focus on my relationship, makes me work harder to get things ready and happening, and there is of course the feeding routine, the nappies and diapers, the bathing ritual, singing songs and acting like an all-round idiot. Some of these things are easier to do than others. I used to be convinced that breaking into my sacred 8 hours of sleep would wreck me, and while some nights can definitely be rough, this proves to be untrue for the most part. Isabelle takes a lot of the work out of my hands, so of course that skews the picture a bit, but the gist of it is still true. She often tells me of her admiration for single mothers who have to cope on their own, and I can totally relate to that. The total net effect of the three of us surviving through all of the days and nights – and this totally surprised me – is the realization that a human is much stronger than he thinks he is, and can bear much more than he thought possible. I generalized the statement because I can’t believe I’m that special at all.

    Let me explain: up until recently, I’d had a pretty minimalistic view of my personal capacities. I’m quite a sensitive person, and when people repeatedly tell you how hellish it would be to care for a small baby, well, you start to accept a mantra if people repeat it to you often enough. I sympathized and nodded, smiling politely as they told their stories, and reserved an ever diminishing amount of hope that just maybe things would be different for me if I ever became fortunate enough to have kids. This was tough. And I mean really tough. For more than 10 years on end, people gave me lectures on what it means to have kids. I hold no grudge against them of course, but it eats at your confidence, and when you are looking forward to the experience of fatherhood, it also eats away part of your idealistic world view. After a certain amount of time, the sincere hopes and longings I started out with were gradually buried under layers of fearful goo, leaving me with a mechanical hunch to pursue a faint shadow of what were once pristine motives, which only added to the doubts. Add to this a few biological fertility problems, and you may start to ‘get’ the overall picture of what we went through.

    And then Isabelle did get pregnant, and Arthur did get born, and all of a sudden you get to experience everything first-hand. This was nothing short of an emotional wall collapsing. Suddenly the hopes and yearnings that were buried in our deepest caves and canyons resonated firmly through our fibers. All the reasons and ideas literally came back to life, and one by one I started to revisit all the things people had told us about. And one by one, those absolute truths crumbled to pieces. Yes, it’s an added responsibility, and yes, you have to adjust your daily life and give up things, and yes, things smell bad sometimes, but it is by no means hard. Maybe it’s because I’ve heard everything there is to hear about having babies for the last 15 years – I had plenty of time to imagine what it would be like. Or maybe it’s because I’m older than most parents are when they get kids and can, to some extent, let go of my favorite occupations more easily. (It still stings sometimes.) Or maybe it is because enduring all the psychological hardship is finally paying dividends. Or maybe it’s simply because Arthur is just a super kid. Whatever the reason, what I can say is that I tremendously enjoy being a father. It has made me stronger and a bit more confident. I’m giving it my very best, and I know it’s working out when I hear the little guy laugh out loud, content, like he does every day. That’s really all the feedback I need to regain my composure. It wipes out all that effort to store my hopes away in that private place full of dusty cobwebs. It slowly dissolves the feeling of driving with the handbrake on that I’ve lived with for the last few years.

    In the process of all this, and next to my increased interest in following the financial developments around the world, I also discovered that I have grown fond of writing things down. For a lot of people (including me), structuring thoughts into strings of words realigns the mind to whatever meaning speaks from them. For those people, writing can be a means to grow, to build on top of what is already written. Mind you, there is absolutely no importance tied to the number of people reading what I write here, nor do I care whether people ‘like’ what I’ve written. It’s nice to hear, of course. But if that expels me from the Facebook generation, so be it. The important thing for me has already happened: I wrote down this text. And realizing the metaphysical importance of that process is another leap. As you write, you tend to reform sentences, replace words, restructure the content, and elaborate on or cut out parts that add no value. But the operations on the text also reflect a mental transformation. If you practice this a lot, you automatically evolve your brain patterns, too. It’s totally unsharable, but probably the most important aspect. I guess you could compare it to sitting at a bar. The first few times, you feel a bit uneasy starting conversations, but as you repeat the act, you grow more proficient at talking to people. I just never was the type to sit in a bar much, and with writing, there’s the added benefit that no one has to go through the various drafts of your musings. At university, besides “nerding” around in the demo-scene and playing chess online, the poetry mailing list was one of my favorite time-sinks. If people were to read its contents now – I secretly hope that nothing of it survived the test of time, though in this digital era you never know – they would probably have a good laugh. I suspect the unbridled creative nature of that occupation helped me develop a taste for writing.

    So yeah. Confidence. Not an easy topic to write about. It’s now been two months since I wrote the previous parts, and I’m still gathering bits and pieces every day, while at the same time learning to blend into this city of surrealism with increasing success. The process teaches me that protective environments have to be temporary, or they do more harm than good. I know I still have a lot of work to do, but I’m out there pushing for it. So here it is. My honest account, shared and published. I win this war. All that remains are the battles to survive, and I know these. Here I come again.

    More good news reached us today. One of my best friends just had a baby – a little girl who already has a big sister and two great parents, whom we don’t get to see often these days, which is a shame. Isabelle and I wish the whole family all the best!

    Vertex format templates

    While my pregnant girlfriend is counting the weeks until D-day, I managed to code a bit again. I may be more of an all-round game coder, but I know quite a bit about the math involved in lighting models and post processing techniques. Still I have to admit, I’m old-school. The last lighting function I wrote was in software in an unrolled loop of a triangle span-pooled sub-texel renderer (in assembly). Hardware shaders – not to mention the various kinds of languages and hardware specifics – always seemed a bit of a drag to get into. Over the years, the field has matured quite a bit, and tapping into the huge amount of fancy papers on shading made the itch to try this myself only stronger. So I bit the bullet and dived into the NVidia shader tutorials.

    I immediately ran into a quite obvious fact: the vertex format is pretty much a defining factor as to what you can do in your shader. The first shader stage – called vertex shader – obviously works on the vertex format. The second stage – called fragment shader – is directly based on the first one; it samples/interpolates from that stage. This may be less of an issue if you have an editor + compiler – which we happen to have in development at Larian a.t.m. But for the sake of learning the trade, I’m sticking to hand-coding them at home for now.

    So why do I bring this up? Well, if you’re setting up a scene to render, you obviously need to fill an object with data, and the vertex format you use for this is usually fixed. But as you go through the shader tutorials and the shaders gradually become more complex, the vertex format starts to change too. One can just start over in a new project and go from there; that’s fine, it works. However, I want to keep all my little scenes in the same app, so that I can just browse through all the shader samples without hassle. The difficulty lies in the fact that every sample may need a unique vertex format composition to feed its associated shaders. So how do you specify those formats? I didn’t like the idea of #ifdef guards, nor the idea of a gazillion typedefs, so after a bit of googling I stumbled upon this.

    The idea is to use type lists, such that each vertex part (position, normal, color, texture coordinates, etc.) is included in a type list. Here’s some code:

        template <vertex_use use, typename Next = void>
        struct vertex_info
        {
            struct Vertex
                : public vertex_part<use>
                , public Next::Vertex
            {
                typedef vertex_info <use, Next> Type;
                void Lerp(Vertex& from, Vertex& to, float t)
                {
                    Next::Vertex::Lerp(from, to, t);
                    vertex_part<use>::Lerp(from, to, t);
                }
            };
    
            static const DWORD GetFlags()
            {
                return vertex_part<use>::GetFlag() | Next::GetFlags();
            }
    
            static void build_format(vertex_format & fmt)
            {
                Vertex v;
                vertex_node n;
                n.usage = use;
                // 'typename' is needed here because Next::Vertex is a dependent type
                n.offset = reinterpret_cast<byte *>(
                    static_cast<typename Next::Vertex *>(&v))
                    - reinterpret_cast<byte *>(&v);
                fmt.use.push_back(n);
                Next::build_format(fmt);
            }
        };
    
        template <vertex_use use>
        struct vertex_info <use, void>
        {
            struct Vertex : public vertex_part<use>
            {
                typedef vertex_info <use, void> Type;
                void Lerp(Vertex& from, Vertex& to, float t)
                {
                    vertex_part<use>::Lerp(from, to, t);
                }
            };
    
            static const DWORD GetFlags()
            {
                return vertex_part<use>::GetFlag();
            }
    
            static void build_format(vertex_format & fmt)
            {
                Vertex v;
                vertex_node n;
                n.usage = use;
                fmt.use.push_back(n);
            }
        };

    For a detailed explanation of how to apply a type list to a vertex declaration, please visit the web-link above. The neat thing is that it can also be used to return the correct stride of your Vertex format, and even the DirectX9 flags used to communicate with the shader API. Nothing but goodies!
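    To make that concrete, here is a small usage sketch of my own (it assumes the vertex_use enum provides the v_position, v_normal and v_color values that the specializations further down use):

        // Compose a vertex format from the type list: position + normal + diffuse color.
        typedef vertex_info<v_position,
                vertex_info<v_normal,
                vertex_info<v_color> > > MyFormat;

        typedef MyFormat::Vertex MyVertex;

        // The FVF flags and the stride fall out of the composition automatically.
        DWORD fvf    = MyFormat::GetFlags();  // D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE
        UINT  stride = sizeof(MyVertex);      // what you pass to SetStreamSource

        vertex_format fmt;
        MyFormat::build_format(fmt);          // fills fmt.use with one node per part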

    But that is not all. If you implement something like lerp for all your parts, you can instantly lerp between any sort of Vertex, like so:

        template <vertex_use use>
        struct vertex_part
        {
            const vertex_part<use> Lerp(const vertex_part<use>& a, const vertex_part<use>& b, float t) { return *this; }
        };
    
        template < >
        struct vertex_part<v_position>
        {
            float3 position;
            static DWORD GetFlag() { return D3DFVF_XYZ; }
            const vertex_part<v_position> Lerp(const vertex_part<v_position>& a,
                                               const vertex_part<v_position>& b,
                                               float t)
            {
                position = a.position + (b.position-a.position)*t;
                return *this;
            }
        };
    
        template < >
        struct vertex_part<v_normal>
        {
            float3 normal;
            static DWORD GetFlag() { return D3DFVF_NORMAL; }
            const vertex_part<v_normal> Lerp(const vertex_part<v_normal>& a,
                                             const vertex_part<v_normal>& b,
                                             float t)
            {
                 normal = a.normal + (b.normal-a.normal)*t;
                 D3DXVec3Normalize(&normal, &normal);
                 return *this;
            }
        };
    
        template < >
        struct vertex_part<v_color>
        {
            DWORD color;
            static DWORD GetFlag() { return D3DFVF_DIFFUSE; }
            const vertex_part<v_color> Lerp(const vertex_part<v_color>& a,
                                            const vertex_part<v_color>& b,
                                            float t)
            {
                float a_alpha = float((a.color & 0xFF000000) >> 24);
                float a_red   = float((a.color & 0x00FF0000) >> 16);
                float a_green = float((a.color & 0x0000FF00) >> 8);
                float a_blue  = float((a.color & 0x000000FF));
                float b_alpha = float((b.color & 0xFF000000) >> 24);
                float b_red   = float((b.color & 0x00FF0000) >> 16);
                float b_green = float((b.color & 0x0000FF00) >> 8);
                float b_blue  = float((b.color & 0x000000FF));
                float alpha = a_alpha + (b_alpha-a_alpha) * t;
                float red   = a_red   + (b_red-a_red)     * t;
                float green = a_green + (b_green-a_green) * t;
                float blue  = a_blue  + (b_blue-a_blue)   * t;
                color =    (((int)alpha << 24) & 0xFF000000) +
                           (((int)red << 16)   & 0x00FF0000) +
                           (((int)green << 8)  & 0x0000FF00) +
                           (((int)blue)        & 0x000000FF);
                return *this;
            }
        };


    Isn’t template specialization cute? Obviously you can do the same for slerp and other interpolation schemes.
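    And building on the hypothetical MyFormat/MyVertex composition sketched above, interpolating two complete vertices then takes a single call – handy when, say, splitting an edge at its midpoint:

        MyVertex a, b, mid;
        // ... fill a and b from your mesh data ...
        mid.Lerp(a, b, 0.5f);  // each vertex_part specialization lerps its own member into 'mid'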

    I’m still toying around with this, but I haven’t run into any problems yet. This setup particularly shines when subdividing a model – any model, with any sort of vertex format – just by telling it how to split up, as long as the splitter function defers to the vertex format for the specific vertex operations (such as the lerp mentioned above).

    Another thing this sped up was setting up a ‘universal’ TetraHedra structure, which is constructed using an initializer type. The initializer supplies the rough vertex locations and the face mapping based on tetrahedra formulas (more or less as if they were read from a data file), and the object is constructed using whatever vertex format the shader example requires. Ah, sweet instant gratification. Nice. :)

    Well, nothing to get extremely excited about but I thought it was worth mentioning.

    Hope you like it and see you next time!

    A word from our sponsor

    If you’re lucky, you live in a part of this world where news is ubiquitous, where information flows abundantly, and where you have something to eat three times a day, and more should you wish. It is in that same part of that same world, curiously, that headlines scream bad news in our faces from whichever angle you take it, day after day, relentlessly: massive job losses, tax increases, child molestation, uprisings, failing healthcare, traffic disasters, government corruption, credit default swaps..

    I’m sure it is a sign of the times, but right now the perception is that everything we know, all the food we need, the water we use to wash ourselves, and the warmth we need to protect our kids and families have increasingly become part of a cataclysmic function, a dependency on some obscure technical stock-market index. That’s only natural, you say, because we live in a capitalistic world after all. The free market drives profit and loss, demand and supply, all by its own rules of greed and happiness. So yes, that’s actually a natural thing. Why, then, is the world seemingly going up in flames about it? What has changed? Why is natural food dirt cheap and therefore worthless to produce, yet increasingly expensive once it needs processing? Why, why, why..

    Big banker names in the financial world are crumbling, but they are not the ones to blame. Yes, their attitudes have changed, they bring out facts that turn out not to be true at all, and they promise things they can’t possibly promise. But the real driver of mass psychology is the media. Their global mantra of one financial disaster after another, minutely analyzed and tastefully illustrated with thousands of economists’ doctoral-thesis models and estimates, is building us a self-fulfilling prophecy in 16.7 million colors, a fearful world vision out to debase whatever value there ever was in any investment made, by no matter whom. The only winner in this epic tale seems to be a guy called Warren Buffett. No, sorry, that was just a joke.

    The media claim a self-proclaimed moral duty to inform Warren and Joe Sixpack about their stocks and bets. To keep doing that profitably, news outlets and networks have collectively turned themselves into fiction publishers. It is no longer in their primary interest to bring real news or tell stale truths. One look at the dynamic headlines and today seems so different from yesterday, and yet.. doesn’t everyone agree that most days will still very much be like their yesterdays? It’s in news agencies’ first interest to sell believable stories with a profitable ‘wowie’ factor. They’re this close to selling action figures to go with them. The war on terror taught them how to wage fear in a populace by twisting stories to whatever extent was needed to drive a message home. And everything else, as they say, is just history.

    If you take another step back, beyond the increased media attention and coverage of these domino events, you also see something else: the once-communist China has turned into a giant rat-race pony on steroids. The once-communist Soviet Russia discovered the money game and simply tortures neighboring states with oil and gas cuts as it wishes. The once honorable gentlemen of the British Isles have all signed up to play Onslow in a series called Keeping Up Appearances in the Commonwealth (how ironic a name). The once, well, New Yorkers are still dreaming of an America where everything is possible, and they all act just as selfishly as the next guy. The old continent is slowly grinding to a rusty halt while feeling incredibly intelligent and important. The third world is struggling to get into – but is firmly kept out of – class. And the Middle Eastern countries are sitting on their oil, more or less watching the rest of the world play tag, you’re it!

    Ok. Stop. Let’s not believe all of that for a minute. Look at the real capital in this world again: hi, yes, I mean you. We can’t have the financial markets run our society the way they do today; we can’t have mass psychology dictate how valuable we are. In that sense, I agree with the protesters on Wall Street. What I hate is their destructive nature, because they provide no alternatives.

    And yet, here is a very simplistic idea: make all intended trade actions globally public and accessible at least three days before they are allowed to proceed. Remember, yesterday will be a whole different day. No more death marches, no more rallies, no more volume-trading spikes. It would take a lot of fuel out of the psychosis currently in the markets, wipe out rumors upon rumors of murmurs, bring back common sense, and square off speculation against a currency, let alone a country. Trading would become a lot more strategic and less impulse-driven, and I think that is exactly what this world needs today.

    So, 2 cents from me, all reactions welcome.