Planet Gamedev

Game AI for Developers

BROADCAST: Crowd Animation Techniques and Tools from Visual Effects (April 30th)

by Alex J. Champandard at April 27, 2015 12:00 AM

This upcoming broadcast on Thursday, April 30th at 19:00 UTC will take place online within your browser using streaming audio/video:

“This broadcast with Michaël Rouillé, CTO at Golaem, will explain the crowd simulation techniques and tools required to animate convincing crowds in many films, TV series and adverts. He'll also discuss how these ideas and tools can be applied within real-time simulations and games.”

To subscribe for email reminders and check the exact time in your current timezone, visit this broadcast's page on AiGameDev.com.

Confetti Special FX

GBR Innovator: Confetti Interactive, Creating Graphics For Games and Entertainment

by Wolfgang at April 26, 2015 04:53 PM

Here is an article on Confetti by Games Business Review:

http://gamingbusinessreview.com/features/gbrinnovators/gbr-innovators-confetti-special-effects

Timothy Lottes

Source-Less Programming : 5

by Timothy Lottes (noreply@blogger.com) at April 26, 2015 02:56 PM

Boot Loader Bring-up
Managed to get the boot loader done, which includes the following steps,

(1.) Move the stack seg:pointer (since next step overwrites it).
(2.) Use BIOS to read the other 62 512-byte sectors for the first track.
(3.) Use BIOS to switch to 80x50 text mode and load custom character glyphs.
(4.) Use BIOS to set EGA text palette to 0-15 with 0 for overscan.
(5.) Program VGA palette registers for those 16 colors.
(6.) Use BIOS to enable A20.
(7.) Turn off interrupts, and relocate the image's 63 sectors to zero.
(8.) Load zero entry IDT, minimal 3 entry GDT.
(9.) Enable protected mode and jump to the 3rd sector.

The 2nd 512-byte sector contains the 8x8 character bitmaps for the first 64 characters. The majority of the time was spent making a nice font, getting colors the way I wanted, and prototyping editor look and feel (without building it).
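For a feel of what such a tool does, here is a hypothetical minimal C sketch (my own illustration; the glyph and layout details are invented, only the 64-glyphs-of-8-bytes sector format comes from the post) that packs one 8x8 glyph, one bit per pixel, into a 512-byte sector image:

#include <stdint.h>
#include <stdio.h>

int main(void) {
  static uint8_t sector2[64 * 8];        /* 64 glyphs x 8 rows = 512 bytes */
  const char *glyph[8] = {               /* an 8x8 'A', sketched in ASCII  */
    "..XXX...", ".X...X..", ".X...X..", ".XXXXX..",
    ".X...X..", ".X...X..", ".X...X..", "........" };
  for (int row = 0; row < 8; row++) {    /* pack one row, MSB = left pixel */
    uint8_t bits = 0;
    for (int col = 0; col < 8; col++)
      if (glyph[row][col] == 'X') bits |= (uint8_t)(0x80 >> col);
    sector2[1 * 8 + row] = bits;         /* store as glyph index 1 */
  }
  fwrite(sector2, 1, sizeof sector2, stdout);  /* emit the raw sector */
  return 0;
}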

Didn't feel like fully hand assembling 16-bit x86 machine code for the boot loader, so I used NASM and hexdump to accelerate the process (to provide machine code I could pad out to 32-bit alignment). Also wrote a quick C based tool to bootstrap the process of building the loader, something which would enable me to easily build out an annotated image and show a printout in the console of what I'd be seeing in the editor. Here is a shot of a bit of the scratch C code I used to make the font,



Here is a shot in QEMU of the loader displaying the font,



And another shot from QEMU showing the palette,



What the Current Annotated Image Looks Like
Below is a shot captured from the terminal window output of the C tool. I'm using 3 cache lines for the loader code.



Grey lines separate the 512-byte sectors. Memory address on the left in grey. Each pair of lines shows half an x86 cacheline. The blue to white shows the 5 character/word annotation strings (now using the extra 2 bits of the label for color). The red hex shows the image data. Not using {GET,ABS,REL} tagged words in this part, so everything in the bootloader is just hand assembled 16-bit machine code, and this is not representative of what the rest of the system will look like. The rest of the system will have {GET opcode} followed by {HEX} or {ABS} for opcode immediates (easy to write). The 16-bit code is {HEX} mixed opcode and immediates, which is quite a bit different (hard to write).

Some hints on the annotations,

Everything is in base 16. AX is TOP so I don't bother with "A=9000" (which wouldn't fit anyway), instead I just write "9000" (the A= is implied). The "!" means store so "SSSP!" is storing TOP (or AX) into both SS and SP. The "B=200" means BX=200h. In this 16-bit x86 case I use 3E to pad out opcodes to 32-bit. The "X" = SI, "Y" = DI, "F" = BP.

Next Step
Ground work is done, next step is to bring up the opcode dictionary for {GET} words, then write a little IDE driver to get access to load the rest of the image, and to be able to save in the editor. After that, write the drawing code for the editor, then a mini PS/2 driver for the input, then write editor input handling. Then I have a full OS ready to start on a real machine.



Gamasutra Feature Articles

You can get U.S. government grant money for your educational game

April 24, 2015 09:50 PM

"Eight years ago, our program's portfolio didn't include a single educational-themed game or project. Today, about half of those projects are educational games." ...

Video: VR game design for indies

April 24, 2015 09:24 PM

As part of the GDC 2015 Independent Games Summit, a panel of indie developers with VR development experience try to shed some light on the idiosyncrasies of making VR games. ...

In defense of Valve's new Steam Workshop storefronts

April 24, 2015 08:07 PM

"I'm currently working on a mod right now. I plan to release my mod for free once it's done. But as a creator, it's also nice to know that there may be an option to make some income off of future projects." ...

Obituary: Artist Francis Tsai

April 24, 2015 07:06 PM

Artist who worked on games such as Myst III and the Tomb Raider franchise succumbs to ALS after a five-year struggle. ...

Real-Time Rendering

Why not?

by Eric at April 24, 2015 07:04 PM

I like to ask researchers whether they think the release of code should be encouraged, if not required, for technical papers. My argument (stolen from somewhere) is, “would you allow someone to publish an analysis of Hamlet but not allow anyone to see Hamlet itself?” The main argument for publishing the code (beyond helping the world as a whole) is that people can check your work, which I hear is a part of this science stuff in “computer science.”
       
Often they’re against it. The two reasons I hear are “my code sucks” and “we’ve patented the technique.” I can also imagine, “I don’t want those commercial fatcats stealing my code,” to which I say, “put some ridiculous license on it, then.” If the reason is, “I want to publish to enhance my resume and reputation, but I also want to keep it all secret because I’m going to make money off it,” then choose A or B, you can’t have both (or shouldn’t, in my Utopian fantasy world).

Don’t worry about code quality. I love “there are codebases that suck, and there are codebases that aren’t used”. This quote came from a lead programmer on Unity3D, one of the best-selling videogame development platforms; he got it from someone else. Show us the code, we won’t laugh (much). It doesn’t have to be easy to build. For example, MeshLab, for me at least, is about impossible to build, and has (or had – they’ve improved considerably over the years) some horrific bugs, but I still appreciate that the code is available to look at. I also use the program a lot; I just reached my hundredth use of it this week.
       
It takes a few minutes to slap your source files onto Github and costs nothing. If you’re worried about code quality, don’t – you’re in good company, about 90% of all code on Github is crap (Sturgeon’s Law), including my own (the executable of which gets like 15,000 downloads a month). Notch’s $2.5 billion code for Minecraft sucks. Let it go.
      
Patents: I admit to not liking most software patents, perhaps all. But that’s irrelevant, or should be. If you’re embarrassed to admit you have a patent on some algorithm, that shouldn’t stand in the way of others understanding your research – deal with your shame. The point of a patent is that you are revealing the process. In return your idea is protected for a number of years. This is as opposed to a trade secret, where the process is kept quiet. A patent stops others from using your idea without paying you a licensing fee. However, your part of the bargain is to explain the idea. A trade secret risks someone reverse engineering your clever idea, for which you have little protection. Obvious, but people seem to forget that.
      
I expect these arguments are entirely convincing and code publication still won’t happen, due to pride and lawyers. No one likes to show off their dirty laundry. And lawyers will see no benefit to revealing code: “What’s this ‘research’ stuff you’re talking about? We’re making I.P. here, not research. Releasing code will increase the risk of undetected infringement by others of our I.P., or, worse yet, we might be found to be infringing on someone else’s algorithm patent.”
      
Ah well, I tried. Now get off my lawn.

Gamasutra Feature Articles

Tears in rain: Remembering the Blade Runner game

April 24, 2015 05:11 PM

"There wasn't anything quite like Blade Runner back when it launched in 1997. Worse yet, there still isn't. The game deserves to be influential but remains virtually unknown." ...



Real-Time Rendering

New CRC Books

by Eric at April 24, 2015 04:05 PM

Well, newish books, from the past year. By the way, I’ve also updated our books list with all relevant new graphics books I could find. Let me know if I missed any.

This post reviews four books from CRC Press in the past year. Why CRC Press? Because they offered to send me books for review and I asked for these. I’ve listed the four books reviewed in my own order of preference, best first. Writing a book is a ton of work; I admire anyone who takes it on. I honestly dread writing a few of these reviews. Still, at the risk of being disliked, I feel obligated to give my impressions, since I was sent copies specifically for review, and I should not break that trust. These are my opinions, not my cat’s, and they could well differ from yours. Our own book would get four out of five stars by my reckoning, and lower as it ages. I’m a tough critic.

I’m also an unpaid one: I spent a few hours with each book, but certainly did not read each cover to cover (though I hope to find the time to do so with Game Engine Architecture for topics I know nothing about). So, beyond a general skim, I decided to choose a few graphics-related operations in advance and see how well each book covered them. The topics:

  • Antialiasing, since it’s important to modern applications
  • Phong shading vs. lighting, since they’re different
  • Clip coordinates, which is what vertex shaders produce

Game Engine Architecture, Second Edition, by Jason Gregory, August 2014 (book’s extensive website, Google Preview, and Table of Contents)

Overall this is a pretty great book. It’s not meant as a graphics programming guide; rather, it’s more a course about all the aspects of actually programming a videogame. I’m impressed with its quick summaries of hundreds of different algorithms, techniques, and tools and what each is used for. It performs a valuable service, alerting the reader that a topic even exists and giving some sense of what it’s about, all in plain English. The main problem with writing about current practices is that the book is about two years old, so of course some newer techniques and tools are not covered. However, it gets you about 90% up to speed. The book is not full color, rather has color plates, and that’s just as well. Full color throughout would have been expensive and made the book quite heavy (possibly unpublishable) without adding a lot of value.

Antialiasing: generally good coverage, though it assumes the reader already knows what jaggies actually are. Discusses MSAA and FXAA, and notes the idea of MLAA. MSAA is described correctly and clearly. FSAA is covered briefly and (properly) dismissed. CSAA is covered, since at the time it was a thing. SMAA is not discussed, since it hadn’t really been picked up by games yet at the time of writing. There’s a minor typo on page 506, “4 X MLAA” when MSAA is meant.

Phong: the term Phong doesn’t appear in the index. Perhaps this is fair enough for Phong shading, which is often replaced with the more descriptive term “per pixel shading”. I blame my age and schooling for considering these to be important terms. This book has a bit of confusion on the subject, however, mixing per pixel evaluation with the implication that texture mapping fixes Gouraud shading artifacts (p. 462). This is too bad – I want to like everything about this book, since it gets so much correct. Phong illumination is not in the index, nor is Blinn-Phong. I did finally find Blinn-Phong and Phong under “lighting” in the index. In general the index is somewhat weak, as it has less cross-referencing than I would like. Presentation of Blinn-Phong is short and succinct, which is appropriate for the survey nature of this book. A set of thumbnails showing the effect of changing the exponent would have been useful. A long Wikipedia URL is given for more information; better would have been to say “Search on ‘Blinn-Phong’ on Wikipedia”, since no one will type in the URL.

Clip coordinates: Clip coordinates for a perspective view usually have a W value that’s not 1, and clipping is done on points that have X, Y, or Z values that are outside the range [-W,W] (when W is positive). Clip coordinates are what the vertex shader produces, so are important to understand properly. Unfortunately, this book gets this topic a bit wrong, but so do most texts. This text mixes clip space with Normalized Device Coordinates (NDC). This is a common “shorthand” used to explain clipping, but something of a false savings. We as humans tend to think about clipping against the NDC coordinates, but clip space is where clipping actually happens, before dividing by W. The book does point out something that is (surprisingly) rarely mentioned in other books, that along the z-axis NDC goes from 0 to 1 for DirectX.
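For the record, here is that rule as a minimal C sketch (my own illustration of the convention described above, not code from the book):

typedef struct { float x, y, z, w; } Clip;  /* a vertex shader output */

/* OpenGL convention: inside when -w <= x,y,z <= w (for w > 0). */
int inside_gl(Clip p) {
  return -p.w <= p.x && p.x <= p.w &&
         -p.w <= p.y && p.y <= p.w &&
         -p.w <= p.z && p.z <= p.w;
}

/* DirectX convention: same for x and y, but z runs from 0 to w,
   matching the 0-to-1 NDC z range noted above. */
int inside_d3d(Clip p) {
  return -p.w <= p.x && p.x <= p.w &&
         -p.w <= p.y && p.y <= p.w &&
          0.0f <= p.z && p.z <= p.w;
}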

Summary: despite my criticisms, four out of five stars, maybe higher. It covers a huge number of subjects, has much practical advice (e.g., performance and debugging tool recommendations), and is written in a clear and intelligent style. The author clearly cares about his subject and does his best to help the reader learn all about it. As important, he cuts the fluff – I didn’t see any signs of pet topics he cares deeply about that mostly don’t matter to the field. Finally, at $62.96 for a thousand-plus page book, a great price per page.

Introduction to Computer Graphics: A Practical Learning Approach, by Fabio Ganovelli, Massimiliano Corsini, Sumanta Pattanaik, and Marco Di Benedetto, October 2014 (Google Preview and Table of Contents, authors’ website, publisher’s page)

This book is about computer graphics in general. It has a focus on interactive techniques and uses WebGL for exercises (good plan!), but also tries to give a wider view of the field. Theory is favored over practice. One factor in favor of this book is that I haven’t (yet) found any serious errors. I would expect no glaring errors from these authors, researchers all. However, there are omissions or short explanations where a bit more ink would have been useful, along with a number of typos for important terms. At 375 pages (not including table of contents), this book overall feels condensed, given its scope. I sometimes found it terse and quick to jump to equations without enough background. To its credit, there are many helpful figures. The book suffers from not being fully in color, rather including just some color plates. The small book page size makes the text feel a bit crowded.

Antialiasing: somewhat abstract coverage, first talking about line rasterization in HSV space. Mentions full-screen antialiasing, mislabelling it FSSA, but fails to note that this is rarely done in practice. Important antialiasing techniques for interactive rendering, such as MSAA and FXAA/SMAA/MLAA, are not mentioned.

Phong: properly indexed and fully covered, and a warning given to the reader to not confuse shading with illumination. The difference between Phong and Blinn-Phong is covered, though it does not discuss that the exponent in each has a considerably different effect (Game Engine Architecture notes the exponent is “slightly different”, when in fact it’s about a factor of 4 different – see “R.E versus N.H Specular Highlights,” in Graphics Gems IV). Oddly, fragment and vertex shaders are not listed in the index, though fragment shaders are presented in the text for the exercises. Typo, repeated in the index: “Match banding” instead of “Mach banding”.
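(For the record, the factor of 4 comes from the half vector sitting at roughly half the angle of the mirror reflection, so cos^4n(θ/2) ≈ cos^n(θ) for small angles; e.g. cos^10 of 20° ≈ 0.537, while cos^40 of 10° ≈ 0.540. My arithmetic, not from either book.)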

Clip coordinates: not incorrect, just omitted. Clip space is briefly mentioned on page 117 and the text properly notes that it is not the same as NDC. Much else along the pipeline is dealt with in some depth, but clipping in homogeneous space is given a sentence. There is an interesting pipeline figure on page 121, but clipping is left out. DirectX’s range of [0,1] for the Z axis of its space is not mentioned. Classical clipping algorithms such as Sutherland-Hodgman are covered, but without mention of clip space vs. NDC space. Proper clipping for perspective views is feeling like a lost art to me. It’s an easy topic to skip – the GPU does all the clipping nowadays – but some brief coverage can help save students from screwing up the w-coordinate when writing vertex shaders. The best (and brief) online explanations I’ve seen are here and here, and by “best” I mean “basically correct”. More on this topic later.

Summary: an average of three and a half stars out of five, though it depends. This book contains solid information, could be used as a textbook for teaching graphics, or possibly as a fairly reliable (though terse) reference. It looks tough to plow through if you’re on your own, and it tends to be more theoretical than practical. In the long term, this theoretical bent is a good thing for someone learning this area – a proper foundation will serve anyone well for life, vs. memorizing ever-evolving APIs – but the book does not feel strongly connected to present-day practice. For example, it barely discusses the various types of shaders – vertex, fragment, geometry, etc. The fragment shader gets a paragraph, and no entry in the index. GLSL is mentioned but also does not have an index entry. The geometry shader is never discussed. In fairness, vertex and fragment shaders are indeed used in the WebGL exercises, there’s just not much explanation. Again, it feels like an abridged textbook, where the instructors in class would spend time on how to actually program shaders. I look forward to a second edition that is more fleshed out.

GPGPU Programming for Games and Science, by David H. Eberly, August 2014 (book’s code website, Google Preview and Table of Contents, publisher’s page)

This book is tangentially related to computer graphics, but I mention it here anyway. Unlike most books about GPGPU programming, this one does not use CUDA, but rather uses DirectX’s DirectCompute. I can’t fairly assess this book, as I still haven’t taken on GPGPU.

While the book is ostensibly about GPU programming, computer graphics sneaks in here and there, and that I can comment on. Chapter 4, called “GPU Computing”, is the heart of the book. However, it spends the first part talking about vertex, pixel, and geometry shaders, rasterization, perspective projection, etc. Presenting this architecture is meant as an example of how parallelism is used within the GPU. However, this intent seems to get a bit sidetracked, with the transformation matrix stack taking up the first 8 pages. While important, this set of transforms is not all that related to parallelism beyond “SIMD can be used to evaluate dot products”. For most general GPGPU problems you won’t need to know about rendering matrices. Eight pages is not enough to teach the subject, and in an intermediate text this area could have been left out as a given.

Chapter 6, “Linear and Affine Algebra”, is an 84 page standalone chapter on this topic. It starts out talking about template classes for this area, then plows through the theory in this field. While an important area for some applications, this chapter sticks out as fairly unrelated to the rest of the chapters. The author clearly loves the topic, but this much coverage (a fifth of the book) does not serve the reader well for the topic at hand. I was strongly reminded of the quote, “In writing, you must kill all your darlings”. You have to be willing to edit out irrelevant pieces, no matter how sound and how much you love them. The author notes in the introduction, “I doubt I could write a book without mathematics, so I included chapter 6 about vector and matrix algebra.” The nature of the physical book market is “make it thick” so that it looks definitive. Putting tangential content into a book does the customer who is paying and spending time to learn about GPGPU programming a disservice. I don’t blame the author in particular, nor even the publisher. Most technical books have no real editors assigned to them, “real” in the sense of someone asking hard questions such as, “can this section of the book be trimmed back?” We have to self-edit, and we all have our blind spots.

Overall I’m a bit apprehensive about truly reading this book to learn about GPGPU programming. I had hoped that it would be a solid guide, but its organization concerns me. It seems to go a few different directions, not having a clear “here’s what I’m going to cover and here’s what you’re going to learn” feel to it. A lot of time is spent with groundwork such as floating point rounding rules, basic SIMD, etc. – it’s not until 123 pages in that the GPU is mentioned. The book feels more like a collection of articles about various elements having to do with performing computations efficiently on various forms of hardware. That said, Chapter 7, “Sample Applications”, does offer a fairly wide range of computational tasks mapped to the GPU. It’s a chapter I’ll probably come back to if I need to implement these algorithms. The author is a well-respected veteran and I trust his code to be correct. He’s done wonderful work over the years in growing his Geometric Tools site – it’s a fantastic free resource (at one point I even tried to find external grants to support his work on the site - no luck there. A MacArthur Fellowship sent his way would be great). What might have made more sense is a focused, stripped down book, half of chapter 4 and all of chapter 7, offered for $10 as an ebook treatise.

Computer Graphics Through OpenGL: From Theory to Experiments, Second Edition, by Sumanta Guha, August 2014 (book’s website, Google Preview and Table of Contents, publisher’s page)

This book is, unfortunately, currently broken, because of a faulty index. The index page numbers are off by quite a bit. For example, Sutherland-Hodgeman (which should be spelled Hodgman – Angel & Shreiner’s Interactive Computer Graphics, a book I generally like, also makes that goof; no biggie) is listed in the index as page 589, but actually appears on page 556 – a 33 page error. The problem appears to be one of scale. Entries early in the book are correct, e.g. clipping is listed as page 33 and indeed appears there. Selection is listed on page 184 and appears on page 174, a 10 page error. Near the end, homogeneous coordinates are listed as 879 but actually appear on 826. By curve fitting using Excel, the equation is:

 actual page number = 0.9412 * index page number + 1.4594
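(As a sanity check on the fit, my arithmetic: 0.9412 × 589 + 1.4594 ≈ 555.8, which rounds to the observed page 556 for Sutherland-Hodgman.)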

Let’s get past the index and mention it no more. A workaround is to use Google Books to search for the correct page number instead.

Of the four books reviewed, this one has the nicest layout and presentation. Full color, wide format, with helpful figures in the margins. The author attempts a chummy style, with frequent exclamation points. Expect passages such as, “By the way, do you even know what a floppy disc is, young reader?! If not, look it up on Wikipedia.” The author has a typographic conceit, heading various sections with arbitrary camelcase, e.g., ExpeRimenT, ExercisE, ExAmPlE. I can’t fully replicate the feel here, because the capitalized letters are actually lowercase but of varying font size. This might be a cute little flourish if the book were excellent. It’s not cute.

The book is in its second edition. Though the cover says “Comprehensive coverage of OpenGL 4.3”, what this means is that two extra chapters were added to the end of the book. Even then, these chapters are as much an introduction to OpenGL 2.0 as 4.3; for example, they are the first places GLSL gets discussed. I had a theory that the first edition of this book came out before 2004, which would explain the dependence on pre-shader OpenGL for the vast majority of the book. I was incorrect; the first edition came out in 2010. My impression overall is that the author misses the days of the fixed-function pipeline. This is understandable, and I had the same dilemma designing an introductory course: when do you hit the students with shader programming? It’s possible early on, though mysterious. You need a fair bit of understanding of the transformations used, as well as what a shading model is, to really get traction. Old OpenGL, with its built-in shading model and simple, clear, and now-vastly-inefficient way of specifying triangles, makes for an appealing teaching environment.

So, I understand the desire to not throw the students into the deep end on day one. However, given 919 pages to work with, GLSL should be mentioned much earlier than page 745, along with vertex and fragment shaders and all the rest. The book actually ends 75 pages later after introducing shaders, with the rest being appendices. So, it has 75 pages to cover everything that has happened to OpenGL since 2004. This is insufficient.

The bulk of the book includes tangential topics, such as scan-based polygon rasterization. Rasterization of polygons with concavities is not used by GPUs, so is mostly irrelevant, though possibly useful for teaching about parity. However, the algorithm is then presented incorrectly, worrying about singularities with ray/edge testing instead of using the proper rounding rules (in contrast, Eberly presents rasterization correctly, on page 133 of his GPGPU book). As I say, I skimmed this book, but noticed one strange grouping along the way: the perspective matrix and rational Bézier surfaces are covered in the same chapter. This feels like a Jeopardy! clue for Letters of the Alphabet, “Perspective and Bézier surfaces have this in common.” “What is w, Alex?” I shouldn’t joke, but I then uncovered such a deep flaw in the book that I, well, read on.

Antialiasing: the basic idea of pixel coverage is discussed as the solution, so that’s fine. Multisampling is skimmed over, being described as if it was supersampling. There is also a bit of filler on page 797 about how multisampling in OpenGL 4.3 is done exactly as described on page 527. There’s no reason to say this if there’s no change from “pre-shader OpenGL”. A few pages past this topic I noticed the accumulation buffer is covered. This functionality is rarely used nowadays and doesn’t appear in OpenGL ES, but again it can be useful for teaching about motion blur, antialiasing, etc. The book describes the accumulation buffer, but doesn’t explain what it is for – a missed opportunity.

Phong: the index does note Phong lighting vs. shading. The description of Phong shading is correct and concise, and its relationship to Phong lighting described properly. However, both Gouraud and Phong shading are not illustrated in any form (and this is a full-color book), e.g., showing specular highlighting and how it improves with per pixel evaluation. Phong lighting itself is explained, though the author does not note that what he’s covering is actually Blinn-Phong. Again, there is no simple image showing how varying the specular exponent changes the highlight. There’s an odd notation on Figure 11.14, “(not exact plots)” for the various cosine to a power curves formed by varying the exponent. Why not exact?

Clip coordinates: the coverage here is deeply incorrect, not just a typo or oversight. On page 703 the pipeline is given as perspective division followed by clipping; the correct way is clipping followed by perspective division. There is also an odd step 5, “Projection to the back of the canonical box”, but that’s a minor detail. The author does understand the incredible difficulties involved if you attempt to clip after performing perspective division (for starters, you have to deal with division by zero). He spends the next few pages creating some method to deal with “semi-infinite segments”, which he also discusses elsewhere when talking about clipping. I admit to not carefully wading through his presentation, as the standard way to clip works fine. Eleven pages later he resolves his difficulties by presenting the rendering pipeline again, with a revised step “Perspective division with mechanism to handle zero w-values” (his emphasis), still performed before clipping. He clearly loves projective spaces, having a 46 page appendix on the topic. Unfortunately, he missed Appendix A in Sutherland and Hodgman’s original paper, or Blinn and Newell’s followup. This is extremely upsetting to see. The author seems like a nice person and clearly knows a fair bit, but there appears to be at least one small but serious hole in his education. We certainly made goofs in our book, and there are sections which I’d love to improve, but we did our best to read through existing literature before inventing our own solutions.
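To show why the standard order is no great hardship, here is a minimal C sketch of my own (not from any of these books) clipping a segment against the OpenGL near plane z = -w, entirely before the divide:

typedef struct { float x, y, z, w; } ClipPt;

static ClipPt lerp4(ClipPt a, ClipPt b, float t) {
  ClipPt r = { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y),
               a.z + t * (b.z - a.z), a.w + t * (b.w - a.w) };
  return r;
}

/* d = z + w is a signed distance to the near plane; interpolating to
   d = 0 happens in homogeneous space, so points with w = 0 never
   reach the perspective divide and no "semi-infinite segments" arise. */
int clip_near(ClipPt a, ClipPt b, ClipPt *outA, ClipPt *outB) {
  float da = a.z + a.w, db = b.z + b.w;
  if (da < 0 && db < 0) return 0;                   /* fully clipped */
  if (da < 0)      a = lerp4(a, b, da / (da - db));
  else if (db < 0) b = lerp4(b, a, db / (db - da));
  *outA = a; *outB = b;
  return 1;                                         /* divide by w after */
}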

I don’t think I need to give a rating. It’s unfortunate, and I’m more than a bit embarrassed and hesitant to post this review, but honestly can’t recommend the book to anyone (even with the index fixed). There looks to be much valid information in the text, but as soon as trust is severely lost, the book is no good to me.

Gamasutra Feature Articles

Lasting connections: Enduring games, enduring relationships

April 24, 2015 03:49 PM

"The game wants you to connect with the players around the table. That's why we gather for board games, right? These connections don't define us, but they are the finite moments of life." ...

c0de517e Rendering et alter

OT: The design space of fountain pens

by DEADC0DE (noreply@blogger.com) at April 24, 2015 04:38 PM

I met Stephen Hill at GDC this year; he casually mentioned that I should write an article about pens. Well Stephen, maybe I WILL.

I try to live a reasonable life, but there are two things I possess in greater quantities than I should: writing and photographic equipment. I would say that I collect them, but I don't keep these as a collector would; I actually use them with little regard, so I'm more just a compulsive buyer, I guess.

But with much wasted money comes experience, or something.

- Why fountain pens

Calligraphy, duh! Line variation and reasons. Seriously though, they are different, and really it's a matter of taste... The feeling is different, they require less pressure, the ink is different... But nowadays rollerballs and gel pens have so many tips and technologies it's hard to compare.
Also, on a purely utilitarian scale, I believe nothing can beat a simple 0.5mm mechanical pencil...

Me writing this article.
Pen is a Namiki Vanishing Point ExtraFine
Notebook is a Midori Spiral Ring

So for the most part it is a personal choice, a matter of taste. I like them, they are elegant weapons for a more civilized age, and you might too. 
Now, without further ado, let's delve into this guide on how to start spending way too much money on pens.

- Nibs

First and foremost a fountain pen is about its nib. There are two main axes of nib selection: shape and material.

For shape, most pen brands will make three sizes of round tips: fine, medium and broad. Fancier brands might expand to extra fine, extra (or ultra or double) broad and maybe even ultra extra fine (sometimes also called needlepoint or accounting nib).

The catch here is that for the most part, these names carry little meaning. Especially on the finer end the differences can be huge; traditionally Japanese nibs are finer, but some Japanese brands don't follow the rule.

A needlepoint nib (disassembled for repair), hand ground (Franklin-Christoph)

Italic, stub, oblique and cursive nibs are all variations of non-round nibs; they produce a finer line in certain directions and a bolder one in others. Italic and stub nibs are cut straight, with the italic being sharper (more difference between writing directions), crisper and harder to use. The oblique nib is cut at an angle. All these come in different sizes, usually specified as millimeters of their wider line. Very wide stub nibs are also called "music" nibs and often have more than a single ink slit, to keep the ink flowing.

Selection of Lamy steel nibs

More exotic nibs can be trickier to use and usually require better pens to work well. Bolder nibs lay down more ink, and thus stress the pen's ability to keep a good, constant flow. Finer nibs are easier to break or misalign, and they are harder to make in a way that writes smoothly. Very sharp italic nibs somewhat inherit the worst traits of both.

Consider also that broader nibs will use more ink (deplete faster), the ink will require more time to dry and can bleed more, but many people do like them better for fine writing as the properties of the ink (shading variation, sheen, color) show more with a wetter and more varied line.

Ink shading from an Italic nib
Image source: https://wonderpens.wordpress.com/tag/rhodia/

In terms of materials, there are really only two options: steel and gold. Both can then be plated in different materials (rhodium, ruthenium, pink gold, two-tone and so on) but that is only an aesthetic matter.

The functional difference between steel and gold is that the latter is softer and more flexible, so it writes more smoothly and with more line variation. Steel is more durable and better for heavy-handed writers.
Somewhat confusingly, both materials can be used to make flex and semi-flex nibs, which are thinner and specifically made to give lots of line variation. They are quite hard to use and suited mostly for calligraphy.

A Pilot/Namiki Falcon flex pen
Image source: https://www.youtube.com/watch?v=XMolEvB5EqA

Most pens have interchangeable nibs, and buying nibs alone is usually much cheaper than buying a full pen.

- Pen body

A big part of the choice of a pen is its aesthetics, which is I guess entirely a matter of taste, so I won't discuss it.

There are though a few functional considerations to keep in mind. Ergonomics of course is a big one. Bigger pens tend to be more comfortable but, of course, less easy to carry around. Heavier pens might not be great for longer writing sessions, and balance can make a lot of difference.
For the most part, you'll have to try and see what fits you best. Remember to try any given pen with and without the cap posted; the balance will change significantly, with some pens designed to be posted while others don't post very well.

The Franklin-Christoph 40 Pocket needs to be used with its cap posted;
it's way too short otherwise. Screw-on cap, clipless, can be converted to an eyedropper

The filling mechanism and ink reservoir is also important. Most pens nowadays use plastic cartridges, most being "international standard". 
The second most widespread mechanism is the piston filler, which is quite convenient and usually has chambers that can carry more ink than a cartridge, but it won't allow you to carry spare ink as easily.

Now, you really will want to use bottled ink in your fountain pens, both because it's cheaper and it comes in a much wider selection, but having a cartridge pen won't stop you. Most of them can be fitted with "converters", special cartridges with a piston to suck ink from a bottle, and you can always refill a cartridge with a syringe (which I actually find less messy than dipping the nib in the bottle to refill).
Also, many (but not all) will work well as "eyedroppers", filling the cartridge chamber directly with ink (without a cartridge installed) and sealing it (with a bit of silicone grease on the screw).

There are other minor things to notice. As most pens are round, having a cap with a clip keeps them from rolling, which might be something to consider even if you don't need to clip your pen to a notebook.

Nakaya "dorsal fin" model, an asymmetric design made to not roll even w/o a clip
Image source: http://www.leighreyes.com/?p=4313

The cap design and closing mechanism also matter, actually more than it might seem. Not only do certain caps fit better posted than others, but certain designs are more prone to sucking some ink out every time you uncap. Screw-on caps are less prone to this, but certain screw threads can be annoying to feel on the barrel of the pen, depending on how you hold it.

- Ink

A big reason to use fountain pens is that they allow you to play with different inks. It might actually be a much more reasonable idea to collect and play with inks than with different fountain pens.

Inks have lots of different attributes. Even colors are not so simple: many inks can "shade", showing variation (even drastic) as the pen lays down more or less ink on the page (according to pressure and speed); they can have sheen, and even pigment or other particles embedded (though these are often more dangerous to use and can clog a pen if not properly handled).

Inks can be even more interesting than pens!
The Pen Addict is a good review site

They can be more or less lubricated: certain inks can flow well even in lesser pens, while certain others tend to be more dry. If your pen is already on the dry side, you don't want to couple it with a dry ink, and vice-versa.

Different inks also have different drying times, and different tendencies to feather or bleed through paper. Good paper will absorb less, which helps, but that also means it can increase dry times.

It's not in general "safe" to mix different inks, although most of the time it won't cause havoc, and you can easily clean your pen by just running it under cold water until it flushes clean. There are certain brands who make mixable inks, but it's rare.

- Recommendations

I will make a sweeping statement and say that there is no better "starter" fountain pen than a Lamy Safari (or Vista, a so-called "demonstrator", i.e. a transparent version). Its aesthetic might not please everybody, but it's by far the best "writer" for the price, and it comes in a ridiculously wide selection of interchangeable nibs (they even make some optimized for left-handed writing).

A fairly recent contender to this throne is the TWSBI 580 (and Mini), really great pens made to be fully and easily disassembled. The Mini is probably the best compact pen you can buy today; it's a piston filler, so it still holds quite a lot of ink too!

If you're looking for a great extra-fine nib, I haven't so far found anything that beats the Pilot/Namiki Vanishing Point 18k gold nib (a.k.a. Capless Decimo). Right now it's my favorite pen; it's not very cheap, and that's the only reason I didn't recommend it as a starter. It's also pretty and unique. Some don't love its clip; with some effort it could be removed.

A&G Spalding and Bros make surprisingly good, cheap pens (considering the brand doesn't have a big history). Kaweco is a cheaper brand recently gaining traction, but so far I don't like their nibs' flow; especially on small pens you want -very- easy-writing nibs, as these are not the most comfortable pens to begin with, and applying pressure on them is fatiguing.

On the more expensive side, I would say to stay away from Montblanc and the other luxury brands; they are good pens, but you pay more because they are fancy than because they are great. If you have lots of money or you want to make a really great gift, I'd personally go with a Nakaya, handmade and customized to your taste...

Medium-tier brands that I love, other than the already mentioned Namiki/Pilot (which also makes super expensive maki-e models, by the way), are Sailor and Platinum; both of them make great nibs (and true "Japanese" extra-fine ones) but somewhat more boring, conventional "cigar" shaped pens.
Franklin-Christoph is an American brand which makes really unique, hand-turned and not very expensive pens, worth a look.

There are of course many, many other great brands; certain fancy brands do make more "understated" models in their lines which might turn out to be great, and vintage, used pens are also incredibly interesting. But all of these, I'd say, are less easy to recommend as a "first" pen.

After you get a pen you'll need paper and ink. Rhodia makes some great, inexpensive paper, but there are really many great brands. Field Notes is really nice as well if you like small notebooks. Tomoe River paper is quite unique too, but it's more a "fine writing" paper, not for daily use (it will take time to dry, especially with broader nibs).

I personally prefer spiral-bound A5 notebooks because they are easier to use on the go: they open fully, are more rigid, and can be held one-handed.
And if, like me, you don't love ruled or gridded paper, Rhodia and many other brands make notebooks with plain sheets or with less conspicuous dots instead of lines.

Lastly, inks. For black I'll go with Aurora or the Platinum Carbon Black ink; both are very black with great flow. The latter is pigmented, which is very rare (another pigment ink is Sailor's Kiwa-Guro, which I haven't tried yet). It's nicer, but it can settle in your pen if not used often and should be cleaned out after depleting to avoid clogging, so better to use it in a cheaper pen whose nib you have no problem taking apart for cleaning (which is usually fairly easy...).

For colored ink it's much, much harder, as there are so many great options. I don't love plain blue ink and I usually go with either darker or lighter shades; one of my current favorites is Private Reserve Naples Blue.

Sometimes I carry a red or a more colourful ink, often in a broader nib, for highlighting and so on. I find that orange/brown colors pair better with both black and blue than most reds do; Noodler's Apache Sunset is a good example.

Lastly, if you want something super fancy, nothing is fancier than Herbin's Stormy Grey and Rouge Hematite limited edition inks (if you can still find them).

Incidentally, J. Herbin, Private Reserve and Noodler's, together with Diamine, are also the brands that offer the widest variety of colored inks.

Amazing (but not the smoothest ink ever).
Image source: http://www.gourmetpens.com/2014/11/review-j-herbin-stormy-grey-ink.html#.VQTK7VPF-xN



Geeks3D Forums

Ubuntu 15.04 Released, First Version To Feature systemd

April 24, 2015 02:18 PM

The final release of Ubuntu 15.04 is now available. A modest set of improvements are rolling out with this spring's Ubuntu. While this means the OS can't rival the heavy changelogs of releases past, the adage "don't fix what isn't broken" is clearly on...

Gamasutra Feature Articles

No, MS-DOS games weren't widescreen: Tips on correcting aspect ratio

April 24, 2015 01:54 PM

"Let us all agree that next time we present a screenshot of game from the '80s and early '90s, we should at least keep 4:3 images as 4:3 images!" ...

Game From Scratch

Unreal Engine Tutorial Part Three: Sprites

by Mike@gamefromscratch.com at April 24, 2015 01:21 PM

 

As you may have guessed from the title, in today's tutorial we are going to look at working with Sprites using Unreal Engine. We already looked briefly at creating a sprite in the previous tutorial, but today we are going to get much more in-depth.

 

Before you can create a sprite, you need to have a texture to work with.  Unreal Engine supports textures in the following formats:

  • .bmp
  • .float
  • .pcx
  • .png
  • .psd
  • .tga
  • .jpg
  • .dds and .hdr ( cubemaps only, not applicable to 2D )

 

That said, not all textures are created equal. Some formats, such as bmp, jpg and pcx, do not support an alpha channel. This means if your texture requires any transparency at all, you cannot use these formats. Other formats, such as PSD (Photoshop's native format), are absolutely huge. Others, such as BMP, have very poor compression rates and should generally be avoided. At the end of the day, this generally means that your 2D textures should probably be in png or tga format. Unreal also wants your textures to be in Power of Two resolutions, meaning that width/height should be 2, 4, 8, 16, 32 … 512, 1024, 2048, etc. pixels in size. It will work with other-sized textures, but MIP maps will not be generated (not a big deal in 2D) and performance could suffer (a big deal). Keep in mind, your sprite doesn't need to use all of the texture, as you will see shortly. So it's better to have empty wasted space than a non-Power of Two size.
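(A trivial C sketch of the Power of Two check, my own illustration rather than anything from Unreal: a positive integer is a power of two exactly when a single bit is set.)

#include <stdbool.h>

/* n & (n - 1) clears the lowest set bit; powers of two have only
   that bit, so the result is zero. */
bool is_power_of_two(unsigned n) {
  return n != 0 && (n & (n - 1)) == 0;
}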

 

* Personally I’ve experienced all kinds of problems using PNG, such as distorted backgrounds, while TGA has always worked flawlessly. 

 

Adding a Texture to your game

 

Adding a Texture is as simple as selecting a destination folder on the left, then dragging and dropping the appropriate file type (from the list above) from Finder/Explorer to the Content Browser window, shown below:

image

 

Alternately, you can click New –> Import

image

 

Then navigate to the file you wish to use and select it. 

 

Your texture should now appear in the Content Browser.

 

Texture Editor

 

Now that you have a texture loaded, you can bring it up in the Texture Editor by either double clicking it or right clicking and selecting Edit. Here is the texture editor in action. It is a modeless window that can be left open independently of the primary Unreal Engine window.

 

image

 

The Texture Editor enables you to make changes to your image, such as altering its brightness, saturation, etc. You can also change compression amounts here. However, for our 2D game, we have one very critical task… turning off MIP maps.

What's a MIP Map?

History lesson time! MIP stands for multum in parvo, Latin for "much in little". Doesn't exactly answer the question, does it? OK, let's try again. Essentially a MIP map is an optimization trick. As things in the 3D scene get further and further from the camera, they need less and less detail. Right up close to an object, you may see enough detail to justify a 2048x2048 resolution texture; as the rendered object gets farther away in the scene, the texture resolution doesn't need to be nearly as high. Therefore game engines often use MIP maps: multiple resolution versions of the same texture. As the required detail gets lower and lower, the engine can use a smaller texture and thus fewer resources.
You know when you are playing a game and, as you move rapidly, textures in the background often "pop" in or out? This is the mipmapping system screwing up! Instead of seamlessly transitioning between versions, you as the user are watching the transition occur.
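To make that concrete, here is a tiny C sketch (my illustration, not engine code) of the chain of versions a 2048x2048 texture implies, each level half the size of the previous:

#include <stdio.h>

int main(void) {
  /* Each MIP level halves the resolution, down to 1x1. */
  for (int size = 2048, level = 0; size >= 1; size /= 2, level++)
    printf("mip %2d: %dx%d\n", level, size, size);
  return 0;
}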


Support for MIP maps is pretty much automatic in Unreal Engine. However, in the case of a 2D game, you don't want mipmaps! The depth never changes, so there should never be different resolution versions of each texture. Therefore, we want to turn them off, and the Texture Editor is the place to do it. Simply click the Mip Gen Settings field and select NoMipmaps.

image

 

Before you close the Texture Editor, be sure to hit the Save button.

image

 

Creating A Sprite

 

Now that we have a Texture, we can create a sprite. This is important, as you can't otherwise position or display a Texture on its own. So, then, what is a Sprite? Well, the nutshell version is: it's a graphic that can be positioned and transformed. The name goes back to the olden days of computer hardware, when there was dedicated hardware for drawing images that could move. Think back to PacMan… Sprites would be things like PacMan himself and the Ghosts in the scene.

 

In practical Unreal Engine terms, a Sprite has a texture (or a portion of a texture, as we will see shortly) and positional information. You can have multiple sprites using the same texture, you can have multiple sprites within a texture, and a sprite's source region within a texture can also change. Don't worry, this will make sense shortly. In the meantime, you can think of it this way… if you draw it in 2D in Unreal Engine, it's probably a Sprite!

 

Once you have a Texture in your project, you can easily create a sprite using the entire texture by right clicking the Texture and selecting Create Sprite, like so:

image

 

You can also create a new sprite using New->Miscellaneous->Sprite

image

 

This will then open up the Sprite Editor. If you created the Sprite using an existing texture, the texture will already be assigned. Otherwise you have to do it manually: simply click the Texture in the Content Browser, then click the arrow icon in the Details panel of the Sprite Editor on the field named Source Texture:

image

 

Your texture should now appear like so:

image

 

You can pan and zoom the texture in the view window using the right mouse button and the scroll wheel.

 

Now, remember earlier when I said "all or part of the texture"? Well, a Sprite can easily use a portion of a texture, and that's set using the Edit Source Region mode:

image

 

This changes the view panel so you can now select a sub-rectangle of the image to use as your sprite's source. For example, if you only wanted to use Megatron's head, you could change it like so:

image

 

Then when you flip back to View, your texture will be:

image

 

When dealing with sprite sheets, this becomes a great deal more useful, as you will see shortly. 

 

There are a couple other critical functions in the Sprite Editor that we will cover later.  Most importantly, you can define collision polygons and control the physics type used.  We will look at these functions later on when we discuss physics. 

 

Two very important settings available here are:

image

 

Pixels Per Unit and Pivot Mode.

 

Pixels per unit is exactly what it says… a mapping from pixels to Unreal units, which default to mm. So right now, each pixel is 2.56mm in size. Pivot Mode, on the other hand, determines the point a sprite is transformed relative to. So when you say rotate 90 degrees, you are rotating 90 degrees around the sprite's center by default. Sometimes the top left or bottom left can be easier to work with; this is where you would change it.
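(A quick worked example using those defaults, my arithmetic: a 100-pixel-wide sprite spans 100 × 2.56mm = 256mm in world units.)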

 

The final important point here is the Default Material, seen here:

image

 

This part is about to look a lot scarier than it is!  Just know up front, if you prefer, you can ignore this part of Unreal Engine completely!

 

Materials

 

Every mesh in Unreal Engine has a material attached, and when you peel back all of the layers, a Sprite is still ultimately a mesh… granted, a very simple one. There are two default options included in the engine, although depending on how you created your project, you may have to change your view settings to access them:

image

 

Then you will find the two provided materials for sprites:

image

 

The name kind of gives away the difference… DefaultLitSpriteMaterial takes into account lighting used in the scene, while DefaultSpriteMaterial ignores lighting completely. Unless you are using dynamic lighting, you will most likely want the DefaultSpriteMaterial. You can edit a Material by double clicking it:

image

 

This is the Material Editor, and it is used to create node-based materials. Basically it's a visual shader programming language; behind the scenes it ultimately generates a GLSL or HLSL shader. Truth is, the process is way beyond the scope of what I can cover here, and in most cases you will be fine with the default shader. If you do want to get into advanced graphic effects, you will have to dive deeper into the Material Editor.

 

Instancing a Sprite

 

Now that we have our texture and have made a Sprite from it, it's time to instance a Sprite. That is, add one to our scene. This is about as simple as it gets: simply drag a Sprite from the Content Browser to the Scene, like so:

 

g1

 

Now that you’ve created a Sprite, you will notice that there area  number of details you can set in the Details panel:

image

 

All sprites by default share the same source sprite and material, but you can override these on an instance-by-instance basis. For example, if you wanted a single sprite to be lit and all the others to be unlit, you could change the Material Override on that single sprite. Obviously, using Details you can also set the sprite's positioning information and some other settings we probably won't need for now.

 

 

Next up, we will look at sprite animation using a flipbook.



Gamasutra Feature Articles

We're not even sure how we got through Steam Greenlight

April 24, 2015 11:09 AM

"So: we were most likely not anywhere close to the top 100 during a periodic batch. How, then, did we get Greenlit? We have a few possible theories." ...

Peter Molyneux: Talking to the press too early can be your undoing

April 24, 2015 10:58 AM

Peter Molyneux, discussing his techniques for iterative development at Reboot Develop, says that talking to the press about your current idea of what your game will be can be a mistake. ...

Geeks3D Forums

NVIDIA Quadro M6000 12GB Maxwell Workstation Graphics Tested Showing Solid Gains

April 24, 2015 09:29 AM

NVIDIA's Maxwell GPU architecture has been well-received in the gaming world, thanks to cards like the GeForce GTX Titan X and the GeForce GTX 980. NVIDIA recently took time to bring that same Maxwell goodness over to the workstation market as well an...



Gamasutra Feature Articles

Beyond the Pentakill: The future of competitive game design

April 24, 2015 08:08 AM

"I argue that people being jerks to each other is not a property intrinsic to competition, but rather due to the way we are framing the competition." ...

Gamasutra Feature Articles

Marvel vs. Telltale: Adventure studio to make games based on Marvel IP

April 23, 2015 11:49 PM

The prolific and license-happy developer of The Walking Dead has signed a deal with the ascendant comic book and Hollywood IP factory. ...

Is Nintendo courting an eSports audience with Splatoon?

April 23, 2015 10:03 PM

"Splatoon allows for adaptive playstyles, the game has elements of a sport, and with all the thought we've put into the things we've mentioned so far, I think it will appeal to eSports players. ...

Xbox business shows decline in Microsoft's latest results

April 23, 2015 09:31 PM

Lower prices for the Xbox One and lower sales overall for its consoles means that Xbox revenues are down 24 percent, and hardware sales are down 20 percent. ...

You started playing a story-based video game, and then this happened...

April 23, 2015 09:07 PM

"Once you start letting some random, sentient entity (e.g. a human) poke around within the confines of the narrative framework of your video game, things often get broken." ...

Smash Bros.' Sakurai speaks out against the 'DLC scam'

April 23, 2015 08:32 PM

"These days, the 'DLC scam' has become quite the epidemic, charging customers extra money to complete what was essentially an unfinished product." ...

Timothy Lottes

Source-Less Programming : 4

by Timothy Lottes (noreply@blogger.com) at April 23, 2015 09:01 PM

Still attempting to fully vet the design before the bootstrap reboot...

DAT words in the edit image need to maintain their source address in the live image. This way, on reload, the live data can be copied over, and persistent data gets saved to disk. DAT annotations no longer have 30 bits of free space; instead they have a live address. When the live address is zero, DAT words won't maintain live data. This way read-only data can be self-repairing (as long as the annotations don't get modified). Going to use a different color for read-only DAT words. New persistent-data DAT words will reference their edit-image hex value before reload (then get updated to the live address).

REL words always get changed on reload (self repairing). No need to keep the live address. REL is only used for relative-branching x86 opcodes. Don't expect to have any run-time (non-edit-time) self-modifying of relative branch addresses. Given that branching to a relative branch opcode immediate is not useful, the LABEL of a REL word is only useful as a comment.

GET words also get changed on reload (self repairing). GET is only designed for opcodes and labeled constants. GET words will often be LABELed as a named branch/call target. Been thinking about removing GET, and instead making a new self-annotating word (display searches for a LABELed DAT word with the same image value, then displays the LABEL instead of HEX). This opens up the implicit possibility of mis-annotations. That would be rare for opcodes, given they are large 32-bit values, but for annotating things like data structure immediate offsets it will be a problem (4 is the second word offset in any structure).

ABS words always get changed on reload (self repairing). ABS words are targets for self-modifying code/data, so they also need LABELs. Reset on reload presents a problem in that ABS cannot be used to set up persistent data unless that persistent data is constant or only built/changed in the editor. But this limitation makes sense in the context that ABS addresses in live data structures can get invalidated by moving stuff around in memory. The purpose of ABS is edit-time relinking.
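Putting the four cases together, one plausible reading of the reload pass as a C sketch (my own reconstruction from the descriptions above and the encoding table in Source-Less Programming : 3 below; not the actual system):

#include <stdint.h>

enum { DAT = 0, GET = 1, ABS = 2, REL = 3 };   /* low 2 bits of annotation */

/* image: edit-image words; annot: second annotation word per image word;
   live: live-image words. The A field is a word index (byte address A*4). */
void reload(uint32_t *image, const uint32_t *annot,
            const uint32_t *live, uint32_t n) {
  for (uint32_t i = 0; i < n; i++) {
    uint32_t addr = annot[i] >> 2;             /* A field */
    switch (annot[i] & 3) {
      case DAT:                                /* persistent: copy live data; */
        if (addr) image[i] = live[addr];       /* addr 0 means read-only      */
        break;
      case GET:                                /* refetch word at address A*4 */
        image[i] = image[addr];
        break;
      case ABS:                                /* absolute address immediate */
        image[i] = addr * 4;
        break;
      case REL:                                /* relative branch immediate,  */
        image[i] = addr * 4 - (i * 4 + 4);     /* assuming x86 next-address-  */
        break;                                 /* relative semantics          */
    }
  }
}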

Geeks3D Forums

Qt Creator 3.4.0 Released

April 23, 2015 06:47 PM

Qt Creator 3.4.0 has been released with many new features. Qt Creator is a C/C++ IDE with specialized tools for developing Qt applications, and it works great for general-purpose projects as well. The new version comes with a C++ refactoring option to ...

Timothy Lottes

Source-Less Programming : 3

by Timothy Lottes (noreply@blogger.com) at April 23, 2015 06:30 AM

Annotation Encoding
Refined from last post, two 32-bit annotation words per binary image word,

FEDCBA9876543210FEDCBA9876543210
================================
00EEEEEEDDDDDDCCCCCCBBBBBBAAAAAA - LABEL : 5 6-bit chr string ABCDE


FEDCBA9876543210FEDCBA9876543210
================================
..............................00 - DAT : hex data
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA01 - GET : get word from address A*4
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA02 - ABS : absolute address to A*4
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA03 - REL : relative address to A*4


Going to switch to just 2 lines per word displayed in the editor. Only DAT annotations show the hex value; other types show the LABEL of the referenced address in place of the hex value, so there is no need for an extra note. In practice, will be using some amount of binary image memory to build up a dictionary of DAT words representing all the common, somewhat Forth-like opcodes, then GET words in the editor to build up source.
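As a sketch of what decoding one annotation pair involves, here is a minimal C illustration of my own (the post doesn't define the 6-bit character set, so the characters print as raw values):

#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint32_t label = 0, word2 = 2;               /* example annotation pair */
  /* First word: five 6-bit characters, A in the low bits. */
  for (int i = 0; i < 5; i++)
    printf("chr%d = %02X\n", i, (unsigned)((label >> (6 * i)) & 0x3F));
  /* Second word: 2-bit tag in the low bits, address field A above it. */
  static const char *tag[4] = { "DAT", "GET", "ABS", "REL" };
  printf("%s A*4 = %08X\n", tag[word2 & 3], (unsigned)((word2 >> 2) * 4));
  return 0;
}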

Need to redo the bootloader to go from floppy to hard-drive usage, and switch even the bootloader's 16-bit x86 code to 32-bit-aligned LABEL'ed words so the final editor can edit the bootloader. Previously I was avoiding manually assembling the 16-bit x86 code in the boot loader, but might as well ditch NASM and use something else to bootstrap everything.