Planet Gamedev

Game AI for Developers

BROADCAST: Crowd Animation Techniques and Tools from Visual Effects (April 30th)

by Alex J. Champandard at April 25, 2015 11:00 PM

This upcoming broadcast on Thursday, April 30th at 19:00 UTC will take place online within your browser using streaming audio/video:

“This broadcast with Michaël Rouillé, CTO at Golaem, will explain the crowd simulation techniques and tools required to animate convincing crowds in many films, TV series and adverts. He'll also discuss how these ideas and tools can be applied within real-time simulations and games.”

To subscribe for email reminders and check the exact time in your current timezone, visit this broadcast's page on AiGameDev.com.

Gamasutra Feature Articles

You can get U.S. government grant money for your educational game

April 24, 2015 09:50 PM

"Eight years ago, our program's portfolio didn't include a single educational-themed game or project. Today, about half of those projects are educational games." ...

Video: VR game design for indies

April 24, 2015 09:24 PM

As part of the GDC 2015 Independent Games Summit, a panel of indie developers with VR development experience try to shed some light on the idiosyncrasies of making VR games. ...

In defense of Valve's new Steam Workshop storefronts

April 24, 2015 08:07 PM

"I'm currently working on a mod right now. I plan to release my mod for free once it's done. But as a creator, it's also nice to know that there may be an option to make some income off of future projects." ...

Obituary: Artist Francis Tsai

April 24, 2015 07:06 PM

Artist who worked on games such as Myst III and the Tomb Raider franchise succumbs to ALS after a five-year struggle. ...

Real-Time Rendering

Why not?

by Eric at April 24, 2015 07:04 PM

I like to ask researchers whether they think the release of code should be encouraged, if not required, for technical papers. My argument (stolen from somewhere) is, “would you allow someone to publish an analysis of Hamlet but not allow anyone to see Hamlet itself?” The main argument for publishing the code (beyond helping the world as a whole) is that people can check your work, which I hear is a part of this science stuff in “computer science.”
       
Often they’re against it. The two reasons I hear are “my code sucks” and “we’ve patented the technique.” I can also imagine, “I don’t want those commercial fatcats stealing my code,” to which I say, “put some ridiculous license on it, then.” If the reason is, “I want to publish to enhance my resume and reputation, but I also want to keep it all secret because I’m going to make money off it,” then choose A or B, you can’t have both (or shouldn’t, in my Utopian fantasy world).

Don’t worry about code quality. I love “there are codebases that suck, and there are codebases that aren’t used”. This quote was by a lead programmer on one of the best-selling videogame development platforms, Unity3D; he got it from someone else. Show us the code, we won’t laugh (much). It doesn’t have to be easy to build. For example, MeshLab, for me at least, is about impossible to build, and has (or had – they’ve improved considerably over the years) some horrific bugs, but I still appreciate that the code is available to look at. I also use the program a lot; I just reached my hundredth use of it this week.
       
It takes a few minutes to slap your source files onto GitHub, and it costs nothing. If you’re worried about code quality, don’t – you’re in good company; about 90% of all code on GitHub is crap (Sturgeon’s Law), including my own (the executable of which gets something like 15,000 downloads a month). Notch’s $2.5 billion code for Minecraft sucks. Let it go.
      
Patents: I admit to not liking most software patents, perhaps all. But that’s irrelevant, or should be. If you’re embarrassed to admit you have a patent on some algorithm, that shouldn’t stand in the way of others understanding your research – deal with your shame. The point of a patent is that you are revealing the process. In return your idea is protected for a number of years. This is as opposed to a trade secret, where the process is kept quiet. A patent stops others from using your idea without paying you a licensing fee. However, your part of the bargain is to explain the idea. A trade secret risks someone reverse engineering your clever idea, for which you have little protection. Obvious, but people seem to forget that.
      
I expect these arguments are entirely convincing and code publication still won’t happen, due to pride and lawyers. No one likes to show off their dirty laundry. And lawyers will see no benefit to revealing code: “What’s this ‘research’ stuff you’re talking about? We’re making I.P. here, not research. Releasing code will increase the risk of undetected infringement by others of our I.P., or, worse yet, we might be found to be infringing on someone else’s algorithm patent.”
      
Ah well, I tried. Now get off my lawn.



Gamasutra Feature Articles

Tears in rain: Remembering the Blade Runner game

April 24, 2015 05:11 PM

"There wasn't anything quite like Blade Runner back when it launched in 1997. Worse yet, there still isn't. The game deserves to be influential but remains virtually unknown." ...

Real-Time Rendering

New CRC Books

by Eric at April 24, 2015 04:05 PM

Well, newish books, from the past year. By the way, I’ve also updated our books list with all relevant new graphics books I could find. Let me know if I missed any.

This post reviews four books from CRC Press in the past year. Why CRC Press? Because they offered to send me books for review and I asked for these. I’ve listed the four books reviewed in my own order of preference, best first. Writing a book is a ton of work; I admire anyone who takes it on. I honestly dread writing a few of these reviews. Still, at the risk of being disliked, I feel obligated to give my impressions, since I was sent copies specifically for review, and I should not break that trust. These are my opinions, not my cat’s, and they could well differ from yours. Our own book would get four out of five stars by my reckoning, and lower as it ages. I’m a tough critic.

I’m also an unpaid one: I spent a few hours with each book, but certainly did not read each cover to cover (though I hope to find the time to do so with Game Engine Architecture for topics I know nothing about). So, beyond a general skim, I decided to choose a few graphics-related operations in advance and see how well each book covered them. The topics:

  • Antialiasing, since it’s important to modern applications
  • Phong shading vs. lighting, since they’re different
  • Clip coordinates, which is what vertex shaders produce

Game Engine Architecture, Second Edition, by Jason Gregory, August 2014 (book’s extensive website, Google Preview and Table of Contents)

Overall this is a pretty great book. It’s not meant as a graphics programming guide; rather, it’s more a course about all the aspects of actually programming a videogame. I’m impressed with its quick summaries of hundreds of different algorithms, techniques, and tools and what each is used for. It performs a valuable service, alerting the reader that a topic even exists and giving some sense of what it’s about, all in plain English. The main problem with writing about current practices is that the book is about two years old, so of course some newer techniques and tools are not covered. However, it gets you about 90% up to speed. The book is not in full color; rather, it has color plates, and that’s just as well. Full color throughout would have been expensive and would have made the book quite heavy (possibly unpublishable) without adding a lot of value.

Antialiasing: generally good coverage, though it assumes the reader already knows what jaggies actually are. Discusses MSAA and FXAA, and notes the idea of MLAA. MSAA is described correctly and clearly. FSAA is covered briefly and (properly) dismissed. CSAA is covered, since at the time it was a thing. SMAA is not discussed, since it hadn’t really been picked up by games yet at the time of writing. There’s a minor typo on page 506, “4 X MLAA” when MSAA is meant.

Phong: the term Phong doesn’t appear in the index. Perhaps this is fair enough for Phong shading, which is often replaced with the more descriptive term “per pixel shading”. I blame my age and schooling for considering these to be important terms. This book has a bit of confusion on the subject, however, mixing per pixel evaluation with the implication that texture mapping fixes Gouraud shading artifacts (p. 462). This is too bad – I want to like everything about this book, since it gets so much correct. Phong illumination is not in the index, nor is Blinn-Phong. I did finally find Blinn-Phong and Phong under “lighting” in the index. In general the index is somewhat weak, as it has less cross-referencing than I would like. Presentation of Blinn-Phong is short and succinct, which is appropriate for the survey nature of this book. A set of thumbnails showing the effect of changing the exponent would have been useful. A long Wikipedia URL is given for more information; better would have been to say “Search on ‘Blinn-Phong’ on Wikipedia”, since no one will type in the URL.

Clip coordinates: Clip coordinates for a perspective view usually have a W value that’s not 1, and clipping is done on points that have X, Y, or Z values that are outside the range [-W,W] (when W is positive). Clip coordinates are what the vertex shader produces, so are important to understand properly. Unfortunately, this book gets this topic a bit wrong, but so do most texts. This text mixes clip space with Normalized Device Coordinates (NDC). This is a common “shorthand” used to explain clipping, but something of a false savings. We as humans tend to think about clipping against the NDC coordinates, but clip space is where clipping actually happens, before dividing by W. The book does point out something that is (surprisingly) rarely mentioned in other books, that along the z-axis NDC goes from 0 to 1 for DirectX.
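To make the distinction concrete, here is a minimal sketch (in Python, with function and parameter names of my own invention) of the containment test as it happens in clip space, before the divide by W:

```python
def inside_clip_volume(x, y, z, w, d3d=False):
    """Frustum containment test in clip space, before dividing by w.

    OpenGL clips against -w <= x, y, z <= w (for positive w);
    Direct3D instead uses 0 <= z <= w along the depth axis, the
    difference noted at the end of this paragraph.
    """
    z_min = 0.0 if d3d else -w
    return -w <= x <= w and -w <= y <= w and z_min <= z <= w

# A perspective-projected vertex typically has w != 1:
print(inside_clip_volume(1.5, 0.0, 1.0, 2.0))             # True: |1.5| <= 2
print(inside_clip_volume(1.5, 0.0, -0.5, 2.0, d3d=True))  # False: z < 0
```

Dividing by w first would give the familiar NDC test against [-1,1], but the hardware clips before that division.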

Summary: despite my criticisms, four out of five stars, maybe higher. It covers a huge number of subjects, has much practical advice (e.g., performance and debugging tool recommendations), and is written in a clear and intelligent style. The author clearly cares about his subject and does his best to help the reader learn all about it. As important, he cuts the fluff – I didn’t see any signs of pet topics he cares deeply about that mostly don’t matter to the field. Finally, at $62.96 for a thousand-plus page book, a great price per page.

Introduction to Computer Graphics: A Practical Learning Approach, by Fabio Ganovelli, Massimiliano Corsini, Sumanta Pattanaik, and Marco Di Benedetto, October 2014 (Google Preview and Table of Contents, authors’ website, publisher’s page)

This book is about computer graphics in general. It has a focus on interactive techniques and uses WebGL for exercises (good plan!), but also tries to give a wider view of the field. Theory is favored over practice. One factor in favor of this book is that I haven’t (yet) found any serious errors. I would expect no glaring errors from these authors, researchers all. However, there are omissions or short explanations where a bit more ink would have been useful, along with a number of typos for important terms. At 375 pages (not including the table of contents), this book overall feels condensed, given its scope. I sometimes found it terse and quick to jump to equations without enough background. To its credit, there are many helpful figures. The book suffers from not being in full color, including just some color plates instead. The small page size makes the text feel a bit crowded.

Antialiasing: somewhat abstract coverage, first talking about line rasterization in HSV space. Mentions full-screen antialiasing, mislabelling it FSSA, but fails to note that this is rarely done in practice. Important antialiasing techniques for interactive rendering, such as MSAA and FXAA/SMAA/MLAA, are not mentioned.

Phong: properly indexed and fully covered, and a warning given to the reader to not confuse shading with illumination. The difference between Phong and Blinn-Phong is covered, though it does not discuss that the exponent in each has a considerably different effect (Game Engine Architecture notes the exponent is “slightly different”, when in fact it’s about a factor of 4 different – see “R.E versus N.H Specular Highlights,” in Graphics Gems IV). Oddly, fragment and vertex shaders are not listed in the index, though fragment shaders are presented in the text for the exercises. Typo, repeated in the index: “Match banding” instead of “Mach banding”.
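The factor-of-4 relationship between the exponents is easy to check numerically. A sketch (mine, not from either book): put the normal along +y with the light and view directions in the same plane; then the Phong term R·V and the Blinn-Phong term N·H reduce to the cosines of an angle and its half-angle, and matching highlight falloffs requires roughly quadrupling the Blinn-Phong exponent:

```python
import math

def specular_terms(alpha, beta):
    """Light at angle alpha from the normal, viewer at angle beta,
    all in one plane. Working the vectors out gives:
      Phong:       R.V = cos(alpha + beta)
      Blinn-Phong: N.H = cos((alpha + beta) / 2)
    """
    phi = alpha + beta
    return math.cos(phi), math.cos(phi / 2)

rv, nh = specular_terms(0.15, 0.05)
n = 30
phong = rv ** n          # Phong highlight with exponent n
blinn = nh ** (4 * n)    # Blinn-Phong needs ~4n for a similar falloff
print(phong, blinn)      # both come out near 0.55 here
```

The agreement is good near the highlight center and drifts at grazing angles, which is why the two models never match exactly.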

Clip coordinates: not incorrect, just omitted. Clip space is briefly mentioned on page 117 and the text properly notes that it is not the same as NDC. Much else along the pipeline is dealt with in some depth, but clipping in homogeneous space is given a sentence. There is an interesting pipeline figure on page 121, but clipping is left out. DirectX’s range of [0,1] for the Z axis of its space is not mentioned. Classical clipping algorithms such as Sutherland-Hodgman are covered, but without mention of clip space vs. NDC space. Proper clipping for perspective views is feeling like a lost art to me. It’s an easy topic to skip – the GPU does all the clipping nowadays – but some brief coverage can help save students from screwing up the w-coordinate when writing vertex shaders. The best (and brief) online explanations I’ve seen are here and here, and by “best” I mean “basically correct”. More on this topic later.

Summary: an average of three and a half stars out of five, though it depends. This book contains solid information, could be used as a textbook for teaching graphics, or possibly as a fairly reliable (though terse) reference. It looks tough to plow through if you’re on your own, and it tends to be more theoretical than practical. In the long term, this theoretical bent is a good thing for someone learning this area – a proper foundation will serve anyone well for life, vs. memorizing ever-evolving APIs – but the book does not feel strongly connected to present-day practice. For example, it barely discusses the various types of shaders – vertex, fragment, geometry, etc. The fragment shader gets a paragraph, and no entry in the index. GLSL is mentioned but also does not have an index entry. The geometry shader is never discussed. In fairness, vertex and fragment shaders are indeed used in the WebGL exercises; there’s just not much explanation. Again, it feels like an abridged textbook, where the instructors in class would spend time on how to actually program shaders. I look forward to a second edition that is more fleshed out.

GPGPU Programming for Games and Science, by David H. Eberly, August 2014 (book’s code website, Google Preview and Table of Contents, publisher’s page)

This book is tangentially related to computer graphics, but I mention it here anyway. Unlike most books about GPGPU programming, this one does not use CUDA, but rather uses DirectX’s DirectCompute. I can’t fairly assess this book, as I still haven’t taken on GPGPU.

While the book is ostensibly about GPU programming, computer graphics sneaks in here and there, and that I can comment on. Chapter 4, called “GPU Computing”, is the heart of the book. However, it spends the first part talking about vertex, pixel, and geometry shaders, rasterization, perspective projection, etc. Presenting this architecture is meant as an example of how parallelism is used within the GPU. However, this intent seems to get a bit sidetracked, with the transformation matrix stack taking up the first eight pages. While important, this set of transforms is not all that related to parallelism beyond “SIMD can be used to evaluate dot products”. For most general GPGPU problems you won’t need to know about rendering matrices. Eight pages is not enough to teach the subject, and in an intermediate text this area could have been left out as a given.

Chapter 6, “Linear and Affine Algebra”, is an 84-page standalone chapter on this topic. It starts out talking about template classes for this area, then plows through the theory in this field. While an important area for some applications, this chapter sticks out as fairly unrelated to the rest of the chapters. The author clearly loves the topic, but this much coverage (a fifth of the book) does not serve the reader well for the topic at hand. I was strongly reminded of the quote, “In writing, you must kill all your darlings”. You have to be willing to edit out irrelevant pieces, no matter how sound and how much you love them. The author notes in the introduction, “I doubt I could write a book without mathematics, so I included chapter 6 about vector and matrix algebra.” The nature of the physical book market is “make it thick” so that it looks definitive. Putting tangential content into a book does the customer, who is paying and spending time to learn about GPGPU programming, a disservice. I don’t blame the author in particular, nor even the publisher. Most technical books have no real editors assigned to them, “real” in the sense of someone asking hard questions such as, “can this section of the book be trimmed back?” We have to self-edit, and we all have our blind spots.

Overall I’m a bit apprehensive about truly reading this book to learn about GPGPU programming. I had hoped that it would be a solid guide, but its organization concerns me. It seems to go a few different directions, not having a clear “here’s what I’m going to cover and here’s what you’re going to learn” feel to it. A lot of time is spent with groundwork such as floating point rounding rules, basic SIMD, etc. – it’s not until 123 pages in that the GPU is mentioned. The book feels more like a collection of articles about various elements having to do with performing computations efficiently on various forms of hardware. That said, Chapter 7, “Sample Applications”, does offer a fairly wide range of computational tasks mapped to the GPU. It’s a chapter I’ll probably come back to if I need to implement these algorithms. The author is a well-respected veteran and I trust his code to be correct. He’s done wonderful work over the years in growing his Geometric Tools site – it’s a fantastic free resource (at one point I even tried to find external grants to support his work on the site - no luck there. A MacArthur Fellowship sent his way would be great). What might have made more sense is a focused, stripped down book, half of chapter 4 and all of chapter 7, offered for $10 as an ebook treatise.

Computer Graphics Through OpenGL: From Theory to Experiments, Second Edition, by Sumanta Guha, August 2014 (book’s website, Google Preview and Table of Contents, publisher’s page)

This book is, unfortunately, currently broken, because of a faulty index. The index page numbers are off by quite a bit. For example, Sutherland-Hodgeman (which should be spelled Hodgman – Angel & Shreiner’s Interactive Computer Graphics, a book I generally like, also makes that goof; no biggie) in the index is listed as page 589, but actually appears on page 556 – a 33-page error. This appears to be a scaling problem. Entries early in the book are correct, e.g., clipping is listed as page 33 and indeed appears there. Selection is listed on page 184 and appears on page 174, a 10-page error. Near the end, homogeneous coordinates are listed as 879 but actually appear on 826. By curve fitting using Excel, the equation is:

 actual page number = 0.9412 * index page number + 1.4594

Let’s get past the index and mention it no more. A workaround is to use Google Books to search for the correct page number instead.
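For what it’s worth, the fitted line really does rescue the index; a tiny helper (my own, purely for amusement) applies it:

```python
def corrected_page(index_page):
    # Eric's Excel fit for the broken index:
    #   actual page = 0.9412 * index page + 1.4594
    return round(0.9412 * index_page + 1.4594)

print(corrected_page(589))  # 556, where Sutherland-Hodgman actually appears
print(corrected_page(33))   # 33, early entries are nearly unaffected
```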

Of the four books reviewed, this one has the nicest layout and presentation. Full color, wide format, with helpful figures in the margins. Stylistically, the author attempts a chummy tone with frequent exclamation points. Expect passages such as, “By the way, do you even know what a floppy disc is, young reader?! If not, look it up on Wikipedia.” The author has a typographic conceit, heading various sections with arbitrary camelcase, e.g., ExpeRimenT, ExercisE, ExAmPlE. I can’t fully replicate the feel here, because the capitalized letters are actually lowercase but of varying font size. This might be a cute little flourish if the book were excellent. It’s not cute.

The book is in its second edition. Though the cover says “Comprehensive coverage of OpenGL 4.3”, what this means is that two extra chapters were added to the end of the book. Even then, these chapters are as much an introduction to OpenGL 2.0 as to 4.3; for example, they are the first place GLSL gets discussed. I had a theory that the first edition of this book came out before 2004, which would explain the dependence on pre-shader OpenGL for the vast majority of the book. I was incorrect; the first edition came out in 2010. My impression overall is that the author misses the days of the fixed-function pipeline. This is understandable, and I had the same dilemma designing an introductory course: when do you hit the students with shader programming? It’s possible early on, though mysterious. You need a fair bit of understanding of the transformations used, as well as what a shading model is, to really get traction. Old OpenGL, with its built-in shading model and simple, clear, and now-vastly-inefficient way of specifying triangles, makes for an appealing teaching environment.

So, I understand the desire not to throw the students into the deep end on day one. However, given 919 pages to work with, GLSL should be mentioned much earlier than page 745, along with vertex and fragment shaders and all the rest. The book actually ends 75 pages after introducing shaders, with the rest being appendices. So, it has 75 pages to cover everything that has happened to OpenGL since 2004. This is insufficient.

The bulk of the book includes tangential topics, such as scan-based polygon rasterization. Rasterization of polygons with concavities is not used by GPUs, so is mostly irrelevant, though possibly useful for teaching about parity. However, the algorithm is then presented incorrectly, worrying about singularities with ray/edge testing instead of using the proper rounding rules (in contrast, Eberly presents rasterization correctly, on page 133 of his GPGPU book). As I say, I skimmed this book, but noticed one strange grouping along the way: the perspective matrix and rational Bézier surfaces are covered in the same chapter. This feels like a Jeopardy! clue for Letters of the Alphabet, “Perspective and Bézier surfaces have this in common.” “What is w, Alex?” I shouldn’t joke, but I then uncovered such a deep flaw in the book that I, well, read on.

Antialiasing: the basic idea of pixel coverage is discussed as the solution, so that’s fine. Multisampling is skimmed over, being described as if it was supersampling. There is also a bit of filler on page 797 about how multisampling in OpenGL 4.3 is done exactly as described on page 527. There’s no reason to say this if there’s no change from “pre-shader OpenGL”. A few pages past this topic I noticed the accumulation buffer is covered. This functionality is rarely used nowadays and doesn’t appear in OpenGL ES, but again it can be useful for teaching about motion blur, antialiasing, etc. The book describes the accumulation buffer, but doesn’t explain what it is for – a missed opportunity.

Phong: the index does note Phong lighting vs. shading. The description of Phong shading is correct and concise, and its relationship to Phong lighting is described properly. However, neither Gouraud nor Phong shading is illustrated in any form (and this is a full-color book), e.g., showing specular highlighting and how it improves with per-pixel evaluation. Phong lighting itself is explained, though the author does not note that what he’s covering is actually Blinn-Phong. Again, there is no simple image showing how varying the specular exponent changes the highlight. There’s an odd notation on Figure 11.14, “(not exact plots)”, for the various cosine-to-a-power curves formed by varying the exponent. Why not exact?

Clip coordinates: the coverage here is deeply incorrect, not just a typo or oversight. On page 703 the pipeline is given as perspective division followed by clipping; the correct way is clipping followed by perspective division. There is also an odd step 5, “Projection to the back of the canonical box”, but that’s a minor detail. The author does understand the incredible difficulties involved if you attempt to clip after performing perspective division (for starters, you have to deal with division by zero). He spends the next few pages creating some method to deal with “semi-infinite segments”, which he also discusses elsewhere when talking about clipping. I admit to not carefully wading through his presentation, as the standard way to clip works fine. Eleven pages later he resolves his difficulties by presenting the rendering pipeline again, with a revised step “Perspective division with mechanism to handle zero w-values” (his emphasis), still performed before clipping. He clearly loves projective spaces, having a 46-page appendix on the topic. Unfortunately, he missed Appendix A in Sutherland and Hodgman’s original paper, or Blinn and Newell’s followup. This is extremely upsetting to see. The author seems like a nice person and clearly knows a fair bit, but there appears to be at least one small but serious hole in his education. We certainly made goofs in our book, and there are sections I’d love to improve, but we did our best to read through the existing literature before inventing our own solutions.
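For readers wondering what “clip, then divide” buys you, here is a minimal sketch (mine; it handles only the w ≥ ε plane, where a full clipper would also treat the six ±w planes) showing that clipping in homogeneous coordinates never has to divide by a vanishing w:

```python
W_EPS = 1e-5  # clip slightly inside the eye plane w = 0

def clip_to_positive_w(p0, p1):
    """Clip segment p0-p1 (each an (x, y, z, w) tuple) to w >= W_EPS.
    Returns the clipped segment, or None if it lies entirely outside."""
    w0, w1 = p0[3], p1[3]
    if w0 < W_EPS and w1 < W_EPS:
        return None                      # fully behind the eye
    if w0 >= W_EPS and w1 >= W_EPS:
        return p0, p1                    # fully in front, untouched
    # w is linear along the segment, so the crossing parameter comes
    # from plain interpolation -- no division by w anywhere:
    t = (W_EPS - w0) / (w1 - w0)
    hit = tuple(a + t * (b - a) for a, b in zip(p0, p1))
    return (hit, p1) if w0 < W_EPS else (p0, hit)

# This segment crosses w = 0; dividing first would blow up partway along.
print(clip_to_positive_w((1.0, 0.0, 0.0, -1.0), (1.0, 0.0, 0.0, 3.0)))
```

Only after this (and the other clip-plane tests) does the perspective division happen, at which point every surviving w is safely positive.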

I don’t think I need to give a rating. It’s unfortunate, and I’m more than a bit embarrassed and hesitant to post this review, but I honestly can’t recommend the book to anyone (even with the index fixed). There looks to be much valid information in the text, but once trust is severely lost, the book is no good to me.

Gamasutra Feature Articles

Lasting connections: Enduring games, enduring relationships

April 24, 2015 03:49 PM

"The game wants you to connect with the players around the table. That's why we gather for board games, right? These connections don't define us, but they are the finite moments of life." ...



c0de517e Rendering et alter

OT: The design space of fountain pens

by DEADC0DE (noreply@blogger.com) at April 24, 2015 04:38 PM

I met Stephen Hill at GDC this year; he casually mentioned that I should write an article about pens. Well, Stephen, maybe I WILL.

I try to live a reasonable life, but there are two things I possess in greater quantities than I should: writing and photographic equipment. I would say that I collect them, but I don't keep these as a collector would; I actually use them with little regard, so I'm more of a compulsive buyer, I guess.

But with much wasted money comes experience, or something.

- Why fountain pens

Calligraphy, duh! Line variation and reasons. Seriously though, they are different, and really it's a matter of taste... The feeling is different, they require less pressure, the ink is different... But nowadays rollerballs and gel pens come with so many tips and technologies that it's hard to compare.
Also, on a purely utilitarian scale, I believe nothing can beat a simple 0.5mm mechanical pencil...

Me writing this article.
Pen is a Namiki Vanishing Point ExtraFine
Notebook is a Midori Spiral Ring

So for the most part it is a personal choice, a matter of taste. I like them; they are elegant weapons for a more civilized age, and you might like them too.
Now, without further ado, let's delve into this guide on how to start spending way too much money on pens.

- Nibs

First and foremost a fountain pen is about its nib. There are two main axes of nib selection: shape and material.

For shape, most pen brands will make three sizes of round tips: fine, medium and broad. Fancier brands might expand to extra fine, extra (or ultra or double) broad and maybe even ultra extra fine (sometimes also called needlepoint or accounting nib).

The catch here is that, for the most part, these names carry little meaning. Especially on the finer end the differences can be huge: traditionally Japanese nibs are finer, but some Japanese brands don't follow the rule.

A needlepoint nib (disassembled for repair), hand ground (Franklin-Christoph)

Italic, stub, oblique, and cursive nibs are all variations of non-round nibs: they produce a finer line in certain directions and a bolder one in others. Italic and stub nibs are cut straight, with the italic being sharper (more difference between writing directions), crisper, and harder to use. The oblique nib is cut at an angle. All these come in different sizes, usually specified as the width in millimeters of their wider stroke. Very wide stub nibs are also called "music" nibs and often have more than a single ink slit, to keep the ink flowing.

Selection of Lamy steel nibs

More exotic nibs can be trickier to use and usually require better pens to work well. Bolder nibs lay down more ink, and thus stress the pen's ability to keep a good, constant flow. Finer nibs are easier to break or misalign; they are harder to make, and harder to make write smoothly. Very sharp italic nibs somewhat inherit the worst traits of both.

Consider also that broader nibs will use more ink (depleting faster), their ink will require more time to dry, and they can bleed more; but many people do like them better for fine writing, as the properties of the ink (shading variation, sheen, color) show more with a wetter and more varied line.

Ink shading from an Italic nib
Image source: https://wonderpens.wordpress.com/tag/rhodia/

In terms of materials, there are really only two options: steel and gold. Both can then be plated in different materials (rhodium, ruthenium, pink gold, two-tone and so on) but that is only an aesthetic matter.

The functional difference between steel and gold is that the latter is softer and more flexible, so it writes more smoothly and with more line variation. Steel is more durable and better for heavy-handed writers.
Somewhat confusingly, both materials can be used to make flex and semi-flex nibs, which are thinner and specifically made to give lots of line variation. They are quite hard to use and suited mostly for calligraphy.

A Pilot/Namiki Falcon flex pen
Image source: https://www.youtube.com/watch?v=XMolEvB5EqA

Most pens have interchangeable nibs, and buying nibs alone is usually much cheaper than buying a full pen.

- Pen body

A big part of the choice of a pen is its aesthetics, which is, I guess, entirely a matter of taste, so I won't discuss it.

There are, though, a few functional considerations to keep in mind. Ergonomics, of course, is a big one. Bigger pens tend to be more comfortable but, of course, less easy to carry around. Heavier pens might not be great for longer writing sessions, and balance can make a lot of difference.
For the most part, you'll have to try and see what fits you best. Remember to try any given pen both with and without the cap posted; the balance will change significantly, with some pens designed to be used posted while others don't post very well.

The Franklin-Christoph 40 pocket needs to be used with its cap posted,
it's way too short otherwise. Screw-on cap, clipless, can be converted to an eyedropper

The filling mechanism and ink reservoir are also important. Most pens nowadays use plastic cartridges, most being "international standard".
The second most widespread mechanism is the piston filler, which is quite convenient and usually has a chamber that can carry more ink than a cartridge, but it won't allow you to carry spare ink as easily.

Now, you really will want to use bottled ink in your fountain pens, both because it's cheaper and because it comes in a much wider selection, but having a cartridge pen won't stop you. Most of them can be fitted with "converters", special cartridges with a piston to suck ink from a bottle, and you can always refill a cartridge with a syringe (which I actually find less messy than dipping the nib in the bottle to refill).
Also, many (but not all) will work well as "eyedroppers": filling the cartridge chamber directly with ink (without a cartridge installed) and sealing it (with a bit of silicone grease on the threads).

There are other minor things to notice. As most pens are round, a cap with a clip keeps them from rolling away, which might be something to consider even if you don't need to clip your pen to a notebook.

Nakaya "dorsal fin" model, an asymmetric design made to not roll even w/o a clip
Image source: http://www.leighreyes.com/?p=4313

The cap design and closing mechanism also matter, more than it might seem. Not only do certain caps fit better posted than others, but certain designs are prone to sucking some ink out every time you uncap. Screw-on caps are less prone to this, but some threads can be annoying to feel on the barrel of the pen, depending on how you hold it.

- Ink

A big reason to use fountain pens is that they allow you to play with different inks. It might actually be much more reasonable to collect and play with inks than with different fountain pens.

Inks have lots of different attributes. Even color is not so simple: many inks can "shade", showing variation (sometimes drastic) as the pen lays down more or less ink on the page (depending on pressure and speed); they can have sheen, and even pigments or other particles embedded (though these are often more dangerous to use and can clog a pen if not properly handled).

Inks can be even more interesting than pens!
The Pen Addict is a good review site

They can be more or less lubricated: certain inks flow well even in lesser pens, while others tend to be dry. If your pen is already on the dry side, you don't want to couple it with a dry ink, and vice versa.

Different inks also have different drying times and tendencies to feather or bleed through paper. Good paper will absorb less, but that also means it can increase dry times.

It's not, in general, "safe" to mix different inks, although most of the time it won't cause havoc, and you can easily clean your pen by just running it under cold water until it flushes clean. A few brands make purpose-built mixable inks, but that's rare.

- Recommendations

I will make a sweeping statement and say that there is no better "starter" fountain pen than a Lamy Safari (or Vista, a so-called "demonstrator", i.e. transparent, version). Its aesthetic might not please everybody, but it's by far the best "writer" for the price, and it comes in a ridiculously wide selection of interchangeable nibs (they even make some optimized for left-handed writing).

A fairly recent contender to this throne is the TWSBI 580 (and Mini), really great pens made to be fully and easily disassembled. The Mini is probably the best compact pen you can buy today, and it's a piston filler, so it still holds quite a lot of ink too!

If you're looking for a great extra-fine nib, I haven't so far found anything that beats the Pilot/Namiki Vanishing Point's 18k gold nib (a.k.a. Capless Decimo). Right now it's my favorite pen; it's not very cheap, which is the only reason I didn't recommend it as a starter. It's also pretty and unique. Some don't love its clip, but with some effort it can be removed.

A&G Spalding and Bros make surprisingly good, cheap pens (considering the brand doesn't have a big history). Kaweco is a cheaper brand recently gaining traction, but so far I don't like their nibs' flow; especially on small pens you want -very- easy-writing nibs, as these aren't the most comfortable pens to begin with, and applying pressure on them is fatiguing.

On the more expensive side, I would stay away from Montblanc and the other luxury brands: they make good pens, but you pay more for the fanciness than for the writing. If you have lots of money or you want to make a really great gift, I'd personally go with a Nakaya, handmade and customized to your taste...

Medium-tier brands that I love, other than the already mentioned Namiki/Pilot (which also makes super expensive maki-e models, by the way), are Sailor and Platinum; both make great nibs (including true "Japanese" extra-fine ones) but somewhat more boring, conventional "cigar"-shaped pens.
Franklin-Christoph is an American brand which makes really unique, hand-turned and not very expensive pens; worth a look.

There are of course many, many other great brands; certain fancy brands do make more "understated" models in their lines which might turn out to be great, and vintage, used pens are also incredibly interesting, but all of these, I'd say, are less easy to recommend as a "first" pen.

After you get a pen you'll need paper and ink. Rhodia makes great, inexpensive paper, but there are really many great brands. Field Notes is really nice as well if you like small notebooks. Tomoe River paper is quite unique too, but it's more of a "fine writing" paper, not for daily use (it takes time to dry, especially with broader nibs).

I personally prefer spiral-bound A5 notebooks because they are easier to use on the go: they open fully, are more rigid, and can be held one-handed.
And if, like me, you don't love ruled or gridded paper, Rhodia and many other brands make notebooks with plain sheets or with less conspicuous dots instead of lines.

Lastly, inks. For black I'd go with Aurora Black or Platinum Carbon Black; both are very black with great flow. The latter is pigmented, which is very rare (another pigmented ink is Sailor's Kiwa-Guro, which I haven't tried yet); it's nicer, but it can settle in your pen if not used often and should be cleaned out when depleted to avoid clogging, so better to use it in a cheaper pen you have no problem taking the nib apart from for cleaning (which is usually fairly easy...).

For colored inks it's much, much harder; there are so many great options. I don't love plain blue ink and I usually go with either darker or lighter shades; one of my current favorites is Private Reserve Naples Blue.

Sometimes I carry a red or more colorful ink, often in a broader nib, for highlighting and so on. I find that orange/brown colors pair better with both black and blue than most reds do; Noodler's Apache Sunset is a good example.

Lastly, if you want something super fancy, nothing is fancier than J. Herbin's Stormy Grey and Rouge Hematite limited-edition inks (if you can still find them).

Incidentally, J. Herbin, Private Reserve and Noodler's, together with Diamine, are also the brands that offer the widest variety of colored inks.

Amazing (but not the smoothest ink ever).
Image source: http://www.gourmetpens.com/2014/11/review-j-herbin-stormy-grey-ink.html#.VQTK7VPF-xN

Geeks3D Forums

Ubuntu 15.04 Released, First Version To Feature systemd

April 24, 2015 02:18 PM

The final release of Ubuntu 15.04 is now available. A modest set of improvements are rolling out with this spring's Ubuntu. While this means the OS can't rival the heavy changelogs of releases past, the adage "don't fix what isn't broken" is clearly on...

Gamasutra Feature Articles

No, MS-DOS games weren't widescreen: Tips on correcting aspect ratio

April 24, 2015 01:54 PM

"Let us all agree that next time we present a screenshot of game from the '80s and early '90s, we should at least keep 4:3 images as 4:3 images!" ...



Game From Scratch

Unreal Engine Tutorial Part Three: Sprites

by Mike@gamefromscratch.com at April 24, 2015 01:21 PM

 

As you may have guessed from the title, in today's tutorial we are going to look at working with Sprites using Unreal Engine.  We already looked briefly at creating a sprite in the previous tutorial, but today we are going to get much more in-depth.

 

Before you can create a sprite, you need to have a texture to work with.  Unreal Engine supports textures in the following formats:

  • .bmp
  • .float
  • .pcx
  • .png
  • .psd
  • .tga
  • .jpg
  • .dds and .hdr ( cubemaps only, not applicable to 2D )

 

That said, not all textures are created equal.  Some formats, such as bmp, jpg and pcx, do not support an alpha channel.  This means that if your texture requires any transparency at all, you cannot use these formats.  Other formats, such as PSD ( Photoshop’s native format ), are absolutely huge.  Others, such as BMP, have very poor compression and should generally be avoided.  At the end of the day, this means that your 2D textures should probably be in png or tga format.  Unreal also wants your textures to be in Power of Two resolutions, meaning that width/height should be 2, 4, 8, 16, 32 … 512, 1024, 2048, etc… pixels in size.  It will work with other-sized textures, but MIP maps will not be generated (not a big deal in 2D) and performance could suffer (a big deal).  Keep in mind, your sprite doesn’t need to use all of the texture, as you will see shortly.  So it’s better to have empty wasted space than a non-Power-of-Two size.
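If you check your source art in an asset pipeline, the Power-of-Two rule is easy to verify (and to pad toward). A minimal Python sketch, with helper names of my own that are not part of Unreal:

```python
def is_power_of_two(n: int) -> bool:
    """True when n is a positive power of two (1, 2, 4, 8, ...)."""
    return n > 0 and (n & (n - 1)) == 0

def next_power_of_two(n: int) -> int:
    """Smallest power of two >= n, e.g. the canvas size to pad a texture to."""
    p = 1
    while p < n:
        p *= 2
    return p

print(is_power_of_two(1024))   # True
print(is_power_of_two(600))    # False
print(next_power_of_two(600))  # 1024
```

So a 600x400 source image could be placed on a 1024x512 canvas, with the sprite later cropped down to just the used region.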

 

* Personally I’ve experienced all kinds of problems using PNG, such as distorted backgrounds, while TGA has always worked flawlessly. 

 

Adding a Texture to your game

 

Adding a Texture is as simple as selecting a destination folder on the left, then dragging and dropping the appropriate file type (from the list above) from Finder/Explorer to the Content Browser window, shown below:

image

 

Alternately, you can click New –> Import

image

 

Then navigate to the file you wish to use and select it. 

 

Your texture should now appear in the Content Browser.

 

Texture Editor

 

Now that you have a texture loaded, you can bring it up in the Texture Editor by either double-clicking it or right-clicking and selecting Edit.  Here is the Texture Editor in action.  It is a modeless window that can be left open independently of the primary Unreal Engine window.

 

image

 

The Texture Editor enables you to make changes to your image, such as altering its brightness, saturation, etc…  You can also change compression settings here.  However, for our 2D game, we have one very critical task…  turning off MIP maps.

What's a MIP Map?

History lesson time! MIP stands for multum in parvo, Latin for "much in little". Doesn't exactly answer the question, does it? OK, let's try again. Essentially a MIP map is an optimization trick. As things in a 3D scene get further and further from the camera, they need less and less detail. Right up close to an object you may see enough detail to justify a 2048x2048 texture; as the rendered object gets farther away in the scene, the texture resolution doesn't need to be nearly as high. Therefore game engines often use MIP maps: multiple resolution versions of the same texture. As the required detail gets lower and lower, the engine can use a smaller texture and thus fewer resources.
You know when you are playing a game and, as you move rapidly, textures in the background often "pop" in or out? That's the mipmapping system screwing up! Instead of transitioning seamlessly between versions, you as the user are watching the transition occur.
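To make the idea concrete, here is a small Python sketch (illustration only, nothing Unreal-specific) that enumerates the resolutions a MIP chain contains, each level halving until 1x1:

```python
def mip_chain(width: int, height: int):
    """Yield (width, height) for each MIP level, halving until 1x1."""
    w, h = width, height
    yield (w, h)
    while w > 1 or h > 1:
        w, h = max(1, w // 2), max(1, h // 2)
        yield (w, h)

print(list(mip_chain(8, 8)))
# [(8, 8), (4, 4), (2, 2), (1, 1)]
```

The smaller levels together add roughly a third of extra texture memory, which is another reason to skip them when they would never be sampled.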


Support for MIP maps is pretty much automatic in Unreal Engine.  However, in the case of a 2D game, you don’t want mipmaps!  The depth never changes, so there should never be different resolution versions of each texture.  Therefore we want to turn them off, and the Texture Editor is the place to do it.  Simply select Mip Gen Settings and choose NoMipmaps.

image

 

Before you close the Texture Editor, be sure to hit the Save button.

image

 

Creating A Sprite

 

Now that we have a Texture, we can create a sprite.  This is important, as you can’t otherwise position or display a Texture on its own.  So, then, what is a Sprite?  The nutshell version is: it’s a graphic that can be positioned and transformed.  The name goes back to the olden days of computer hardware, when there was dedicated hardware for drawing images that could move.  Think back to PacMan…  sprites would be things like PacMan himself and the Ghosts in the scene.

 

In practical Unreal Engine terms, a Sprite has a texture ( or a portion of a texture, as we will see shortly ) and positional information.  You can have multiple sprites using the same texture, you can have multiple sprites within a texture, and a sprite’s source region within a texture can also change.  Don’t worry, this will make sense shortly. In the meantime, you can think of it this way… if you draw it in 2D in Unreal Engine… it’s probably a Sprite!

 

Once you have a Texture in your project, you can easily create a sprite using the entire texture by right clicking the Texture and selecting Create Sprite, like so:

image

 

You can also create a new sprite using New->Miscellaneous->Sprite

image

 

This will then open the Sprite Editor.  If you created the Sprite from an existing texture, the texture will already be assigned.  Otherwise you have to do it manually: simply click the Texture in the Content Browser, then click the arrow icon on the field named Source Texture in the Details panel of the Sprite Editor:

image

 

Your texture should now appear like so:

image

 

You can pan and zoom the texture in the view window using the right mouse button and the scroll wheel.

 

Now remember earlier when I said “all or part of the texture”?  Well a Sprite can easily use a portion of a texture, and that’s set using the Edit Source Region mode:

image

 

This changes the view panel so you can now select a sub-rectangle of the image to use as your sprite source.  For example, if you only wanted to use Megatron’s head, you could change it like so:

image

 

Then when you flip back to View, your texture will be:

image

 

When dealing with sprite sheets, this becomes a great deal more useful, as you will see shortly. 

 

There are a couple other critical functions in the Sprite Editor that we will cover later.  Most importantly, you can define collision polygons and control the physics type used.  We will look at these functions later on when we discuss physics. 

 

Two very important settings available here are:

image

 

Pixels Per Unit and Pivot Mode.

 

Pixels Per Unit is exactly what it says: a mapping from pixels to Unreal units, which default to mm.  So right now, each pixel is 2.56mm in size.  Pivot Mode, on the other hand, determines the point a sprite is transformed relative to.  So when you say rotate 90 degrees, you are by default rotating 90 degrees around the sprite’s center.  Sometimes top left or bottom left can be easier to work with; this is where you would change it.
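As an illustration of what that mapping means (a hypothetical helper, not an Unreal API; the 2.56 value is just an example setting):

```python
def pixels_to_units(pixels: float, pixels_per_unit: float) -> float:
    """Convert a length in texture pixels to Unreal world units."""
    return pixels / pixels_per_unit

# With a hypothetical setting of 2.56 pixels per unit, a 256-pixel-wide
# sprite spans about 100 world units.
print(pixels_to_units(256.0, 2.56))
```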

 

The final important point here is the Default Material, seen here:

image

 

This part is about to look a lot scarier than it is!  Just know up front, if you prefer, you can ignore this part of Unreal Engine completely!

 

Materials

 

Every mesh in Unreal Engine has a material attached, and when you peel back all of the layers, a Sprite is still ultimately a mesh… granted, a very simple one.  There are two default options included with the engine, although depending on how you created your project, you may have to change your view settings to access them:

image

 

Then you will find the two provided materials for sprites:

image

 

The names kind of give away the difference… DefaultLitSpriteMaterial takes into account the lighting used in the scene, while DefaultSpriteMaterial ignores lighting completely.  Unless you are using dynamic lighting, you will most likely want the DefaultSpriteMaterial.  You can edit a Material by double-clicking it:

image

 

This is the Material Editor, and it is used to create node-based materials.  Basically it’s a visual shader programming language; behind the scenes it ultimately generates an HLSL or GLSL shader.  The truth is, the process is way beyond the scope of what I can cover here, and in most cases you will be fine with the default material.  If you do want to get into advanced graphical effects, you will have to dive deeper into the Material Editor.

 

Instancing a Sprite

 

Now that we have our texture and have made a Sprite from it, it’s time to instance a Sprite; that is, add one to our scene.  This is about as simple as it gets: simply drag a Sprite from the Content Browser into the scene, like so:

 

g1

 

Now that you’ve created a Sprite instance, you will notice that there are a number of details you can set in the Details panel:

image

 

All sprites by default share the same source sprite and material, but you can override these on an instance-by-instance basis.  For example, if you wanted a single sprite to be lit and all the others to be unlit, you could change the Material Override on that single sprite.  Using Details you can also set the sprite’s positioning information and some other settings we probably won’t need for now.

 

 

Next up, we will look at sprite animation using a flipbook.

Gamasutra Feature Articles

We're not even sure how we got through Steam Greenlight

April 24, 2015 11:09 AM

"So: we were most likely not anywhere close to the top 100 during a periodic batch. How, then, did we get Greenlit? We have a few possible theories." ...

Peter Molyneux: Talking to the press too early can be your undoing

April 24, 2015 10:58 AM

Peter Molyneux, discussing his techniques for iterative development at Reboot Develop, says that talking to the press about your current idea of what your game will be can be a mistake. ...

Geeks3D Forums

NVIDIA Quadro M6000 12GB Maxwell Workstation Graphics Tested Showing Solid Gains

April 24, 2015 09:29 AM

NVIDIA's Maxwell GPU architecture has been well-received in the gaming world, thanks to cards like the GeForce GTX Titan X and the GeForce GTX 980. NVIDIA recently took time to bring that same Maxwell goodness over to the workstation market as well an...

Gamasutra Feature Articles

Beyond the Pentakill: The future of competitive game design

April 24, 2015 08:08 AM

"I argue that people being jerks to each other is not a property intrinsic to competition, but rather due to the way we are framing the competition." ...

iPhone Development Tutorials and Programming Tips

Open Source Component Providing A UINavigationController With A Parallax Effect

by Johann at April 24, 2015 06:10 AM

Earlier this month I mentioned a component allowing you to display images in a table view with a parallax effect.

Here’s an interesting component submitted by Fraser Scott-Morrison allowing you to easily display a neat parallax effect when pushing and popping between view controllers called IHParallaxNavigationController.

IHParallaxNavigationController can be used within storyboards as your UINavigationController, and your users will be shown a background specified by you, displayed with a parallax effect.

Here is an animation from the readme showing IHParallaxNavigationController in action:
IHPArallaxNavigationControllerDemo

You can find IHParallaxNavigationController on Github here.

A nice easy to use custom navigation controller with a neat parallax effect.

Original article: Open Source Component Providing A UINavigationController With A Parallax Effect

©2015 iOS App Dev Libraries, Controls, Tutorials, Examples and Tools. All Rights Reserved.



Gamasutra Feature Articles

Marvel vs. Telltale: Adventure studio to make games based on Marvel IP

April 23, 2015 11:49 PM

The prolific and license-happy developer of The Walking Dead has signed a deal with the ascendent comic book and Hollywood IP factory. ...

Is Nintendo courting an eSports audience with Splatoon?

April 23, 2015 10:03 PM

"Splatoon allows for adaptive playstyles, the game has elements of a sport, and with all the thought we've put into the things we've mentioned so far, I think it will appeal to eSports players." ...

Xbox business shows decline in Microsoft's latest results

April 23, 2015 09:31 PM

Lower prices for the Xbox One and lower sales overall for its consoles means that Xbox revenues are down 24 percent, and hardware sales are down 20 percent. ...

You started playing a story-based video game, and then this happened...

April 23, 2015 09:07 PM

"Once you start letting some random, sentient entity (e.g. a human) poke around within the confines of the narrative framework of your video game, things often get broken." ...

Smash Bros.' Sakurai speaks out against the 'DLC scam'

April 23, 2015 08:32 PM

"These days, the 'DLC scam' has become quite the epidemic, charging customers extra money to complete what was essentially an unfinished product." ...

Get a job: Be a Systems Designer for Sucker Punch

April 23, 2015 08:17 PM

InFamous: Second Son creator Sucker Punch is looking to bring on a systems designer to work on building out features alongside the game design team in the studio's Bellevue, WA office. ...

The PSP is dead, and developers are still releasing games for it

April 23, 2015 08:06 PM

Sony discontinued its PlayStation Portable last summer, but that isn't stopping Victor Ireland's Gaijinworks from releasing two new (to North America) PSP games -- potentially in physical UMD form. ...

Timothy Lottes

Source-Less Programming : 4

by Timothy Lottes (noreply@blogger.com) at April 23, 2015 09:01 PM

Still attempting to fully vet the design before the bootstrap reboot...

DAT words in the edit image need to maintain their source address in the live image. This way, on reload, the live data can be copied over, and persistent data gets saved to disk. DAT annotations no longer have 30 bits of free space; instead they have a live address. When the live address is zero, DAT words won't maintain live data. This way read-only data can be self-repairing (as long as the annotations don't get modified). Going to use a different color for read-only DAT words. New persistent-data DAT words will reference their edit-image hex value before reload (then get updated to the live address).

REL words always get changed on reload (self-repairing). No need to keep the live address. REL is only used for relative-branching x86 opcodes. Don't expect to have any run-time (non-edit-time) self-modifying of relative branch addresses. Given that branching to a relative branch opcode immediate is not useful, the LABEL of a REL word is only useful as a comment.

GET words also get changed on reload (self-repairing). GET is only designed for opcodes and labeled constants. GET words will often be LABELed as a named branch/call target. Been thinking about removing GET, and instead making a new self-annotating word (the display searches for a LABELed DAT word with the same image value, then displays the LABEL instead of the HEX). This opens up the implicit possibility of mis-annotations. That would be rare for opcodes, given they are large 32-bit values. But for annotating things like data-structure immediate offsets, this will be a problem (4 is the second word offset in any structure).

ABS words always get changed on reload (self repairing). ABS words are targets for self-modifying code/data, so they also need LABELs. Reset on reload presents a problem in that ABS cannot be used to setup persistent data unless that persistent data is constant or only built/changed in the editor. But this limitation makes sense in the context that ABS addresses in live data structures can get invalidated by moving stuff around in memory. The purpose of ABS is edit-time relinking.

Gamasutra Feature Articles

Blog: Working with Valve to build one of the first paid Skyrim mods

April 23, 2015 07:42 PM

"I was approached by Valve on behalf of Bethesda," writes one dev invited to make a Half-Life-themed Skyrim paid mod. "I decided to take a crack at implementing Gordon Freeman's crowbar." ...



The Unity task system: An AI controller

April 23, 2015 06:51 PM

"I am going to share the system I use to control the AI of the characters in my current game. Once you grok the idea you can much more easily tweak it to your needs." ...

Geeks3D Forums

Qt Creator 3.4.0 Released

April 23, 2015 06:47 PM

Qt Creator 3.4.0 has been released with many new features. Qt Creator is a C/C++ IDE with specialized tools for developing Qt applications, and it works great for general-purpose projects as well. The new version comes with a C++ refactoring option to ...

Gamasutra Feature Articles

Don't Miss: The postmortem of Never Alone's cross-cultural design

April 23, 2015 06:25 PM

Lead designer Grant Roberts offers a very thorough breakdown of what went right (and wrong) with the development of E-Line Media/Upper One Games' puzzle game about Alaska Native culture: Never Alone. ...



Geeks3D Forums

NVLink GPU interconnect unleashes performance for scientific & other applicatio

April 23, 2015 06:22 PM

Data in the Fast Lane: How NVLink Unleashes Application Performance

" ... To avoid “traffic jams” in applications, we invented a fast interconnect between the CPU and GPU, and among GPUs. It’s called NVLink.

It’s the world’s first high-speed interconne...

Gamasutra Feature Articles

Game mods can now be sold on the Steam Workshop for real money

April 23, 2015 06:07 PM

UPDATE Valve is letting users charge real money for game mods on the Steam Workshop, starting with Skyrim and rolling out to other games (contingent on the approval of their developers) in the weeks ahead. ...

Now that I run my studio, I understand the 'insanity' of my old bosses

April 23, 2015 05:19 PM

"Now that I'm a CEO and a business owner, the tables, of course, are turned. What I realize now is that business is not the smoothly running ship that we idealize it to be." ...

Learn from The Talos Principle and RuneScape devs at GDC Europe

April 23, 2015 04:06 PM

Croteam's Alen Ladavac and Jagex's RuneScape VP Phil Mansell join the lineup of speakers exploring the art and business of games at GDC Europe 2015 this August in Cologne. ...

8 lessons I learned from making a Twine game with less than 300 words

April 23, 2015 04:05 PM

"The reason I think that I was successful in finishing a game comes down to the limitations. Knowing that I only had 300 words made it easier to cut things from the game and reduce the development workload." ...

Never Alone wins Game of the Year at 12th Games for Change Festival

April 23, 2015 11:43 AM

Never Alone has been awarded Game of the Year for the 12th Games for Change Festival. ...

How a closed beta helped us improve our game: Findings and tips

April 23, 2015 11:00 AM

"Our closed beta helped us to identify new issues and confirm the list of changes that we already discussed within the team but needed a fresh perspective for. It was a very helpful and illuminating experience." ...

Assassin's Creed creator's new project is Ancestors: The Humankind Odyssey

April 23, 2015 09:37 AM

Patrice Desilets, Assassin's Creed director and now CEO and creative director of his own company Panache Games, announces a new project: Ancestors: The Humankind Odyssey. ...

In-app purchases for indie mobile games: A smart freemium strategy

April 23, 2015 08:00 AM

"You need to make sure that your in-app purchase strategy isn't overly complex. If you start small and think of your IAPs as a way to engage new users, you'll be well on your way to success." ...

If you build it: Colossal Order on Cities: Skylines modding

April 23, 2015 08:00 AM

We talk to Damien Morello of Colossal Order about the wild success of Cities: Skylines modding scene, and what it means for future development. ...

iPhone Development Tutorials and Programming Tips

Custom Open Source iOS Slider/Stepper Component Using UIKit Dynamics

by Johann at April 23, 2015 06:00 AM

I’ve mentioned a number of slider components, most recently a slider enhanced with animated tick marks.

Here’s an open source control submitted by Rehat Kathuria providing an innovative slider control that utilizes UIKitDynamics called SnappingSlider.

SnappingSlider could be used to replace your slider or step controls; it works by having the user adjust values by sliding the control to the left or right, then snapping back to the middle when the user is done with their adjustments. UIKitDynamics is used to produce the nice snap-back effect, similar to a scrollview.

Here’s an image showing SnappingSlider in action:
SnappingSlider

You can find SnappingSlider on Github here.

A nice custom slider component.

Original article: Custom Open Source iOS Slider/Stepper Component Using UIKit Dynamics


Timothy Lottes

Source-Less Programming : 3

by Timothy Lottes (noreply@blogger.com) at April 23, 2015 06:30 AM

Annotation Encoding
Refined from last post, two 32-bit annotation words per binary image word,

FEDCBA9876543210FEDCBA9876543210
================================
00EEEEEEDDDDDDCCCCCCBBBBBBAAAAAA - LABEL : 5 6-bit chr string ABCDE


FEDCBA9876543210FEDCBA9876543210
================================
..............................00 - DAT : hex data
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA01 - GET : get word from address A*4
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA02 - ABS : absolute address to A*4
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA03 - REL : relative address to A*4


Going to switch to just 2 lines per word displayed in the editor. Only DAT annotations show a hex value; other types show the LABEL of the referenced address in place of the hex value, so no need for an extra note. In practice I'll be using some amount of binary-image memory to build up a dictionary of DAT words representing all the common somewhat-Forth-like opcodes, then GET words in the editor to build up source.
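For illustration, the tag-plus-address layout in the table above can be sketched in Python (helper names are mine; the 2-bit tag values and the A*4 word addressing follow the table):

```python
# 2-bit tags from the low bits of an annotation word (values per the table).
DAT, GET, ABS, REL = 0, 1, 2, 3

def encode_annotation(tag: int, byte_address: int = 0) -> int:
    """Pack a 30-bit word index (byte_address / 4) above the 2-bit tag."""
    assert byte_address % 4 == 0, "addresses are 32-bit word aligned"
    a = byte_address // 4
    assert 0 <= a < (1 << 30)
    return (a << 2) | tag

def decode_annotation(word: int):
    """Return (tag, byte_address) from a packed annotation word."""
    return word & 3, (word >> 2) * 4

print(decode_annotation(encode_annotation(REL, 0x1000)))  # (3, 4096)
```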

Need to redo the bootloader for harddrive instead of floppy usage, and switch even the bootloader's 16-bit x86 code to 32-bit-aligned LABEL'ed stuff so the final editor can edit the bootloader. Previously I was avoiding manually assembling the 16-bit x86 code in the boot loader, but might as well ditch NASM and use something else to bootstrap everything.



Gamasutra Feature Articles

Facebook mum on large-scale Oculus Rift shipments for 2015

April 22, 2015 10:09 PM

"Oculus is very much in the development stage," says CFO David Wehner, when asked by an analyst whether or not we can expect a product soon. ...

Global game market to grow to $91.5 billion as China overtakes U.S.

April 22, 2015 08:56 PM

Analyst firm Newzoo reports that China's ascendency is coming faster than expected -- but growth all around the globe is powering a big games business. ...

Timothy Lottes

Source-Less Programming : 2

by Timothy Lottes (noreply@blogger.com) at April 22, 2015 09:37 PM

Continuing with what will either be an insanely great or amazingly stupid project...

Making slow progress with bits of free time after work; far enough into thinking through the full editor design to continue building. Decided to ditch 64-bit long mode for 32-bit protected mode. Not planning on using the CPU for much other than driving more parallel-friendly hardware... so this is mostly a question of limiting complexity. Don't need 16 registers, and the REX prefix is too ugly for me to waste time on any more. The 32-bit mode uses the much friendlier mov reg,[imm32] absolute addressing, also with the ability to use [EBP+imm32] without an SIB byte (another thing I mostly avoid). Unfortunately still need relative addresses for branching. 32-bit protected mode thankfully doesn't require page tables, unlike 64-bit long mode. Can still pad out instructions to 32 bits via redundant segment selectors.

Source-Less Analog to Compile-Time Math?
Compile-time math is mostly for the purpose of self-documenting code: "const uint32_t willForgetHowICameUpWithThisNumber = startHere + 16 * sizeof(lemons);". The source-less analog is to write out the instructions to compute the value, execute that code at edit time, then have anotations for 32-bit data words which automatically pull from the result when building 32-bit words for opcode immediates for the new binary image.

Reduced Register Usage Via Self Modifying Code
Sure, it kills the trace cache in two ways; what do I care. Sometimes the easiest way to do something complex is to just modify the opcode immediates before calling into the function...

What Will Annotations Look Like?
The plan so far is for the editor to display a grid of 8x8 32-bit words. Each word is colored according to a tag annotation {data, absolute address, relative address, pull value}. Each word has two extra associated annotations {LABEL, NOTE}. Both are strings of five 6-bit characters. Words in the grid get drawn showing {LABEL, HEX VALUE, NOTE} as follows,

LABEL
00000000
NOTE


The LABEL provides a name for an address in memory (data or branch address). Words tagged as absolute addresses, relative addresses, or pull values show in the NOTE field the LABEL of the memory address they reference. Words tagged as data use NOTE to describe the opcode or the immediate value. When inserting a NOTE, the editor can grab the data value from other words with the same NOTE (so an opcode only needs to be manually assembled once). Edit-time inserts of new words, deletes of words, and moves of blocks of words all just relink the entire edit copy of the binary image. The ESC key updates a version number in the edit copy, which the executing copy sees, triggering it to replace itself with the edit copy.
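The relink-everything approach above can be sketched as two passes: assign every labeled word its address, then rewrite every address-tagged word from the label table. This is a minimal Python model with a hypothetical load address and toy word records, not the editor's actual format:

```python
# Toy relinker: after any insert/delete/move, addresses are recomputed
# from scratch rather than patched incrementally.

BASE = 0x0  # load address of the image; hypothetical

def relink(words):
    # Pass 1: assign each labeled word its address (4 bytes per word).
    addr = {w["label"]: BASE + i * 4
            for i, w in enumerate(words) if w.get("label")}
    # Pass 2: rewrite address-tagged words from the label table.
    for i, w in enumerate(words):
        if w["tag"] == "abs":
            w["value"] = addr[w["ref"]]
        elif w["tag"] == "rel":  # relative to the word after this one
            w["value"] = addr[w["ref"]] - (BASE + (i + 1) * 4)
    return words

words = [
    {"tag": "data", "label": "ENTRY", "value": 0x90909090},
    {"tag": "abs", "ref": "ENTRY", "value": 0},
    {"tag": "rel", "ref": "ENTRY", "value": 0},
]
relink(words)
```

Inserting a word anywhere simply shifts every later address, and the next relink pass fixes all references in one sweep.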

Boot Strapping
I'm bootstrapping the editor in NASM in a way that I'll be able to see and edit later at run-time. This is a time-consuming process to get started, because instead of using NASM to assemble code, I need to manually write the machine code to get the 32-bit padded opcodes. Once enough of the editor is ready, I need a very tiny IDE/PATA driver to be able to store to the disk image. Then I can finish the rest of the editor in the editor. Then I'll also be self-hosted outside the emulator, running directly on an old PC with a non-USB keyboard but a proper PCIe slot...



Gamasutra Feature Articles

Get a job: Be a Level Designer for The Workshop

April 22, 2015 08:17 PM

The Workshop, which worked on The Evil Within and Sorcery, seeks to bring on a level designer to build CryEngine/Unreal levels alongside the team in its Marina Del Rey, CA office. ...

You can now apply directly to Valve for a free Vive VR dev kit

April 22, 2015 07:57 PM

Game makers interested in mucking around with a developer edition of Valve and HTC's Vive virtual reality headset can now apply for a free dev kit via a simple Steam sign-up page. ...

Video: Howard Scott Warshaw's classic postmortem of Yars' Revenge

April 22, 2015 07:23 PM

Game industry veteran Howard Scott Warshaw explains how he created the seminal Atari 2600 game Yars' Revenge and other important Atari titles -- including E.T. -- in this GDC 2015 talk. ...

Don't Miss: Five PR tips indies really shouldn't read

April 22, 2015 06:48 PM

Back in 2013 discoverability was a growing problem and "indie marketing guides" were legion, so Vlambeer's Rami Ismail created this tongue-in-cheek list of horribly misunderstood indie game PR tips. ...

Game From Scratch

An Hour with Blender: Learning 3D Modeling

by Mike@gamefromscratch.com at April 22, 2015 06:08 PM


The following is a companion post containing the hotkeys used in the video below, which is an hour in duration and attempts to teach the basics of 3D modeling in Blender. It follows this earlier video, also an hour long, which introduces the viewer to Blender. Of course, if you prefer text-based tutorials, we've got you covered there too!


The Video


The HotKeys


Blender Hotkeys

Action                                 Hotkey

Switch mode (object, edit)             Tab
Switch Edit Mode (Vertex, Edge, Face)  Ctrl + Tab
Switch to Edit Vertex Mode             Ctrl + Tab + 1
Switch to Edit Edge Mode               Ctrl + Tab + 2
Switch to Edit Face Mode               Ctrl + Tab + 3

Rotate                                 R
Scale                                  S
Translate/Move/Grab                    G

Select Object                          RMB
Select Multiple                        Shift + RMB
Select All/Clear Selected              A
Select Edge Loop                       Alt + RMB
Box Select                             B
Circle Select                          C
Lasso Select                           Ctrl + RMB

X-Ray Display Mode                     Z

Specials Menu (Common operations)      W
Vertex Menu                            Ctrl + V
Edge Menu                              Ctrl + E
Face Menu                              Ctrl + F

Extrude                                E
Bevel                                  Ctrl + B
Knife Tool                             K
Connect Vertex                         J
Fill/Create Face                       F
Insert Edge Loop                       Ctrl + R

Gamasutra Feature Articles

Q&A: Wrist-mounted game design on the Apple Watch

April 22, 2015 05:52 PM

Gamasutra speaks to I Am Bread developer Bossa Studios to understand why they're making an Apple Watch launch game, Spy Watch, and what they've learned about making good smartwatch games. ...



Timothy Lottes

Look No Triangles : Scatter vs Gather

by Timothy Lottes (noreply@blogger.com) at April 22, 2015 06:49 PM

There are a bunch of people working on and succeeding in non-triangle rendering. With GPU perf still climbing, IMO it is possible to return to the golden age of a different kind of software rendering: the kind done in a pipeline built out of compute shaders.

In my sphere tracing of math-based SDF fields I was purely ALU bound, tracing to the limit of floating-point precision. The largest performance win came from a many-level hierarchical trace (starting with very coarse-grained empty-space skipping). But the limit of all this is just a log reduction in the number of steps in the search; it still requires many search steps per pixel. And when doing a memory-based trace (instead of a math-based trace), the search is just a very long latency chain with divergent access patterns. Tracing via searching on the GPU hits a wall. To make matters worse, while tracing, the ALU units are loaded up with the work of tracing instead of something useful.
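For reference, the plain gather-style search described above can be sketched as a classic sphere trace. This Python toy traces a single ray against a math-based SDF (a unit sphere at z = 3) with no hierarchical skipping; the scene and constants are illustrative:

```python
# Minimal sphere trace: step by the SDF's distance bound until within
# epsilon of the surface or out of the step budget.

import math

def sdf(p):  # signed distance to a sphere of radius 1 centered at (0,0,3)
    x, y, z = p
    return math.sqrt(x * x + y * y + (z - 3.0) ** 2) - 1.0

def sphere_trace(origin, direction, max_steps=128, eps=1e-5):
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t  # hit
        t += d  # safe step: nothing is closer than d
    return None  # miss within the step budget

t = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

Every iteration of that loop is one of the "search steps per pixel" the post is counting; hierarchical empty-space skipping only shortens the loop, it does not remove it.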

The alternative to this is to switch to a mostly scatter-based design. A large amount of the tree structure traversed each frame in a gather-based approach is similar across frames. Why not just keep the tree stored mostly expanded in memory, based on the needs of the view? Then expand or collapse the tree based on the new visibility needs of the next frame. Rendering is then a mostly scatter process which reads leaves of the tree once. Reads of memory can now be coherent, and ALU can be used for things more interesting than search. Scatter will be somewhat divergent, but that cost can be managed by loading up enough useful ALU work in parallel. There are a lot of ways to skin this. Nodes of the tree can be bricks. Bricks can be converted into little view-based depth sprites, then binned into tiles and composited. It seems as if bricks converted into triangle meshes and rasterized is the popular path now, but still using the CPU to feed everything. This could get much more interesting when the GPU is generating the cached geometry bricks: artistically controlled procedural volume generation...
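The frame-to-frame brick cache can be sketched as a simple set difference: expand only the nodes newly needed by the view, collapse the ones no longer seen, and leave the overlap untouched. The node ids, payloads, and visibility sets below are placeholders:

```python
# Sketch of the expanded-tree cache: per frame, diff the needed node set
# against the cache instead of re-traversing the whole tree from the root.

cache = {}  # node id -> expanded brick payload

def expand(node):  # stand-in for generating a geometry brick on the GPU
    return f"brick:{node}"

def update_cache(cache, needed):
    for node in needed - cache.keys():
        cache[node] = expand(node)  # expand newly visible nodes
    for node in cache.keys() - needed:
        del cache[node]             # collapse nodes no longer seen

update_cache(cache, {1, 2, 3})  # frame N
update_cache(cache, {2, 3, 4})  # frame N+1: only node 4 is new work
```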

Gamasutra Feature Articles

Unity devs: Blizzard's former chief creative might pay you a visit

April 22, 2015 05:47 PM

Almost a year after quitting his position as Blizzard's chief creative officer, the former World of Warcraft frontman is taking a speaking tour of Unity studios and developer conferences. ...

Game Design Aspect of the Month

What is Agency?

by noreply@blogger.com (Sande Chen) at April 22, 2015 05:43 PM

In this article, independent developer Gabby Taylor stresses the importance of player agency.

It’s human nature to want to make an impact, to matter, to leave your mark on the world you will someday leave behind. For most of us, however, any or all of these can only be accomplished in a digital world. That leaves us, as game developers, to create that world as best we can. This involves the usual fare of suspending disbelief, making the macho characters we all wish we could be, and a catchy narrative. Right? … Right?

Nope.

We tend to underestimate a little something called ‘agency’, which is the actual ability to make decisions. Without that, we just exist on rails and it may as well be an interactive movie. We can have the sexiest/most macho character ever take down hundreds of evil dragons and solve all the world’s problems, but it won’t feel like we did anything without the ability to make the decision to perform each action, which is the whole point. I mean, sure, it’ll be pretty badass, but there will still be the unmet need to make an impact, or to matter, even if it is only briefly.

So, how do we give players sufficient agency? There are two main components: decision and consequence, both of which are created within the plot and overall design of a game. While the idea of decisions and appropriate consequences may be simple, they have a huge impact. Let’s go through an example:

Without Agency: NPC runs up screaming about a dragon attacking the poor, helpless village. You run in and slay the dragon, using up all your supplies just to stay alive. You may or may not be rewarded proportionately, or at all, and your efforts may or may not even be acknowledged by the local or general populace. You move on to the next thing.

With Agency: NPC runs up screaming about a dragon attacking the poor, helpless village. You could run in and slay the dragon, even knowing that it’s really dangerous and you have limited supplies to extend your life, and when it’s over be showered in praise, gratitude, and rewards (or just given more quests to help clean this mess up). You could choose to sneak throughout the village and plunder it for all it’s worth and have more supplies but far more negative future interactions with anyone who happened to catch a glimpse of you (and survived). You could choose to run the same direction as the NPC, and let the village burn (or not, you never know). You could choose to give the village a wide berth and continue on your way and the fleeing NPC will hate you forever and people will mourn the loss of an entire village and how was no one there to stop this calamity (you don’t happen to know anything about that, do you??).

Either way you could join in an epic battle to save villagers from a big, mean dragon, and you might be rewarded. Both ways of going about this are fun, without a doubt. With agency, however, there is a lot more of the player allowed in the game. His or her personality can shine through, allowing him or her to be more immersed in the experience and fulfill their need to make an impact, be it to good or bad effect. When it comes down to it, that’s all a player character really is: an empty vessel waiting to be filled with what makes the player who they are. The more we allow for that, as opposed to crowding out the player with our narrative, the more the player can walk away satisfied that they did something, that they mattered, and maybe have the confidence they previously lacked to meet their potential for impacting the real world around them.

Gabby Taylor is a game designer, writer, and artist for indie studio GreyKüb. She began doing art for games in 2010, and expanded to design and writing in 2012. Since then, she’s been part of several games on the market and is currently working on a few mods and another game called Avalon. When she’s not developing games, Gabby spends her time woodworking, working on cars and motorcycles, and spreading her love of game development.



Gamasutra Feature Articles

How I optimized my HTML5 game from the start of development

April 22, 2015 04:26 PM

"I wanted the game to be playable on mobile, but making a game with so many enemies, large maps and full of effects was hardly compatible with current mobile devices, unless it was optimized from the start." ...

Exploring the golden cohort: Your best free-to-play users

April 22, 2015 03:15 PM

"The results have shown that your Golden Cohort players will: Spend more in terms of frequency of transactions and be prone to spending on the more expensive items in the store." ...

Fixing Unity's tendency for object coupling: The MessageBus

April 22, 2015 01:18 PM

"Unity forces coupling straight from their first tutorial. That's absurd. In order to solve this, we've created a Messaging system that allows objects to broadcast messages when events happen." ...

Progression control in SimCity BuildIt: A design analysis

April 22, 2015 08:04 AM

"The means of controlling the progression in SimCity BuildIt are subtle but very effective. The designers have tuned it in a way to ease the player smoothly into the habit and hobby phase." ...

Video blog: Games, puzzles, and contests

April 22, 2015 08:04 AM

Designer and lecturer Lewis Pulsipher runs down the distinctions: "pure games never have always-correct solutions to the whole game" while "you cannot lose to a puzzle, though you may give up." ...

GPGPU.org

OpenCL-Z Android

by Mark Harris at April 22, 2015 07:16 AM

Developers have been using utility tools such as CPU-Z, GPU-Z, CUDA-Z, and OpenCL-Z for a long time. These tools provide detailed platform and hardware information and help developers quickly understand hardware capabilities.

Recently, OpenCL has become supported on most of the latest mobile phones and tablets, as mobile GPUs gain more compute power. OpenCL-Z Android helps developers quickly detect the availability of OpenCL on a device and get information about OpenCL-capable platforms and devices.

In addition to detecting OpenCL capability and getting device information, OpenCL-Z Android can also measure raw compute power in terms of peak ALU GFLOPS and memory bandwidth. These numbers are useful for developers who want to take advantage of the compute capability of modern GPUs: they can roughly predict the performance of a given algorithm on a specific platform, or compare raw compute performance across platforms.
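As context for what a peak-GFLOPS number means, the theoretical figure such tools are measured against is usually just ALU lanes x clock x FLOPs per lane per cycle (2 for a fused multiply-add). The spec numbers below are hypothetical, not measured values from OpenCL-Z:

```python
# Back-of-envelope theoretical peak GFLOPS for a GPU.

def peak_gflops(alu_lanes, clock_ghz, flops_per_cycle=2):
    # flops_per_cycle = 2 assumes one fused multiply-add per lane per cycle.
    return alu_lanes * clock_ghz * flops_per_cycle

estimate = peak_gflops(alu_lanes=128, clock_ghz=0.45)  # hypothetical specs
```

A measured number well below this estimate usually points at memory bandwidth or occupancy limits rather than the ALUs themselves.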

OpenCL-Z Android is free software and is now available on Google Play:
Download link at Google Play

The major features of OpenCL-Z Android:
– detect OpenCL availability;
– detect OpenCL driver library;
– display detailed OpenCL platform information;
– display detailed OpenCL device information;
– measure the raw compute performance and memory system bandwidth;
– export OpenCL information to sdcard;
– share OpenCL information with other applications, such as e-mail clients, note applications, social media and so on.

OpenCL-Z Android has been tested on mobile devices with Qualcomm Snapdragon 8064, 8974, 8084, and 8994 chipsets (with Adreno 305, 320, 330, 420, and 430 GPUs), Samsung Exynos 5420 and 5433 chipsets (with Mali T628 and T760 GPUs), the MediaTek MT6752 chipset (with a Mali T760 GPU), and the Rockchip RK3288 (with a Mali T764 GPU).

OpenCL-Z Android should also work on other chipsets. If your device is known to have OpenCL support but the tool fails to detect it, please contact the developer of OpenCL-Z.

The author of OpenCL-Z is also trying to build a relatively complete list of mobile devices that support OpenCL; the list can be found on the official OpenCL-Z website. If you have an OpenCL-capable device that is not on the list, please email the author and help the list grow.