Planet Gamedev

iPhone Development Tutorials and Programming Tips

Top iOS Development Resources For Week Ended August 31st, 2014

by Johann at September 01, 2014 06:02 AM

Post Category: Featured iPhone Development Resources, News

Welcome back to our feature of the most popular new and updated iOS developer resources mentioned on the site from the last week.

The top resource this week is an open source component providing a drop-in browser that features tabbed browsing, common browser buttons, an address bar and more.

Here are the resources:

1. Banshee – An open source drop-in browser component built upon UIWebView with a wide feature set. (share on twitter) (featured here)

2. Creating A Run Tracker Tutorial – A tutorial inspired by the Runkeeper app featuring on-map route tracking and recording. (share on twitter) (featured here)

3. ParallaxBlur – An open source component for creating table views featuring an image shown in parallax and sticky text headers.  (share on twitter) (featured here)

Thanks for reading!


©2014 iOS App Dev Libraries, Controls, Tutorials, Examples and Tools. All Rights Reserved.

c0de517e Rendering et alter

Stuff that every programmer should know: Data Visualization

by DEADC0DE at August 31, 2014 04:03 PM

If you're a programmer and you don't have visualization as one of the main tools in your belt, then good news: you've just found an easy way to improve your skill set. Really, it should be taught in every programming course.

Note: This post won't get you from zero to visualization expert, but hopefully it can pique your curiosity and will provide plenty of references for further study.

Visualizing data has two main advantages compared to looking at the same data in a tabular form. 

The first is that we can pack more data into a graph than we can get by looking at numbers on a screen, even more if we make our visualizations interactive, allowing exploration inside a data set. Our visual bandwidth is massive!

This is also useful because it means we can avoid (or rely less on) summarization techniques (statistics) that are by nature "lossy" and can easily hide important details (Anscombe's quartet is the usual example).

Anscombe's quartet, from Wikipedia. The data sets have the same summary statistics, but are clearly different when visualized.
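To see just how lossy summary statistics can be, it takes only a few lines to check two of the quartet's data sets (the numbers below are the published Anscombe values):

```python
# Anscombe's quartet (datasets I and II): identical summary statistics,
# completely different shapes when plotted.
x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

def mean(v):
    return sum(v) / len(v)

def corr(a, b):
    # Pearson correlation coefficient.
    ma, mb = mean(a), mean(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    sa = sum((p - ma) ** 2 for p in a) ** 0.5
    sb = sum((q - mb) ** 2 for q in b) ** 0.5
    return cov / (sa * sb)

# Same mean and same correlation with x, to several decimal places...
print(round(mean(y1), 2), round(mean(y2), 2))        # both ~7.5
print(round(corr(x, y1), 3), round(corr(x, y2), 3))  # both ~0.816
# ...yet y1 is a noisy line and y2 a clean parabola: plot them to see it.
```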

The second advantage, which is even more important, is that we can reason about the data much better in a visual form. 

0.2, 0.74, 0.99, 0.87, 0.42, -0.2, -0.74, -0.99, -0.87, -0.42, 0.2

What's that? How long do you have to think to recognize a sine in the numbers? You might start reasoning about the symmetries, 0.2, -0.2, 0.74, -0.74, then the slope and so on, if you're very bright. But how long do you think it would take to recognize the sine if you plotted that data on a graph?

It's a difference of orders of magnitude. Like in a B-movie sci-fi plot, you've been using only 10% of your brain (not really); imagine what begins to happen when you can access 100%.
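Even a throwaway terminal plot, no graphing library at all, is enough to make the sine above jump out:

```python
samples = [0.2, 0.74, 0.99, 0.87, 0.42, -0.2, -0.74, -0.99, -0.87, -0.42, 0.2]

# Crude terminal scatter plot: one row per sample, column position ~ value.
width = 40
for v in samples:
    col = int((v + 1) / 2 * (width - 1))  # map [-1, 1] -> [0, width-1]
    print(" " * col + "*")
```

Eleven asterisks drifting right then left across the terminal, and the wave shape is obvious at a glance.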

I think most of us do know that visualization is powerful, because we can appreciate it when we work with it, for example in a live profiler.
Yet I've rarely seen people dump data from their programs into graphing software, and I've rarely seen programmers who actually know the science of data visualization.

Visualizing program behaviour is even more important for rendering engineers, or for any code that doesn't simply either fail hard or work right.
We can easily implement algorithms that are wrong but don't produce completely broken output. The result might just be slower (e.g. to converge) than it needs to be, or noisier, or just not quite "right", causing our artists to compensate for our mistakes by authoring fixes in the art (this happens -all- the time), and so on.
And there are even situations where the output is completely broken, but that's just not obvious from looking at a tabular output; a great example is the structure of LCG random numbers.

This random number generator doesn't look good, but you can't tell from a table of its numbers...
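As a concrete sketch of this: RANDU, the infamous broken LCG, produces numbers that look perfectly fine in a table. But every triple of consecutive outputs satisfies a rigid linear relation, which is exactly why a 3D plot of consecutive triples collapses onto a handful of planes:

```python
# RANDU, the infamous IBM LCG: x_{n+1} = 65539 * x_n mod 2^31.
def randu(seed, n):
    out, x = [], seed
    for _ in range(n):
        x = (65539 * x) % 2**31
        out.append(x)
    return out

xs = randu(1, 3000)

# The lattice structure is even provable: because 65539 = 2^16 + 3,
# squaring the multiplier mod 2^31 gives 6*65539 - 9, so every triple
# satisfies x_{n+2} = 6*x_{n+1} - 9*x_n (mod 2^31).
for a, b, c in zip(xs, xs[1:], xs[2:]):
    assert c == (6 * b - 9 * a) % 2**31

# Scatter-plot (x_n, x_{n+1}, x_{n+2}) in 3D and every point sits on one
# of just 15 planes; the table of numbers gives no hint of this.
```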

- Good visualizations

The main objective of visualization is to be meaningful. That means choosing the right data to study a problem, and displaying it in the right projection (graph, scale, axes...).

The right data is the data that is interesting, that shows the features of our problem. What questions are we answering (purpose)? What data do we need to display?

The right projection is the one that shows such features in an unbiased, perceptually linear way, and that makes different dimensions comparable and possibly orthogonal. How do we reveal the knowledge the data is hiding? Is it x or 1/x? Log(x)? Should we study the ratio between quantities, or the absolute difference, and so on?

Information about both data and scale comes first of all from domain expertise. A light (or sound) intensity should probably go on a logarithmic scale; a dot product might be better displayed as the angle between its vectors; many quantities have a physical interpretation, a perceptual one, or a geometrical one, and so on.
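The dot-product example is a one-liner, and shows why the re-projection matters: values that look nearly identical as raw numbers are clearly distinct as angles:

```python
import math

# Raw dot products are perceptually misleading: 0.999 and 0.99 look almost
# identical as numbers, but as angles between unit vectors they differ a lot.
def angle_deg(dot):
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

for d in (0.999, 0.99, 0.95, 0.9, 0.5):
    print(f"dot = {d:5}  ->  {angle_deg(d):5.1f} degrees")
```

A dot of 0.999 is about 2.6 degrees while 0.99 is already about 8.1 degrees, a 3x difference hidden in the third decimal.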

But even more interestingly, information about the data can come from the data itself, by exploration. In an interactive environment it's easy to dump a lot of data, observe it, notice certain patterns, and refine the graphs and the data acquisition to "zoom in" on particular aspects. Interactivity is the key (as -always- in programming).

- Tools of the trade

When you delve a bit into visualization you'll find that there are two fairly distinct camps.

One is visualization of categorical data, often discrete, with the main goal of finding clusters and relationships. 
This is quite popular today because it can drive business analytics, operate on big data, and in general make money (or pretty websites). Scatterplot matrices, parallel coordinate plots (very popular), and glyph plots (star plots) are some of the main tools.

Scatterplot matrix, nifty for understanding which dimensions are interesting in a many-dimensional dataset

The other camp is the visualization of continuous data, often in the context of scientific visualization, where we are interested in representing our quantities without distortion, in a way that they are perceptually linear.

This mostly employs position as a visual cue, thus 2D or 3D line, surface, or point plots.
These become harder as the dimensionality of the data increases, since it's hard to go beyond three dimensions. Color intensity and "widgets" can add a couple more dimensions to points in a 3D space, but it's often easier to add dimensions through interactivity (e.g. slicing through the dataset by intersecting or projecting onto a plane) instead.

CAVE, soon to be replaced by the Oculus Rift
Both kinds of visualizations have applications to programming. For deterministic processes, like the output or evolution in time of algorithms and functions, we want to monitor some data and represent it in an objective, undistorted manner. We know what the data means and how it should work, and we want to check that everything goes according to what we think it should.  
But there are also times where we don't care about exact values and instead seek insight into processes for which we don't have exact mental models. This applies to all non-deterministic issues, networking, threading and so on, but also to many things that are deterministic in nature but have complex behaviour, like memory hierarchy accesses and cache misses.

- Learn about perception caveats

Whatever your visualization is though, the first thing to be aware of is visual perception: not all visual cues are useful for quantitative analysis. 

Perceptual biases are a big problem: precisely because they are perceptual, we tend not to see them; we are subconsciously drawn to some data points more than others when we should not be.

Metacritic's homepage has horrid bar graphs.
Because the numbers are bright and sit below a variable-size image, games with longer images seem to have lower scores...

Beware of color, one of the most abused and misunderstood tools for quantitative data. Color (hue) is extremely hard to get right: it's very subjective, and it doesn't express quantities or relationships well (which color is "less" than another?), yet it's used everywhere.
Intensity and saturation are not great either; they're also very commonly used, but often inferior to other cues like point size or stroke width.

From complexdiagrams

- Visualization of programs

Programs are these incredibly complicated projects we somehow manage to carry forward, and as if that weren't challenging enough, we really love working with them in the most complicated ways possible.

So of course visualization here is really limited. The only "mainstream" usage you will probably have encountered is bad graphs of data from static analysis: dependencies, modules, relationships and so on.

A dependency matrix in NDepend

And if you want to see your program's execution itself, it -has- to be text: watch windows, memory views with hex dumps and so on. Visual Studio, which is probably the best debugger IDE we have, is not visual at all, nor does it allow for easy development of visualizations (it's even hard to grab data from memory in it).

We're programmers, so it's not a huge deal to dump data to a file or peek at memory [... my article], and then visualize the output of our code with tools that are made for data.
But an even more important technique is to visualize the behaviour of our code directly, at runtime. This is really a form of tracing, which most often is limited to what's known as "printf debugging".

Tracing is immensely powerful, as it tells us at a high level what our code is doing, as opposed to the detailed inspection of how the code is running that we get by stepping in a debugger.
Unfortunately there is basically no tool today for graphical representation of program state over time, so you'll have to roll your own. Working on your own source code it's easy enough to add some instrumentation to export data to a live graph; in my own experiments I don't use any library for this, just the simplest possible ad-hoc code to suck the data out.

Ideally, though, it would be lovely to be able to instrument compiled code; that's definitely possible, but much more of a hassle without the support of a debugger. Another alternative I sometimes adopt is to have an external application peek at regular intervals into my target process's memory.
It's simple enough, but it captures data at a very low frequency, so it's not always applicable; I use it most of the time not on programs running in realtime, but as a live memory visualization while stepping through in a debugger.

Apple's recent Swift language seems a step in the right direction, and looks like it pulled some ideas from Bret Victor and Light Table.
Microsoft had a timid plugin for Visual Studio that did some very basic plotting, which doesn't seem to be actively updated, and another one for in-memory images. But what's really needed is the ability to export data easily and in realtime, as good visualizations usually have to be made ad hoc for a specific problem.


If you want to delve deeper into program visualization, there is a fair bit written about it by academia, along with a few interesting conferences. But what's even more interesting to me is seeing it applied to one of the hardest coding problems: reverse engineering.
That should perhaps not be surprising, as reversers and hackers are very smart people, so it's natural for them to use the best tools for the job.
It's quite amazing how much one can understand, with very little other information, just by looking at visual fingerprints, data entropy, and code execution patterns.
And again, visualization is a process of exploration: it can highlight patterns and anomalies to then delve into further, with more visualizations or with other tools.

Data entropy of an executable, graphed in Hilbert order, reveals the location of signing keys.

- Bonus links

Visualization is a huge topic and it would be silly to try to teach everything that's needed in one post, but I wanted to give some pointers, hoping to get some programmers interested. If you are, here are some more links for further study.
Note that most of what you'll find on the topic nowadays is either infovis and data-driven journalism (explaining phenomena via understandable, pretty graphics) or big-data analytics.
These are very interesting, and I have included a few good examples below, but they are not usually what we need: as domain experts we don't have to focus on aesthetics and communication, but on unbiased, clear, quantitative data visualization. Be mindful of the difference.

- Addendum: a random sampling of stuff I do for work
All made either in Mathematica or Processing, and all interactive and realtime.
Shader code performance metrics and deltas across versions 
Debugging an offline baker (raytracer) by exporting float data and visualizing it as point clouds
Approximation versus ground truth of BRDF normalization
Approximation versus ground truth of area lights
BRDF projection on planes (reasoning about environment lighting, card lighting)

I Support Anita

by DEADC0DE at August 31, 2014 11:37 AM

One of the first things you learn when you start making games is to ignore most of the internet. Gamers can undoubtedly be wonderful, but as with all things internet, a vocal minority of idiots can overtake most spaces of discourse. Normally that doesn't matter; these people are irrelevant and worthless, even if they might think they have some power in their jerking circles. But when words escalate into other forms of harassment, things change.

I'm not a game designer. I play games, I like games, but my work is about realtime rendering, and most often the fact that it's used by a game is incidental. So I really didn't want to write this; I didn't think there was anything I could add to what has already been written, and Anita herself does an excellent job of defending her work. Also, I think the audience of this blog is the right target for this discussion.

Still, we've passed a point and I feel everybody in this industry should be aware of what's happening and state their opinion. I needed to make a "public" stance.

Recap: Anita Sarkeesian is a media critic. She began a successful kickstarter campaign to produce reviews of gender tropes in videogames. She has been subject to intolerable, criminal harassment. People who spoke in her support have been harassed, websites have been hacked... Just google her name to see the evidence.

My personal stance:
  • I support the work of Anita Sarkeesian, as I would that of anybody speaking intelligently about anything, even if I were in disagreement.
  • I agree with the message in the Tropes Vs Women series. I find it extremely interesting, agreeable, and instrumental in raising awareness of a phenomenon that is in many cases not well understood.
    • If I have any opinion on her work, it is that I suspect in most cases hurtful stereotypes don't come from malice or laziness (neither of which she cites as possible causes, by the way), but from the fact that games are mostly made by people like me: male, born in the eighties, accustomed to a given culture.
    • And even if we can logically see the issues we still have in gender depictions, we often lack the emotional connection and ability to notice their prevalence. We need all the critiques we can get.
  • I encourage everybody to take a stance, especially mainstream gaming websites and gaming companies (really, how can you resist being included here), but even smaller blogs such as this one.
    • It's time to marginalize harassment and ban such idiots from the gaming community. To tell them that it's not socially acceptable, that most people don't share their views. 
    • Right now most of the video attacks on Tropes Vs Women (I've found no intelligent rebuttal yet) are "liked" on YouTube. Reasonable people don't speak up, and that's even understandable: nobody should argue with idiots, they are usually better left ignored. But this has gotten out of hand.
  • I'm not really up for a debate. I understand that there can be a debate on the merits of her ideas, and even about her methods, and I'd love to read anything intelligent about it.
    • We are way past a discussion on whether she is right or wrong. I personally think she is substantially right, but even if she were wrong I think we should all still fight for her ability to do her job without such vile attacks. When these things happen, to such an extent, I think it's time for the industry to be vocal, for people to stop and just say no. If you think Anita's work (and she as a person) doesn't deserve at least that respect, I'd invite you to just stop following me, seriously.
P.S. if an artist were to design a logo to start a support campaign, I'd gladly host it. I tried myself but my design skills are not good enough.

Timothy Lottes

HDFury Nano GX: Part 2

by Timothy Lottes at August 30, 2014 11:26 AM

Continuing to talk about the radical HDFury Nano GX HDMI to VGA converter...

Got a better VGA monitor (ViewSonic G75f) which can do up to an 85 kHz horizontal scan, good for 960x540 @ 140 Hz. This works perfectly through the Nano. Getting a 16:9 aspect ratio on the display is as easy as changing the monitor's vertical scaling.

Method for Generating Modelines
Using the EDID, or fetching the modes from the monitor automatically, is effectively useless for anything interesting (non-standard). Not sure if this has more to do with the monitor's EDID settings, what the Nano returns, or what the driver does with the EDID information. Instead I go back to classic "modelines" in the xorg configuration files.

In order to easily generate "modelines", first take the monitor's advertised peak specs (in my case 1600x1200 @ 68 Hz), run them through "cvt", and write down the "hsync". Use this as the maximum hsync the monitor can handle:

[/etc/X11/xorg.conf.d] cvt 1600 1200 68
# 1600x1200 67.96 Hz (CVT) hsync: 84.95 kHz; pclk: 183.50 MHz
Modeline "1600x1200_68.00" 183.50 1600 1712 1880 2160 1200 1203 1207 1250 -hsync +vsync

Then use "cvt" to generate whatever modes are desired at a given resolution, making sure to keep the vertical refresh low enough that the "hsync" value is never exceeded. At 640x480 this monitor can do 160 Hz:

[/etc/X11/xorg.conf.d] cvt 640 480 160
# 640x480 159.42 Hz (CVT) hsync: 84.49 kHz; pclk: 73.00 MHz
Modeline "640x480_160.00" 73.00 640 688 752 864 480 483 487 530 -hsync +vsync

Configuration
Thankfully, with the NVIDIA drivers it is possible to completely turn off mode validation. This is what enables the interesting non-standard modes, and also what enables users to destroy old VGA monitors that have no out-of-range protection. In the "Device" section add (this is all one line):

Option "ModeValidation" "AllowNon60HzDFPModes, NoEDIDDFPMaxSizeCheck, NoVertRefreshCheck, NoHorizSyncCheck, NoDFPNativeResolutionCheck, NoMaxSizeCheck, NoMaxPClkCheck, AllowNonEdidModes, NoEdidMaxPClkCheck"

Using this, I was able to try something similar to a 15.5 kHz arcade monitor modeline (this one is actually for an Amiga, I think). Also note that "cvt" won't make correct 15.5 kHz modelines; the horizontal frequency will be wrong.

Modeline "640x240" 13.22 640 672 736 832 240 243 246 265 -hsync -vsync

This is out of range for the ViewSonic (too low a horizontal sync) and triggers the monitor's out-of-range protection. The same mode seems to work on the really old VGA monitor I have (which probably supported CGA, EGA, and VGA horizontal frequencies). This seems to validate that newer NVIDIA GPUs and drivers (in X at least), together with the Nano, can be made to generate the lower hsync signals needed to drive CGA arcade monitors. I'm still missing the conversion box required to test this (see the bottom of this post).
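A small helper (my own sketch, not part of cvt or the NVIDIA tools) is handy for sanity-checking a modeline's horizontal and vertical frequencies before risking a monitor with no out-of-range protection: hsync is pixel clock over horizontal total, refresh is pixel clock over total pixels per frame.

```python
# Sanity-check a modeline before sending it to a fragile CRT:
#   hsync [kHz]   = pixel_clock / htotal
#   refresh [Hz]  = pixel_clock / (htotal * vtotal)
def modeline_freqs(modeline):
    parts = modeline.split()
    pclk_mhz = float(parts[2])
    htotal = int(parts[6])    # last of the four horizontal timing numbers
    vtotal = int(parts[10])   # last of the four vertical timing numbers
    hsync_khz = pclk_mhz * 1000.0 / htotal
    refresh_hz = pclk_mhz * 1e6 / (htotal * vtotal)
    return hsync_khz, refresh_hz

h, v = modeline_freqs(
    'Modeline "640x240" 13.22 640 672 736 832 240 243 246 265 -hsync -vsync')
print(f"hsync {h:.1f} kHz, refresh {v:.2f} Hz")
```

For the 640x240 modeline above this reports roughly a 15.9 kHz horizontal frequency at about 60 Hz, i.e. right in the arcade/NTSC range and well below what the ViewSonic accepts.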

Solving the "Broken Panning"
The problem I was having before: when switching from "Metamode" to a classic "Modeline", the driver was automatically configuring and using both the laptop display and the HDMI output. Since the "Modeline" was not supported by the laptop display, that display showed nothing but was still enabled, which is why it looked as if panning was broken. Everything was actually working; the mouse was just going over to the other display, which was a black screen.

Fixing this is easy: just disable the laptop screen using the "Monitor-" option (use the "xrandr" command line tool to list the display connections and find the proper name):

Section "Monitor"
Identifier "nope"
Option "Ignore" "true"
Option "Enable" "false"
EndSection

Section "Monitor"
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Unknown"
Option "Primary" "true"
Option "Enable" "true"
HorizSync 31-86
VertRefresh 60-162
UseModes "Modes0"
Option "DPMS"
EndSection

Section "Screen"
Identifier "Screen0"
Device "Device0"
Option "Monitor-DP-2" "nope"
Option "Monitor-HDMI-0" "Monitor0"
Monitor "Monitor0"
EndSection
Next step is to attempt to see if I can drive both screens at different frequency and resolution. Then I can use one for programming and the other for analog output.

240p Arcade Options
Great place to go for information: Under the assumption that my setup can generate a proper 240p VGA signal at the standard 15.5 kHz used by arcade monitors (and NTSC TVs), this thread post talks about a converter that will turn that VGA signal into component for a US TV: the TC1600 VGA to YPrPb Transcoder (no resampling/scaling).

iPhone Development Tutorials and Programming Tips

Open Source iOS Highly Customizable Google Inspired Segmented Control

by Johann at August 30, 2014 03:08 AM

Post Category: Featured iPhone Development Resources, iOS UI Controls, iPad, iPhone, Objective-C

I’ve mentioned a number of custom segmented controls, most recently NYSegmentedControl which features a great variety of color and shape customization options.

Here’s a highly customizable segmented control called HMSegmentedControl from Hesham Abd-Elmegid inspired by the segmented controls in Google Currents, and other Google products.

HMSegmentedControl lets you use both text and images in each segment, offers full color customization, lets you place the line indicating the selected segment on the top or bottom, and supports horizontal scrolling.

Here’s an image showing HMSegmentedControl in action:


You can find HMSegmentedControl on Github here.

A nice custom segmented control.


iPhone Development Tutorials and Programming Tips

Open Source iOS Component Providing A Simple Contacts App Style Image Cropper

by Johann at August 29, 2014 10:47 PM

Post Category: Featured iPhone Development Resources, iOS UI Controls, iPad, iPhone, Objective-C

I’ve mentioned a few libraries and components that provide an image cropping interface, most recently a photo picker component with built-in image cropping called DZNPhotoPickerController.

Here’s a nice simple circular image cropper submitted by Ruslan Skorb called RSKImageCropper.

RSKImageCropper is inspired by the Contacts app, and supports both portrait and landscape orientations as well as a double-tap-to-reset gesture.

Here’s an image from the read me showing RSKImageCropper in action:


You can find RSKImageCropper on Github here.

A nice component if you’re looking to drop a circular image cropper into an app.


Open Source iOS Component For Creating A UITableView With A Customizable Parallax Header

by Johann at August 29, 2014 06:05 PM

Post Category: Featured iPhone Development Resources, iOS UI Controls, iPad, iPhone, Objective-C

I’ve mentioned a number of components that utilize parallax effects, most recently a table view allowing you to create a scrolling view with images displayed with a parallax effect called MJParallaxCollectionView.

Here’s an open source component called ParallaxBlur, released by Joseph Pintozzi, that provides a nice parallax expanding image table header with a title that sticks neatly at the top of the view. You can easily customize the image and text on top of the table view, and if desired provide your own custom view.

As the readme states:

ParallaxBlur aims to be an easy-to-use implementation of a UITableController with a parallax header. It is screen resolution independent, orientation independent, and will automatically adjust if there is a navigation bar in place.

The user interaction is fairly straightforward. The header image blurs as you scroll up, leaving a 60 pixel area always visible, and expands out the header image if you pull down, while at the same time making the overlay views transparent.

This image shows ParallaxBlur in action:

You can find ParallaxBlur on Github here.

A nice effect for more informative table views.


Game Design Aspect of the Month

IGDA Webinar: Monetization

by Sande Chen at August 29, 2014 03:04 PM

In this video, game designer Chris Crowell reviews different monetization strategies and further illuminates the concepts of free-to-play and pay-to-win.

Remember to sign up for the next IGDA Game Design Webinar on Wednesday, September 17 featuring Ian Schreiber on game balance.

Ignacio Castaño


by Ignacio Castaño at August 29, 2014 08:42 AM

We are using Max McGuire’s HLSLParser in The Witness and we just published all our changes in our own github repository:

I also wrote some notes about our motivation, the changes we made, and how we are using it at The Witness blog:


OGLplus 0.51.0 released

August 28, 2014 10:56 PM

OGLplus is a collection of open-source, cross-platform libraries which implement an object facade over the modern OpenGL, OpenAL and EGL C-language APIs. It automates resource and object management, error handling and makes the use of these libraries in C++ safer and more convenient.

Physics-Based Animation

Fast and Exact Continuous Collision Detection with Bernstein Sign Classification

by christopherbatty at August 28, 2014 08:24 PM

Min Tang, Ruofeng Tong, Zhendong Wang, Dinesh Manocha

We present fast algorithms to perform accurate CCD queries between triangulated models. Our formulation uses properties of the Bernstein basis and Bezier curves and reduces the problem to evaluating signs of polynomials. We present a geometrically exact CCD algorithm based on the exact geometric computation paradigm to perform reliable Boolean collision queries. This algorithm is more than an order of magnitude faster than prior exact algorithms. We evaluate its performance for cloth and FEM simulations on CPUs and GPUs, and highlight the benefits.
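Not the paper's algorithm, but the core Bernstein-basis property it builds on can be shown in a few lines: since the Bernstein basis functions on [0,1] are nonnegative and sum to one, a polynomial whose Bernstein coefficients all share a strict sign cannot have a root in [0,1], so root-existence queries reduce to sign classification of coefficients:

```python
from math import comb

def bernstein_eval(coeffs, t):
    """Evaluate sum_i coeffs[i] * B_{i,n}(t) by de Casteljau's algorithm."""
    pts = list(coeffs)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

def power_to_bernstein(a):
    """Convert power-basis coefficients a[0] + a[1]*t + ... to Bernstein basis."""
    n = len(a) - 1
    return [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1))
            for i in range(n + 1)]

def certainly_no_root(a):
    """Conservative test: True only if the polynomial provably has no root in [0,1]."""
    b = power_to_bernstein(a)
    return all(c > 0 for c in b) or all(c < 0 for c in b)

print(certainly_no_root([1.0, 1.0, 1.0]))  # t^2 + t + 1: no root in [0,1]
print(certainly_no_root([-0.5, 1.0]))      # t - 0.5: root at t = 0.5
```

The test is one-sided: mixed-sign coefficients are inconclusive (e.g. t^2 - t + 0.3 has no real root but Bernstein coefficients [0.3, -0.2, 0.3]), which is why exact classification schemes like the paper's combine this with subdivision and exact arithmetic.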

Fast and Exact Continuous Collision Detection with Bernstein Sign Classification

Jorge Jimenez

Next Generation Post Processing in Call of Duty: Advanced Warfare

by Jorge Jimenez at August 28, 2014 02:57 PM

Proud and super thrilled to announce that the slides for our talk “Next Generation Post Processing in Call of Duty: Advanced Warfare” in the SIGGRAPH 2014 Advances in Real-Time Rendering in Games course are finally online. Alternatively, you can also download them in the link below.

Post effects' temporal stability, filter quality, and accuracy are, in my opinion, among the most striking differences between games and film. Call of Duty: Advanced Warfare's art direction aimed for photorealism, and generally speaking, post effects are a very sought-after feature for achieving natural-looking, photorealistic images. This talk describes the post processing techniques developed for this game, which aim to narrow the gap between film and games post FX quality. Which is, as you can imagine, a real challenge given our very limited time budget (16.6 ms for a 60 fps game, orders of magnitude less than what is typically available in film).

In particular, the talk describes how scatter-as-you-gather approaches can be leveraged to approximate ground-truth algorithms, including the challenges we had to overcome to make them work in a robust and accurate way. Typical gather depth of field and motion blur algorithms only deal with color information, while our approaches also explicitly consider transparency. The core idea is based on the observation that ground truth motion blur and depth of field algorithms (like stochastic rasterization) can be summarized as:

  • Extending color information, according to changes in time (motion blur) and lens position (depth of field).
  • Creating an alpha mask that allows the reconstruction of accurate growing/shrinking gradients on the object silhouettes.

This explicit handling of transparency allows for more realistic depth of field focusing effects, and for more convincing and natural-looking motion blur.

In the slides you can also find our approaches to SSS and bloom, and as a bonus, our take on shadows. I don’t want to spoil the slides too much, but for SSS we are using separable subsurface scattering; for bloom, a pyramidal filter hierarchy that improves temporal stability and robustness; and for shadow mapping, an 8-tap filter with a special per-pixel noise, A.K.A. “Interleaved Gradient Noise”, which together with a spiral-like sampling pattern increases temporal stability (like dither approaches) while still generating a rich number of penumbra steps (like random approaches).
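The interleaved gradient noise function itself is tiny; a transcription of the formula from the slides (constants as published there):

```python
import math

def interleaved_gradient_noise(x, y):
    # Constants from the course slides; returns a value in [0, 1) that
    # varies in a gradient-like, dither-friendly pattern across pixels.
    inner = math.modf(0.06711056 * x + 0.00583715 * y)[0]
    return math.modf(52.9829189 * inner)[0]

# Neighboring pixel coordinates get well-spread noise values, which is
# what makes the spiral shadow filter temporally stable yet rich in steps.
for px in range(4):
    print([round(interleaved_gradient_noise(px, py), 3) for py in range(4)])
```

Being a pure function of screen position, it needs no texture fetch and maps directly to a couple of ALU ops in a pixel shader.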

During the actual talk at SIGGRAPH I didn’t have time to cover everything, but as promised, every single detail is in the online slides. Note that there are many hidden slides, and a bunch of notes as well; you might miss them if you read the deck in slide show mode.

Hope you like them!

Timothy Lottes

HDFury Nano GX: HDMI to VGA

by Timothy Lottes at August 28, 2014 09:11 AM

Got my HDFury Nano GX; I now have the ability to run my little CRT from the HDMI output of the laptop with the 120Hz screen and a GTX 880M...

The Nano is simply awesome. I can now also run the PS4 on a VGA CRT with this device, way better than the plasma HDTV I've been using. When the device needs to decrypt HDMI signals, it needs the cord for USB power. My little CRT can do 720p60Hz, and the tiny amount of super-sampling from the PS4's downscaler, in combination with the CRT's scanline bloom, low latency, and low persistence, creates an awesome visual.

Running from the GTX 880M with NVIDIA drivers in Linux worked right out of the box at 720p60Hz on the little CRT as well. I ran with 720p GPU-downsampled from 1080p to compare apples to apples. Yes, 60Hz flickers with a white screen, but with the typically low intensity content I run, I don't really notice the flicker. Comparing the 120Hz LCD and the CRT at 60Hz is quite interesting. The CRT definitely looks better motion-wise. The 120Hz LCD has no strobe backlight, so it has 4-8x higher persistence than the CRT at 60Hz. Very fast motion is easy to track visually on the CRT, and when eye tracking fails it still does not look as bad as the 120Hz LCD. The 120Hz LCD is harder to track in fast motion without seeing something that looks similar to full-shutter 4-tap motion blur at 30Hz. It is still visually obvious that the frame sits for a bit in time.

In terms of responsiveness, the 120Hz LCD is direct-driven by the 880M; the loop from GPU LUT reprogram to reading the value on a color calibration sensor is just 8 ms. My test application also minimizes input latency by reading controller input directly from CPU memory right before view-dependent rendering. Even with that, the 120Hz display definitely feels more responsive. The 8 ms difference between the CRT at 60Hz and the LCD at 120Hz seems to make an important difference.

Thoughts on Motion Blur
Motion blur on the CRT at 60Hz is, to my mind, completely unnecessary. Motion blur on the 120Hz LCD (or even a 60Hz LCD) is something I would not waste perf on any more. However, it does seem as if the entire point of motion blur for "scan-and-hold" displays like LCDs is simply to reduce the confusion the human visual system is subjected to: specifically, to break the hard edges of objects in motion, reducing the blur confusion the mind gets from the full-persistence "hold". It seems that if motion blur is used at 60Hz and above on an LCD, it is much better to limit it to a very short size with no banding.

Nano and NVIDIA Drivers in X
Had to manually adjust the xorg conf to get anything outside 720p60Hz to work. I disabled the driver's use of the EDID data and went back to classic horizontal and vertical ranges. Noticed a bunch of issues with the NVIDIA drivers and/or hardware:

(a.) Down-sampling at scan-out has some limits; for example, going from 1920x1080 to 640x480 won't work. However, scaling only height or width, with virtual panning in the unscaled direction, does work. This implies that either the driver has a bug, or, more likely, the hardware does not have enough on-chip line buffer for the scaler to do such high reductions.

(b.) Up-sampling at scan-out from NVIDIA GPUs is completely useless because it always introduces ringing (hopefully some day they will fix that; I haven't tried Maxwell GPUs yet).

(c.) Metamodes won't do modes under a 31KHz horizontal frequency; instead the driver forces the dead-ugly "doublescan".

(d.) Skipping metamodes and falling back to modelines has a bug where small resolutions automatically extend the virtual panning resolution in width only, but with broken panning. I still have not found a workaround for this. The modelines even under 31KHz seem to work, however (you must use the "ModeValidation" options to turn off the safety checks).

The Nano does work at frequencies other than 60Hz: I managed to get 85Hz going at 640x480. It also seems the Nano supports low resolutions and arcade horizontal frequencies (a modeline of 320x240 at around 60Hz worked). Unfortunately I'm somewhat limited in testing by the limits of the CRT. It also seemed as if I could get the 880M to actually output an arcade horizontal frequency, but I don't have a way to validate this yet. I won't be sure until I eventually grab another converter (VGA to component supporting 240p) and try driving an NTSC TV with 240p like old consoles did.