Planet Gamedev

Ignacio Castaño

HLSLParser

by Ignacio Castaño at August 29, 2014 08:42 AM

We are using Max McGuire’s HLSLParser in The Witness and we just published all our changes in our own github repository:

https://github.com/Thekla/hlslparser

I also wrote some notes about our motivation, the changes we made, and how we are using it, on The Witness blog:

http://the-witness.net/news/2014/08/hlslparser/

iPhone Development Tutorials and Programming Tips

Open Source iOS Component Providing A Nice Scrolling Calendar App Style Day Picker

by Johann at August 29, 2014 06:09 AM


I’ve mentioned a few date picker components, and a few months ago mentioned an open source project providing a nice reproduction of the iOS 7 iPad calendar app.

Here’s an open source calendar component called ASDayPicker from Appscape that provides a nice scrolling day picker, inspired by the calendar app’s week view, that requires a minimal amount of space.

With ASDayPicker you can customize the start and end dates that the user can choose from, and the colors of the calendar. A complete example is included.

Here’s an animation from the readme showing ASDayPicker in action:
ASDayPicker

You can find ASDayPicker on Github here.

A nice simple day picker component.



An In-Depth Case Study On The Usage Of Swift Optionals

by Johann at August 29, 2014 12:23 AM


I’ve mentioned a few tutorials on learning the Swift language, most recently an in-depth guide on using Swift arrays, and a tutorial on Swift basics through the creation of a falling blocks game.

Here is an in-depth case study and tutorial from the Apple Swift blog exploring the use of optionals.

In the post the authors create an NSDictionary objectForKeys function in Swift and explain how to use optionals for the situation when a key is not found, and the advantages over NSNull in Objective-C.

You can find the guide on the Swift blog.

A nice straightforward guide on using optionals.



OpenGL

OGLplus 0.51.0 released

August 28, 2014 10:56 PM

OGLplus is a collection of open-source, cross-platform libraries which implement an object facade over the modern OpenGL, OpenAL and EGL C-language APIs. It automates resource and object management and error handling, and makes the use of these libraries in C++ safer and more convenient.

Physics-Based Animation

Fast and Exact Continuous Collision Detection with Bernstein Sign Classification

by christopherbatty at August 28, 2014 08:24 PM

Min Tang, Ruofeng Tong, Zhendong Wang, Dinesh Manocha

We present fast algorithms to perform accurate CCD queries between triangulated models. Our formulation uses properties of the Bernstein basis and Bezier curves and reduces the problem to evaluating signs of polynomials. We present a geometrically exact CCD algorithm based on the exact geometric computation paradigm to perform reliable Boolean collision queries. This algorithm is more than an order of magnitude faster than prior exact algorithms. We evaluate its performance for cloth and FEM simulations on CPUs and GPUs, and highlight the benefits.

Fast and Exact Continuous Collision Detection with Bernstein Sign Classification

iPhone Development Tutorials and Programming Tips

Open Source iOS Library For Easy UIWebView Proxy Requests

by Johann at August 28, 2014 06:40 PM


Some time ago I mentioned WebViewJavascriptBridge, a library from Marcus Westin for easy communication with Objective-C code from a UIWebView, which has gone on to become an extremely popular and widely used library.

Here’s another handy library from Marcus called WebViewProxy, providing easy proxy requests for UIWebViews with a number of advantageous features.

Some of the features of WebViewProxy include:

- Asynchronous or synchronous serving of responses to intercepted requests (unlike cachedResponseForRequest)
- Handy syntax for intercepting all requests, requests from a specific host or URL path, or requests matching an NSPredicate
- Methods for easily responding with image, text, html or JSON data
- Lower level methods for responding with specific HTTP headers and NSData
- Proxying remote requests

You can find WebViewProxy on Github here.

A great library for proxying UIWebView requests without the need to use NSURLProtocol.





Jorge Jimenez

Next Generation Post Processing in Call of Duty: Advanced Warfare

by Jorge Jimenez at August 28, 2014 02:57 PM

Proud and super thrilled to announce that the slides for our talk “Next Generation Post Processing in Call of Duty: Advanced Warfare” in the SIGGRAPH 2014 Advances in Real-Time Rendering in Games course are finally online. Alternatively, you can also download them in the link below.

The temporal stability, filter quality and accuracy of post effects are, in my opinion, among the most striking differences between games and film. Call of Duty: Advanced Warfare’s art direction aimed for photorealism, and generally speaking, post effects are a very sought-after feature for achieving natural-looking, photoreal images. This talk describes the post effects techniques developed for this game, which aim to narrow the gap between film and games in post effects quality. This is, as you can imagine, a real challenge given our very limited time budget (16.6 ms for a 60 fps game).

In particular, the talk describes how scatter-as-you-gather approaches can be leveraged for trying to approximate ground truth algorithms, including the challenges that we had to overcome in order for them to work in a robust and accurate way. Typical practical real-time depth of field and motion blur algorithms only deal with color information, while our approaches also explicitly consider transparency. The core idea is based on the observation that ground truth motion blur and depth of field algorithms (like stochastic rasterization) can be summarized as:

  • Extending color information, according to changes in time (motion blur) and lens position (depth of field).
  • Creating an alpha mask that allows the reconstruction of accurate growing/shrinking gradients on the object silhouettes.

This explicit handling of transparency allows for more realistic depth of field focusing effects, and for more convincing and natural-looking motion blur.

In the slides you can also find our approaches to SSS and bloom, and as a bonus, our take on shadows. I don’t want to spoil the slides too much, but for SSS we are using separable subsurface scattering; for bloom, a pyramidal filter hierarchy that improves temporal stability and robustness; and for shadow mapping, an 8-tap filter with a special per-pixel noise, A.K.A. “Interleaved Gradient Noise”, which together with a spiral-like sampling pattern increases the temporal stability (like dither approaches) while still generating a rich number of penumbra steps (like random approaches).
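For those curious, the “Interleaved Gradient Noise” mentioned above reduces to a single dot/frac expression per pixel. Here is a sketch in C++ using the constants commonly cited from these slides (treat it as a reference sketch, not the exact production code):

#include <cmath>

// Interleaved Gradient Noise: returns a value in [0,1) from the pixel
// position. Neighboring pixels get well-distributed, gradient-like values,
// which is what gives the dither-style temporal stability described above.
static float InterleavedGradientNoise(float pixelX, float pixelY)
{
    // frac(x) for non-negative x
    auto frac = [](float x) { return x - std::floor(x); };
    return frac(52.9829189f * frac(0.06711056f * pixelX + 0.00583715f * pixelY));
}

In the shadow filter, a value like this is typically used to rotate the spiral sampling pattern per pixel, so each pixel takes its 8 taps in a differently rotated order.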

During the actual talk at SIGGRAPH I didn’t have time to cover everything, but as promised every single detail is in the online slides. Note that there are many hidden slides, and a bunch of notes as well; you might miss them if you read the deck in slide show mode.

Hope you like them!

iPhone Development Tutorials and Programming Tips

Tutorial: How To Create A Tracking App With Real-Time On Map Route Drawing

by Johann at August 28, 2014 01:22 PM


Earlier this year I mentioned an open source personal movement tracking app in development called Theseus, inspired by Google’s Latitude app.

Here’s a nice step-by-step tutorial from Matt Luedke on how to create a movement tracking app inspired by the popular Runkeeper app.

The topics within the tutorial include:

- Setting up a Core Data database for location and run data
- Setting up the interface in map views
- Tracking the location using Core Location
- Drawing on the map in real-time to show where the user has been
- Saving the data for later
- Using the simulator to test your app
- Setting up a badge system for tracking progress
- Setting up custom map annotations

This screenshot from the tutorial shows off the end result:

Running App Tutorial

You can find the tutorial in two parts on the Ray Wenderlich blog: Part 1, Part 2.

A nice guide for anyone looking to create a movement tracking app.



Timothy Lottes

HDFury Nano GX: HDMI to VGA

by Timothy Lottes (noreply@blogger.com) at August 28, 2014 09:11 AM

Got my HDFury Nano GX, now have the ability to run my little CRT from the HDMI out of the laptop with the 120Hz screen and a GTX 880M...



The Nano is simply awesome. Can now also run the PS4 on a VGA CRT with this device, way better than the plasma HDTV I've been using. When the device needs to decrypt HDMI signals, it needs the cord for USB power. My little CRT can do 720p60Hz, and the tiny amount of super-sampling from the PS4's downscaler, in combination with the CRT scanline bloom, low latency, and low persistence, creates an awesome visual.

Running from the GTX 880M with NVIDIA drivers in Linux worked right out of the box at 720p60Hz on the little CRT as well. I ran with 720p GPU-downsampled from 1080p to compare apples to apples. Yes, 60Hz flickers with a white screen, but with the typically low intensity content I run, I don't really notice the flicker. Comparing the 120Hz LCD and the CRT at 60Hz is quite interesting. The CRT definitely looks better motion-wise. The 120Hz LCD has no strobe backlight, so it has 4-8x higher persistence than the CRT at 60Hz. Very fast motion is easy to track visually on the CRT. When the eye tracking fails, it still does not look as bad as the 120Hz LCD. The 120Hz LCD is harder to track in fast motion without seeing what looks similar to full-shutter 4-tap motion blur at 30Hz. It is still visually obvious that the frame sits for a bit in time.

In terms of responsiveness, the 120Hz LCD is direct-driven by the 880M. The loop from GPU LUT reprogram to reading the value on a color calibration sensor is just 8 ms. My test application also minimizes input latency by reading controller input directly from CPU memory right before view-dependent rendering. Even with that, the 120Hz LCD definitely feels more responsive. The 8 ms difference between the CRT at 60Hz and the LCD at 120Hz seems to make an important difference.

Thoughts on Motion Blur
Motion blur on the CRT at 60Hz is, in my mind, completely unnecessary. Motion blur on the 120Hz LCD (or even a 60Hz LCD) is something I would not waste perf on any more. However, it does seem as if the entire point of motion blur for "scan-and-hold" displays like LCDs is simply to reduce the confusion that the human visual system is subjected to: specifically, to break the hard edges of objects in motion, reducing the blur confusion the mind gets from the full-persistence "hold". It seems that if motion blur is used at 60Hz and above on an LCD, it is much better to just limit it to a very short size with no banding.

Nano and NVIDIA Drivers in X
Had to manually adjust the xorg.conf to get anything outside 720p60Hz to work. I disabled the driver's use of the EDID data and went back to classic horizontal and vertical ranges. Noticed a bunch of issues with the NVIDIA drivers and/or hardware:

(a.) Down-sampling at scan-out has some limits; for example, going from 1920x1080 to 640x480 won't work. However, scaling only height or width, with virtual panning in the unscaled direction, does work. This implies that either the driver has a bug, or more likely that the hardware does not have enough on-chip line buffer for the scaler to do such high reductions.

(b.) Up-sampling at scan-out from NVIDIA GPUs is completely useless because they all introduce ringing (hopefully some day they will fix that; I haven't tried Maxwell GPUs yet).

(c.) Metamodes won't allow modes under 31KHz horizontal frequency. Instead it forces the dead ugly "doublescan".

(d.) Skipping metamodes and falling back to modelines has a bug where small resolutions automatically extend out the virtual panning resolution in width only, but have broken panning. I still have not found a workaround for this. The modelines even under 31KHz seem to work, however (you must use "ModeValidation" options to turn off the safety checks).
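For reference, the sort of xorg.conf fragments involved look roughly like the following. This is a hedged sketch: UseEDID and ModeValidation are real NVIDIA driver options, but the exact validation tokens and the 640x480 @ 85Hz VESA timings below should be double-checked against your driver version and your display's limits.

Section "Monitor"
    Identifier  "CRT"
    HorizSync   30.0 - 55.0
    VertRefresh 50.0 - 120.0
    # VESA DMT-style timing for 640x480 @ 85 Hz (verify against your CRT)
    Modeline "640x480_85" 36.00 640 696 752 832 480 481 484 509 -hsync -vsync
EndSection

Section "Screen"
    Identifier "Screen0"
    Monitor    "CRT"
    Option     "UseEDID" "false"
    # Relax the driver's mode checks so low-frequency modelines are accepted
    Option     "ModeValidation" "NoEdidModes, NoMaxPClkCheck, NoHorizSyncCheck, NoVertRefreshCheck"
EndSection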

The Nano does work with frequencies outside 60Hz: I managed to get 85Hz going at 640x480. It seems as if the Nano also supports low resolutions and arcade horizontal frequencies (a modeline with 320x240 around 60Hz worked). Unfortunately I'm somewhat limited in testing by the limits of the CRT. It also seemed as if I could get the 880M to actually use an arcade horizontal frequency, but I don't have a way to validate this yet. Won't be sure until I eventually grab another converter (VGA to component supporting 240p) and try driving an NTSC TV with 240p like old consoles did.



iPhone Development Tutorials and Programming Tips

Tool: Xcode Plugin That Adds Filtering To The Debug Console (With Regex Support)

by Johann at August 28, 2014 06:19 AM


I’ve mentioned a number of Xcode plugins, and early last year I mentioned a plugin for color coding your debugging console text that helps you distinguish what you’re looking for in the console.

Here’s another Xcode plugin that enhances the debugging console, called MCLog from Michael Chen, providing dead simple, fast console output filtering based on a search, a very nice upgrade from the built-in searching.

MCLog adds a search box to the bottom right-hand corner of the debug console, and can even handle regular expressions. The filtering works on the console output in real time, which is very nice when you have a lot of unwanted statements coming out.

Here is an animation from the readme showing MCLog in action:

MCLog

You can find MCLog on Github here.

A very useful enhancement to the debugging console.



Gamasutra Feature Articles

Selling a game before it's 'done': Tips and insight for paid alphas

August 28, 2014 04:00 AM

These days, some of the most successful games out there aren't even out yet. ...

Game From Scratch

LibGDX Tutorial 13: Physics with Box2D Part 1: A Basic Physics Simulation

by Mike@gamefromscratch.com at August 27, 2014 10:31 PM

 

Today we are going to look at implementing physics in LibGDX.  This technically isn’t part of LibGDX itself, but is instead implemented as an extension.  The physics engine used in LibGDX is the popular Box2D physics system, a library that has been ported to basically every single platform and language ever invented, or so it seems.  We are going to cover how to implement Box2D physics in your 2D LibGDX game.  This is a complex subject so it will require multiple parts.

 

If you’ve never used a Physics Engine before, we should start with a basic overview of what they do and how they work.  Essentially a physics engine takes scene information that you provide, then calculates “realistic” movement using physics calculations.  It goes something like this:

  • you describe all of the physics entities in your world to the physics engine, including bounding volumes, mass, velocity, etc
  • you tell the engine to update, either per frame or on some other interval
  • the physics engine calculates how the world has changed: what’s collided with what, how much gravity affects each item, current speed and position, etc
  • you take the results of the physics simulation and update your world accordingly.

 

Don’t worry, we will look at exactly how in a moment.

 

First we need to talk for a moment about creating your project.  Since Box2D is now implemented as an extension ( an optional LibGDX component ), you need to add it either manually or when you create your initial project.  Adding a library to an existing project is IDE dependent, so I am instead going to look at adding it during project creation… and totally not just because it’s really easy that way.

 

When you create your LibGDX project using the Project Generator, you simply specify which extensions you wish to include and Gradle does the rest.  In this case you simply check the box next to Box2d when generating your project like normal:

 

[Screenshot: the Box2d checkbox in the project generator]

 

… and you are done.  You may be asking, hey, what about Box2dlights?  Nope, you don’t currently need that one.  Box2dlights is a project for simulating lighting and shadows based on the Box2d physics engine.  You may notice in that list another entry named Bullet.  Bullet is another physics engine, although more commonly geared towards 3D games; possibly more on that at a later date.  Just be aware that if you are working in 3D, Box2d isn’t of much use to you, but there are alternatives.

 

Ok, now that we have a properly configured project, let’s take a look at a very basic physics simulation.  We are simply going to take the default LibGDX graphic and apply gravity to it, about the simplest simulation you can make that actually does something.  Code time!

 

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.physics.box2d.*;

public class Physics1 extends ApplicationAdapter {
    SpriteBatch batch;
    Sprite sprite;
    Texture img;
    World world;
    Body body;

    @Override
    public void create() {

        batch = new SpriteBatch();
        // We will use the default LibGDX logo for this example, but we need a
        // sprite since it's going to move
        img = new Texture("badlogic.jpg");
        sprite = new Sprite(img);

        // Center the sprite in the top/middle of the screen
        sprite.setPosition(Gdx.graphics.getWidth() / 2 - sprite.getWidth() / 2,
                Gdx.graphics.getHeight() / 2);

        // Create a physics world, the heart of the simulation.  The Vector
        // passed in is gravity
        world = new World(new Vector2(0, -98f), true);

        // Now create a BodyDefinition.  This defines the physics object's type
        // and position in the simulation
        BodyDef bodyDef = new BodyDef();
        bodyDef.type = BodyDef.BodyType.DynamicBody;
        // We are going to use 1 to 1 dimensions.  Meaning 1 in the physics engine
        // is 1 pixel
        // Set our body to the same position as our sprite
        bodyDef.position.set(sprite.getX(), sprite.getY());

        // Create a body in the world using our definition
        body = world.createBody(bodyDef);

        // Now define the dimensions of the physics shape
        PolygonShape shape = new PolygonShape();
        // We are a box, so this makes sense, no?
        // Basically set the physics polygon to a box with the same dimensions
        // as our sprite.  Note that setAsBox takes half-width and half-height,
        // hence the divide by 2
        shape.setAsBox(sprite.getWidth() / 2, sprite.getHeight() / 2);

        // FixtureDef is a confusing expression for physical properties
        // Basically this is where, in addition to defining the shape of the body,
        // you also define its properties like density, restitution and others
        // we will see shortly
        // If you are wondering, density and area are used to calculate overall mass
        FixtureDef fixtureDef = new FixtureDef();
        fixtureDef.shape = shape;
        fixtureDef.density = 1f;

        Fixture fixture = body.createFixture(fixtureDef);

        // Shape is the only disposable of the lot, so get rid of it
        shape.dispose();
    }

    @Override
    public void render() {

        // Advance the world, by the amount of time that has elapsed since the
        // last frame
        // Generally in a real game, don't do this in the render loop, as you are
        // tying the physics update rate to the frame rate, and vice versa
        world.step(Gdx.graphics.getDeltaTime(), 6, 2);

        // Now update the sprite position according to its now-updated physics body
        sprite.setPosition(body.getPosition().x, body.getPosition().y);

        // You know the rest...
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.draw(sprite, sprite.getX(), sprite.getY());
        batch.end();
    }

    @Override
    public void dispose() {
        // Hey, I actually did some clean up in a code sample!
        img.dispose();
        world.dispose();
    }
}

 

The program running:

 

[Animation: the LibGDX logo falling under gravity]

 

What's going on here is mostly explained in the comments, but I will give a simpler overview in English.  Basically, when using a physics engine, you create a physical representation for each corresponding object in your game.  In this case we created a physics object ( Body ) that goes along with our sprite.  It’s important to realize there is no actual relationship between these two objects.  There are a couple of components that go into a physics body: BodyDef, which defines what type of body it is ( more on this later; for now realize DynamicBody means a body that is updated and capable of movement ), and FixtureDef, which defines the shape and physical properties of the Body.  Of course, there is also the World, which is the actual physics simulation.

 

So, basically we created a Body, which is the physical representation of our Sprite in the physics simulation.  Then in render() we call the incredibly important step() method.  Step is what advances the physics simulation… basically think of it as the play button.  The physics engine then calculates all the various mathematics that have changed since the last call to step.  The first value we pass in is the amount of time that has elapsed since the last update.  The next two values control the amount of accuracy in contact/joint calculations for velocity and position.  Basically, the higher the values the more accurate your physics simulation will be, but the more CPU intensive as well.  Why 6 and 2?  ‘cause that’s what the LibGDX site recommends and that works for me.  At the end of the day these are values you can tweak for your individual game.  The one other critical takeaway here is that we update the sprite’s position to match the newly updated body’s position.  Once again, in this example, there is no actual link between a physics body and a sprite, so you have to do it yourself.

 

There you go, the world’s simplest physics simulation.  There are a few quick topics to discuss before we move on.  First, units.

 

This is an important and sometimes tricky concept to get your head around with physics systems.  What does 1 mean?  One what?  The answer is: whatever the hell you want it to be, just be consistent about it!  In this particular case I used pixels, therefore 1 unit in the physics engine represents 1 pixel on the screen.  So when I said gravity is (0,-98), that means gravity is applied at a rate of –98 pixels along the y axis per second.  Just as commonly, 1 in the physics engine could be meters, feet, kilometers, etc… then you use a custom ratio for translating to and from screen coordinates.  Most physics systems, Box2D included, really don’t like you mixing your scales, however.  For example, if you have a universe simulation where 1 == 100 miles, and then you want to calculate the movement of an ant at 0.0000001 x 100 miles per hour, you will break the simulation, hard.  Find a scale that works well with the majority of your game and stick with it.  Extremely large and extremely small values within that simulation will cause problems.

 

Finally, a bit of a warning about how I implemented this demo, and hopefully something I will cover properly at a later date.  In this case I updated the physics system in the render loop.  This is a possibility but generally wasteful.  It’s fairly common to run your physics simulation at a fixed rate ( 30Hz and 60Hz being two of the most common, but lower is also a possibility if you are processing constrained ) and your render loop as fast as possible, as sketched below.
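A common way to decouple the two is a fixed-timestep accumulator.  Here is a minimal sketch (in C++ for brevity; the same structure maps directly onto a LibGDX render() method, and all names here are mine, not LibGDX API):

#include <algorithm>

const float PHYSICS_STEP = 1.0f / 60.0f;   // fixed 60Hz simulation rate
float accumulator = 0.0f;

// Called once per rendered frame with that frame's elapsed time.
void update(float deltaTime)
{
    // Clamp huge deltas (breakpoints, window drags) so the simulation
    // doesn't spiral trying to catch up.
    accumulator += std::min(deltaTime, 0.25f);

    // Step the physics zero or more times at the fixed rate...
    while (accumulator >= PHYSICS_STEP) {
        // world.step(PHYSICS_STEP, 6, 2);  // same call as in the demo above
        accumulator -= PHYSICS_STEP;
    }

    // ...then render once, as fast as the frame rate allows.
}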

 

In the next part we will give our object something to collide with, stay tuned.



iPhone Development Tutorials and Programming Tips

Traffic Ranked iOS App Review Site Listing Updated

by Johann at August 27, 2014 09:31 PM


The traffic ranked iOS app review site listing has once again been updated. This update adds a number of new sites, and many dead sites have been removed. In addition, submission and traffic links have been updated.

You can find the listing here: app review sites.

Hope you enjoy the list!



Bitsquid

Building a Data-Oriented Entity System (part 1)

by Niklas (noreply@blogger.com) at August 27, 2014 08:31 PM

We have recently started to look into adding an entity/component system to the Bitsquid engine.

You may be surprised to learn that the Bitsquid engine isn't already component based. But actually there has never been a great need for that. Since the gameplay code is usually written in Lua rather than C++, we don't run into the common problems with deep and convoluted inheritance structures that prompt people to move to component based designs. Instead, inheritance is used very sparingly in the engine.

But as we are expanding our plugin system, we need a way for C++ plugins to bestow game objects with new functionalities and capabilities. This makes a component architecture a very natural fit.

Entities and Components

In the Bitsquid engine, we always strive to keep systems decoupled and data-oriented and we want to use the same approach for the component architecture. So, in our system, entities are not heap allocated objects. Instead, an entity is just an integer, a unique ID identifying a particular entity:

struct Entity
{
    unsigned id;
};

A special class, the EntityManager, keeps track of the entities that are alive.

A component is not an object either. Instead, a component is something that is handled by a ComponentManager. The task of a ComponentManager is to associate entities with components. For example, the DebugNameComponentManager can be used to associate debug names with entities:

class DebugNameComponentManager
{
public:
    void set_debug_name(Entity e, const char *name);
    const char *debug_name(Entity e) const;
};

Two things are interesting to note about this decoupled design.

First, there is no DebugNameComponent class for handling individual debug name components in this design. That is not needed, because all component data is managed internally by the DebugNameComponentManager. The manager could decide to use heap allocated DebugNameComponent objects internally. But it is not forced to. And usually it is much more efficient to lay out the data differently. For example, as a structure of arrays in a single continuous buffer. In a future post, I'll show some examples of this.

Second, there is no place where we keep a list of all the components that an entity has. It is only the DebugNameComponentManager that knows whether an entity has a debug name component or not, and if you want to talk about that component you have to do it through the DebugNameComponentManager. There is no such thing as an "abstract" component.

So what components an entity has is only defined by what has been registered with the different component managers in the game. And plugins may extend the system with new component managers.

It is up to the component manager to decide if it makes sense for an entity to have multiple components of its type. For example, the DebugNameComponentManager only allows a single debug name to be associated with an entity. But the MeshComponentManager allows an entity to have multiple meshes.

The manager is responsible for performing any computations necessary to update the components. Updates are done one component manager at a time, not one entity at a time, and when a component manager is updated it updates all its components in one go. This means that common calculations can be shared and that all the data is hot in the caches. It also makes the update easier to profile, multithread or offload to an external processor. All this translates to huge performance benefits.
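To make that concrete, here is a hedged sketch (names and layout are mine, not the engine's) of what a component manager's batched, structure-of-arrays update can look like:

#include <vector>

// Hypothetical manager: entry i across all arrays is one component,
// packed contiguously so the update streams through memory.
struct PointMassComponentManager
{
    std::vector<float> position;
    std::vector<float> velocity;
    std::vector<float> acceleration;

    // Update every component of this type in one pass. No per-entity
    // virtual calls, no pointer chasing; the data stays hot in cache.
    void update(float dt)
    {
        for (size_t i = 0; i < position.size(); ++i) {
            velocity[i] += acceleration[i] * dt;
            position[i] += velocity[i] * dt;
        }
    }
};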

The EntityManager

We want to be able to use the entity ID as a weak reference. I.e., given an entity ID we want to be able to tell if it refers to a living entity or not.

Having a weak reference system is important, because if we only have strong references then if the entity dies we must notify everybody that might possibly hold a reference to the entity so that they can delete it. This is both costly and complicated. Especially since references might be held by other threads or by Lua code.

To enable weak referencing, we use the EntityManager class to keep track of all live entities. The simplest way of doing that would be to just use a set:

class EntityManager
{
    HashSet<Entity> _entities;
    Entity _next;

public:
    Entity create()
    {
        ++_next.id;
        while (alive(_next))
            ++_next.id;
        _entities.insert(_next);
        return _next;
    }

    bool alive(Entity e)
    {
        return _entities.has(e);
    }

    void destroy(Entity e)
    {
        _entities.erase(e);
    }
};

This is pretty good, but since we expect the alive() function to be a central piece of code that gets called a lot, we want something that runs even faster than a set.

We can change this to a simple array lookup by splitting the entity ID into an index and a generation part:

const unsigned ENTITY_INDEX_BITS = 22;
const unsigned ENTITY_INDEX_MASK = (1<<ENTITY_INDEX_BITS)-1;

const unsigned ENTITY_GENERATION_BITS = 8;
const unsigned ENTITY_GENERATION_MASK = (1<<ENTITY_GENERATION_BITS)-1;

struct Entity
{
    unsigned id;

    unsigned index() const {return id & ENTITY_INDEX_MASK;}
    unsigned generation() const {return (id >> ENTITY_INDEX_BITS) & ENTITY_GENERATION_MASK;}
};

The idea here is that the index part directly gives us the index of the entity in a lookup array. The generation part is used to distinguish entities created at the same index slot. As we create and destroy entities we will at some point have to reuse an index in the array. By changing the generation value when that happens we ensure that we still get a unique ID.

In our system we are restricted to using 30 bits for the entity ID. The reason for this is that we need to fit it in a 32 bit pointer in order to be able to use a Lua light userdata to store it. We also need to steal two bits from this pointer in order to distinguish it from other types of light userdata that we use in the engine.

If you didn't have this restriction, or if you only targeted 64-bit platforms it would probably be a good idea to use some more bits for the ID.

We've split up our 30 bits into 22 bits for the index and 8 bits for the generation. This means that we support a maximum of 4 million simultaneous entities. It also means that we can only distinguish between 256 different entities created at the same index slot. If more than 256 entities are created at the same index slot, the generation value will wrap around and our new entity will get the same ID as an old entity.

To prevent that from happening too often we need to make sure that we don't reuse the same index slot too often. There are various possible ways of doing that. Our solution is to put recycled indices in a queue and only reuse values from that queue when it contains at least MINIMUM_FREE_INDICES = 1024 items. Since we have 256 generations, an ID will never reappear until its index has run 256 laps through the queue. So this means that you must create and destroy at least 256 * 1024 entities until an ID can reappear. This seems reasonably safe, but if you want you can play with the numbers to get different margins. For example, if you don't need 4 M entities, you can steal some bits from index and give to generation.

A nice thing about only having 8 bits in generation is that we just need 8 bits per entity in our lookup array. This saves memory, but also gives us better performance, since we will fit more in the cache. With this solution, the code for the EntityManager becomes:

class EntityManager
{
    Array<unsigned char> _generation;
    Deque<unsigned> _free_indices;

public:
    Entity create()
    {
        unsigned idx;
        if (_free_indices.size() > MINIMUM_FREE_INDICES) {
            idx = _free_indices.front();
            _free_indices.pop_front();
        } else {
            _generation.push_back(0);
            idx = _generation.size() - 1;
            XENSURE(idx < (1 << ENTITY_INDEX_BITS));
        }
        return make_entity(idx, _generation[idx]);
    }

    bool alive(Entity e) const
    {
        return _generation[e.index()] == e.generation();
    }

    void destroy(Entity e)
    {
        const unsigned idx = e.index();
        ++_generation[idx];
        _free_indices.push_back(idx);
    }
};
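make_entity() isn't spelled out in the post; given the bit layout above, it presumably just packs the generation into the bits above the index. A sketch consistent with the earlier definitions:

Entity make_entity(unsigned idx, unsigned char generation)
{
    Entity e;
    e.id = ((unsigned)generation << ENTITY_INDEX_BITS) | idx;
    return e;
}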

In the next post, we will take a look at the design of the component classes.

Geeks3D Forums

NVIDIA R340.76 OpenGL 4.5 Driver for Windows

August 27, 2014 06:34 PM

Direct Downloads:

- R340.76 WinXP 32-bit

- https://developer.nvidia.com/sites/default/files/akamai/opengl45/windows/340.76_g...



OpenGL Extensions Viewer 4.27 for Windows

August 27, 2014 05:43 PM

Download from here

No changelog available, but adds OpenGL 4.5 support



OpenGL

Delphi / Pascal OpenGL header translation now supports OpenGL 4.5

August 27, 2014 03:47 PM

The Delphi/Pascal OpenGL header translation of the Delphi OpenGL Community has been updated to the latest 4.5 release of OpenGL. The header is an all-in-one unit, and supports all OpenGL versions up to 4.5 (core and extensions). It can be used with Delphi and Free Pascal and supports a wide range of platforms, including 32- and 64-bit Windows, Linux and Mac OS X. The current download and changelog can be found in the official bitbucket repository.

cbloom rants

08-27-14 - LZ Match Length Redundancy

by cbloom (noreply@blogger.com) at August 27, 2014 03:21 PM

A quick note on something that does not work.

I've written before about the redundancy in LZ77 codes. ( for example ). In particular the issue I had a look at was :

Any time you code a match, you know that it must be longer than any possible match at lower offsets.

e.g. you won't send a match of length 3 to offset 30514 if you could have sent offset 1073 instead. You always choose the lowest possible offset that gives you a given match length.

The easy way to exploit this is to send match lengths as the delta from the longest match length available at a lower offset. You only need to send the excess, and you know the excess is greater than zero. So if you have an ML of 3 at offset 1073, and you find a match of length 4 at offset 30514, then you send {30514, +1}.

Implementing this in the encoder is straightforward: if you walk your matches in order from lowest offset to highest offset, then you know the current best match length as you go, as sketched below.
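In sketch form (C++, with hypothetical helper names, not code from any real codebase):

// Hypothetical helpers: entropy-coded outputs and a window search.
int longest_match_at_lower_offset(unsigned offset);  // decoder can run the same search
void send_offset(unsigned offset);
void send_excess(int excess);

// Code the chosen match {offset, len} as (offset, excess over lower offsets).
void code_match(unsigned offset, int len)
{
    // The decoder recomputes lower_len from its copy of the window,
    // so it never has to be transmitted.
    int lower_len = longest_match_at_lower_offset(offset);
    // By the argument above len > lower_len always holds, so the
    // excess is >= 1 and could even be coded as (excess - 1).
    send_offset(offset);
    send_excess(len - lower_len);
}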

The same principle applies to the "last offsets"; you don't send LO2 if you could send LO0 at the same length, so the higher-index LO matches must be of greater length. And the same thing applies to ROLZ.

I tried this in all 3 cases (normal LZ matches, LO matches, ROLZ). No win. Not even a tiny one; the gain was essentially zero.

Part of the problem is that match lengths are just not where the redundancy is. But I assume that part of what's happening is that match lengths have patterns that the delta-ing ruins. For example binary files will have patterns of 4 or 8 long matches, or in an LZMA-like coder you'll have certain patterns show up, like at certain pos&3 intervals after a literal you get a 3-long match, etc.

I tried some obvious ideas, like using the next-lowest length as part of the context for coding the delta length. In theory you should be able to recapture something like "a next-lowest of 3 predicts a delta of 1" in places where an ML of 4 is likely. But I couldn't find a win there.

I believe this is a dead end. Even if you could find a small win, it's too slow in the decoder to be worth it.



07-14-14 - Suffix-Trie Coded LZ

by cbloom (noreply@blogger.com) at August 27, 2014 03:21 PM

Idea : Suffix-Trie Coded LZ :

You are doing LZ77-style coding (eg. matches in the prior stream or literals), but send the matches in a different way.

You have a Suffix Trie built on the prior stream. To find the longest match for a normal LZ77 you would take the current string to code and look it up by walking it down the Trie. When you reach the point of deepest match, you see what string in the prior stream made that node in the Trie, and send the position of that string as an offset.

Essentially what the offset does is encode a position in the tree.

But there are many redundancies in the normal LZ77 scheme. For example if you only encode a match of length 3, then the offsets that point to "abcd.." and "abce.." are equivalent, and shouldn't be distinguished by the encoding. The fact that they both take up space in the numerical offset is a waste of bits. You only want to distinguish offsets that actually point at something different for the current match length.

The idea in a nutshell is that instead of sending an offset, you send the descent into the trie to find that string.

At each node, first send a single bit for whether the next byte in the string matches any of the children. (This is equivalent to a PPM escape.) If not, then you're done matching. If you like, this is like sending the match length in unary: 1 bits as long as you're in a node that has a matching child, then a 0 bit when you run out of matches. (Alternatively you could send the entire match length up front with a different scheme.)

When one of the children matches, you must encode which one. This is just an encoding of the next character, selected from the previously seen characters in this context. If all offsets are equally likely (they aren't) then the correct thing is just Probability(child) = Trie_Leaf_Count(child) , because the number of leaves under a node is the number of times we've seen this substring in the past.

(More generally the probability of offsets is not uniform, so you should scale the probability of each child using some modeling of the offsets. Accumulate P(child) += P(offset) for each offset under a child. Ugly. This is unfortunately very important on binary data where the 4-8-struct offset patterns are very strong.)

Ignoring that aside - the big coding gain is that we are no longer uselessly distinguishing offsets that only differ at higher match length, AND instead of just wasting those bits, we instead use them to make those offsets code smaller.

For example : say we've matched "ab" so far. The previous stream contains "abcd","abce","abcf", and "abq". Pretend that somehow those are the only strings. Normal LZ77 needs 2 bits to select from them - but if our match len is only 3 that's a big waste. This way we would say the next char in the match can either be "c" or "q" and the probabilities are 3/4 and 1/4 respectively. So if the length-3 match is a "c" we send that selection in only log2(4/3) bits = 0.415

And the astute reader will already be thinking: this is just PPM! In fact it is exactly a kind of PPM, in which you start out at low order (min match length, typically 3 or so) and your order gets deeper as you match. When you escape you jump back to order-3 coding, and if that escapes it jumps back to order 0 (literal).

There are several major problems :

1. Decoding is slow because you have to maintain the Suffix Trie for both encode and decode. You lose the simple LZ77 decoder.

2. Modern LZ's benefit a lot from modeling the numerical value of the offset in binary files. That's ugly & hard to do in this framework. This method is a lot simpler on text-like data that doesn't have numerical offset patterns.

3. It's not Pareto. If you're doing all this work you may as well just do PPM.

In any case it's theoretically interesting as an ideal of how you would encode LZ offsets if you could.

(and yes I know there have been many similar ideas in the past; LZFG of course, and Langdon's great LZ-PPM equivalence proof)

07-03-14 - Oodle 1.41 Comparison Charts

by cbloom (noreply@blogger.com) at August 27, 2014 03:21 PM

I did some work for Oodle 1.41 on speeding up compressors. Mainly the Fast/VeryFast encoders got faster. I also took a pass at trying to make sure the various options were "Pareto", that is the best possible space/speed tradeoff. I had some options that were off the curve, like much slower than they needed to be, or just worse with no benefit, so it was just a mistake to use them (LZNib Normal was particularly bad).

Oodle 1.40 got the new LZA compressor. LZA is a very high compression arithmetic-coded LZ. The goal of LZA is as much compression as possible while retaining somewhat reasonable (or at least tolerable) decode speeds. My belief is that LZA should be used for internet distribution, but not for runtime loading.

The charts :

compression ratio : (raw/comp ratio; higher is better)

compressor  VeryFast  Fast    Normal  Optimal1  Optimal2
LZA         2.362     2.508   2.541   2.645     2.698
LZHLW       2.161     2.299   2.33    2.352     2.432
LZH         1.901     1.979   2.039   2.121     2.134
LZNIB       1.727     1.884   1.853   2.079     2.079
LZBLW       1.636     1.761   1.833   1.873     1.873
LZB16       1.481     1.571   1.654   1.674     1.674

lzmamax  : 2.665 to 1
lzmafast : 2.314 to 1
zlib9    : 1.883 to 1
zlib5    : 1.871 to 1
lz4hc    : 1.667 to 1
lz4fast  : 1.464 to 1

encode speed : (mb/s)

compressor  VeryFast  Fast    Normal  Optimal1  Optimal2
LZA         23.05     12.7    6.27    1.54      1.07
LZHLW       59.67     19.16   7.21    4.67      1.96
LZH         76.08     17.08   11.7    0.83      0.46
LZNIB       182.14    43.87   10.76   0.51      0.51
LZBLW       246.83    49.67   1.62    1.61      1.61
LZB16       511.36    107.11  36.98   4.02      4.02

lzmamax  : 5.55
lzmafast : 11.08
zlib9    : 4.86
zlib5    : 25.23
lz4hc    : 32.32
lz4fast  : 606.37

decode speed : (mb/s)

compressor  VeryFast  Fast     Normal   Optimal1  Optimal2
LZA         34.93     37.15    37.76    37.48     37.81
LZHLW       363.94    385.85   384.83   391.28    388.4
LZH         357.62    392.35   397.72   387.28    383.38
LZNIB       923.66    987.11   903.21   1195.66   1194.75
LZBLW       2545.35   2495.37  2465.99  2514.48   2515.25
LZB16       2752.65   2598.69  2687.85  2768.34   2765.92

lzmamax  : 42.17
lzmafast : 40.22
zlib9    : 308.93
zlib5    : 302.53
lz4hc    : 2363.75
lz4fast  : 2288.58

While working on LZA I found some encoder speed wins that I ported back to LZHLW (mainly in Fast and VeryFast). A big one is an early-out for last offsets: when I get a last-offset match longer than N, I just take it and don't even look for non-last-offset matches. This is done in the non-Optimal modes, and surprisingly hurts compression almost not at all while helping speed a lot.

Four of the compressors are now in pretty good shape (LZA, LZHLW, LZNIB, and LZB16). There are a few minor issues to fix someday (someday = never unless the need arises) :

- LZA decoder should be a little faster (currently lags LZMA a tiny bit).
- LZA Optimal1 would be better with a semi-greedy match finder like MMC (LZMA is much faster to encode than me at the same compression level; perhaps a different optimal parse scheme is needed too).
- LZA Optimal2 should seed with multi-parse.
- LZHLW Optimal could be faster.
- LZNIB Normal needs much better match selection heuristics; the ones I have are really just not right.
- LZNIB Optimal should be faster; needs a better way to do threshold match finding.
- LZB16 Optimal should be faster; needs a better 64k-sliding-window match finder.

The LZH and LZBLW compressors are a bit neglected, and you can see they still have some anomalies in the space/speed tradeoff curve; the Normal encode speed for LZBLW, for example, is so bad that you may as well just use Optimal. They're put aside until there's a reason to fix them.


If another game developer tells me that "zlib is a great compromise and you probably can't beat it by much" I'm going to murder them. For the record :

zlib -9 :
4.86 MB/sec to encode
308.93 MB/sec to decode
1.883 to 1 compression

LZHLW Optimal1 :
4.67 MB/sec to encode
391.28 MB/sec to decode
2.352 to 1 compression
come on! The encoder is slow, the decoder is slow, and it compresses poorly.

LZMA in very high compression settings is a good tradeoff. In its low compression fast modes, it's very poor. zlib has the same flaw - they just don't have good encoders for fast compression modes.

LZ4 I have no issues with; in its designed zone it offers excellent tradeoffs.


In most cases the encoder implementations are :


VeryFast =
cache table match finder
single hash
greedy parse

Fast = 
cache table match finder
hash with ways
second hash
lazy parse
very simple heuristic decisions

Normal =
varies a lot for the different compressors
generally something like a hash-link match finder
or a cache table with more ways
more lazy eval
more careful "is match better" heuristics

Optimal =
exact match finder (SuffixTrie or similar)
cost-based match decision, not heuristic
backward exact parse of LZB16
all others have "last offset" so require an approximate forward parse

I'm mostly ripping out my Hash->Link match finders and replacing them with N-way cache tables. While the cache table is slightly worse for compression, it's a big speed win, which makes it better on the space-speed tradeoff spectrum.
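For concreteness, an N-way cache table match finder in this sense is roughly the following (a sketch with my own names, not Oodle code): a direct-mapped hash table where each row caches the last few positions, and old entries simply fall out instead of being chained.

#include <cstring>

const int WAYS = 4;           // N ways per hash row
const int TABLE_BITS = 15;

// rows[h][w] = a recent buffer position whose leading bytes hashed to h.
static unsigned rows[1 << TABLE_BITS][WAYS];

// Hash of the 4 bytes at p (any decent byte hash works here).
static unsigned hash4(const unsigned char *p)
{
    unsigned x;
    std::memcpy(&x, p, 4);
    return (x * 2654435761u) >> (32 - TABLE_BITS);
}

static void cache_table_insert(const unsigned char *buf, unsigned pos)
{
    unsigned *row = rows[hash4(buf + pos)];
    // Shift the ways down, dropping the oldest position; a hash->link
    // chain would keep it, buying a little compression for a lot of speed.
    for (int w = WAYS - 1; w > 0; --w)
        row[w] = row[w - 1];
    row[0] = pos;
}

// Match finding then just verifies the (at most) WAYS candidate positions
// in the row for the current hash and takes the longest real match.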

I don't have a good solution for windowed optimal-parse match finding (such as LZB16 Optimal). I'm currently using overlapped suffix arrays, but that's not awesome. A sliding-window SuffixTrie is an engineering nightmare but would probably be good for that. MMC is a pretty good compromise in practice, though it's not exact and does have degenerate-case breakdowns.


LZB16's encode speed is very sensitive to the hash table size.


-h12
24,700,820 ->16,944,823 =  5.488 bpb =  1.458 to 1
encode           : 0.045 seconds, 161.75 b/kc, rate= 550.51 mb/s
decode           : 0.009 seconds, 849.04 b/kc, rate= 2889.66 mb/s

-h13
24,700,820 ->16,682,108 =  5.403 bpb =  1.481 to 1
encode           : 0.049 seconds, 148.08 b/kc, rate= 503.97 mb/s
decode           : 0.009 seconds, 827.85 b/kc, rate= 2817.56 mb/s

-h14
24,700,820 ->16,491,675 =  5.341 bpb =  1.498 to 1
encode           : 0.055 seconds, 133.07 b/kc, rate= 452.89 mb/s
decode           : 0.009 seconds, 812.73 b/kc, rate= 2766.10 mb/s

-h15
24,700,820 ->16,409,957 =  5.315 bpb =  1.505 to 1
encode           : 0.064 seconds, 113.23 b/kc, rate= 385.37 mb/s
decode           : 0.009 seconds, 802.46 b/kc, rate= 2731.13 mb/s

If you accidentally set it too big you get a huge drop-off in speed. (The charts above show -h13 ; -h12 is more comparable to lz4fast (which was built with HASH_LOG=12)).

I stole an idea from LZ4 that helped the encoder speed a lot. (lz4fast is very good!) Instead of doing the basic loop like :


while(!eof)
{
  if ( match )
    output match
  else
    output literal
}

instead do :

while(!eof)
{
  while( ! match )
  {
    output literal
  }

  output match
}

This lets you make a tight loop just for outputting literals. It makes it clearer to you as a programmer what's happening in that loop, and you can save work and simplify things. It winds up being a lot faster. (I've been doing the same thing in my decoders forever but hadn't done it in the encoder.)

My LZB16 is very slightly more complex to encode than LZ4, because I do some things that let me have a faster decoder. For example my normal matches are all no-overlap, and I hide the overlap matches in the excess-match-length branch.

iPhone Development Tutorials and Programming Tips

Open Source iOS Component Providing A Beautiful Long-Tap Pop-Out Share Menu

by Johann at August 27, 2014 01:04 PM


Previously I mentioned a few interesting components for creating pop-out widget style selection menus, such as DLWidgetMenu, which features customizable layouts, and most recently GHWidgetMenu, which allows the menu to vary based on the context it is selected from.

Here’s an open source component from Yong Li called YLLongTapShare for creating beautiful pop-out selection menus invoked with a long tap.

YLLongTapShare allows you to specify the menu selections and icons, has callbacks for the menu item selected, and provides a nice “done” image for when a choice is made.

These screenshots of the example show off the neat design.
YLLongTapShare-1, YLLongTapShare-2, YLLongTapShare-3

You can find YLLongTapShare on Github here.

You can find the design that inspired YLLongTapShare on Dribbble here.

Note: Before someone asks – you will need to implement the desired sharing functionality yourself.

A nice implementation of a beautiful menu design.





iOS Tutorial And Code Example On Creating Tinder Style Swipe-To-Choose Views

by Johann at August 27, 2014 06:18 AM


Earlier this year I mentioned a component for implementing Tinder style swipe-to-choose cards called MDCSwipeToChoose.

Here’s a nice tutorial from Nimrod Gutman that goes step-by-step through the process of creating cards with cool Tinder style dragging animations.

Specifically the tutorial covers:

- Setting up a draggable image view
- Modifying the rotation and scale while the image is being dragged to match the effect seen in the Tinder app
- Adding overlay images as the image is dragged
- Performing the selected action

You can find the tutorial over on Nimrod Gutman’s blog.

Richard Kim has put together a nice example called TinderSimpleSwipeCards, based on Nimrod’s tutorial, adding in a number of neat customizations. Here’s an animation showing Richard’s example in action:

TinderSimpleSwipeCards

You can find TinderSimpleSwipeCards on Github here.

A nice tutorial and example on implementing the swipe to choose feature.



Gamasutra Feature Articles

Game Design Deep Dive: Amnesia's 'Sanity Meter'

August 27, 2014 04:00 AM

Amnesia: The Dark Descent's "sanity meter" feature was born out of darkness. Creative director Thomas Grip explains the evolution of the meter's design. ...

Timothy Lottes

Thoughts on Display Color Calibration for Games

by Timothy Lottes (noreply@blogger.com) at August 26, 2014 11:09 PM

On Apple products across the board, the factory tonal configuration is Gamma 2.2, not sRGB. Using an sRGB backbuffer is totally useless; instead, whatever shader converts from linear high dynamic range to the display target needs to do the pow() manually. Typically this step is manual anyway, because that is required to properly dither the floating point color to 8-bit per channel output. On the plus side, Apple products are so well calibrated and matched, even between desktop and mobile, that anyone with a color calibrated authoring pipeline can target the hardware and the consumer will experience the artist's intent. This is simply awesome.

On the PC side, and from what I could see on a very small sampling of the fragmented Android space, sRGB is a better match (than Gamma 2.2) to default device factory calibration. This is not surprising given that both sRGB and Rec.709 (and later) HDTV standards adopted a linear segment close to black. The idea being that the linear segment enables a better perceptual distribution given a fixed set of bits.

The disadvantage of encodings like sRGB, which mix both a linear segment and a gamma curve into the tonal curve, is that "correct" manual dithering can be more expensive (because the conversion is much more expensive). Given that all realtime digital content should use temporal dithering to avoid output banding, Apple's choice of fixed Gamma 2.2 seems like a much better choice. However...

On the Topic of Banding
TN panels are often 6-bit/channel with temporal dithering. Plasma hits the other end of the extreme (maybe 1 or 2-bit/channel?) with extreme temporal dithering (600 Hz). In both these cases, applications need to manually dither beyond the exact amount required for 8-bit output (display dither is too conservative). Also in both these cases, the application's temporal dither can mix in bad ways with the display's temporal dithering. My current feeling is that the correct solution to this problem is to replace the "correct" temporal dither with a film grain with a gamma response (like film) applied in the linear HDR colorspace. This film grain would have a minimum amount even in the light areas which is large enough to serve as the temporal dither (to remove banding on the worst-case target). Also the film grain would be between 1.5 and 2 pixels in size, so that it does not conflict with a display's 1-pixel-sized temporal dithering. The end result of this is that sRGB again is a fine target, and Gamma 2.2 requires extra shader overhead.
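For reference, these are the two output encodings being compared (standard formulas; the sRGB branch plus offset/scale are the extra shader cost referred to above):

#include <cmath>

// Pure power curve, as on Apple's Gamma 2.2 targets.
float encode_gamma22(float linear)
{
    return std::pow(linear, 1.0f / 2.2f);
}

// sRGB transfer function: a linear segment near black, then a 2.4
// power curve with offset and scale.
float encode_srgb(float linear)
{
    return (linear <= 0.0031308f)
        ? 12.92f * linear
        : 1.055f * std::pow(linear, 1.0f / 2.4f) - 0.055f;
}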

White Point Calibration
Seems best to just target the D65 (daylight filtered 6500K) white point of sRGB, knowing that: (a.) displays will be +/- that value, (b.) the mind automatically adapts to small differences in white point, and (c.) the white point will drift +/- towards the darks as well, even on a given display. The cause of (c.) is that even if the display is calibrated to D65, the native black point of the display typically is not D65, and the only way to fix that is to raise the black level (adding intensity to some channels, reducing contrast), which is not something OEMs and users want.

Simple Production Calibration Goals
So the goal of simple calibration of displays is to get the {R,G,B} LUTs to provide a D65 white through the entire gray scale, with the sRGB tonal curve, with the exception that somewhere in the darks the very dark grey colors will start to color shift toward the native black point tint, and then terminate at something which is not fully black. The color gamut of the display ultimately decides saturation scaling, which changes per type of display.

Simple In-Game Controls
Display gamma is the wild west, so at the least you need some user-adjustable gamma. Something like "move the slider until the dark symbol is just barely visible". Still not sure if a user-adjustable offset is required as well.



Scifi Reading Suggestion List from Twitters

by Timothy Lottes (noreply@blogger.com) at August 26, 2014 08:47 PM

Accelerando - Charles Stross
Altered Carbon - Richard K. Morgan
Anathem - Neal Stephenson
Blindsight - Peter Watts
Blue Remembered Earth - Alastair Reynolds
Book of the New Sun: Series of 4 books - Gene Wolfe
Commonwealth Saga: Pandora's Star, Judas Unchained - Peter F. Hamilton
Cryptonomicon - Neal Stephenson
The Culture Series - Iain M. Banks
A Deepness in the Sky - Vernor Vinge
Diamond Age - Neal Stephenson
Dune Series - Frank Herbert
A Fire Upon the Deep - Vernor Vinge
Leviathan Wakes - James S. A. Corey
Lord of Light - Roger Zelazny
Heechee Series - Frederik Pohl
House of Suns - Alastair Reynolds
Hyperion Cantos: Hyperion, ... - Dan Simmons
In Her Name Series - Michael R. Hicks
The Martian - Andy Weir
The Moon is a Harsh Mistress - Heinlein
Neptune's Brood - Charles Stross
Nexus - Ramez Naam
The Night's Dawn Trilogy: The Reality Dysfunction, The Neutronium Alchemist, The Naked God - Peter F. Hamilton
Old Man's War - John Scalzi
Only Forward - Michael Marshall Smith
Player of Games - Iain M. Banks
Redshirts - John Scalzi
Revelation Space - Alastair Reynolds
Sandkings - G.R.R. Martin
Silo Series: Wool, Shift, Dust - Hugh Howey
Singularity Sky, Iron Sunrise - Charles Stross
The Skinner - Neal Asher
Solaris - Stanislaw Lem
Star Wolf - David Hamilton
The Unincorporated Man - Dani Kollin and Eytan Kollin
Tuf Voyaging - G.R.R. Martin
Windup Girl - Paolo Bacigalupi

iPhone Development Tutorials and Programming Tips

Open Source Swift Based Library That Wraps The Accelerate Framework For Easier Usage

by Johann at August 26, 2014 06:20 PM


The Accelerate framework contains many high performance matrix math, linear algebra, image processing, and digital signal processing functions, but suffers from a somewhat unusual syntax when compared to other iOS frameworks. Last month I mentioned a library called Swix, inspired by Python’s NumPy, that uses the Accelerate framework for performing math.

Here’s a nice Swift based wrapper library for the Accelerate framework from Mattt Thompson called Surge. Surge wraps the unusual syntax of many Accelerate functions in a nice simple syntax.

This performance comparison is from the readme:

import Surge

let numbers: [Double] = ... // 1000000 Elements
var sum: Double = 0.0

// Naïve Swift Implementation
sum = reduce(numbers, 0.0, +) // Time: 5.700 sec (2% STDEV)

// Surge Implementation
sum = Surge.sum(numbers) // Time: 0.001 sec (17% STDEV)

It shows a roughly 5,700x speed increase using Surge vs. the naïve Swift implementation, with numbers containing 1,000,000 elements.

You can find Surge on Github here.

If you’re unfamiliar with the Accelerate framework, this is a nice introductory video from WWDC 2013 (Apple developer login required).

A great library for utilizing the Accelerate framework.

Thanks to Chris for the submission.



Open Source Component With A Nice Interface For Image, Video And Audio Capture And Picking

by Johann at August 26, 2014 06:00 AM


Previously I mentioned a number of image picker components, such as DOImagePickerController, featuring easy multiple image selection and multiple album support, and DBCamera, which provides a nice clean interface for taking photos with built-in cropping.

Here’s an open source project called IQMediaPickerController from Mohd Iftekhar Qurashi that puts everything together, providing a nice interface for taking images, capturing video, and recording audio, integrated with image, video, and audio pickers.

The three main components included within the project (media capture, picking images or videos, and picking audio) can each be used separately in case you want to integrate one without another.

Here are images from the readme showing the included media capture controller, image and video picker, and audio controller in action:

IQCaptureController, IQAssetsPickerController, IQAudioPickerController

You can find IQMediaPickerController on Github here.

A nice project for capturing and choosing media with a great interface.

More: Custom Media Picker Components

