Planet Gamedev

iPhone Development Tutorials and Programming Tips

Open Source iOS Library For Easy Blocks Based Reachability Updates

by Johann at July 30, 2014 06:28 AM

Post Category: Featured iPhone Development Resources, iOS UI Controls, iPad, iPhone, Objective-C

In the past I mentioned a few libraries that make it easier to test reachability, but unfortunately some issues have come up with updates to the iOS platform.

Here’s a nice, up-to-date library from Jared Sinclair called JTSReachability that tracks reachability changes through a simple block-based callback.

This code example from the readme shows how to set up the block to be executed on network status changes:

JTSReachabilityResponder *responder = [JTSReachabilityResponder sharedInstance];

[responder addHandler:^(JTSNetworkStatus status) {
      // Respond to the value of "status"
} forKey:@"MyReachabilityKey"];

And in dealloc, be sure to remove the handler registered under MyReachabilityKey.

You can find JTSReachability on Github here.

A nice simple way to keep track of reachability.



Original article: Open Source iOS Library For Easy Blocks Based Reachability Updates

©2014 iOS App Dev Libraries, Controls, Tutorials, Examples and Tools. All Rights Reserved.

Game From Scratch

Game Development Tutorial: Swift and SpriteKit Part 6 - Working with Physics Part 2

by Mike@gamefromscratch.com at July 29, 2014 08:48 PM

 

Ok, I think I ended up with an overly complicated title for this series once again… This is the next part in the ongoing Swift with SpriteKit tutorial series. In the previous part we hooked up a very simple physics simulation, first starting with basic gravity, then with a collision off the edge of the screen. In this part we are going to look at some slightly more complicated collision scenarios. This tutorial section is going to be a bit more code heavy than before.

 

In this first code example, we are going to have two physics-guided balls on the screen. This example will show how to “do something” when two objects collide. Let’s hop right in:

 

import SpriteKit

 

class GameScene: SKScene, SKPhysicsContactDelegate {

    override func didMoveToView(view: SKView) {

        

        // Define the bitmasks identifying various physics objects

        let sphereObject : UInt32 = 0x01;

        let worldObject : UInt32 = 0x02;

        

        //Create a ball shape

        var path = CGPathCreateMutable();

        CGPathAddArc(path, nil, 0, 0, 45, 0, M_PI*2, true);

        CGPathCloseSubpath(path);

 

        // Create one ball

        var shapeNode = SKShapeNode();

        shapeNode.path = path;

        shapeNode.lineWidth = 2.0;

        shapeNode.position = CGPoint(x:self.view.frame.width/2,y:self.view.frame.height);

        

        // Set the ball's physical properties

        shapeNode.physicsBody = SKPhysicsBody(circleOfRadius: shapeNode.frame.width/2);

        shapeNode.physicsBody.dynamic = true;

        shapeNode.physicsBody.mass = 5;

        shapeNode.physicsBody.friction = 0.2;

        shapeNode.physicsBody.restitution = 1;

        shapeNode.physicsBody.collisionBitMask = sphereObject | worldObject;

        shapeNode.physicsBody.categoryBitMask = sphereObject;

        shapeNode.physicsBody.contactTestBitMask = sphereObject;

        

        // Now create another ball

        var shapeNode2 = SKShapeNode();

        shapeNode2.path = path;

        shapeNode2.position = CGPoint(x:self.view.frame.width/2,y:self.view.frame.height/2);

        

        shapeNode2.physicsBody = SKPhysicsBody(circleOfRadius: shapeNode.frame.width/2);

        shapeNode2.physicsBody.dynamic = true;

        shapeNode2.physicsBody.mass = 5;

        shapeNode2.physicsBody.friction = 0.2;

        shapeNode2.physicsBody.restitution = 1;

        shapeNode2.physicsBody.collisionBitMask = sphereObject | worldObject;

        shapeNode2.physicsBody.categoryBitMask = sphereObject;

        shapeNode2.physicsBody.contactTestBitMask = sphereObject;

        

        // Now make the edges of the screen a physics object as well

        scene.physicsBody = SKPhysicsBody(edgeLoopFromRect: view.frame);

        scene.physicsBody.contactTestBitMask = worldObject;

        scene.physicsBody.categoryBitMask = worldObject;

        

        // Make gravity "fall" at 1 unit per second along the y-axis

        self.physicsWorld.gravity.dy = -1;

        

        self.addChild(shapeNode);

        self.addChild(shapeNode2);

        

        // We implement SKPhysicsContactDelegate to get called back when a contact occurs

        // Register ourself as the delegate

        self.physicsWorld.contactDelegate = self;

 

    }

    

    // This function is called on contact between physics objects

    func didBeginContact(contact:SKPhysicsContact){

        let node1:SKNode = contact.bodyA.node;

        let node2:SKNode = contact.bodyB.node;

        

        // Node 1 is the object that hit another object

        // Randomly apply an impulse of 0-999 on both the x and y axes

        node1.physicsBody.applyImpulse(CGVector(

            CGFloat(arc4random() % 1000),

            CGFloat(arc4random() % 1000)));

        

        // Node 2 is the other object, the one being hit

        node2.physicsBody.applyImpulse(CGVector(

            CGFloat(arc4random() % 1000),

            CGFloat(arc4random() % 1000)));

    }

    

}

 

And when you run it:

 

(Animated GIF: Physics3)

 

A lot of the code is very similar to our previous example, so I will only focus on what’s new. The first thing you may notice is that our class now implements SKPhysicsContactDelegate. This provides a function, didBeginContact(), that will be called when a contact occurs. In this example, didBeginContact() will only be called when a contact occurs between spheres, and not with the edge of the screen. I will explain why shortly.

 

The only other major change in this code is that for each physics body in the scene we now define a few values: collisionBitMask, categoryBitMask and contactTestBitMask. Earlier in the code you may have noticed:

        let sphereObject : UInt32 = 0x01;

        let worldObject : UInt32 = 0x02;

This is where we define our two bitmasks. Bitmasking may be a somewhat new concept to you. Basically, it’s a way of packing multiple values into a single variable. Let’s use a simple 8-bit char as an example. In memory, a char is composed of 8 bits that can each be either on or off. Like this, for example:

10011001

 

So using a single byte of memory, we are able to store 8 individual on/off values, 1 representing on and 0 representing off. Of course, in variables we don’t generally deal with values in binary form; instead we often use hexadecimal. The value above, 10011001 in binary, translates to 153 in decimal or 0x99 in hex. Now let’s look at our defined values. We are essentially saying

0001 is a sphereObject

0010 is a worldObject.

 

Now using bitwise math you can do some neat stuff. For example, using a bitwise OR, you can combine both into a single mask:

let worldSphere = sphereObject | worldObject; // result 0011

This allows you to pack multiple on/off values into a single variable. A full discussion of bitwise math is well beyond the scope of this tutorial (you can read more about it here), but the basics above should get you through this code example.

 

Basically, in SpriteKit physics you define the “type” of an object using categoryBitMask. So in this example we set the categoryBitMask of each of our spheres to sphereObject, and that of the world frame to worldObject. Next you tell each object what it interacts with, like we did here:

        shapeNode2.physicsBody.collisionBitMask = sphereObject | worldObject;

        shapeNode2.physicsBody.categoryBitMask = sphereObject;

        shapeNode2.physicsBody.contactTestBitMask = sphereObject;

What we are saying here is that this node will collide with sphereObjects OR worldObjects, but it is a sphereObject and will only generate contact events with other sphereObjects.

Therefore, our contact delegate will only be called when two spheres contact, while nothing will happen when a sphere contacts the side of the screen. As you can see from the balls bouncing off the sides, the collisions still occur. By default, the collision bit mask is set to 0xFFFFFFFF, which sets all possible bits to on, meaning everything collides with everything else; the contact test bit mask, however, defaults to all bits off, so no contact callbacks are generated until you set it.

 

The delegate function, called each time a contact occurs between sphere objects, is pretty straightforward:

    func didBeginContact(contact:SKPhysicsContact){

        let node1:SKNode = contact.bodyA.node;

        let node2:SKNode = contact.bodyB.node;

        

        // Node 1 is the object that hit another object

        // Randomly apply an impulse of 0-999 on both the x and y axes

        node1.physicsBody.applyImpulse(CGVector(

            CGFloat(arc4random() % 1000),

            CGFloat(arc4random() % 1000)));

        

        // Node 2 is the other object, the one being hit

        node2.physicsBody.applyImpulse(CGVector(

            CGFloat(arc4random() % 1000),

            CGFloat(arc4random() % 1000)));

    }

Basically, for each node involved in the contact we apply a random impulse (think of it as a push) of between 0 and 999 along both the x and y axes.

Speaking of applying force, let’s take a quick look at another example:

import SpriteKit

 

class GameScene: SKScene, SKPhysicsContactDelegate {

    

    var shapeNode:SKShapeNode;

    

    init(size:CGSize) {

 

        shapeNode = SKShapeNode();

        //Create a ball shape

        var path = CGPathCreateMutable();

        CGPathAddArc(path, nil, 0, 0, 45, 0, M_PI*2, true);

        CGPathCloseSubpath(path);

        

 

        shapeNode.path = path;

        shapeNode.lineWidth = 2.0;

 

        

        // Set the ball's physical properties

        shapeNode.physicsBody = SKPhysicsBody(circleOfRadius: shapeNode.frame.width/2);

        shapeNode.physicsBody.dynamic = true;

        shapeNode.physicsBody.mass = 5;

        shapeNode.physicsBody.friction = 0.2;

        shapeNode.physicsBody.restitution = 1;

        // this time we dont want gravity mucking things up

        shapeNode.physicsBody.affectedByGravity = false;

        

        super.init(size:size);

    }

    

    override func didMoveToView(view: SKView) {

 

        

        // Position the ball top center of the view

        shapeNode.position = CGPoint(x:view.frame.width/2,y:view.frame.height);

 

        // Now make the edges of the screen a physics object as well

        scene.physicsBody = SKPhysicsBody(edgeLoopFromRect: view.frame);

        

        self.addChild(shapeNode);

        

    }

    

    override func keyUp(theEvent: NSEvent!) {

 

        switch theEvent.keyCode{

        

        case 126: // up arrow

            shapeNode.physicsBody.applyImpulse(CGVector(0,1000));

        case 125: // down arrow

            shapeNode.physicsBody.applyImpulse(CGVector(0,-1000));

        case 123: // left arrow

            shapeNode.physicsBody.applyForce(CGVector(-1000,0));

        case 124: // right arrow

            shapeNode.physicsBody.applyForce(CGVector(1000,0));

        case 49: // spacebar

            shapeNode.physicsBody.velocity = CGVector(0,0);

            shapeNode.position = CGPoint(x: self.view.frame.width/2,y:self.view.frame.height/2);

        default:

            return;

        }

    }

    

}

This sample is pretty straightforward, thus no animated gif. If the user presses up or down, an impulse is applied along that direction. If the user presses left or right, a force is applied instead. If the user hits the spacebar, the sphere’s velocity is set to zero and it is manually moved to the centre of the screen. One thing you might notice is that an impulse moves the ball a great deal more than a force. This is because force is expected to be applied per frame. Think of it like driving a car.

An impulse is like I loaded your car into a gigantic slingshot and propelled you along at a blistering speed.

Force, on the other hand, is much more akin to you driving the car yourself. If you take your foot off the gas, you rapidly decelerate. If, on the other hand, you keep your foot down (apply the same amount of force each frame), you will move along consistently at the same speed.

As you can see from the above example, you can also manipulate velocity and position directly. Generally, though, this interferes badly with the physics simulation, so it isn’t recommended if alternatives exist.

#AltDevBlogADay

Custom Vector Allocation

by Thomas Young at July 29, 2014 02:42 PM

(First posted to upcoder.com, number 6 in a series of posts about Vectors and Vector based containers.)

A few posts back I talked about the idea of 'rolling your own' STL-style vector class, based on my experiences with this at PathEngine.

In that original post and these two follow-ups I talked about the general approach and also some specific performance tweaks that actually helped in practice for our vector use cases.

I haven't talked about custom memory allocation yet, however. This is something that's been cited in a number of places as a key reason for switching away from std::vector so I'll come back now and look at the approach we took for this (which is pretty simple, but nonstandard, and also pre C++11), and assess some of the implications of using this kind of non-standard approach.

I approach this from the point of view of a custom vector implementation, but I'll be talking about some issues with memory customisation that also apply more generally.

Why custom allocation?

In many situations it's fine for vectors (and other containers) to just use the same default memory allocation method as the rest of your code, and this is definitely the simplest approach.

(The example vector code I posted previously used malloc() and free(), but works equally well with global operator new and delete.)

But vectors can do a lot of memory allocation, and memory allocation can be expensive, and it's not uncommon for memory allocation operations to turn up in profiling as the most significant cost of vector based code. Custom memory allocation approaches can help resolve this.

And some other good reasons for hooking into and customising allocations can be the need to avoid memory fragmentation or to track memory statistics.

For these reasons generalised memory customisation is an important customer requirement for our SDK code in general, and then by extension for the vector containers used by this code.

Custom allocation in std::vector

The STL provides a mechanism for hooking into the container allocation calls (such as vector buffer allocations) through allocators, with vector constructors accepting an allocator argument for this purpose.

I won't attempt a general introduction to STL allocators, but there's a load of material about this on the web. See, for example, this article on Dr Dobbs, which includes some example use cases for allocators. (Bear in mind that this is pre C++11, however. I didn't see any similarly targeted overview posts for using allocators post C++11.)

A non-standard approach

We actually added the possibility to customise memory allocation in our vectors some time after switching to a custom vector implementation. (This was around mid-2012. Before that PathEngine's memory customisation hooks worked by overriding global new and delete, and required dll linkage if you wanted to manage PathEngine memory allocations separately from allocations in the main game code.)

We've generally tried to keep our custom vector as similar as possible to std::vector, in order to avoid issues with unexpected behaviour (since a lot of people know how std::vector works), and to ensure that code can be easily switched between std::vector and our custom vector. When it came to memory allocation, however, we chose a significantly different (and definitely non-standard) approach, because in practice a lot of vector code doesn't actually use allocators (or else just sets allocators in a constructor), because we already had a custom vector class in place, and because I just don't like STL allocators!

Other game developers

A lot of other game developers have a similar opinion of STL allocators, and for many this is actually then also a key factor in a decision to switch to custom container classes.

For example, issues with the design of STL allocators are quoted as one of the main reasons for the creation of the EASTL, a set of STL replacement classes, by Electronic Arts. From the EASTL paper:

Among game developers the most fundamental weakness is the std allocator design, and it is this weakness that was the largest contributing factor to the creation of EASTL.

And I've heard similar things from other developers. For example, in this blog post about the Bitsquid approach to allocators Niklas Frykholm says:

If it weren't for the allocator interface I could almost use STL. Almost.

Let's have a look at some of the reasons for this distaste!

Problems with STL allocators

We'll look at the situation prior to C++11, first of all, and the historical basis for switching to an alternative mechanism.

A lot of problems with STL allocators come out of confusion in the initial design. According to Alexander Stepanov (primary designer and implementer of the STL) the custom allocator mechanism was invented to deal with a specific issue with Intel memory architecture. (Do you remember near and far pointers? If not, consider yourself lucky I guess!) From this interview with Alexander:

Question: How did allocators come into STL? What do you think of them?

Answer: I invented allocators to deal with Intel's memory architecture. They are not such a bad ideas in theory - having a layer that encapsulates all memory stuff: pointers, references, ptrdiff_t, size_t. Unfortunately they cannot work in practice.

And it seems like this original design intention was also only partially executed. From the wikipedia entry for allocators:

They were originally intended as a means to make the library more flexible and independent of the underlying memory model, allowing programmers to utilize custom pointer and reference types with the library. However, in the process of adopting STL into the C++ standard, the C++ standardization committee realized that a complete abstraction of the memory model would incur unacceptable performance penalties. To remedy this, the requirements of allocators were made more restrictive. As a result, the level of customization provided by allocators is more limited than was originally envisioned by Stepanov.

and, further down:

While Stepanov had originally intended allocators to completely encapsulate the memory model, the standards committee realized that this approach would lead to unacceptable efficiency degradations. To remedy this, additional wording was added to the allocator requirements. In particular, container implementations may assume that the allocator's type definitions for pointers and related integral types are equivalent to those provided by the default allocator, and that all instances of a given allocator type always compare equal, effectively contradicting the original design goals for allocators and limiting the usefulness of allocators that carry state.

Some of the key problems with STL allocators (historically) are then:

  • Unnecessary complexity, with some boilerplate stuff required for features that are not actually used
  • A limitation that allocators cannot have internal state ('all instances of a given allocator type are required to be interchangeable and always compare equal to each other')
  • The fact the allocator type is included in container type (with changes to allocator type changing the type of the container)

There are some changes to this situation with C++11, as we'll see below, but this certainly helps explain why a lot of people have chosen to avoid the STL allocator mechanism, historically!

Virtual allocator interface

So we decided to avoid STL allocators, and use a non-standard approach.

The approach we use is based on a virtual allocator interface, and avoids the need to specify allocator type as a template parameter.

This is quite similar to the setup for allocators in the BitSquid engine, as described by Niklas here (as linked above, it's probably worth reading that post if you didn't see this already, as I'll try to avoid repeating the various points he discussed there).

A basic allocator interface can then be defined as follows:

class iAllocator
{
public:
    virtual ~iAllocator() {}
    virtual void* allocate(tUnsigned32 size) = 0;
    virtual void deallocate(void* ptr) = 0;
// helper
    template <class T> void
    allocate_Array(tUnsigned32 arraySize, T*& result)
    {
        result = static_cast<T*>(allocate(sizeof(T) * arraySize));
    }
};

The allocate_Array() method is for convenience; concrete allocator objects just need to implement allocate() and deallocate().

We can store a pointer to iAllocator in our vector, and replace the direct calls to malloc() and free() with virtual function calls, as follows:

    static T*
    allocate(size_type size)
    {
        T* allocated;
        _allocator->allocate_Array(size, allocated);
        return allocated;
    }
    void
    reallocate(size_type newCapacity)
    {
        T* newData;
        _allocator->allocate_Array(newCapacity, newData);
        copyRange(_data, _data + _size, newData);
        deleteRange(_data, _data + _size);
        _allocator->deallocate(_data);
        _data = newData;
        _capacity = newCapacity;
    }

These virtual function calls potentially add some overhead to allocation and deallocation. It's worth being quite careful about this kind of virtual function call overhead, but in practice it seems that the overhead is not significant here. Virtual function call overhead is often all about cache misses and, perhaps because there are often just a small number of allocator instances active, with allocations tending to be grouped by allocator, this just isn't such an issue here.

We use a simple raw pointer for the allocator reference. Maybe a smart pointer type could be used (for better modern C++ style and to increase safety), but we usually want to control allocator lifetime quite explicitly, so we're basically just careful about this.

Allocators can be passed in to each vector constructor, or if omitted will default to a 'global allocator' (which adds a bit of extra linkage to our vector header):

    cVector(size_type size, const T& fillWith,
        iAllocator& allocator = GlobalAllocator()
        )
    {
        _data = 0;
        _allocator = &allocator;
        _size = size;
        _capacity = size;
        if(size)
        {
            _allocator->allocate_Array(_capacity, _data);
            constructRange(_data, _data + size, fillWith);
        }
    }

Here's an example concrete allocator implementation:

class cMallocAllocator : public iAllocator
{
public:
    void*
    allocate(tUnsigned32 size)
    {
        assert(size);
        return malloc(static_cast<size_t>(size));
    }
    void
    deallocate(void* ptr)
    {
        free(ptr);
    }
};

(Note that you can normally call malloc() with zero size, but this is something that we disallow for PathEngine allocators.)

And this can be passed in to vector construction as follows:

    cMallocAllocator allocator;
    cVector<int> v(10, 0, allocator);

Swapping vectors

That's pretty much it, but there's one tricky case to look out for.

Specifically, what should happen in our vector swap() method? Let's take a small diversion to see why there might be a problem.

Consider some code that takes a non-const reference to vector, and 'swaps a vector out' as a way of returning a set of values in the vector without the need to heap allocate the vector object itself:

class cVectorBuilder
{
    cVector<int> _v;
public:
    //.... construction and other building methods
    void takeResult(cVector<int>& result); // swaps _v into result
};

So this code doesn't care about allocators, and just wants to work with a vector of a given type. And maybe there is some other code that uses this, as follows:

void BuildData(/*some input params*/, cVector<int>& result)
{
  //.... construct a cVectorBuilder and call a bunch of build methods
    builder.takeResult(result);
}

Now there's no indication that there's going to be a swap() involved, but the result vector will end up using the global allocator, and this can potentially cause some surprises in the calling code:

   cVector<int> v(someSpecialAllocator);
   BuildData(/*input params*/, v);
   // lost our allocator assignment!
   // v now uses the global allocator

Nobody's really doing anything wrong here (although this isn't really the modern C++ way to do things). This is really a fundamental problem arising from the possibility to swap vectors with different allocators, and there are other situations where this can come up.

You can find some discussion about the possibilities for implementing vector swap with 'unequal allocators' here. We basically choose option 1, which is to simply declare it illegal to call swap with vectors with different allocators. So we just add an assert in our vector swap method that the two allocator pointers are equal.

In our case this works out fine: it doesn't happen much in practice, the cases where it does happen are caught directly by the assertion, and it's generally straightforward to modify the relevant code paths to resolve the issue.

Comparison with std::vector: is this necessary/better?

Ok, so I've outlined the approach we take for custom allocation in our vector class.

This all works out quite nicely for us. It's straightforward to implement and to use, and consistent with the custom allocators we use more generally in PathEngine. And we already had our custom vector in place when we came to implement this, so this wasn't part of the decision about whether or not to switch to a custom vector implementation. But it's interesting, nevertheless, to compare this approach with the standard allocator mechanism provided by std::vector.

My original 'roll-your-own vector' blog post was quite controversial. There were a lot of responses strongly against the idea of implementing a custom vector, but a lot of other responses (often from the game development industry side) saying something like 'yes, we do that, but we do some detail differently', and I know that this kind of customisation is not uncommon in the industry.

These two different viewpoints make it worthwhile to explore this question in a bit more detail, then, I think.

I already discussed the potential pitfalls of switching to a custom vector implementation in the original 'roll-your-own vector' blog post, so let's look at the potential benefits of switching to a custom allocator mechanism.

Broadly speaking, this comes down to three key points:

  • Interface complexity
  • Stateful allocator support
  • Possibilities for further customisation and memory optimisation

Interface complexity

If we look at an example allocator implementation for each setup we can see that there's a significant difference in the amount of code required. The following code is taken from my previous post, and was used to fill allocated memory with non-zero values, to check for zero initialisation:

// STL allocator version
template <class T>
class cNonZeroedAllocator
{
public:
    typedef T value_type;
    typedef value_type* pointer;
    typedef const value_type* const_pointer;
    typedef value_type& reference;
    typedef const value_type& const_reference;
    typedef typename std::size_t size_type;
    typedef std::ptrdiff_t difference_type;
    template <class tTarget>
    struct rebind
    {
        typedef cNonZeroedAllocator<tTarget> other;
    };
    cNonZeroedAllocator() {}
    ~cNonZeroedAllocator() {}
    template <class T2>
    cNonZeroedAllocator(cNonZeroedAllocator<T2> const&)
    {
    }
    pointer
    address(reference ref)
    {
        return &ref;
    }
    const_pointer
    address(const_reference ref)
    {
        return &ref;
    }
    pointer
    allocate(size_type count, const void* = 0)
    {
        size_type byteSize = count * sizeof(T);
        void* result = malloc(byteSize);
        signed char* asCharPtr;
        asCharPtr = reinterpret_cast<signed char*>(result);
        for(size_type i = 0; i != byteSize; ++i)
        {
            asCharPtr[i] = -1;
        }
        return reinterpret_cast<pointer>(result);
    }
    void deallocate(pointer ptr, size_type)
    {
        free(ptr);
    }

    size_type
    max_size() const
    {
        return 0xffffffffUL / sizeof(T);
    }
    void
    construct(pointer ptr, const T& t)
    {
        new(ptr) T(t);
    }
    void
    destroy(pointer ptr)
    {
        ptr->~T();
    }
    template <class T2> bool
    operator==(cNonZeroedAllocator<T2> const&) const
    {
        return true;
    }
    template <class T2> bool
    operator!=(cNonZeroedAllocator<T2> const&) const
    {
        return false;
    }
};

But with our custom allocator interface this can now be implemented as follows:

// custom allocator version
class cNonZeroedAllocator : public iAllocator
{
public:
    void*
    allocate(tUnsigned32 size)
    {
        void* result = malloc(static_cast<size_t>(size));
        signed char* asCharPtr;
        asCharPtr = reinterpret_cast<signed char*>(result);
        for(tUnsigned32 i = 0; i != size; ++i)
        {
            asCharPtr[i] = -1;
        }
        return result;
    }
    void
    deallocate(void* ptr)
    {
        free(ptr);
    }
};

As we saw previously, a lot of stuff in the STL allocator relates to some obsolete design decisions, and is unlikely to actually be used in practice. The custom allocator interface also completely abstracts out the concept of constructed object type, and works only in terms of actual memory sizes and pointers, which seems more natural, whilst doing everything we need for the allocator use cases in PathEngine.

For me this is one advantage of the custom allocation setup, then, although probably not something that would by itself justify switching to a custom vector.

If you use allocators that depend on customisation of the other parts of the STL allocator interface (other than for data alignment) please let me know in the comments thread. I'm quite interested to hear about this! (There's some discussion about data alignment customisation below.)

Stateful allocator requirement

Stateful allocator support is a specific customer requirement for PathEngine.

Clients need to be able to set custom allocation hooks and have all allocations made by the SDK (including vector buffer allocations) routed to custom client-side allocation code. Furthermore, multiple allocation hooks can be supplied, with the actual allocation strategy selected depending on the actual local execution context.

It's not feasible to supply allocation context to all of our vector based code as a template parameter, and so we need our vector objects to support stateful allocators.

Stateful allocators with the virtual allocator interface

Stateful allocators are straightforward with our custom allocator setup. Vectors can be assigned different concrete allocator implementations and these concrete allocator implementations can include internal state, without code that works on the vectors needing to know anything about these details.

Stateful allocators with the STL

As discussed earlier, internal allocator state is something that was specifically forbidden by the original STL allocator specification. This is something that has been revisited in C++11, however, and stateful allocators are now explicitly supported, but it also looks like it's possible to use stateful allocators in practice with many pre-C++11 compile environments.

The reasons for disallowing stateful allocators relate to two specific problem situations:

  • Splicing nodes between linked lists with different allocation strategies
  • Swapping vectors with different allocation strategies

C++11 addresses these issues with allocator traits, which specify what to do with allocators in problem cases, with stateful allocators then explicitly supported. This stackoverflow answer discusses what happens, specifically, with C++11, in the vector swap case.

With PathEngine we want to be able to support clients with different compilation environments, and it's an advantage not to require C++11 support. But according to this stackoverflow answer, you can also actually get away with using stateful allocators in most cases, without explicit C++11 support, as long as you avoid these problem cases.

Since we already prohibit the vector problem case (swap with unequal allocators), that means that we probably can actually implement our stateful allocator requirement with std::vector and STL allocators in practice, without requiring C++11 support.

There's just one proviso, with or without C++11 support, due to allowances for legacy compiler behaviour in allocator traits. Specifically, it doesn't look like we can get the same assertion behaviour in vector swap. If propagate_on_container_swap::value is set to false for either allocator then the result is 'undefined behaviour', so this could just swap the allocators silently, and we'd have to be quite careful about these kinds of problem cases!
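
For illustration (this is not PathEngine code), a minimal stateful STL allocator in the C++11 style can look something like the following, with a pointer to some shared counters as per-instance state:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <vector>

// Illustrative only: some external state for the allocator to point at.
struct cCounters
{
    int allocations;
    int deallocations;
    cCounters() : allocations(0), deallocations(0) {}
};

// Minimal C++11-style stateful allocator; allocator_traits fills in the
// rest of the interface from value_type.
template <class T>
struct tCountingAllocator
{
    typedef T value_type;
    cCounters* _counters; // per-instance allocator state

    explicit tCountingAllocator(cCounters* counters) : _counters(counters) {}
    template <class U>
    tCountingAllocator(const tCountingAllocator<U>& rhs) : _counters(rhs._counters) {}

    T* allocate(std::size_t n)
    {
        ++_counters->allocations;
        return static_cast<T*>(std::malloc(n * sizeof(T)));
    }
    void deallocate(T* ptr, std::size_t)
    {
        ++_counters->deallocations;
        std::free(ptr);
    }
};

// Two allocators compare equal only if they share the same state; unequal
// allocators are exactly what triggers the problem cases discussed above.
template <class T, class U>
bool operator==(const tCountingAllocator<T>& lhs, const tCountingAllocator<U>& rhs)
{
    return lhs._counters == rhs._counters;
}
template <class T, class U>
bool operator!=(const tCountingAllocator<T>& lhs, const tCountingAllocator<U>& rhs)
{
    return !(lhs == rhs);
}
```

An instance of this can be passed in to a std::vector constructor, and buffer allocations are then routed through the supplied state, as long as the problem cases (such as swap with unequal allocators) are avoided.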

Building on stateful allocators to address other issues

If you can use stateful allocators with the STL then this changes things a bit. A lot of things become possible just by adding suitable internal state to standard STL allocator implementations. But you can also now use this allocator internal state as a kind of bootstrap to work around other issues with STL allocators.

The trick is to wrap up the same kind of virtual allocator interface setup we use in PathEngine in an STL allocator wrapper class. You could do this (for example) by putting a pointer to our iAllocator interface inside an STL allocator class (as internal state), and then forwarding the actual allocation and deallocation calls as virtual function calls through this pointer.

So, at the cost of another layer of complexity (which can be mostly hidden from the main application code), it should now be possible to:

  • remove unnecessary boilerplate from concrete allocator implementations (since these now just implement iAllocator), and
  • use different concrete allocator types without changing the actual vector type.
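
The bootstrap trick described above can be sketched roughly as follows. This is illustrative only, not actual PathEngine code: the STL allocator's only state is a pointer to a virtual allocator interface, and the real allocation calls are forwarded through that pointer as virtual function calls:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <vector>

// Illustrative virtual allocator interface, as before.
class iAllocator
{
public:
    virtual ~iAllocator() {}
    virtual void* allocate(std::size_t size) = 0;
    virtual void deallocate(void* ptr) = 0;
};

// A trivial concrete implementation, forwarding to malloc()/free().
class cMallocAllocator : public iAllocator
{
public:
    void* allocate(std::size_t size) { return std::malloc(size); }
    void deallocate(void* ptr) { std::free(ptr); }
};

// STL allocator wrapper: the concrete allocator type is hidden behind the
// interface pointer, so the vector type stays the same whichever concrete
// allocator is plugged in.
template <class T>
struct tForwardingSTLAllocator
{
    typedef T value_type;
    iAllocator* _target; // the only state: where to forward allocations

    explicit tForwardingSTLAllocator(iAllocator* target) : _target(target) {}
    template <class U>
    tForwardingSTLAllocator(const tForwardingSTLAllocator<U>& rhs) : _target(rhs._target) {}

    T* allocate(std::size_t n)
    {
        return static_cast<T*>(_target->allocate(n * sizeof(T)));
    }
    void deallocate(T* ptr, std::size_t)
    {
        _target->deallocate(ptr);
    }
};

template <class T, class U>
bool operator==(const tForwardingSTLAllocator<T>& lhs, const tForwardingSTLAllocator<U>& rhs)
{
    return lhs._target == rhs._target;
}
template <class T, class U>
bool operator!=(const tForwardingSTLAllocator<T>& lhs, const tForwardingSTLAllocator<U>& rhs)
{
    return !(lhs == rhs);
}
```

Concrete allocators then only implement iAllocator, while std::vector sees a single, unchanging allocator type.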

Although I'm still not keen on STL allocators, and prefer the direct simplicity of our custom allocator setup as opposed to covering up the mess of the STL allocator interface in this way, I have to admit that this does effectively remove two of the key benefits of our custom allocator setup. Let's move on to the third point, then!

Refer to the Bloomberg allocator model for one example of this kind of setup in practice (and see also this presentation about Bloomberg allocators in the context of C++11 allocator changes).

Memory optimisation

The other potential benefit of custom allocation over STL allocators is basically the possibility to mess around with the allocation interface.

With STL allocators we're restricted to using the allocate() and deallocate() methods exactly as defined in the original allocator specification. But with our custom allocator we're basically free to mess with these method definitions (in consultation with our clients!), or to add additional methods, and generally change the interface to better suit our clients' needs.

There is some discussion of this issue in this proposal for improving STL allocators, which talks about ways in which the memory allocation interface provided by STL allocators can be sub-optimal.

Some customisations implemented in the Bitsquid allocators are:

  • an 'align' parameter for the allocation method, and
  • a query for the size of allocated blocks

PathEngine allocators don't include either of these customisations, although this is stuff that we can add quite easily if required by our clients. Our allocator does include the following extra methods:

    virtual void*
    expand(
            void* oldPtr,
            tUnsigned32 oldSize,
            tUnsigned32 oldSize_Used,
            tUnsigned32 newSize
            ) = 0;
// helper
    template <class T> void
    expand_Array(
            T*& ptr,
            tUnsigned32 oldArraySize,
            tUnsigned32 oldArraySize_Used,
            tUnsigned32 newArraySize
            )
    {
        ptr = static_cast<T*>(expand(
            ptr,
            sizeof(T) * oldArraySize,
            sizeof(T) * oldArraySize_Used,
            sizeof(T) * newArraySize
            ));
    }

What this does, essentially, is to provide a way for concrete allocator classes to use the realloc() system call, or similar memory allocation functionality in a custom heap, if this is desired.

As before, the expand_Array() method is there for convenience, and concrete classes only need to implement the expand() method. This takes a pointer to an existing memory block, and can either add space to the end of this existing block (if possible), or allocate a larger block somewhere else and move existing data to that new location (based on the oldSize_Used parameter).

Implementing expand()

A couple of example implementations for expand() are as follows:

// in cMallocAllocator, using realloc()
    void*
    expand(
        void* oldPtr,
        tUnsigned32 oldSize,
        tUnsigned32 oldSize_Used,
        tUnsigned32 newSize
        )
    {
        assert(oldPtr);
        assert(oldSize);
        assert(oldSize_Used <= oldSize);
        assert(newSize > oldSize);
        return realloc(oldPtr, static_cast<size_t>(newSize));
    }
// as allocate and move
    void*
    expand(
        void* oldPtr,
        tUnsigned32 oldSize,
        tUnsigned32 oldSize_Used,
        tUnsigned32 newSize
        )
    {
        assert(oldPtr);
        assert(oldSize);
        assert(oldSize_Used <= oldSize);
        assert(newSize > oldSize);
        void* newPtr = allocate(newSize);
        memcpy(newPtr, oldPtr, static_cast<size_t>(oldSize_Used));
        deallocate(oldPtr);
        return newPtr;
    }

So this can either call through directly to something like realloc(), or emulate realloc() with a sequence of allocation, memory copy and deallocation operations.
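
To show how this can plug into vector growth, here's a rough sketch of a push_back growth path that reallocates through an expand() style call (emulated here with realloc()). The buffer class and names are illustrative, not PathEngine's actual vector code:

```cpp
#include <cassert>
#include <cstdlib>

typedef unsigned tUnsigned32;

// Free-standing stand-in for the allocator's expand() method, emulated
// with realloc(); a concrete allocator could implement it either way, as
// shown above.
static void*
Expand(void* oldPtr, tUnsigned32 oldSize, tUnsigned32 oldSize_Used, tUnsigned32 newSize)
{
    assert(oldPtr);
    assert(oldSize_Used <= oldSize);
    assert(newSize > oldSize);
    (void)oldSize;
    (void)oldSize_Used;
    return std::realloc(oldPtr, newSize);
}

// Minimal int-only buffer growing through Expand(); a real vector would
// route this generically through the allocator interface.
class cIntBuffer
{
    int* _data;
    tUnsigned32 _capacity; // in elements
    tUnsigned32 _size;     // in elements
public:
    cIntBuffer() :
        _data(static_cast<int*>(std::malloc(sizeof(int) * 8))),
        _capacity(8),
        _size(0)
    {
    }
    ~cIntBuffer()
    {
        std::free(_data);
    }
    void push_back(int value)
    {
        if(_size == _capacity)
        {
            tUnsigned32 newCapacity = _capacity * 2;
            // If the block can grow in place no memory copy is needed;
            // otherwise the heap moves the used portion for us.
            _data = static_cast<int*>(Expand(
                _data,
                sizeof(int) * _capacity,
                sizeof(int) * _size,
                sizeof(int) * newCapacity
                ));
            _capacity = newCapacity;
        }
        _data[_size++] = value;
    }
    tUnsigned32 size() const { return _size; }
    int operator[](tUnsigned32 i) const { return _data[i]; }
};
```

When the heap can extend the block in place, the allocate-copy-deallocate sequence is avoided entirely, which is where the potential performance benefit comes from.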

Benchmarking with realloc()

With this expand() method included in our allocator it's pretty straightforward to update our custom vector to use realloc(), and it's easy to see how this can potentially optimise memory use, but does this actually make a difference in practice?

I tried some benchmarking and it turns out that this depends very much on the actual memory heap implementation in use.

I tested this first of all with the following simple benchmark:

template <class tVector> static void
PushBackBenchmark(tVector& target)
{
    const int pattern[] = {0,1,2,3,4,5,6,7};
    const int patternLength = sizeof(pattern) / sizeof(*pattern);
    const int iterations = 10000000;
    tSigned32 patternI = 0;
    for(tSigned32 i = 0; i != iterations; ++i)
    {
        target.push_back(pattern[patternI]);
        ++patternI;
        if(patternI == patternLength)
        {
            patternI = 0;
        }
    }
}

(Wrapped up in some code for timing over a bunch of iterations, with result checking to avoid the push_back being optimised out.)

This is obviously very far from a real usage situation, but the results were quite interesting:

OS        container type             time
Linux     std::vector                0.0579 seconds
Linux     cVector without realloc    0.0280 seconds
Linux     cVector with realloc       0.0236 seconds
Windows   std::vector                0.0583 seconds
Windows   cVector without realloc    0.0367 seconds
Windows   cVector with realloc       0.0367 seconds

So the first thing that stands out from these results is that using realloc() doesn't make any significant difference on Windows. I double checked this, and while expand() is definitely avoiding memory copies a significant proportion of the time, this is either not significant in the timings, or the memory copy savings are being outweighed by some extra costs in the realloc() call. Maybe realloc() is implemented badly on Windows, or maybe the memory heap on Windows is optimised for more common allocation scenarios at the expense of realloc(), I don't know. A quick Google search shows that other people have seen similar issues.

Apart from that it looks like realloc() can make a significant performance difference, on some platforms (or depending on the memory heap being used). I did some extra testing, and it looks like we're getting diminishing returns after some of the other performance tweaks we made in our custom vector, specifically the tweaks to increase capacity after the first push_back, and the capacity multiplier tweak. With these tweaks backed out:

OS        container type                        time
Linux     cVector without realloc, no tweaks    0.0532 seconds
Linux     cVector with realloc, no tweaks       0.0235 seconds

So, for this specific benchmark, using realloc() is very significant, and even avoids the need for those other performance tweaks.

Slightly more involved benchmark

The benchmark above is really basic, however, and certainly isn't a good general benchmark for vector memory use. In fact, with realloc(), there is only actually ever one single allocation made, which is then naturally free to expand through the available memory space!

A similar benchmark is discussed in this stackoverflow question, and in that case the benefits seemed to reduce significantly with more than one vector in use. I hacked the benchmark a bit to see what this does for us:

template <class tVector> static void
PushBackBenchmark_TwoVectors(tVector& target1, tVector& target2)
{
    const int pattern[] = {0,1,2,3,4,5,6,7};
    const int patternLength = sizeof(pattern) / sizeof(*pattern);
    const int iterations = 10000000;
    tSigned32 patternI = 0;
    for(tSigned32 i = 0; i != iterations; ++i)
    {
        target1.push_back(pattern[patternI]);
        target2.push_back(pattern[patternI]);
        ++patternI;
        if(patternI == patternLength)
        {
            patternI = 0;
        }
    }
}
template <class tVector> static void
PushBackBenchmark_ThreeVectors(tVector& target1, tVector& target2, tVector& target3)
{
    const int pattern[] = {0,1,2,3,4,5,6,7};
    const int patternLength = sizeof(pattern) / sizeof(*pattern);
    const int iterations = 10000000;
    tSigned32 patternI = 0;
    for(tSigned32 i = 0; i != iterations; ++i)
    {
        target1.push_back(pattern[patternI]);
        target2.push_back(pattern[patternI]);
        target3.push_back(pattern[patternI]);
        ++patternI;
        if(patternI == patternLength)
        {
            patternI = 0;
        }
    }
}

With PushBackBenchmark_TwoVectors():

OS        container type             time
Linux     std::vector                0.0860 seconds
Linux     cVector without realloc    0.0721 seconds
Linux     cVector with realloc       0.0495 seconds

With PushBackBenchmark_ThreeVectors():

OS        container type             time
Linux     std::vector                0.1291 seconds
Linux     cVector without realloc    0.0856 seconds
Linux     cVector with realloc       0.0618 seconds

That's kind of unexpected.

If we think about what's going to happen with the vector buffer allocations in this benchmark, on the assumption of sequential allocations into a simple contiguous memory region, it seems like the separate vector allocations in the modified benchmark versions should actually prevent each other from expanding. And I expected that to reduce the benefits of using realloc. But the speedup is actually a lot more significant for these benchmark versions.

I stepped through the benchmark and the vector buffer allocations are being placed sequentially in a single contiguous memory region, and do initially prevent each other from expanding, but after a while the 'hole' at the start of the memory region gets large enough to be reused, and then reallocation becomes possible, and somehow turns out to be an even more significant benefit. Maybe these benchmark versions pushed the memory use into a new segment and incurred some kind of segment setup costs?

With virtual memory and different layers of memory allocation in modern operating systems, and different approaches to heap implementations, it all works out as quite a complicated issue, but it does seem fairly clear, at least, that using realloc() is something that can potentially make a significant difference to vector performance, in at least some cases!

Realloc() in PathEngine

Those are all still very arbitrary benchmarks, and it's interesting to see how much this actually makes a difference for some real use cases. So I had a look at what difference the realloc() support makes for the vector use in PathEngine.

I tried our standard set of SDK benchmarks (with common queries in some 'normal' situations), both with and without realloc() support, and compared the timings for these two cases. It turns out that for this set of benchmarks, using realloc() doesn't make a significant difference to the benchmark timings. There are some slight improvements in some timings, but nothing very noticeable.

The queries in these benchmarks have already had quite a lot of attention for performance optimisation, of course, and there are a bunch of other performance optimisations already in the SDK that are designed to avoid the need for vector capacity increases in these situations (reuse of vectors for runtime queries, for example). Nevertheless, if we're asking whether custom allocation with realloc() is 'necessary or better' in the specific case of PathEngine vector use (and these specific benchmarks), the answer appears to be that, no, this doesn't really seem to make any concrete difference!

Memory customisation and STL allocators

As I've said above, this kind of customisation of the allocator interface (to add stuff like realloc() support) is something that we can't do with the standard allocator setup (even with C++11).

For completeness it's worth noting the approach suggested by Alexandrescu in this article where he shows how you can effectively shoehorn stuff like realloc() calls into STL allocators.

But this still depends on using some custom container code to detect special allocator types, and won't work with std::vector.

Conclusion

This has ended up a lot longer than I originally intended so I'll go ahead and wrap up here!

To conclude:

  • It's not so hard to implement your own allocator setup, and integrate this with a custom vector (I hope this post gives you a good idea about what can be involved in this)
  • There are ways to do similar things with the STL, however, and overall this wouldn't really work out as a strong argument for switching to a custom vector in our case
  • A custom allocator setup will let you do some funky things with memory allocation, if your memory heap will dance the dance, but it's not always clear that this will translate into actual concrete performance benefits

A couple of things I haven't talked about:

Memory fragmentation: custom memory interfaces can also be important for avoiding memory fragmentation, and this can be an important issue. We don't have a system in place for actually measuring memory fragmentation, though, and I'd be interested to hear how other people in the industry actually quantify or benchmark this.

Memory relocation: the concept of 'relocatable allocators' is quite interesting, I think, although this has more significant implications for higher level vector based code, and requires moving further away from standard vector usage. This is something I'll maybe talk about in more depth later on.

** Comments: Please check the existing comment thread for this post before commenting. **



OpenGL

AMD community blog on “Low Overhead OpenGL”

July 29, 2014 12:36 PM

Graham Sellers from AMD recaps his GDC talk on Approaching Zero Driver Overhead (AZDO) with OpenGL.

iPhone Development Tutorials and Programming Tips

A Set Of iOS Object And Collection Categories For Easy Importing And Creation Of JSON Data

by Johann at July 29, 2014 06:36 AM

Post Category    Featured iPhone Development Resources,iOS Development Libraries,Objective-C,Open Source iOS Libraries And Tools

I’ve mentioned a number of libraries such as JSONModel providing ways to convert JSON data to Objective-C objects with a number of extra features.

Here’s a lightweight library that provides dead simple conversion of JSON data to objects and collections called CollectionFactory from Elliot Chance.

The library provides a set of categories for NSArray, NSDictionary, NSMutableDictionary and NSObject for easily importing JSON data into these structures, and exporting data from these structures as JSON data.

The readme for CollectionFactory lists the following method additions:

NSArray

+ arrayWithJsonString: – create an NSArray from a JSON string.
+ arrayWithJsonData: – create an NSArray from JSON data.

NSDictionary

+ dictionaryWithJsonData: – create an NSDictionary from JSON data.
+ dictionaryWithJsonString: – create an NSDictionary from a JSON string.

NSMutableDictionary

+ mutableDictionaryWithJsonString: – create an NSMutableDictionary from a JSON string.
+ mutableDictionaryWithJsonData: – create an NSMutableDictionary from JSON data.
+ mutableDictionaryWithJsonFile: – create an NSMutableDictionary from a file that contains JSON.

NSObject

+ objectFromJson: – convert a JSON string into an object.
– dictionaryValue – convert an object’s properties into an NSDictionary.
– jsonValue – translate any object into JSON.

You can find CollectionFactory on Github here.

A nice library for working with JSON data.



#AltDevBlogADay

So they made you a lead; now what? (Part 2)

by Oliver Franzke at July 29, 2014 02:20 AM

The first part of this article took a closer look at why people with outstanding art, design or programming skills sometimes struggle or even fail as team leads. In addition to that part one also identified the core values of leadership as trust, direction and support.

The goal of this part is to provide newly minted leads with practical advice on how to get started in their new role, and it also describes different ways to develop the necessary leadership skills.

Learning leadership skills

Now that we have a better understanding of what leadership is (and isn’t) it’s time to look at different ways of developing leadership skills. Despite the claims of some books or websites there is no easy 5-step program that will make you the best team lead in 30 days. As with most soft skills it is important to identify what works for you and then to improve your strategies over time. Thankfully there are different ways to find your (unique) leadership style.

The best way to develop your skills is by learning them directly from a mentor that you respect for his or her leadership abilities. This person doesn’t necessarily have to be your supervisor, but ideally it should be someone in the studio where you work. Leadership depends on the organizational structure of a company and it is therefore much harder for someone from the outside to offer practical advice.

Make sure to meet on a regular basis (at least once a month) in order to discuss your progress. A great mentor will be able to suggest different strategies to experiment with and can help you figure out what does and doesn’t work. These meetings also give you the opportunity to learn from his or her career by asking questions like this:

  • How would you approach this situation?
  • What is leadership?
  • Which leader do you look up to and why?
  • How did you learn your leadership skills?
  • What challenges did you face and how did you overcome them?

But even if you aren’t fortunate enough to have access to a mentor you can (and should) still learn from other game developers by observing how they interact with people and how they approach and overcome challenges. The trick is to identify and assimilate effective leadership strategies from colleagues in your company or from developers in other studios.

While mentoring is certainly the most effective way to develop your leadership skills you can also learn a lot by reading books, articles and blog posts about the topic. It’s difficult to find good material that is tailored to the games industry, but thankfully most of the general advice also applies in the context of games. The following two books helped me to learn more about leadership:

  • “Team Leadership in the Games Industry” by Seth Spaulding takes a closer look at the typical responsibilities of a team lead. The book also covers topics like the different organizational structures of games studios and how to deal with difficult situations.
  • “How to Lead” by Jo Owen explores what leadership is and why it’s hard to come up with a simple definition. Even though the book is aimed at leads in the business world it contains a lot of practical tips that apply to the games industry as well.

Talks and round-table discussions are another great way to learn from experienced leaders. If you are fortunate enough to visit GDC (or other conferences) keep your eyes open for sessions about leadership. It’s a great way to connect with fellow game developers and has the advantage that you can get advice on how to overcome some of the challenges you might be facing at the moment.

But even if you can’t make it to conferences there are quite a few recorded presentations available online. I highly recommend the following two talks:

  • “Concrete Practices to be a Better Leader” by Brian Sharp is a fantastic presentation about various ways to improve your leadership skills. This talk is very inspirational and contains lots of helpful techniques that can be used right away.
  • “You’re Responsible” by Mike Acton is essentially a gigantic round-table discussion about the responsibilities of a team lead. As usual Mike does a great job offering practical advice along the way.

Lastly there are a lot of talks about leadership outside of the games industry available on the internet (just search for ‘leadership’ on YouTube). Personally I find some of these presentations quite interesting since they help me to develop a broader understanding of leadership by offering different ways to look at the role. For example the TED playlist “How leaders inspire“ discusses leadership styles in the context of the business world, military, college sports and even symphonic orchestras. In typical TED fashion the talks don’t contain a lot of practical advice, but they are interesting nonetheless.

Leadership starter kit

So you’ve just been promoted (or hired) and the title of your new role now contains the word ‘lead’. First of all, congratulations and well done! This is an exciting step in your career, but it’s important to realize that your day-to-day job will be quite different from what it used to be and that you’ll have to learn a lot of new skills.

I would like to help you get started in your new role by offering some specific and practical advice that I found useful during this transitional period. My hope is that this ‘starter kit’ will get you going while you investigate additional ways to develop your leadership skills (see the section above). The remainder of this section will therefore cover the following topics:

  • One-on-one meetings
  • Delegation
  • Responsibility
  • Mike Acton’s quick start guide

As a lead your main responsibility is to support your team, so that they can achieve the current set of goals. For that it’s crucial that you get to know the members of your team quite well, which means you should have answers to questions like these:

  • What is she good at?
  • What is he struggling with?
  • Where does she want to be in a year?
  • Is he invested in the project or would he prefer to work on something else?
  • Are there people in the company she doesn’t want to work with?
  • Does he feel properly informed about what is going on with the project / company?

You might not get sincere replies to these questions unless people are comfortable enough with you to trust you with honest answers. Sincere feedback is absolutely critical to the success of your team, though, especially in difficult times, and therefore I would argue that developing mutual trust between you and your team should be your main priority.

Building trust takes a lot of time and effort and an essential part of this process is to have a private chat with each member of your team on a regular basis (at least once a month). These one-on-one meetings can take place in a meeting room or even a nearby coffee shop. The important thing is that both of you feel comfortable having an open and honest conversation, so make sure to pick the location accordingly.

These meetings don’t necessarily have to be long. If there is nothing to talk about then you might be done after 10 minutes. At other times it may take an hour (or more) to discuss a difficult situation. Make sure to avoid possible distractions (e.g. mobile phone) during these meetings, so you can give the other person your full attention.

One-on-one meetings raise morale because the team will realize that they can rely on you to keep them in the loop and to represent their concerns and interests. Personally I find that these conversations help me to do my job better, since I’m much more likely to hear about a (potential) problem when the team feels comfortable telling me about it.

At this point you might be concerned that these meetings take time away from your ‘actual job’, but that’s not true because they are your job now. Whether you like it or not you’ll probably spend more time in meetings and less time contributing directly to the current project. Depending on the size of your company it’s safe to assume that leadership and management will take up between 20% and 50% of your time. This means that you won’t be able to take on the same amount of production tasks as before and you’ll therefore have to learn how to delegate work. I know from personal experience that this can be a tough lesson to learn in the beginning.

In addition to balancing your own workload, delegation is also about helping your team to develop new skills and to improve existing ones. Just because you can complete a task more efficiently than any other person on your team doesn’t necessarily mean that you are the best choice for this particular task. Try to take the professional interests of the individual members of your team into account as much as possible when assigning tasks, because people will be more motivated to work on something they are passionate about.

Beyond these practical considerations it is important to note that delegation also has an impact on the mutual trust between you and your team. By routinely taking on ‘tough’ tasks yourself you indicate that you don’t trust your teammates to do a good job, which will ruin morale very quickly. Keep in mind that your colleagues are trained professionals just like yourself, so treat them that way!

Experiencing your entire team working together and producing great results is very empowering and it is your job to make it happen even if nobody tells you this explicitly. In an ideal world it would be obvious what your company expects from you, but in reality that will probably not be the case. It is important to understand that while you have more influence over the direction of the project, your team and even the company you also have more responsibilities now.

First and foremost you are responsible for the success (or failure) of your team and any problem preventing success should be fixed right away. This could be as simple as making sure that your team has the necessary hardware and software, but it could also involve negotiations with another department in order to resolve a conflict of interest.

One responsibility that is often overlooked by new leads is the professional development of the team. It is your job to make sure that the people on your team get the opportunities to improve their skillset. In order to do that you’ll first have to identify the short- and long-term career goals of each team member. In addition to delegating work with the right amount of challenge (as described above) it is also important to provide general career mentorship.

A video game is a complicated piece of software and making one isn’t easy. Mistakes happen and your team might cause a problem that affects another department or even the production schedule. This can be a difficult situation especially when other people are upset and emotions run high. I know it’s easier said than done, but don’t let the stress get the best of you. Rather than identifying and blaming a team member for the mistake you should accept the responsibility and figure out a way to fix the problem. You can still analyze what happened after the dust has settled, so that this issue can be prevented in the future.

It is very unfortunate that a lot of newly minted team leads have to identify additional responsibilities themselves. Thankfully some companies are the exception to the rule. At Insomniac Games, for example, new leads have access to a ‘quick start guide’ that helps them to get adjusted to their new role. This helpful document is publicly available and was written by Mike Acton who has been doing an exceptional job educating the games industry about leadership. I highly recommend that you read the guide: http://www.altdev.co/2013/11/05/gamedev-lead-quick-start-guide/

Leadership is hard (but not impossible)

Truth be told, becoming a great team lead isn’t easy. In fact it might be one of the toughest challenges you’ll have to face in your career. The good news is that you are obviously interested in leadership (why else would you have read all this stuff?) and want to learn more about how to become a good lead. In other words, you are doing great so far!

I hope you found this article helpful and that it’ll make your transition into your new role a bit easier.

Good luck and thank you for reading!

PS.: Whether you just got promoted or have been leading a team for a long time I would love to hear from you, so please feel free to leave a comment.

PPS: I would like to thank everybody who helped me with this article. You guys rock!



iPhone Development Tutorials and Programming Tips

Open Source iOS Component And Tutorial: Apply Motion Blur Effects To An Animation

by Johann at July 28, 2014 11:12 PM

Post Category    Featured iPhone Development Resources,iOS Development Tutorials,iOS UI Controls,iPad,iPhone,Objective-C

I’ve mentioned a few projects on applying blur effects since they exploded in popularity thanks to iOS 7, most recently a library allowing you to create adjustable blurring effects asynchronously.

Here’s an open source component allowing you to apply motion blur to your animations using a custom Core Image filter created using the Core Image Kernel Language (available with iOS 8) from Arkadiusz Holko.

The blur is pre-calculated so the performance is extremely high (except on the Simulator, because the effect uses the GPU), and Arkadiusz has written a nice writeup explaining the techniques he used to apply the motion blur to animations – the creation of the filter is explained in WWDC 2014 sessions 514 and 515.

Here’s an animation from the readme showing the motion blur effect in action:
MotionBlur

You can find the MotionBlur component on Github here.

You can find Arkadiusz’s guide explaining how the filter is applied to animations on the Holko blog.

If you’d like to watch the WWDC lecture about the advanced Core Image techniques used to create this motion blur filter you can find it on the Apple developer site here in the videos entitled “Advances in Core Image” and “Developing Core Image Filters In iOS”.

A very nice animation effect.


Be the first to comment...

Related Posts:

FacebookTwitterDiggStumbleUponGoogle Plus

Original article: Open Source iOS Component And Tutorial: Apply Motion Blur Effects To An Animation

©2014 iOS App Dev Libraries, Controls, Tutorials, Examples and Tools. All Rights Reserved.

Game From Scratch

Silo 3D modeller 2.3 released. Now available on Linux

by Mike@gamefromscratch.com at July 28, 2014 08:40 PM

Silo 3D is a nice affordable 3D package I previously featured in the GameFromScratch 3D application list.  My comment at the time was:

 

SiloLogo

I highly recommend you check out the 30 day download, but caution you that the developer support is incredibly iffy.  When evaluating your purchase, ask yourself if the version you are evaluating is worth the price of admission WITHOUT any further patches or upgrades, as there may be none!

 

 

That comment was made over two years ago about version 2.2.  Today I received an email that version 2.3 was released.  By far and away the biggest feature is a port to Linux.  Here is the announcement of 2.3’s features:

 

 Nevercenter brings versatile Silo 3D modeler to Linux, updates codebase with new 2.3 release. 

Responding to the number one request from users, Nevercenter has brought its well-loved 3D modeling software Silo to Linux for the first time with the software’s new version 2.3 update. The update is free to registered users, and also includes bug fixes and improvements to the internal codebase which benefit the Windows and OS X versions. 

Silo's utility as a focused subdivision surfaces modeling solution will be greatly enhanced with support for Linux, the operating system of choice for many professional studios and, increasingly, individuals. "Silo is designed to be lightweight and flexible," said Nevercenter president Tom Plewe. "We want it to fit as seamlessly as possible into any workflow, and obviously this is a huge step in that direction." 

Silo's internals have also received significant updates including an updated windowing system and bug fixes across all platforms, as well as added support for .stl import. 

The free update is available now to all existing Silo 2 users. A license for Silo across all three platforms can currently be purchased for the sale price of just $109 via http://www.nevercenter.com/silo , where a trial version and more information can also be found.

 

From the forum discussion on CGSociety, the following are details of the update:

 

* Moved to more modern Qt 5.x

* Linux support (RHEL/CentOS 6, recent Fedoras, recent Ubuntus have been tested; not all distros will work)

* 64-bit support for Mac OS X and Linux

* STL import

* Bug fixes

 

So more a stability release than a feature-packed one.  That said, it’s nice to see any signs of life at Silo, so hopefully we will see more in the future.  So is Silo worth checking out?  Two or three years ago I would have said most certainly.  The lack of changes, coupled with the improvements we’ve seen in Blender, makes that a bit trickier.  Still, in a world where Modo, at over a thousand dollars, counts as one of the cheapest options, low-priced alternatives are certainly welcome.  Of course, there is a 30 day trial available.

 

Oddly enough, Silo is also available on Steam for less money, at $79.  I’m not entirely certain if there is a difference between versions.  One license gets you all three supported platforms, which is nice.

Geeks3D Forums

Notepad++ 6.6.8 released

July 28, 2014 03:04 PM

Settings on Cloud supports Google Drive now.
 Better theme support: all internal docking dialogs apply selected theme background / foreground colour.
Download Notepad++ v6.6.8 here:



iPhone Development Tutorials and Programming Tips

An Xcode Plugin Highlighting Source Code Changes Based On The Git Repo

by Johann at July 28, 2014 01:30 PM

Post Category    Featured iPhone Development Resources,iOS Programming Tools And Utilities,Objective-C

A couple of months ago I mentioned a nice Xcode plugin from John Holdsworth providing a live memory browser with an easy to use interface called the XprobePlugin.

Here’s another nice Xcode plugin from John Holdsworth for visualizing differences in your project’s code against a Git repo called GitDiff.

GitDiff highlights lines that have been modified, removed, and added.

Here’s an image from the readme showing GitDiff in action:

GitDiff
You can find GitDiff on Github here.

A great plugin for visualizing changes in a project’s code.


Be the first to comment...

Related Posts:

FacebookTwitterDiggStumbleUponGoogle Plus

Original article: An Xcode Plugin Highlighting Source Code Changes Based On The Git Repo

©2014 iOS App Dev Libraries, Controls, Tutorials, Examples and Tools. All Rights Reserved.

Game Producer Blog

Started prototyping a 2D murder mystery

by Juuso Hietalahti at July 28, 2014 01:03 PM

I started working on a 2D murder mystery, pixel art style. The basic gameplay is about deducing the murderer based on stuff you see on the screen. You play a “coroner” (or “medical examiner”) in early 20th century world and help the local police to figure out cases.

Each case can be played in a pretty short time (I think it will take like 10-20 minutes maximum to solve a case). There won’t be any pixel hunting or that type of work. All the information will be presented to you, and it’s up to you to interrogate suspects (who are also present at the scene)… and then choose who is guilty (if anyone).

After all, maybe it was a hunting accident?


#AltDevBlogADay

Zero Initialisation for Classes

by Thomas Young at July 28, 2014 10:06 AM

(First posted to upcoder.com, number 5 in a series of posts about Vectors and Vector based containers, this version has been rewritten and updated fairly significantly since first posted.)

This is essentially a response to comments on my previous roll your own vector blog post.

In roll your own vector I talked about a change we made to the initialisation semantics for PathEngine's custom vector class. In my first followup post I looked more closely at possibilities for replacing resize() with reserve() (which can avoid the initialisation issue in many cases), but so far I've been concentrating pretty much exclusively on zero initialisation for built-in types. In this post I come back to look at the issue of initialisation semantics for class element types.

Placement new subtleties

At its root, the changed initialisation semantics for our vector all come down to a single (quite subtle) change in the way we write one of the placement new expressions.

It's all about the placement new call for element default construction. This is required when elements need to be initialised, but no value is provided for initialisation by the calling code, for example in a call to vector resize() with no fill value argument.

As shown in my previous post, the standard way to implement this placement new is with the following syntax:

       new((void*)begin) T();

but we chose to replace this with the following, subtly different placement new syntax:

       new((void*)begin) T;

So we left out a pair of brackets.

Note that this missing pair of brackets is what I'm talking about when I refer to 'changed initialisation semantics'. (Our custom vector class does not omit initialisation completely!)

What those brackets do

So what do those brackets do, and what happens when we remove them?

Well, this is all about 'zero initialisation'.

In certain cases the memory for the object of type T being constructed will get zero initialised in the first version of the placement new call ('new((void*)begin) T()'), but not in the second version ('new((void*)begin) T').

You can find these two initialisation types documented on cppreference.com, under 'default initialisation' and 'zero initialisation', and you can find some additional explanation of these two construction semantics in this stackoverflow answer, as well as in the related links.

This makes a difference during element construction for built-in types (as we saw with the buffer initialisation overhead in my previous post), but also for certain classes and structs, and this is what I'll be looking at in this post.

Initialisation of built in types

It's quite well known that initialisation for built-in types works differently for global variables (which are usually created as part of the program's address space) and local variables (which are allocated on the program stack).

If we start with the following:

int
main(int argc, char* argv[])
{
    int i; // local built-in type: 'default initialisation', value is indeterminate
    assert(i == 0);
    return 0;
}

This runs through quite happily with the debug build, but if I turn assertions on in the release build then this assertion gets triggered. That's not really surprising. This kind of uninitialised local variable is a well known gotcha and I think most people with a reasonable amount of experience in C++ have come across something like this.

But the point is that the local variable initialisation here is using 'default initialisation', as opposed to 'zero initialisation'.

And if we change i from a local to a global variable the situation changes:

int i;
int
main(int argc, char* argv[])
{
    assert(i == 0);
    return 0;
}

This time the variable gets zero initialised, and the program runs through without assertion in both release and debug builds.

The reason for this is that global variables can be initialised in the linked binary for your program, at no cost (or else very cheaply at program startup), but local variables get instantiated on the program stack and initialising these explicitly to zero would add a bit of extra run time overhead to your program.

Since uninitialised data is a big potential source of error, many other (more modern) languages choose to always initialise data, but this inevitably adds some overhead, and part of the appeal of C++ is that it lets us get 'close to the metal' and avoid this kind of overhead.

Zero initialisation and 'value' classes

What's less well known (I think) is that this can also apply to classes, in certain cases. This is something you'll come across most commonly in the form of classes that are written to act as a kind of 'value type', and to behave in a similar way to the C++ built-in types.

More specifically, it's all about classes where internal state is not initialised during class construction, and for which you could choose to omit the class default constructor.

In PathEngine we have a number of classes like this. One example looks something like this:

class cMeshElement
{
public:
    enum eType
    {
        FACE,
        EDGE,
        VERTEX,
    };

//.. class methods

private:
    eType _type;
    tSigned32 _index;
};

Default construction of value classes

What should happen on default construction of a cMeshElement instance?

The safest thing to do would be to initialise _type and _index to some fixed, deterministic values, to eliminate the possibility of program execution being dependent on uninitialised data.

In PathEngine, however, we may need to set up some fairly large buffers with elements of this type. We don't want to limit ourselves to only ever building these buffers through a purely iterator based paradigm (as discussed in my previous post), and sometimes want to just create big uninitialised vectors of cMeshElement type directly, without buffer initialisation overhead, so we leave the data members in this class uninitialised.

Empty default constructor or no default constructor?

So we don't want to do anything on default construction.

There are two ways this can be implemented in our value type class. We can omit the class default constructor completely, or we can add an empty default constructor.

Omitting the constructor seems nice, insofar as it avoids a bit of apparently unnecessary and extraneous code, but it turns out there's some unexpected complexity in the rules for C++ object construction with respect to this choice, and to whether an object is being constructed with 'zero initialisation' or 'default initialisation'.

Note that what the two terms refer to are actually two different sets of object construction semantics, with each defining a set of rules for what happens to memory during construction (depending on the exact construction situation), and 'zero initialisation' does not always result in an actual zero initialisation step.

We can test what happens in the context of our custom vector, and 'value type' elements, with the following code:

class cInitialisationReporter
{
  int i;
public:
  ~cInitialisationReporter()
  {
      std::cout << "cInitialisationReporter::i is " << i << '\n';
  }
};

class cInitialisationReporter2
{
  int i;
public:
  cInitialisationReporter2() {}
  ~cInitialisationReporter2()
  {
      std::cout << "cInitialisationReporter2::i is " << i << '\n';
  }
};
template <class T> void
SetMemAndPlacementConstruct_ZeroInitialisation()
{
  T* allocated = static_cast<T*>(malloc(sizeof(T)));
  signed char* asCharPtr = reinterpret_cast<signed char*>(allocated);
  for(int i = 0; i != sizeof(T); ++i)
  {
      asCharPtr[i] = -1;
  }
  new((void*)allocated) T();
  allocated->~T();
}
template <class T> void
SetMemAndPlacementConstruct_DefaultInitialisation()
{
  T* allocated = static_cast<T*>(malloc(sizeof(T)));
  signed char* asCharPtr = reinterpret_cast<signed char*>(allocated);
  for(int i = 0; i != sizeof(T); ++i)
  {
      asCharPtr[i] = -1;
  }
  new((void*)allocated) T;
  allocated->~T();
}

int
main(int argc, char* argv[])
{
  SetMemAndPlacementConstruct_ZeroInitialisation<cInitialisationReporter>();
  SetMemAndPlacementConstruct_ZeroInitialisation<cInitialisationReporter2>();
  SetMemAndPlacementConstruct_DefaultInitialisation<cInitialisationReporter>();
  SetMemAndPlacementConstruct_DefaultInitialisation<cInitialisationReporter2>();
  return 0;
}

This gives the following results:

cInitialisationReporter::i is 0
cInitialisationReporter2::i is -1
cInitialisationReporter::i is -1
cInitialisationReporter2::i is -1

In short:

  • If our vector uses the 'zero initialisation' form (placement new with brackets), and the value type's default constructor is omitted, then the compiler will add code to zero element memory on construction.
  • If our vector uses the 'zero initialisation' form (placement new with brackets), and the value type has an empty default constructor, then the compiler will leave element memory uninitialised on construction.
  • If the vector uses the 'default initialisation' form (placement new without brackets), then the compiler will leave element memory uninitialised regardless of whether or not there is a default constructor.

Zero initialisation in std::vector

The std::vector implementations I've looked at all perform 'zero initialisation' (and I assume this is actually required by the standard). We can test this by supplying the following custom allocator:

template <class T>
class cNonZeroedAllocator
{
public:
    typedef T value_type;
    typedef value_type* pointer;
    typedef const value_type* const_pointer;
    typedef value_type& reference;
    typedef const value_type& const_reference;
    typedef typename std::size_t size_type;
    typedef std::ptrdiff_t difference_type;

    template <class tTarget>
    struct rebind
    {
        typedef cNonZeroedAllocator<tTarget> other;
    };

    cNonZeroedAllocator() {}
    ~cNonZeroedAllocator() {}
    template <class T2>
    cNonZeroedAllocator(cNonZeroedAllocator<T2> const&)
    {
    }

    pointer
    address(reference ref)
    {
        return &ref;
    }
    const_pointer
    address(const_reference ref)
    {
        return &ref;
    }

    pointer
    allocate(size_type count, const void* = 0)
    {
        size_type byteSize = count * sizeof(T);
        void* result = malloc(byteSize);
        signed char* asCharPtr = reinterpret_cast<signed char*>(result);
        for(size_type i = 0; i != byteSize; ++i)
        {
            asCharPtr[i] = -1;
        }
        return reinterpret_cast<pointer>(result);
    }
    void deallocate(pointer ptr, size_type)
    {
        free(ptr);
    }

    size_type
    max_size() const
    {
        return 0xffffffffUL / sizeof(T);
    }

    void
    construct(pointer ptr, const T& t)
    {
        new(ptr) T(t);
    }
    void
    destroy(pointer ptr)
    {
        ptr->~T();
    }

    template <class T2> bool
    operator==(cNonZeroedAllocator<T2> const&) const
    {
        return true;
    }
    template <class T2> bool
    operator!=(cNonZeroedAllocator<T2> const&) const
    {
        return false;
    }
};

Oh, by the way, did I mention that I don't like STL allocators? (Not yet, I will in my next post!) This is a bog-standard STL allocator with the allocate method hacked to set all the bytes in the allocated memory block to non-zero values. The important bit is the implementation of the allocate and deallocate methods. The rest is just boilerplate.

To apply this in our test code:

int
main(int argc, char* argv[])
{
  std::vector<cInitialisationReporter,
    cNonZeroedAllocator<cInitialisationReporter> > v1;
  v1.resize(1);
  std::vector<cInitialisationReporter2,
    cNonZeroedAllocator<cInitialisationReporter2> > v2;
  v2.resize(1);
  return 0;
}

And this gives:

cInitialisationReporter::i is 0
cInitialisationReporter2::i is -1

Class with no default constructor + std::vector = initialisation overhead

So if I implement a 'value class' without a default constructor, and then construct an std::vector with elements of this type, then I get initialisation overhead. And this accounts for part of the speedups we saw when switching to a custom vector implementation (together with the corresponding issue for built-in types).

But there's a clear workaround for this issue, now, based on the above. To use std::vector, but avoid initialisation overhead for value type elements, we just need to make sure that each of our value type classes has an empty default constructor.

Extending to a wrapper for working around zero initialisation for built-in types

In the comments (commenting on the original version of this post!) Marek Knápek suggests using the following wrapper to avoid zero initialisation, in the context of built-in types:

// assuming T is int, short, long, std::uint64_t, ...
// TODO: add static assert
template<typename T>
class MyInt{
public:
    MyInt()
    // m_int is "garbage-initialized" here
    {}
public:
    T m_int;
};

And sure enough, this works (because of the empty default constructor in the wrapper class). But I really don't like using this kind of wrapper in practice, as I think that this complicates (and slightly obfuscates!) each vector definition.

Using default initialisation semantics for our custom vector avoids the need for this kind of workaround. And, more generally, if we take each of the possible construction semantics on their merits (ignoring the fact that one of these is the behaviour of the standard vector implementation), I prefer 'default initialisation' semantics, since:

  • these semantics seem more consistent and avoid surprises based on whether or not an empty default constructor is included in a class, and
  • value type classes shouldn't depend on zero initialisation, anyway (since they may be instantiated as local variables)

Type specialisation

One thing to be aware of, with this workaround, is that it looks like there can be implications for type specialisation.

When I try the following (with clang 3.2.1):

  cout
    << "is_trivially_default_constructible<cInitialisationReporter>: "
    << is_trivially_default_constructible<cInitialisationReporter>::value
    << '\n';
  cout
    << "is_trivially_default_constructible<cInitialisationReporter2>: "
    << is_trivially_default_constructible<cInitialisationReporter2>::value
    << '\n';

I get:

error: no template named 'is_trivially_default_constructible' in namespace 'std'; did you mean 'has_trivial_default_constructor'?

and then when I try with 'has_trivial_default_constructor':

  cout
    << "has_trivial_default_constructor<cInitialisationReporter>: "
    << has_trivial_default_constructor<cInitialisationReporter>::value
    << '\n';
  cout
    << "has_trivial_default_constructor<cInitialisationReporter2>: "
    << has_trivial_default_constructor<cInitialisationReporter2>::value
    << '\n';

I get:

has_trivial_default_constructor<cInitialisationReporter>: 1
has_trivial_default_constructor<cInitialisationReporter2>: 0

This doesn't matter for PathEngine since we still use an 'old school' type specialisation setup (to support older compilers), but could be something to look out for, nevertheless.

Conclusion

The overhead for zero initialisation in std::vector is something that has been an issue for us historically, but it turns out that, for std::vector of value type classes, zero initialisation can be avoided without resorting to a custom vector implementation.

It's interesting to see the implications of this kind of implementation detail. Watch out for how you implement 'value type' classes if they're going to be used as elements in large buffers and maximum performance is desired!

** Comments: Please check the existing comment thread for this post before commenting. **



Timothy Lottes

Galvanize - Alcatraz

by Timothy Lottes (noreply@blogger.com) at July 28, 2014 11:02 AM

iPhone Development Tutorials and Programming Tips

Top Resources In iOS Development For Week Ended July 27th, 2014

by Johann at July 28, 2014 06:00 AM

Post Category    Featured iPhone Development Resources,News

Welcome back to our feature of the most popular new and updated iOS developer resources mentioned on the site from the last week.

The top resource this week is an open source library for applying interesting transition effects to your UILabels inspired by effects seen in iOS 8 and the Secret app.

Here are the resources:

  1.  YetiCharacterLabelExample – An open source library allowing you to apply several different animation effects to UILabels including a falling text effect, a fading label effect, and a motion effect. (share on twitter) (featured here)
  2.  RoboVM – An ahead-of-time compiler of Java Bytecode for the iOS platform for easier porting of Android apps.   (share on twitter) (featured here)

  3.  FLEX – An in-app debugging tool enabling easy editing of an app’s user interface, UI hierarchy browsing and more. (share on twitter) (featured here)

Thanks for reading!


Be the first to comment...

Related Posts:

FacebookTwitterDiggStumbleUponGoogle Plus

Original article: Top Resources In iOS Development For Week Ended July 27th, 2014

©2014 iOS App Dev Libraries, Controls, Tutorials, Examples and Tools. All Rights Reserved.

Real-Time Rendering

Free New Computer Vision Book

by Eric at July 27, 2014 08:49 PM

The book “Computer Vision Metrics: Survey, Taxonomy, and Analysis” is available for free download as a PDF or other formats. Go to the “Source Code/Downloads” tab in the middle of the page and work your way through the labyrinth. Also, you can get the Kindle edition for free. From my pretty limited knowledge of image processing, this looks like a useful survey book, running through common techniques and pointing to relevant references. Me, I was interested in segmentation algorithms for non-photorealistic rendering, and it has a reasonable section all about this topic.

Also, don’t forget that the (also good) book “Computer Vision: Algorithms and Applications” is free for download as a PDF (and without the maze; here’s the direct link).



Game Producer Blog

Stopped survival co-op prototyping

by Juuso Hietalahti at July 27, 2014 12:52 PM

I decided to pull the plug on my co-op wilderness survival game prototype. This happened a couple of weeks ago or so.

Biggest findings/reasons:

  • Online multiplayer requires time: I had major plans for different scenarios, where threats and many things would happen… but putting these together in an online multiplayer game requires quite a lot of time. I spent a lot of time on networking, too little on gameplay. I really wanted to try out Unity networking, and it’s really good… but testing multiplayer is a headache for a team of my size (that would be me).
  • No graphics budget: I don’t have the budget to do the things I wished to do. I must pick something simpler.
  • Not fun after one month, not fun after a year: If a prototype isn’t fun (or doesn’t have something that would give reason to dig further), there’s not much reason for me to continue. I progressed too slowly for this type of game.

Too big scope for me. Going to try something smaller.

Geeks3D Forums

VLC media player 2.1.5

July 27, 2014 11:06 AM

  2.1.5 Highlights: With the capabilities of "RinceWind", 2.1.5 fixes a few bugs and important security issues, including decoding bugs on MP3 and MKV, and in hardware decoding on Windows. I...