Planet Gamedev

The Brain Dump

cmake and the Android NDK

by Andre Weissflog (noreply@blogger.com) at April 20, 2014 11:28 PM

TL;DR: How to build Android NDK applications with cmake instead of the custom NDK build system. This is useful for projects which already use cmake to create multiplatform/cross-compiling build files.

Update: Thanks to thp for pointing out a rather serious bug: packaging the standard shared libraries into the APK should NOT be necessary since these are pre-installed on the device. I noticed that I didn’t set a library search path to the toolchain lib dir in the linker step (-L…) which might explain the crash I had earlier, but unfortunately I can’t reproduce this crash anymore with the old behaviour (no library search path and no shared system libraries in the APK). I’ll keep an eye on that and update the blog post with my findings.


I’ve spent the last 2.5 days adding Android support to Oryol’s build system. This wasn’t exactly on my to-do list until I sorta “impulse-bought” a Nexus7 tablet last Thursday. It basically went like this “hey that looks quite neat for a non-iPad tablet => wow, scrolling feels smooth, very non-Android-like => holy shit it runs my Oryol WebGL samples at 60fps => hmm 179 Euros seems quite reasonable…” - I must say I’m impressed how far the Android “user experience” has come since I last dabbled with it. The UI finally feels completely smooth, and I didn’t have any of those Windows8-Metro-style WTF-moments yet.

Ok, so the logical next step would be to add support for Android to the Oryol build system (if you don’t know what Oryol is: it’s a new experimental C++11 multi-plat engine I started a couple months ago: https://github.com/floooh/oryol).

The Oryol build system is cmake-based, with a python script on top which simplifies managing the dozens of possible build-configs. A build-config is one specific combination of target-platform (osx, ios, win32, win64, …), build-tools (make, ninja, Visual Studio, Xcode, …) and compile-mode (Release, Debug) stored under a descriptive name (e.g. osx-xcode-debug, win32-vstudio-release, emscripten-make-debug, …).

The front-end python script called ‘oryol’ is used to juggle all the build-configs, invoke cmake with the right options, and perform command line builds.

One can for instance simply call:

> ./oryol update osx-xcode-debug

…to generate an Xcode project.

Or to perform a command line build with xcodebuild instead:

> ./oryol build osx-xcode-debug

Or to build Oryol for emscripten with make in Release mode (provided the emscripten SDK has been installed):

> ./oryol build emscripten-make-release

This also works on Windows (32- or 64-bit):

> oryol build win64-vstudio-debug
> oryol build win32-vstudio-debug

…or on Linux:

> ./oryol build linux-make-debug

Now, what I want to do with my shiny new Nexus7 is of course this:

> ./oryol build android-make-debug

This turned out to be harder than usual. But let’s start at the beginning:

A cross-compiling scenario is normally well defined in the GCC/cmake world:

A toolchain wraps the target-platform’s compiler tools, system headers and libs under a standardized directory structure:

The compiler tools usually reside in a bin subdirectory, and are called gcc and g++, or in the LLVM world: clang and clang++. Sometimes the tools also have a prefix (pnacl-clang and pnacl-clang++), or they have completely different names (like emcc in the emscripten SDK).

Headers and libs are often located in a usr directory (usr/include and usr/lib).

The toolchain headers contain at least the C runtime headers, like stdlib.h and stdio.h, usually the C++ headers (vector, iostream, …), and often also the OpenGL headers and other platform-specific header files.

Finally the lib directory contains precompiled system libraries for the target platform (for instance libc.a, libc++.a, etc…).

With such a standard gcc-style toolchain, cross-compilation is very simple. Just make sure that the toolchain-compiler tools are called instead of the host platform’s tools, and that the toolchain headers and libs are used.

cmake standardizes this process with its so-called toolchain files. A toolchain file defines which compiler tools, headers and libraries should be used instead of the ‘default’ ones, and usually also overrides compile and linker flags.

The typical strategy when adding a new target platform to a cmake build system looks like this:

  • setup the target platform’s SDK
  • create a new toolchain file (obviously)
  • tell cmake where to find the compiler tools, headers and libs
  • add the right compile and linker flags
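
For illustration, a minimal generic toolchain file might look like this (all paths are placeholders, not from the original post):

# minimal cross-compiling toolchain file (sketch)
set(CMAKE_SYSTEM_NAME Linux)
# use the toolchain's compiler tools instead of the host tools
set(CMAKE_C_COMPILER /path/to/toolchain/bin/gcc)
set(CMAKE_CXX_COMPILER /path/to/toolchain/bin/g++)
# look for headers and libs only inside the toolchain, for programs only on the host
set(CMAKE_FIND_ROOT_PATH /path/to/toolchain/usr)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)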

Once the toolchain file has been created, call cmake with the toolchain file:

> cmake -G"Unix Makefiles" -DCMAKE_TOOLCHAIN_FILE=[path-to-toolchain-file] [path-to-project]

Then run make in verbose mode to check whether the right compiler is called, and with the right options:

> make VERBOSE=1

This approach works well for platforms like emscripten or Google Native Client. Some platforms require a bit of additional cmake-magic, a Portable Native Client executable for instance must be “finalized” after it has been linked. Additional build steps like these can be added easily in cmake with the add_custom_command macro.
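
A finalize step of that kind could be added roughly like this (a sketch; the variable holding the pnacl-finalize path is made up for the example):

# run the pexe finalizer on the linked result as a post-build step
add_custom_command(TARGET ${target} POST_BUILD
    COMMAND ${PNACL_FINALIZE} $<TARGET_FILE:${target}>
    COMMENT "Finalizing ${target}")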

Integrating Android as a new target platform isn’t so easy though:

  • the Android SDK itself only allows creating pure Java applications; for C/C++ apps, the separate Android NDK (Native Development Kit) is required
  • the NDK doesn’t produce complete Android applications; it needs the Android Java SDK for this
  • native Android code isn’t a typical executable, but lives in a shared library which is called from Java through JNI
  • the Android SDK and NDK both have their own build systems which hide a lot of complexity
  • …this complexity comes from the combination of different host platforms (OSX, Linux, Windows), target API levels (android-3 to android-19, roughly corresponding to Android versions), compiler versions (gcc4.6, gcc4.9, clang3.3, clang3.4), and finally CPU architectures and instruction sets (ARM, MIPS, X86, with several variations for ARM: armv5, armv7, with or without NEON, etc…)
  • C++ support is still bolted on; the C++ headers and libs are not in their standard locations
  • the NDK doesn’t follow the standard GCC toolchain directory structure at all

The custom build system coming with the NDK does a good job of hiding all this complexity; for instance it can automatically build for all CPU architectures. But it stops after the native shared library has been compiled: it cannot create a complete Android APK. For this, the Android Java SDK tools must be called from the command line.

So back to how to make this work in cmake:

The plan looks simple enough:

  1. compile our C/C++ code into a shared library instead of an executable
  2. somehow get this into a Java APK package file…
  3. …deploy APK to Android device and run it

Step 1 starts rather innocently: create a toolchain file, look up the paths to the compiler tools, headers and libs in the NDK, then look up the compiler and linker command line args by watching a verbose build. Then put all this stuff into the right cmake variables. At least this is how it usually works. Of course for Android it’s all a bit more complicated:

  • first we need to decide on a target CPU architecture and what compiler to use. I settled for ARM and gcc4.8, which leads us to […]/android-ndk-r9d/toolchains/arm-linux-androideabi-4.8/prebuilt
  • in there is a directory darwin-x86_64, so we need separate paths per host platform here
  • finally in there is a bin directory with the compiler tools, so GCC would be for instance at [..]/android-ndk-r9d/toolchains/arm-linux-androideabi-4.8/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-gcc
  • there’s also an include, lib and share directory but the stuff in there definitely doesn’t look like system headers and libs… bummer.
  • the system headers and libs are under the platforms directory instead: [..]/android-ndk-r9d/platforms/android-19/arch-arm/usr/include, and [..]/android-ndk-r9d/platforms/android-19/arch-arm/usr/lib
  • so far so good… put this stuff into the toolchain file and it seems to compile fine – until the first C++ header must be included - WTF?
  • on closer inspection, the system include directory doesn’t contain any C++ headers, and there are different C++ lib implementations to choose from under [..]/android-ndk-r9d/sources/cxx-stl

This was the point where I was seriously thinking about calling it a day, until I stumbled across the make-standalone-toolchain.sh script in build/tools. This is a helper script which will build a standard GCC-style toolchain for one specific Android API level and target CPU:

sh make-standalone-toolchain.sh --platform=android-19 \
  --ndk-dir=/Users/[user]/android-ndk-r9d \
  --install-dir=/Users/[user]/android-toolchain \
  --toolchain=arm-linux-androideabi-4.8 \
  --system=darwin-x86_64

This will extract the right tools, headers and libs, and also integrate the C++ headers (by default gnustl, but this can be selected with the --stl option). When the script is done, a new directory ‘android-toolchain’ has been created which follows the GCC toolchain standard and is much easier to integrate with cmake.

The important directories are:
- [..]/android-toolchain/bin: this is where the compiler tools are located, still prefixed though (e.g. arm-linux-androideabi-gcc)
- [..]/android-toolchain/sysroot/usr/include: CRT headers, plus EGL, GLES2, etc…, but NOT the C++ headers
- [..]/android-toolchain/include: the C++ headers are here, under ‘c++’
- [..]/android-toolchain/sysroot/usr/lib: .a and .so system libs; libstdc++.a/.so is also here, no idea why
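
In the toolchain file this boils down to something like the following excerpt (a sketch; the exact gnustl version subdirectory may differ):

# Android toolchain file excerpt for the standalone toolchain
set(CMAKE_SYSTEM_NAME Linux)
set(ANDROID_TOOLCHAIN_ROOT "$ENV{HOME}/android-toolchain")
set(CMAKE_C_COMPILER ${ANDROID_TOOLCHAIN_ROOT}/bin/arm-linux-androideabi-gcc)
set(CMAKE_CXX_COMPILER ${ANDROID_TOOLCHAIN_ROOT}/bin/arm-linux-androideabi-g++)
set(CMAKE_FIND_ROOT_PATH ${ANDROID_TOOLCHAIN_ROOT}/sysroot)
# the C++ headers live outside the sysroot, so add them explicitly
include_directories(${ANDROID_TOOLCHAIN_ROOT}/include/c++/4.8)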

After setting these paths in the toolchain file, and telling cmake to create shared libs instead of executables when building for the Android platform, the compile and link steps worked: instead of a CoreHello executable, I got a libCoreHello.so. So far so good.

The next step was to figure out how to get this .so into an APK which can be uploaded to an Android device.

The NDK doesn’t help with this, so this is where we need the Java SDK tools, which use yet another build system: ant. From looking at the SDK samples I figured out that it is usually enough to call ant debug or ant release within a sample directory to build an .apk file into a bin subdirectory. ant requires a build.xml file which defines the build tasks to perform. Furthermore, Android apps have an embedded AndroidManifest.xml file which describes how to run the application, and what privileges it requires. None of these exist in the NDK sample directories though…

After some more exploration it became clear: the SDK has a helper script called android which is used (among many other things) to set up a project directory structure with all required files for ant to create a working APK:

> android create project
--path MyApp
--target android-19
--name MyApp
--package com.oryol.MyApp
--activity MyActivity

This will set up a directory ‘MyApp’ with a complete Android Java skeleton app. Run ‘ant debug’ in there and it will create a ‘MyApp-debug.apk’ in the ‘bin’ subdirectory, which can be deployed to the Android device with ‘adb install MyApp-debug.apk’ and, when executed, displays a ‘Hello World, MyActivity’ string.
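
So the whole manual cycle, condensed (assuming a connected device and the SDK tools on the path):

> cd MyApp
> ant debug
> adb install bin/MyApp-debug.apk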

Easy enough, but there are 2 problems: first, how do we get our native shared library packaged and called? And second, the Java SDK project directory hierarchy doesn’t really fit well into the source tree of a C/C++ project. There should be a directory per sample app with a couple of C++ files and a CMakeLists.txt file, and nothing more.

The first problem is simple to solve: the project directory hierarchy contains a libs directory, and all .so files in there will be copied into the APK by ant (to verify this: an .apk is actually a zip file, so simply change the file extension to zip and peek into the file). One important point: the libs directory contains one sub-directory level for the CPU architecture, so once we start to support multiple CPU instruction sets we need to put the .so files into subdirectories like this:

FlohOfWoe:libs floh$ ls
armeabi armeabi-v7a mips x86

Since my cmake build system currently only supports building for armeabi-v7a, I’ve put my .so file in the armeabi-v7a subdirectory.
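
By the way, since an APK is just a zip file, unzip can also list its contents directly, which is handy to double-check what ant actually packaged:

> unzip -l MyApp-debug.apk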

Now I thought that I had everything in place: an APK file with my native code .so lib in it, using the NativeActivity and android_native_app_glue.h approach, and logging a ‘Hello World’ to the system log (which can be inspected with adb logcat from the host system).

And still the app didn’t start; instead this showed up in the log:

D/AndroidRuntime(  482): Shutting down VM
W/dalvikvm( 482): threadid=1: thread exiting with uncaught exception (group=0x41597ba8)
E/AndroidRuntime( 482): FATAL EXCEPTION: main
E/AndroidRuntime( 482): Process: com.oryol.CoreHello, PID: 482
E/AndroidRuntime( 482): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.oryol.CoreHello/android.app.NativeActivity}: java.lang.IllegalArgumentException: Unable to load native library: /data/app-lib/com.oryol.CoreHello-1/libCoreHello.so
E/AndroidRuntime( 482): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2195)

This was the second time I banged my head against the wall for a while, until I started to look into how linker dependencies are resolved for the shared library. I was pretty sure that I gave all the required libs on the linker command line (-lc -llog -landroid, etc); the error was that I assumed these are linked statically. Instead, linking against system libraries is dynamic by default. The ndk-depends tool helps in finding the dependencies:

localhost:armeabi-v7a floh$ ~/android-ndk-r9d/ndk-depends libCoreHello.so
libCoreHello.so
libm.so
liblog.so
libdl.so
libc.so
libandroid.so
libGLESv2.so
libEGL.so

This is basically the list of .so files which must be contained in the APK, so I copied these into the SDK project’s libs directory, together with my libCoreHello.so. (Update: these shared libs are NOT supposed to be packaged into the APK! Instead, the standard system shared libraries which already exist on the device should be linked at startup.)

I finally saw the sweet, sweet ‘Hello World!’ showing up in the adb log!
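
With the corrected behaviour from the update at the top, no copying should be needed; it should be enough to put the system libraries on the linker command line and let the dynamic loader resolve them on the device, roughly like this in cmake (library names taken from the ndk-depends output above):

# link against the system shared libraries pre-installed on the device
target_link_libraries(${target} m log dl c android GLESv2 EGL)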

But I skipped one important part: so far I fixed everything manually, but of course I want automated Android batch builds, and without having those ugly Android skeleton project files in the git repository.

To solve this I did a bit of cmake-fu:

Instead of having the Android SDK project files committed into version control, I’m treating these as temporary build files.

When cmake runs for an Android build target, it does the following additional steps:

For each application target, a temporary Android SDK project is created in the build directory (basically the ‘android create project’ call described above):

# call the android SDK tool to create a new project
execute_process(COMMAND ${ANDROID_SDK_TOOL} create project
    --path ${CMAKE_CURRENT_BINARY_DIR}/android
    --target ${ANDROID_PLATFORM}
    --name ${target}
    --package com.oryol.${target}
    --activity DummyActivity
    WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})

The output directory for the shared library linker step is redirected to the ‘libs’ subdirectory of this skeleton project:

# set the output directory for the .so files to point to the android project's 'libs/[cpuarch]' directory
set(ANDROID_SO_OUTDIR ${CMAKE_CURRENT_BINARY_DIR}/android/libs/${ANDROID_NDK_CPU})
set_target_properties(${target} PROPERTIES LIBRARY_OUTPUT_DIRECTORY ${ANDROID_SO_OUTDIR})
set_target_properties(${target} PROPERTIES LIBRARY_OUTPUT_DIRECTORY_RELEASE ${ANDROID_SO_OUTDIR})
set_target_properties(${target} PROPERTIES LIBRARY_OUTPUT_DIRECTORY_DEBUG ${ANDROID_SO_OUTDIR})

The required system shared libraries are also copied there (DON’T DO THIS; normally the system’s standard shared libraries should be used):

# copy shared libraries over from the Android toolchain directory
# FIXME: this should be automated as a post-build-step by invoking the ndk-depends command
# to find out the .so's, and copy them over
file(COPY ${ANDROID_SYSROOT_LIB}/libm.so DESTINATION ${ANDROID_SO_OUTDIR})
file(COPY ${ANDROID_SYSROOT_LIB}/liblog.so DESTINATION ${ANDROID_SO_OUTDIR})
file(COPY ${ANDROID_SYSROOT_LIB}/libdl.so DESTINATION ${ANDROID_SO_OUTDIR})
file(COPY ${ANDROID_SYSROOT_LIB}/libc.so DESTINATION ${ANDROID_SO_OUTDIR})
file(COPY ${ANDROID_SYSROOT_LIB}/libandroid.so DESTINATION ${ANDROID_SO_OUTDIR})
file(COPY ${ANDROID_SYSROOT_LIB}/libGLESv2.so DESTINATION ${ANDROID_SO_OUTDIR})
file(COPY ${ANDROID_SYSROOT_LIB}/libEGL.so DESTINATION ${ANDROID_SO_OUTDIR})

The default AndroidManifest.xml file is overwritten with a customized one:

# override AndroidManifest.xml
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/android/AndroidManifest.xml
    "<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n"
    "  package=\"com.oryol.${target}\"\n"
    "  android:versionCode=\"1\"\n"
    "  android:versionName=\"1.0\">\n"
    "  <uses-sdk android:minSdkVersion=\"11\" android:targetSdkVersion=\"19\"/>\n"
    "  <uses-feature android:glEsVersion=\"0x00020000\"></uses-feature>\n"
    "  <application android:label=\"${target}\" android:hasCode=\"false\">\n"
    "    <activity android:name=\"android.app.NativeActivity\"\n"
    "      android:label=\"${target}\"\n"
    "      android:configChanges=\"orientation|keyboardHidden\">\n"
    "      <meta-data android:name=\"android.app.lib_name\" android:value=\"${target}\"/>\n"
    "      <intent-filter>\n"
    "        <action android:name=\"android.intent.action.MAIN\"/>\n"
    "        <category android:name=\"android.intent.category.LAUNCHER\"/>\n"
    "      </intent-filter>\n"
    "    </activity>\n"
    "  </application>\n"
    "</manifest>\n")

And finally, a custom build-step to invoke the ant-build tool on the temporary skeleton project to create the final APK:

if ("${CMAKE_BUILD_TYPE}" STREQUAL "Debug")
set(ANT_BUILD_TYPE "debug")
else()
set(ANT_BUILD_TYPE "release")
endif
()
add_custom_command
(TARGET ${target} POST_BUILD COMMAND ${ANDROID_ANT} ${ANT_BUILD_TYPE} WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/android)

With all this in place, I can now do a:

> ./oryol make CoreHello android-make-debug

To compile and package a simple Hello World Android app!

What’s currently missing is a simple wrapper to deploy and run an app on the device:

> ./oryol deploy CoreHello
> ./oryol run CoreHello

These would be simple wrappers around the adb tool; later this should of course also work for iOS apps.
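
Under the hood these would boil down to adb calls along these lines (package and activity names follow the generated skeleton project above):

> adb install -r android/bin/CoreHello-debug.apk
> adb shell am start -n com.oryol.CoreHello/android.app.NativeActivity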

Right now the Android build system only works on OSX and only for the ARM V7A instruction set, and there’s no proper Android port of the actual code yet, just a single log message in the CoreHello sample.

Phew, that’s it! All this stuff is also available on github (https://github.com/floooh/oryol/tree/master/cmake).

Written with StackEdit.

Game From Scratch

LibGDX finally reaches version 1.0!

by Mike@gamefromscratch.com at April 20, 2014 09:00 PM

 

So it’s taken a while, 4 years to be precise, but LibGDX, the cross platform Java game development engine, has finally hit the milestone release 1.0. 

 

Everyone thought the day would never come. But here it is. libGDX 1.0 is officially released! Let me quickly run you through the most important changes:

Read the full CHANGES file for more goodies.

To try out our new setup, start here!

If you have a Gradle-based project, make sure to update the gdxVersion to "1.0.0" and refresh your IDE project files! The new snapshot (==nightly) version is "1.0.1-SNAPSHOT"
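
For reference, in a Gradle-based libGDX project the version typically lives in the ext block of the root build.gradle, something like this (illustrative snippet, not from the announcement):

// root build.gradle
allprojects {
    ext {
        gdxVersion = '1.0.0' // or '1.0.1-SNAPSHOT' for nightlies
    }
}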

Besides the usual bug fixing and enhancements, we also cleaned up our libGDX repo, website and wiki for the 1.0 release. The old setup UI has been deprecated, the audio and image extensions have been removed, and the demos have been gradelized and put into their own repositories. You can now also directly test the demos in your browser (or desktop, or Android device)!

Finally, we’ve set up a Patreon that allows users to contribute to our infrastructure costs. This has been so successful that we were able to move our build and website server to Hetzner. After the move and adding some build magic, the build now takes 10 minutes instead of 1 hour and 45 minutes. Thanks to all the patrons, you really made a difference in my life!

Going forward, we’ll try to have a much shorter release cycle (2 weeks – 1 month). The major version of libGDX will stay at 1 for the foreseeable future. The minor version will be increased when API breaking changes are introduced. The patch version will be increased in case of bug fixes and API additions. Releasing often allows you to stay as up-to-date as possible before freezing your libGDX version for a release.

 

They also have a brief blurb about the future:

With all pieces in place, Q1 2014 was used to polish up libGDX’s user experience and documentation for the 1.0 release. We now support all JVM development environments (Eclipse, IDEA, Netbeans, CLI) through our Gradle-based builds. Our build server has been upgraded so we can push out new releases much more easily (and hence regularly!). Our repository has been cleaned up, any clutter has been removed. The Wiki has been updated to reflect the latest state of APIs and setup procedures. We are ready to pull the trigger. After 4 years of development, libGDX has finally reached version 1.0.

 

There is also a detailed history of the LibGDX project.  You can read all about it in the announcement.

 

Congratulations to the LibGDX team!

Geeks3D Forums

Happy WebGL Easter holidays from Goo Technologies

April 20, 2014 05:40 PM

Happy WebGL Easter holidays from Goo Technologies :-) (goote.ch)



(Demoscene) Revision 2014

April 20, 2014 05:38 PM

Quote
Revision features lots of competitions – code for all available platforms, compose music, draw or animate beautiful graphics – there's something for everybody!

Timothy Lottes

Minimal x86-64 Elf Header For Dynamic Loading

by Timothy Lottes (noreply@blogger.com) at April 20, 2014 01:26 PM

It is possible to get the Linux x86-64 ELF overhead down to 495 bytes and include enough information to support one symbol, dlsym(), which is the only symbol required to do manual dynamic loading of whatever is needed at runtime. This is a very important step in reducing the work required for those doing custom languages. Below is a readelf -a dump of a test binary.

ReadELF

ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: Advanced Micro Devices X86-64
Version: 0x1
Entry point address: 0x1f0
Start of program headers: 64 (bytes into file)
Start of section headers: 0 (bytes into file)
Flags: 0x0
Size of this header: 64 (bytes)
Size of program headers: 56 (bytes)
Number of program headers: 3
Size of section headers: 64 (bytes)
Number of section headers: 0
Section header string table index: 0

There are no sections in this file.

Program Headers:
Type Offset VirtAddr PhysAddr
FileSiz MemSiz Flags Align
INTERP 0x00000000000001c4 0x00000000000001c4 0x00000000000001c4
0x000000000000001a 0x000000000000001a RWE 1
[Requesting program interpreter: /lib/ld-linux-x86-64.so.2]
LOAD 0x0000000000000000 0x0000000000000000 0x0000000000000000
0x0000000000000361 0x0000000000000361 RWE 200000
DYNAMIC 0x00000000000000e8 0x00000000000000e8 0x00000000000000e8
0x0000000000000080 0x0000000000000080 RWE 8

Dynamic section at offset 0xe8 contains 8 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libdl.so.2]
0x0000000000000004 (HASH) 0x1b0
0x0000000000000005 (STRTAB) 0x1dd
0x0000000000000006 (SYMTAB) 0x168
0x0000000000000007 (RELA) 0x198
0x0000000000000008 (RELASZ) 24 (bytes)
0x0000000000000009 (RELAENT) 24 (bytes)
0x0000000000000000 (NULL) 0x0

There are no relocations in this file.

There are no unwind sections in this file.

Histogram for bucket list length (total of 1 buckets):
Length Number % of total Coverage
0 0 ( 0.0%)
1 1 (100.0%) 100.0%

No version information found in this file.

Details
Everything is aliased into one Read/Write/Execute binary blob with all ELF related stuff at the beginning of the blob. The ordering of the ELF related stuff is {ELF header, program header, dynamic section, symbol table, relocation table, hash table, interpreter string, dynamic string table}.

There is no need for section headers as they are redundant and not read by the dynamic loader. There is no PHDR program header. This also uses the SYSV style hash instead of the GNU style hash (same as the --hash-style=sysv option for ld). The hash table can be a simple array of 32-bit words {1,2,1,0,0}. So just one bucket, and 2 symbols in the file (undefined and dlsym).
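
Spelled out as data, those five 32-bit words map onto the standard SYSV hash layout like this (a sketch, using the values quoted above):

/* SYSV-style .hash section contents: {1,2,1,0,0} */
unsigned int hash[5] = {
    1, /* nbucket: a single bucket                            */
    2, /* nchain: two symbol entries (STN_UNDEF and dlsym)    */
    1, /* bucket[0]: chain starts at symbol index 1 (dlsym)   */
    0, /* chain[0]: entry for symbol 0 terminates (STN_UNDEF) */
    0  /* chain[1]: dlsym's chain terminates (STN_UNDEF)      */
};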

This uses "/lib/ld-linux-x86-64.so.2" as the interpreter string. Note when messing around with ld and simple assembly programs, ld can easily place in the ELF standard /lib/ld64.so.1, which does not work on Linux because of a missing symlink, so the --dynamic-linker=/lib/ld-linux-x86-64.so.2 option would be required.

This packs the interpreter string then the dynamic string table, with the dynamic string table starting at the null terminator of the interpreter string. This one byte overlap covers the required null first string.

This uses PF_X+PF_W+PF_R for p_flags for all the program headers, and runs from offset 0 instead of 0x400000 to make file offset = virtual address.

This does not use DT_BIND_NOW or the associated flag from the dynamic section, and that seems to work at least in this case when using STT_OBJECT instead of STT_FUNC for the symbol entry for dlsym.

readelf fails to print the relocation info, and objdump simply cannot process the binary. The binary's only symbol, dlsym, is set up with STV_DEFAULT, STB_WEAK, and STT_OBJECT. The single relocation entry is R_X86_64_JUMP_SLOT, which is set up to modify a 64-bit address in the binary blob which is used directly. There is no real PLT and GOT. This is set up to do binding at load time, so the 64-bit address for dlsym is just used after load directly.

Note, other relocation types simply don't work and I'm not sure why. Using "LD_DEBUG=all ./a.out" to debug shows the following when things work,

relocation processing: ./a.out (lazy)
3306: symbol=dlsym; lookup in file=./a.out [0]
3306: symbol=dlsym; lookup in file=/lib/libdl.so.2 [0]
3306: binding file ./a.out [0] to /lib/libdl.so.2 [0]: normal symbol `dlsym'

And then shows something like "binding file ./a.out [0] to ./a.out [0]" (binding to itself) when things fail. From what I can remember, attempting to switch to relocations R_X86_64_64 or R_X86_64_GLOB_DAT always hit the fail case.

Fail
My new favorite measure of fail in system engineering is the ratio of the garbage required to do something vs the minimal amount of stuff actually required. In this case binaries could be as simple as {a filled in at run-time 64-bit pointer to dlsym() at address 0, program entry at address 8, and then the rest of the program}. By this metric ELF has roughly a 64x fail ratio.

Revision 2014 Tubes

by Timothy Lottes (noreply@blogger.com) at April 20, 2014 07:03 AM


iPhone Development Tutorials and Programming Tips

Component Allowing You To Easily Display Images With A Parallax Effect In A UICollectionView

by Johann at April 19, 2014 06:35 AM


I’ve mentioned a number of components that make use of parallax effects, most recently this control for creating sliding menus with background images in parallax, and this component allowing you to add a parallax effect to a UITableView.

Here’s an open source component submitted by Mayur Joshi called MJParallaxCollectionView that allows you to easily create a scrolling view with images displayed with a parallax effect.

MJParallaxCollectionView provides custom collection view cells to display the images with the scrolling effect. You can tweak the effect by adjusting the animation speed and height values. A full demo is included.

A quick demo video showing MJParallaxCollectionView in action:

You can find MJParallaxCollectionView on Github here.

A nice, easy-to-use component for setting up a scrolling gallery with a parallax effect.



c0de517e Rendering et alter

Wolfram's Mathematica 101

by DEADC0DE (noreply@blogger.com) at April 18, 2014 10:16 PM

After a lengthy preamble, I'll try to explain the language in the http://learnxinyminutes.com/ style, so you might prefer to skip down to the last section.

This will appear in a shorter form also on AltDevBlogADay

- Introduction

I've been using Wolfram's Mathematica since my days in university. I wasn't immediately sold as initially I saw it as a computer algebra system and preferred Maple's more math-friendly syntax for that, but with time it became a great tool in my arsenal of languages.
The way I see Mathematica fitting into today's rendering engineer's (or game developer's, in general) work is mostly as a data analysis tool. We increasingly have to deal with data, either acquired (e.g. measured BRDFs) or simulated (e.g. integrals of the rendering equation), get "a sense" of it, compare it with our realtime rendering models, and try to derive the right approximations for the sea of things we still can't directly solve.

What makes Mathematica good for this job, a better tool than say C++, are a few key features: it's an interactive, exploratory environment, it has strong visualization and manipulation abilities, it has a rich library providing almost everything you could think of, it's a concise language, and it has a great community (see http://mathematica.stackexchange.com/ and http://www.wolfram.com/broadcast/video.php?channel=311) and great documentation.

Two notes, before looking at the language. First, you might notice that on the technical level there are alternatives that can compete. We want a prototyping language, with lots of libraries, an interactive shell, solid visualization abilities… Python fits the bill as well, most probably Matlab and a number of its clones (Scilab, Octave), Maple and a number of others.
So why should you be interested in even learning Mathematica, if you can do most of the same things in Python, which is free? In my view, the money you pay for Wolfram's system is well spent because of the packaging. Many of the functions might be exactly the same you get in other systems (e.g. Lapack for linear algebra), but Mathematica packages them in a consistent syntax, with astonishingly good documentation, great support, testing and so on.

The second remark is, as you might have noticed, that I didn't mention the CAS aspects. Perhaps surprisingly, computer algebra is not the most important part for my job, as more often than not you're dealing with integrals that can't be analytically solved, or directly with raw data. Nonetheless, Mathematica being a CAS is a great perk, as being able to easily manipulate your expressions also makes the numerical experiments more flexible, and Wolfram's is undoubtedly the best CAS out there (Sage, Maxima and so on can help, but aren't close).
Also, don't think that a CAS can magically solve maths if you don't know it. It's true that it can greatly help, as you might have forgotten the myriad formulas used to solve limits, derivatives and integrals, or to transform trigonometric expressions, and so on. But you still have to know what you're doing, sometimes even "better" than when doing it by hand, in the sense that we often solve equations under mental assumptions that don't hold true in general (i.e. range of the variables, domains, periodicity), and if you don't realize that, and tell the system, Mathematica won't be able to solve even some "obvious" equations.

I've always encouraged my companies to get a few distributed seats of Mathematica, but remember, if you just need the occasional solution of an analytic expression, Sage, Maxima (both can be tried online) or even Wolfram Alpha can work well. On iOS I use MathStudio, but PocketCAS and iCAS (based on Reduce) look promising as well.

- Mathematica's language

It stands to reason that a CAS is built on top of a symbolic language that supports programmatic manipulation of its own programs (code as data, a.k.a. homoiconicity), and indeed this is the case here. The most famous homoiconic language is Lisp, and if you're familiar with the Lisp family of languages, Mathematica won't feel too far off, but there are a few notable differences.
While in Lisp everything is a list, in Mathematica everything is an expression tree. Also, expressions in Mathematica can have different forms, that is, input (or display) versions of the same internal expression node. This allows you, for example, to have equations entered in the standard mathematical notation (TraditionalForm) via equation editors, or in a textual form that can be typed without auxiliary graphical symbols (InputForm), and so on. Mathematica's environment, the notebook, is not a purely textual one, but supports graphics, so even images and graphs can be displayed as output or input, inside equations, while still maintaining the same internal representation.

Mathematica is an interactive environment, but it's not a standard REPL (read-eval-print loop); instead it relies on the concept of "notebooks", which are collections of "cells". Each cell can be evaluated (shift-enter) and it will yield an output cell underneath it, thus allowing changes and re-evaluation of cells in any order. Cells can also be marked as not containing Mathematica code but just text; thus the notebook is a mix of code and documentation, which enables a sort of "literate programming" style.
For completeness it's worth noting that Mathematica also has a traditional text-only interface that can be invoked by running the Kernel outside the notebook environment; it has only textual input and output and the standard REPL you would expect, but there's little reason to use it. There is also a more "programming"-oriented environment called the Workbench, an optional product that can make your life easier if you write lots of Mathematica code and need to profile, debug and so on.

- By example crash course. In a notebook, put each group in a separate cell and evaluate.

Note: Mathics is an open-source implementation based on SciPy and Sage. It also has an online interface, so you can try (I expect) most of the code below!

(* This is a comment, if you're entering this in a notebook remember that to evaluate the content of a cell you need to use shift-enter or the numeric pad enter *)

(* Basic math is as expected, but it's kept at arbitrary precision unless you use machine numbers *)
(1+2)*3/4
(1.+2.)*3./4.
(* % refers to the last computed value *)
%+2
(* Functions are invoked passing parameters in square braces, all built-in functions start with capitals*)
Sin[Pi/3]
(* N[] forces evaluation to machine numbers, using machine numbers makes evaluation faster, but will defeat many CAS functions *)
N[Sin[Pi/3]]
(* Infix and postfix operators all have a functional form, use FullForm to show *)
FullForm[Hold[(1+2)*3/4]]
(* Each expression in a cell will yield an output in a separate output cell. Expressions can be terminated with ; if we don't want them to emit output, which is useful when doing intermediate assignments that would yield large outputs otherwise *)
1+2;

(* Assigning a symbol to an expression *)
x = 10
(* If a symbol is not yet defined, it will be kept in its symbolic version as the evaluation can't proceed further *)
y = x*w
(* This will recursively expand z until it reaches expansion limit and errors out *)
z = z+1
(* Clears the previous assignments. It's not wise to assign as globals such common symbols, we use these here for brevity and will clear as needed *)
Clear[x,y,z] 

(* Evaluation is controlled by symbols attributes *)
x = 10
(* y will be equal to "x*2", not 20 as := is the infix version of the function SetDelayed, which doesn't evaluate the right hand...*)
y := x*2 
(* …that's because SetDelayed has attribute HoldAll, which tells the evaluator to not evaluate any of its arguments. HoldAll and HoldFirst attributes are one of the "tricky" parts, and a big difference from Lisp where you should explicitly quote to stop evaluation *)
Attributes[SetDelayed] 
(* As many functions in Mathematica are supposed to deal with symbolic expressions and not their evaluated version, you'll find that many of them have HoldAll or HoldFirst, for example Plot has HoldFirst to not evaluate its first argument, that is the expression that we want to graph *)
Plot[Sin[x], {x, 0, 6*Pi}]
(* The Hold function can be used to stop evaluation, and the Evaluate function can be used to counter-act HoldFirst or HoldAll *)
Hold[x*2]
y:=Evaluate[x*2]
y

(* A neat usage of SetDelayed is for memoization of computations, the following pi2, the first time it will be evaluated, will set itself to the numerical value of Pi*Pi to 50 decimal points *)
pi2:=pi2=N[Pi*Pi,50]
pi2

(* Defining functions can be done with the Function function, which has attributes HoldAll *)
fn=Function[{x,y}, x*y];
fn[10,20]
(* As many Mathematica built-ins, Function has multiple input forms, the following is a shorthand with unnamed parameters #1 and #2, ended with the & postfix *)
fn2=#1*#2&
fn2[10,20]
(* Third version, infix notation. Note that \[Function] is a textual representation of a graphical symbol that can be more easily entered in Mathematica with the key presses: esc f n esc, many symbols can be similarly entered, try for example esc theta esc *)
fn3={x,y}\[Function]x*y
fn3[10,20]

(* A second, very common way of defining functions is to use pattern matching and delayed evaluation, the following defines the fn4 symbol to evaluate the expression x*y when it's encountered with two arguments that will matched to the symbols x and y *)
fn4[x_,y_]:=x*y
fn4[10,20]
(* _ or Blank[] can match any Mathematica expression, _h matches only expressions with the Head[] h *)
fn5[x_Integer,y_Integer]:=x+y
fn5[10,20]
fn5[10,20.]
(* A symbol can have multiple matching rules *)
fn6[0] = 1;
fn6[x_Integer] := x*fn6[x - 1]

fn6[3]

(* In general pattern matching is more powerful than Function as it's really an evaluation rule, but it's slower to evaluate, thus not the best if a function has to be applied over large datasets *)
(* Note that pattern matching can be used also with =, not only :=, but beware that = evaluates RHS, in the following fnWrong will multiply y by 3, not by the value matching test at "call" site, as test*y gets fully evaluated and test doesn't "stay" a symbol, it evaluates to its global value *)
test = 3;

fnWrong[test_, y_] = test*y

(* Lists are defined with {} *)
a={1,2,3,{4,5},{aa,bb}}
(* Elements are accessed with [[index]], indices are one-based, negative wrap-around *)
a[[1]]
a[[-1]]
(* Ranges are expressed with ;; or Span *)
a[[2;;4]]
(* From the beginning to the second last *)
a[[;;-2]]
(* Vectors and matrices are just appropriately sized lists and lists of lists *)
b={1,2,3}
m={{1,0,0},{0,1,0},{0,0,1}}
(* . is the product for vector, matrices, and tensors *)
m.b

(* Expression manipulation and CAS. ReplaceAll or /. applies rules to an expression *)
(x+y)/.{x->2,y->Sin[Pi]}
(* Rules can contain patterns, the following will match only the x symbols that appear to a power, match the expression of the power and replace it *)
Clear[x];
1+x+x^2 +x^(t+n)/.{x^p_->f[p]}
(* In a way, replacing a symbol with a value in an expression is similar to defining functions using := or = and pattern-matching, but we have to manually replace the right symbol... *)
expr = x*10
expr/.x->5
(* Mathematica has lots of functions that deal with expressions, Integrate, Limit, D, Series, Minimize, Reduce, Refine, Factor, Expand and so on. We'll show only some basic examples. Solve finds solution to systems of equations or inequalities *)
Clear[a];
Solve[x^2+a*x+1==0, x]
(* It returns results as list of replacement rules that we can replace into the original equation *)
eq=x^2+a*x+1
sol=Solve[eq==0, x]
neweq=eq/.sol[[1]]
(* Simplifying neweq yields true as the equation is satisfied *)
Simplify[neweq]
(* Assumptions on the variables can be made *)
Simplify[Sqrt[x^2], Assumptions -> x < 0]
(* fn7 will compute the Integral and Derivative every time it's evaluated, as Function is HoldAll, fn8, using Evaluate, will force the definition to be equal to the simplified version which yields correctly back the original equation *)
fn7[x_]:=Function[x,D[Integrate[x^3,x],x]]
fn8[x_]:=Function[x,Evaluate[Simplify[D[Integrate[x^3,x],x]]]]

(* Many procedural programming primitives are supported *)
If[3>2,10,20]
For[i = 0,i < 4,i++,Print[i]]
n=1; While[n < 4,Print[n];n++]
Do[Print[n^2],{n,4}]
(* Boolean operators are C-like for the most, only Xor is not ^ which means Power instead *)
!((1>2)||(4>3))&&((1==1)&&(5<=6))
(* Equality tests can be chained *)
(5>4>3)&&(1!=2!=3)
(* == compares the result of the evaluation on both sides, === is true only if the expression are identical *)
v1=1;v2=1;
v1==v2
v1===v2
(* Boolean values are False and True. No output is Null *)

(* With, Block and Module can be used to set symbols to temporary values in an expression *)
With[{x = Sin[y]}, x*y]
Block[{x = Sin[y]}, x*y]

Module[{x = Sin[y]}, x*y]

(* The difference is subtle. With acts as a replacement rule. Block temporarily assigns the value to a symbol and the restores the previous definition. Module creates an unique, temporary symbol, which affects only the occurrences in the inner scope. *)
m=i^2
Block[{i = a}, i + m]
Module[{i = a}, i + m]
(* In general prefer Block or With, which are faster than Module. Module implements lexical scoping, Block does dynamic scoping *)
(* Block and Module don't require to specify values for the declared locals, With does. The following is fine with Block, not with With*)
Block[{i},i=10;i+m]

(* Data operations. Table generates data from expressions *)
Table[i^2,{i,1,10}]
(* Table can generate multi-dimensional arrays, i.e. matrices *)
Table[10*i+j,{i,1,4},{j,1,3}]
MatrixForm[%]
(* List elements can be manipulated using functional programming primitives, like Map which applies a function over a list *)
squareListElements[list_]:=Map[#^2&,list]
(* Short-hand, infix notation of Map[] is /@ *)
squareListElements2[list_]:=(#^2&)/@list
(* You can use MapIndexed to operate in parallel across two lists; it passes the element and its index (wrapped in a list) to the mapped function *)
addLists[list1_,list2_]:=MapIndexed[Function[{element,indexList},element + list2[[indexList[[1]]]] ], list1]
addLists[{1,2,3},{3,4,5}]
(* A more complete version of the above that is defined only on lists and asserts if the two lists are not equal size. Note the usage of ; to compound two expressions and the need of parenthesis *)
On[Assert]
addListsAssert[list1_List,list2_List]:=(Assert[Length[list1]==Length[list2]]; MapIndexed[Function[{element,indexList},element + list2[[indexList[[1]]]] ], list1])
(* Or Thread can be used, which "zips" two or more lists together *)
addLists2[list1_,list2_]:=MapThread[#1+#2&,{list1,list2}]
(* There are many functional list manipulation primitives, in general, using these is faster than trying to use procedural style programming. Extract from a list of the first 100 integers, the ones divisible by five *)
Select[Range[100],Mod[#,5]==0&]
(* Group together all integers from 1...100 in the same equivalence class modulo 5 *)
Gather[Range[100],Mod[#1,5]==Mod[#2,5]&]
(* Fold repeatedly applies a function to each element of a list and the result of the previous fold *)
myTotal[list_]:=Fold[#1+#2&,0,list]
(* Another way of redefining Total is to use Apply, which calls a function with the elements of a list as its arguments. The infix shorthand of Apply is @@ *)
myTotal2[list_]:=Apply[Plus,list]

(* Mathematica's CAS abilities also help with numerical algorithms, as Mathematica is able to infer some information from the equations passed in order to select or optimize the numerical methods *)
(* NMinimize does constrained and unconstrained minimization, linear and nonlinear, selecting among different algorithms as needed *)
Clear[x,y]
NMinimize[{x^2-(y-1)^2, x^2+y^2<=4}, {x,y}]
(* NIntegrate does numerical definite integrals. Uses Monte Carlo methods for many-dimensional integrands *)
NIntegrate[Sin[Sin[x]], {x,0,2}]
(* NSum approximates discrete summations, even to infinites *)
NSum[(-5)^i/i!,{i,0,Infinity}]
(* Many other analytic operators have numerical counterparts, like NLimit, ND and so on... *)
NLimit[Sin[x]/x,x->0]
ND[Exp[x],x,1]

(* Mathematica's plots produce Graphics and Graphics3D outputs, which the notebook shows in a graphical interface *)
Plot[Sin[x],{x,0,2*Pi}]
(* Graphics are objects that can be further manipulated, Show combines different graphics together into a single one *)
g1=Plot[Sin[x],{x,0,2*Pi}];
g2=Plot[Cos[x],{x,0,2*Pi}];
Show[g1,g2]
(* GraphicsGrid on the other hand takes a 2d matrix of Graphics objects and displays them on a grid *)
GraphicsGrid[{{g1,g2}}]
(* Graphics and Graphics3D can also be used directly to create primitives *)
Graphics[{Thick,Green,Rectangle[{0,-1},{2,1}],Red,Disk[],Blue,Circle[{2,0}]}]
(* Most Mathematica functions accept a list of options as the last argument. For Plots a useful one is to override the automatic range. Show by default uses the range of the first Graphics so it will cut the second plot here: *)
Show[Plot[x^2,{x,0,1}],Plot[x^3,{x,1,2}]]
(* Forcing to show all the plotted data *)
Show[Plot[x^2,{x,0,1}],Plot[x^3,{x,1,2}], PlotRange->All]

(* Very handy for explorations is the ability of having parametric graphs that can manipulated. Manipulate allows for a range of widgets to be displayed next to the output of an expression *)
Manipulate[Plot[x^p,{x,0,1}],{{p,1},1,10}]
Manipulate[Plot3D[x^p[[1]]+y^p[[2]],{x,0,1},{y,0,1}],{{p,{1,1}},{1,1},{5,5}}]
(* Manipulate output is a Dynamic cell, which is special as it gets automatically re-evaluated if any of the symbols it captures changes. That's why you can see Manipulate output behaving "weirdly" if you change symbols that are used to compute its output. This allows for all kinds of "spreadsheet-like" computations and interactive applications. *)

(* Debugging functional programs can be daunting. Mathematica offers a number of primitives that help to a degree. Monitor generates a temporary output that shows the computation in progress. Here the temporary output is a ProgressIndicator graphical object. Evaluations can be aborted with Alt+. *)
Monitor[Table[FactorInteger[2^(2*n)+1],{n,1,100}], ProgressIndicator[n, {1,100}]]
(* Another example, we assign the value of the function to be minimized to a local symbol, so we can display how it changes as the algorithm progresses *)
complexFn=Function[{x,y},(Mod[Mod[x,1],Mod[y,1]+0.1])*Abs[x+y]]
Plot3D[complexFn[x,y],{x,-2,2},{y,-2,2}]
Block[{temp},Monitor[NMinimize[{temp=complexFn[x,y],x+y==1},{x,y}],N[temp]]]
(* Print forces an output from intermediate computations *)
Do[Print[Prime[n]],{n,5}]
(* Mathematica also supports reflection, via Names, Definition, Information and more *)

(* Performance tuning. A first common step is to reduce the number of results Mathematica will keep around for % *)
$HistoryLength=2
(* Evaluate current memory usage *)
MemoryInUse[]
(* Share[] can sometimes shrink the memory usage by making Mathematica realize that certain subexpressions can be shared, it prints the amount of bytes saved *)
Share[]
(* Reflection can be used to know which symbols are taking the most memory *)
Reverse@Sort[{ByteCount[Symbol[#]],#}&/@Names["`*"]]
(* Timing operations is simple with AbsoluteTiming *)
AbsoluteTiming[Pause[3]]
(* Mathematica's symbolic evaluation is relatively slow. Machine numbers operations are faster, but slow compared to other languages. In general Mathematica is not made for high-performance, and if that's needed it's best to directly go to one of the ways it supports external compilation: LibraryLink, CudaLink, and OpenCLLink *)
(* On the upside, many list-based operations are trivially parallelizable via Parallelize *)
Parallelize[Table[Length[FactorInteger[10^50+n]],{n,20}]]
(* The downside is that only a few functions seems to be natively parallelized, mostly image-related, and many others require manual parallelization via domain-splitting. E.G. integrals *)
sixDimensionalFunction=Function[{a,b,c,d,e,f},Re[(a*b+c)^d/e+f]];
Total[ParallelTable[NIntegrate[sixDimensionalFunction[a,b,c,d,e,f],{a,-1,1},{b,-1,1},{c,-1,1},{d,-1,1},{e,-1,1},{f,-1+i/4,-1+(i+1)/4}],{i,0,7}]]
(* Even plotting can be parallelized, see http://mathematica.stackexchange.com/questions/30391/parallelize-plotting. Intra-thread communication is expensive, beware of the amount of data you move! *)
(* There is a Compile functionality that can translate -some- Mathematica expressions into bytecode or C code, even parallelizing, but it's quite erratic and requires planning from the get-go of your code. See http://mathematica.stackexchange.com/questions/1803/how-to-compile-effectively/ http://mathematica.stackexchange.com/questions/1096/list-of-compilable-functions*)

- Parting thoughts
Clearly, it's impossible to cover all the library functionality that Mathematica offers. But the documentation is great, and usually a bit of searching there (and, if that fails, on the stackexchange forums) will yield a very elegant solution for most issues.
Performance can be tricky, and can require more effort than directly using native CPU and GPU languages. On the other hand, support for external CPU and GPU functions is great, and Mathematica is capable of invoking external compilers from strings of source code, so you can use Mathematica as a template metaprogramming language, even, with a bit of effort, converting its expressions into other-language equivalents (a good starting point is CForm[]). Being a very strong pattern-matching engine, quite some magic is possible.
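
As a tiny illustration of that starting point (expected output shown as a comment):

(* emit a C-syntax version of a Mathematica expression *)
CForm[x^2 + Sin[x]]
(* => Power(x,2) + Sin(x) *)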

Next time I might write something that shows in practice how Mathematica, via its numerical and visualization abilities, enables exploration of possible approximations of expensive rendering formulas… Stay tuned.


    Geeks3D Forums

    (WebGL) AlteredQualia woolly mammoth skeleton

    April 18, 2014 03:46 PM

    AlteredQualia woolly mammoth skeleton works now on Nexus 4 with GPU driver update.

    [img]https://farm3.staticflickr.com/292...



    Qualcomm Adreno GPU driver 17.04.2014

    April 18, 2014 03:42 PM

    Quote
    User-mode driver binaries for Qualcomm’s Adreno 3xx GPU on Nexus 4, 5 & 7 devices running Google Android 4.4.2 KitKat[/ur...

    Real-Time Rendering

    Free/Cheap Processing Course by Andrew Glassner

    by Eric at April 18, 2014 02:40 PM

    Andrew Glassner has made an 8 week course about the graphics language Processing. The first half of the course is free; if you find you like it, the second half is just $25. Even if you don’t want to take the course, you should watch the 2.5 minute video at the site – beware, it may change your mind. The video gives a sense of the power of Processing and some of the wonderful things you can do with it. My small bit of experience with the language showed me it’s a nice way to quickly display and fiddle around with all sorts of graphical ideas. While I dabbled for a week, Andrew used it for half a decade and has made some fascinating programs. Any language that can have such a terrible name and still thrive in so many ways definitely has something going for it.


    Procedural World

    Video Update for April 2014

    by Miguel Cepero (noreply@blogger.com) at April 18, 2014 12:58 PM

    Wondering what happened in the last few months? Here is an update:


    There are several things we did that were not covered in this update. You will notice a river in the background, but there is no mention of water.


    It is not that we are hydrophobic or that we want to tease you about this feature; we just want to spend more time improving the rendering.

    I also go on in this update about how clean and sharp our new tools are. There is indeed a big difference in the new toolset, but there are still serious issues with aliasing when you bring in detail beyond what the voxels can encode. For instance, the line tool can now do much better lines, but we still cannot do a one-voxel-thick line that goes at any angle. This is because fixing the aliasing in this line would require sub-voxel resolution. So it is OK to expect cleaner lines, but they can still break due to aliasing.



    iPhone Development Tutorials and Programming Tips

    Open Source iOS Component That Makes It Easier To Get Permission To Use Photos And Contacts

    by Johann at April 18, 2014 06:13 AM


    One major issue that arises with iOS apps that use photos or contacts is that the user needs to give permission, and if the user declines to give access they will later need to go into settings and adjust the permissions.

    Here’s a library from Cluster called ClusterPrePermissions that aims to help solve this issue by displaying a prompt asking the user if they want to give permission before the actual system permission request is made.

    ClusterPrePermissions will check the authorization status of photos and contacts, and if that status is not yet determined, will display a customizable prompt where you can ask the user for “pre-permission”.

    And the readme states that this approach has been extremely successful for Cluster: “For photos alone, 46% of people who denied access in a pre-permissions dialog ended up granting access when asked at a better time later.”

    Here is a set of images from the readme showing ClusterPrePermissions in action:

    ClusterPrePermissions

    You can find ClusterPrePermissions on Github here.

    You can read more about this pre-permission approach in this article by Cluster’s Brenden Mulligan.

    A nice approach to solving a common issue.



    Gamasutra Feature Articles

    Understanding the successful relaunch of Final Fantasy XIV

    April 18, 2014 04:00 AM

    Producer and director Naoki Yoshida tells Gamasutra why he rebooted Final Fantasy XIV. ...

    Game From Scratch

    Substance Painter release Beta 3 to Steam

    by Mike@gamefromscratch.com at April 18, 2014 12:14 AM

     

    I suck at texturing.

     

    There, I said it.  I love 3D modeling, I even enjoy rigging and animating.  While not my favorite activity, I am even OK with UV unwrapping…

     

    But texturing, I suck at texture mapping.

     

    Over the years I have tried a number of solutions that make texturing easier, going back to the early days of 3D Studio Max plugins that allowed you to paint directly on your models.  Since then, those tools have become progressively more built into the base package, but at the end of the day the vast majority of texturing still takes place in Photoshop, and I still suck at it.

     

    Enter Substance Painter.  It appeared on Steam about a month back and I’ve been playing around with it ever since.  I intend to cover it in more detail soon; in fact, I would have already if it weren't for the massive influx of goodies that came with GDC this year.  Anyways, stay tuned for more details shortly…

     

    For now, a spot of news.  Beta 3 was just released.  Oh, and if you buy during the beta it’s 50% off.

     

    Enough talking, so what exactly is Substance Painter?

     

    Short version; it’s the program that makes me not completely suck at texturing.  That’s about the biggest endorsement I can give.

     

    Long version, well, I’ll use their wording for that:

     

    Substance Painter is a brand new 3D Painting app featuring never before seen features and workflow improvements to make the creation of textures for modern games easier than ever.


    At Allegorithmic, we have a long history of working very closely with our customers, from the small independents to the largest AAA studios. Today we want you to help us design the ultimate painting tool and bring innovation and state of the art technology to every artist out there at a fair price.

     

    Today, as the title suggests, they released Beta 3.  As to what’s new in Beta 3:

    Testing focus

    • 2D View
    Change list
    • Added: Seamless 2D View!
    • Added: Bitmap layer masks
    • Added: Environment exposure control
    • Updated: Fill Layers now use the Tools windows to set their properties
    • Updated: Materials can be applied to Fill Layers
    • Updated: Added more stencils in the stencil library
    • Updated: Particles presets updated for faster computation
    • Updated: PBR shader optimization and quality improvement for lower quality settings
    • Fixed: Layers thumbnails are linked to the currently selected channel
    • Fixed: Lots of crashes

     

    Sound interesting?  Here is a video of Substance Painter in action:

    There is also a free trial available.  It’s a standalone program, although some of the import options are disabled right now (I used OBJ personally, from Blender).  Keep in mind it is a beta, and it feels beta-like at times.  Some features are currently missing and performance can occasionally suffer.  On the other hand, outside of some missing features, it feels production ready.  I hope to have a more detailed preview available soon.

     

    If you try it out, let me know what you think.



    iPhone Development Tutorials and Programming Tips

    Open Source Library For Creating Great Looking Animated And Interactive Line Graphs

    by Johann at April 17, 2014 10:57 PM


    I’ve mentioned a number of graphing and charting libraries such as the excellent MagicPie for creating animated pie charts.

    Here’s a library submitted by Boris Emorine for producing great looking line graphs that has a number of nice unique features.

    These features include:

    - Adjustable animations on display of the graph
    - Line smoothing using bezier curves for smoothed line graphs
    - Touch reporting and indication on the graph showing the closest point to the user’s touch
    - Custom alpha values so graphs can be made semi-transparent if desired
    - Easy snapshotting of the graphs

    Here’s an image from the readme showing BEMSimpleLineGraph in action:
    BEMSimpleLineGraph

    You can find BEMSimpleLineGraph on Github here.

    A very nice library for creating line graphs.



    Game Design Aspect of the Month

    Game Design: Creating a System Formula (Part IV)

    by noreply@blogger.com (Sande Chen) at April 17, 2014 03:40 PM

    In Part I, game designer Bud Leiser explains how to use the Fibonacci series in system design. In Part II, he shows the grind gap and how the amount of grind can quickly accelerate when using the Fibonacci series. In Part III, he discusses how to evaluate the curve based on design goals. In Part IV, he suggests how to progress from the general guideline to cover all other elements in the game.

    Actually… I could see implementing this curve in a real RPG if, for the player to survive, we give lots of item drops and a low-cost way of healing outside of combat (Final Fantasy health potions, anyone?). We can also try to figure out what strategy the player will use to overcome this curve: what might happen is that players grind longer at a given level to buy their armor and boots.
    They might even skip weapon levels: instead of buying each one progressively, they might save up money to buy two levels ahead and then use that powerful sword in combat; if the player has enough health to survive one combat, he can rely on cheap healing outside of combat. In other words, he relies on that high-level sword to get him through one combat and doesn't worry about keeping up with armor until absolutely necessary. If we wanted to encourage this type of play, we could set monster damage at rates unlikely to kill a player in a single combat, drop potions frequently, and even give the player armor pieces as common rewards. Assuming he has free time out of combat to heal up to full without being attacked, this would be a completely valid RPG style.

    -Or-

    You could create these cost progressions using “suits” (armor, gauntlet, belt, boots, helmet, weapon), then assign a percentage of the suit’s total cost to each piece. For example:

    Suit | Total Cost | Sword Cost (20%) | Armor Cost (25%) | Helmet Cost (10%)
    A    |         50 |               10 |               13 |                 5
    B    |        100 |               20 |               25 |                10
    C    |        150 |               30 |               38 |                15
    D    |        250 |               50 |               63 |                25
    E    |        400 |               80 |              100 |                40
    F    |        650 |              130 |              163 |                65
    G    |       1050 |              210 |              263 |               105
    H    |       1700 |              340 |              425 |               170
    I    |       2750 |              550 |              688 |               275
    J    |       4450 |              890 |             1113 |               445
    K    |       7200 |             1440 |             1800 |               720
    L    |      11650 |             2330 |             2913 |              1165
    M    |      18850 |             3770 |             4713 |              1885
    N    |      30500 |             6100 |             7625 |              3050
    O    |      49350 |             9870 |            12338 |              4935
    P    |      79850 |            15970 |            19963 |              7985

    With this we have a general idea of how much the player is making and how much things should cost.
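
    As a sketch of how little manual work this takes, the whole table above can be generated in a few lines (Python here, since the article shows no code; rounding is half-up to match the table's values):

    suits = "ABCDEFGHIJKLMNOP"
    percentages = {"sword": 0.20, "armor": 0.25, "helmet": 0.10}

    # Fibonacci-style progression: each total cost is the sum of the previous two
    totals = [50, 100]
    while len(totals) < len(suits):
        totals.append(totals[-1] + totals[-2])

    for suit, total in zip(suits, totals):
        # round half up, e.g. 25% of 250 -> 63 as in the table
        costs = {piece: int(total * pct + 0.5) for piece, pct in percentages.items()}
        print(suit, total, costs)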

    The most important thing is we didn’t have to spend hours making these prices individually. 

    We have at the very least a general guideline. And once we have a guideline that works, that we understand, and that curves the way we want (meaning the player progresses at the rate we want, and slows down where we want), we can add elements wherever we want. And feel free to fudge the numbers: give the player a cool Fire Sword and increase the value by 10%, or 5%, or 500gp.

    [This article originally appeared on Bud Leiser's personal blog.]

    Bud Leiser beat Nintendo’s original Zelda when he was just 3 years old. Then went on to win money and prizes playing: D&D Miniatures, Dreamblade, Magic the Gathering and The Spoils. He’s just returned from Vietnam where he helped manage Wulven Studios as their Lead Game Designer. He was responsible for creating internal projects, game design documents and communicating with clients to help them succeed in the post-freemium app market.   

    Geeks3D Forums

    AMD Core Math Library (ACML) beta 6.0 available

    April 17, 2014 12:39 PM

    Quote
    ACML Beta 6.0 Release Leverages the Power of Heterogeneous Compute

    AMD is releasing a Beta of the next release of the AMD Core Math Library (ACML).  ACML provi...



    Timothy Lottes

    A Survey of Efficient Representations for Independent Unit Vectors

    by Timothy Lottes (noreply@blogger.com) at April 17, 2014 11:26 AM

    A Survey of Efficient Representations for Independent Unit Vectors