Planet Gamedev

Geeks3D Forums

WhitestormJS - WebGL engine based on Three.js, incl. physics and post effects

February 09, 2016 10:23 PM

WhitestormJS is based on the Three.js engine, which is made for developers who need a useful wrapper arou...

Gamasutra Feature Articles

Layoffs at Gigantic developer Motiga

February 09, 2016 10:08 PM

The development of the upcoming MOBA, currently in beta, appears to be in danger -- as the developer lays off staff and seeks funding to complete the game. ...

Mad Catz lays off 37% of staff in restructuring plan

February 09, 2016 09:38 PM

The embattled peripheral maker, which shed a large number of executives yesterday, is making deep cuts across the board. ...

Game From Scratch

Hands On With The Lumberyard Game Engine

by Mike@gamefromscratch.com at February 09, 2016 08:38 PM

 

Today Amazon launched the Lumberyard Game Engine, a modified version of Crytek’s CryEngine with the complete source code included.  Free.  So, what’s the catch?  You have to run your game’s server component either on your own servers or using Amazon’s web services.  Yeah, that’s it.  A pretty sweet deal all around.  So today I took a quick look at the contents of the engine, as you can see below or in the video available at the bottom of this page.  I have only a couple hours of experience with the engine, so don’t expect an in-depth review; this is just a quick hands-on to give you an idea of what Lumberyard is and what you get.

 

Lumberyard installs as a 10GB zip file.  Simply extract the contents of that archive to a folder on your system.  Its contents look like:

image

Next run LumberyardLauncher.bat and you get this initial configuration window:

image

It unfortunately does not seem to play nice with high DPI displays :(.  Check all the features you require, then click Next.  You will then be informed what software you need to install:

image

Thankfully (for my cellular bill at least!) each component already appears to be locally available.  Next it’s going to ask you for a variety of SDKs.  Most of them are optional, such as the Photoshop, Max and Maya plugin SDKs, but some are not, such as the NVIDIA and PowerVR SDKs.  Annoyingly, some of these require registering an account to download.  Both SDKs are small downloads, but the PVRTexTool will download 100+MB of files.

image

Finally you will be asked which plugins you wish to configure, and we are finally done with the configuration phase.  Now simply click Configure Project to create a new Project:

image

 

This will launch the Lumberyard Project Configurator tool:

image

 

For now I’m simply going to click Launch Editor for the SamplesProject project.  Now we wait...

image

Ugh, another login:

image

Finally!

image

The Getting started guide link doesn’t work right now.  Instead use the URL https://aws.amazon.com/documentation/lumberyard/ to access the documentation.

 

I’m going to go ahead and create a new level.

image

Straightforward so far.

 

And finally the Lumberyard editor:

image

Again, high DPI support isn’t great, as you can see from the cropped menu panel across the bottom.  Let’s take a look at some of the included tools.

 

Terrain Editor

image

 

Asset Browser

image

 

Flow Graph (Visual Scripting)

image

 

Dialog Editor

image

 

Geppetto (Animations)

image

 

Database View

Img1

 

UI Editor

image

 

Obviously this is only a very surface-level look at some of the tools that are bundled with the Lumberyard Game Engine.  It certainly has me interested in learning more.  If you are interested in seeing tutorials for this game engine, let me know and I will dive in deeper.  Even if I don’t create a tutorial series, I will certainly review this game engine in the typical “Closer Look” style.

 

There is a quick video hands on available, embedded below or in 1080p here.

Gamasutra Feature Articles

Post-hack, VTech is back, and shifting responsibility for data security to its users

February 09, 2016 08:34 PM

"YOU ACKNOWLEDGE AND AGREE THAT ANY INFORMATION YOU SEND OR RECEIVE DURING YOUR USE OF THE SITE MAY NOT BE SECURE AND MAY BE INTERCEPTED OR LATER ACQUIRED BY UNAUTHORIZED PARTIES," the new TOS reads. ...

'Oculus Ready' PCs debut, from $949 to $2,549

February 09, 2016 07:57 PM

The cheapest bundle, including headset, will set buyers back $1499, as the launch of the much-anticipated headset draws near. ...

BioWare scribe David Gaider joins Baldur's Gate dev Beamdog as creative director

February 09, 2016 07:14 PM

The company best known for enhanced remakes of BioWare games now appears poised to leap into more original products, with the official blog saying Gaider will "direct new creative endeavors." ...

Confetti Special FX

Amazon Lumberyard

by Wolfgang at February 09, 2016 07:07 PM

We are helping with Amazon Lumberyard:

AWS Amazon Lumberyard website

Amazon Blog

Kotaku Blog Post

Gamasutra Feature Articles

Blog: Pony Island and the horror of social anxiety

February 09, 2016 06:51 PM

"Pony Island appeared to reach outside of those confines, out into the real world, into the world of my friends. The game scared me in ways that no game has ever scared me before." ...



Come to GDC for a behind-the-scenes look at Vainglory's eSports success

February 09, 2016 06:15 PM

Kristian Segerstrale, COO and executive director of Super Evil Megacorp, will share some lessons learned from the development and promotion of the studio's standout mobile MOBA Vainglory at GDC 2016. ...

Structures and techniques for interactive story

February 09, 2016 06:01 PM

"We're going to run down a list of some of the techniques existing games have used to promote a sense of player-involvement in story. We'll start by looking at a few variants on the Path Structure." ...

Encrypting in-game text in Unity

February 09, 2016 05:20 PM

Want to include encrypted text in your game? This highly technical how-to spells out everything you need, complete with programming code and font examples. ...

Video: Watch Tim Schafer play Beyond Good & Evil alongside its designer, Michel Ancel

February 09, 2016 05:19 PM

Beyond Good & Evil designer Michel Ancel sat down with Double Fine's Tim Schafer and Greg Rice to wax lyrical about the open-world adventure game 13 years after it made its PlayStation 2 debut. ...

Game From Scratch

Superpowers Tutorial Series–Part Three: Sprites And Animations

by Mike@gamefromscratch.com at February 09, 2016 05:14 PM

 

In the first tutorial we looked at installing Superpowers, running the server and creating our first project.  In the second tutorial we got to know the editor and the relationship of Actors and Components, and finally created a camera for our game.  In this tutorial we are going to draw a sprite on screen, then use a sprite sheet to enable animation.  So without further ado, let’s jump in.

 

There is one more important part of the relationships in Superpowers to understand.  As we’ve seen already, our Scene is composed of Actors, which in turn are made up of Components.  Components however can have Assets.  Assets on the other hand are NOT part of the Scene.  They actually exist at the same level as the Scene and can be used in multiple scenes.  Assets are declared on the left hand side of the interface.

 

Adding a Sprite to Your Game

On the left hand side of the screen click the page with a plus sign icon (the same we used to create the scene):

image

In the resulting dialog, select Sprite:

image

The sprite options will now be available in the inspection window on the right hand side of the screen.  In that window, select Upload and navigate to the image file you are going to use for your game.  Be sure to use a web-friendly format such as JPG or PNG.  Then set the grid size to the same dimensions as the Texture Size, like so:

image

 

You could also modify the pivot point of the sprite by changing the origin position.  The default of 50% will put the pivot point in the middle of the sprite, so this is the point the sprite will be drawn and transformed relative to.  You can also configure the opacity, how the sprite is rendered, etc.  The Grid Size parameter is used when you have multiple sprites in a single texture, which we will use later.

 

Adding a Sprite Component

Now that we have a sprite asset available in our game, let’s add one to our scene.  First on the left hand side, or using one of the tabs across the top of the editor, select your Scene.  Next create a new Actor, just like we covered in the previous tutorial, name it sprite or something similar.  Finally click the New Component button and select Sprite Renderer, then click Create:

image

Now there should be a Sprite Renderer component available.  From the Asset window on the left side of your screen, drag the sprite you added to the Sprite field of the Sprite Renderer in the Inspector, like so:

GIF

 

Tada, we’ve added a Sprite to our game world and it should now show up in the View in the center of your screen:

image

The sprite can be positioned using the transform widget, or directly in the Transform component on the right.

 

Running Your Game

Now that we’ve got a camera and something on screen, let’s take a moment to actually run our game.  There is a small amount of configuration we need to do.  On the left hand side of the screen, locate the “Settings” link, click away!:

image

 

In the resulting form, select your scene as the starting scene (via drag and drop):

image

 

We have one final task to perform.  Our Camera and our Sprite are both at the same Z location, meaning that, at least initially, nothing will be in the view of the camera.  You have one of two options: you can either position all of your sprites at a negative Z location, or you can move your camera to z=1.  The latter is the easier option, so I will go that route.  Select your camera actor, then its Transform component, and set the Z value to 1:

image

 

Now we press either Play or Debug in the top left menu.  The debug option will load Chrome with the developer tools enabled, making it possible to detect errors in your code.  The Play option will run it in the Superpowers player.  Either way, we should see:

image

Congratulations on your first successful game!

 

Using a Spritesheet

Now let’s take a look at how we can use multiple sprites in a single texture, often known as a spritesheet.  I’m using this simple 3x1 sprite sheet:

sheet

 

Add it as an asset like we did earlier.  This time however, after we upload the image, we want to configure the grid size using this button:

image

 

When prompted, enter the number of rows (1) and columns (3) in your image, or simply enter the width and height of each frame of animation in the text boxes.  Now let’s create a new animation.  Simply click the New button under Animations and name it walk.  Then in the settings we set (I believe, the UI does not make it obvious) the first frame of the animation, the length and the number of frames to step by.  I also set the animation speed to 3 frames/sec, meaning our 3 frames of animation will play once per second.

image

 

And the end result:

GIF

Ignore the twitch, that’s just me capturing the animated gif at the wrong end point.

Gamasutra Feature Articles

These 15 games will be playable in the Indie Megabooth Showcase at GDC 2016

February 09, 2016 05:01 PM

The Indie MEGABOOTH Showcase is coming back to GDC 2016 with a lineup of 15 interesting indie games that all conference attendees can play, including We Are Chicago and Elsinore. ...



How Unravel's memorable protagonist Yarny was woven together

February 09, 2016 04:31 PM

"My design process became more about stopping iteration. It was about accepting Yarny was finished and resisting the urge to change it. It just felt wrong." ...

Sales and profits up at Bandai Namco as video game revenue rises

February 09, 2016 04:08 PM

Bandai Namco has posted its financials for the nine month period ending December 31, and both sales and profits are on the up.  ...

Geeks3D Forums

HWiNFO32 & HWiNFO64 v5.20 Released

February 09, 2016 03:46 PM

Changes in HWiNFO32 & HWiNFO64 v5.20 - Released on:  Feb-9-2016: 

  • Fixed monitoring of total/NAND reads/writes for some SanDisk drives.
  • Extended NVMe S.M.A.R.T. status and fixed total ...



NVIDIA GeForce driver 361.28 for Linux, FreeBSD and Solaris

February 09, 2016 03:37 PM

Linux Display Driver - x86 NVIDIA Certified

Intel HD Graphics Driver 20.19.15.4380 beta

February 09, 2016 03:31 PM

This beta version driver is provided to confirm pending driver code changes that address numerous reported game play issues.  A list of issues addressed is below.  A production version driver will follow in a few weeks, if the...

Gamasutra Feature Articles

Road to the IGF: Garbos, Kvale and Nyström's Progress to 100

February 09, 2016 02:23 PM

"I completely stopped looking at an iPhone as a touch screen device and started looking at it as a magic little box full of possibilities." ...



Rovio spins off books business as restructuring continues

February 09, 2016 01:47 PM

Rovio is attempting to shed even more weight by spinning off its books business with the creation of new affiliate company, Kaiken Publishing. ...

Game From Scratch

Amazon Release Lumberyard Game Engine

by Mike@gamefromscratch.com at February 09, 2016 01:09 PM

 

It’s not every day that there is a new player in the AAA game space, but that’s exactly what just happened with the release of Lumberyard by Amazon.  Amazon has been getting more and more involved with gaming, with the launch of their own game studio coupled with their purchase of Double Helix Games back in 2014.  Their cloud computing services, AWS (and more specifically EC2 and S3), have proven incredibly popular with game developers, providing the networking back end for companies such as Rovio and Ubisoft.  Today, however, they made a much bigger splash with the release of a complete game engine, Lumberyard.

Now Lumberyard isn’t actually a brand new engine; in fact it appears to be a mashup of a number of technologies including CryEngine, in-house tools created by Double Helix Games and cloud services from AWS, specifically the new Amazon GameLift service, which is described as:

Amazon GameLift, a managed service for deploying, operating, and scaling session-based multiplayer games, reduces the time required to build a multiplayer backend from thousands of hours to just minutes. Available for developers using Amazon Lumberyard, Amazon GameLift is built on AWS’s highly available cloud infrastructure and allows you to quickly scale high-performance game servers up and down to meet player demand – without any additional engineering effort or upfront costs.

Lumberyard will also feature Twitch integration and, perhaps most interestingly, launch with support, both in forum and tutorial form and in a paid form, something that is often lacking.  The Lumberyard tools only run on Windows 7, 8 and 10, while the supported targets at launch are Windows, PS4 and Xbox One.  Of course a developer license is required to target either console.  About the technical bits of Lumberyard:

The Lumberyard development environment runs on your Windows PC or laptop. You’ll need a fast, quad-core processor, at least 8 GB of memory, 200 GB of free disk space, and a high-end video card with 2 GB or more of memory and Direct X 11 compatibility. You will also need Visual Studio 2013 Update 4 (or newer) and the Visual C++ Redistributables package for Visual Studio 2013.

The Lumberyard Zip file contains the binaries, templates, assets, and configuration files for the Lumberyard Editor. It also includes binaries and source code for the Lumberyard game engine. You can use the engine as-is, you can dig in to the source code for reference purposes, or you can customize it in order to further differentiate your game. The Zip file also contains the Lumberyard Launcher. This program makes sure that you have properly installed and configured Lumberyard and the third party runtimes, SDKs, tools, and plugins.

The Lumberyard Editor encapsulates the game under development and a suite of tools that you can use to edit the game’s assets.

The Lumberyard Editor includes a suite of editing tools (each of which could be the subject of an entire blog post) including an Asset Browser, a Layer Editor, a LOD Generator, a Texture Browser, a Material Editor, Geppetto (character and animation tools), a Mannequin Editor, Flow Graph (visual programming), an AI Debugger, a Track View Editor, an Audio Controls Editor, a Terrain Editor, a Terrain Texture Layers Editor, a Particle Editor, a Time of Day Editor, a Sun Trajectory Tool, a Composition Editor, a Database View, and a UI Editor. All of the editors (and much more) are accessible from one of the toolbars at the top.

In order to allow you to add functionality to your game in a selective, modular form, Lumberyard uses a code packaging system that we call Gems. You simply enable the desired Gems and they’ll be built and included in your finished game binary automatically. Lumberyard includes Gems for AWS access, Boids (for flocking behavior), clouds, game effects, access to GameLift, lightning, physics, rain, snow, tornadoes, user interfaces, multiplayer functions, and a collection of woodlands assets (for detailed, realistic forests).

Coding with Flow Graph and Cloud Canvas
Traditionally, logic for games was built by dedicated developers, often in C++ and with the usual turnaround time for an edit/compile/run cycle. While this option is still open to you if you use Lumberyard, you also have two other options: Lua and Flow Graph.

Flow Graph is a modern and approachable visual scripting system that allows you to implement complex game logic without writing or modifying any code. You can use an extensive library of pre-built nodes to set up gameplay, control sounds, and manage effects.

Flow graphs are made from nodes and links; a single level can contain multiple graphs and they can all be active at the same time. Nodes represent game entities or actions. Links connect the output of one node to the input of another one. Inputs have a type (Boolean, Float, Int, String, Vector, and so forth). Output ports can be connected to an input port of any type; an automatic type conversion is performed (if possible).

There are over 30 distinct types of nodes, including a set (known as Cloud Canvas) that provide access to various AWS services. These include two nodes that provide access to Amazon Simple Queue Service (SQS), four nodes that provide access to Amazon Simple Notification Service (SNS), seven nodes that provide read/write access to Amazon DynamoDB, one to invoke an AWS Lambda function, and another to manage player credentials using Amazon Cognito. All of the game's calls to AWS are made via an AWS Identity and Access Management (IAM) user that you configure into Cloud Canvas.

Finally we come to price.  Lumberyard is free*.  I say free* instead of free because of course there is a catch, but an incredibly fair one in my opinion.  If you use Lumberyard, your game’s servers either have to be hosted on AWS or on your own hardware; basically you can’t use Lumberyard and then host on a competitor such as Azure or Rackspace.  Pricing is always a bit tricky when it comes to Amazon services, but unlike Google, they have never once screwed their user base (Google once jacked up prices by an order of magnitude, overnight, forever souring me on their technology), so you are pretty safe in this regard.  More details on pricing:

Amazon GameLift is launching in the US East (Northern Virginia) and US West (Oregon) regions, and will be coming to other AWS regions as well. As part of AWS Free Usage tier, you can run a fleet comprised of one c3.large instance for up to 125 hours per month for a period of one year. After that, you pay the usual On-Demand rates for the EC2 instances that you use, plus a charge for 50 GB / month of EBS storage per instance, and $1.50 per month for every 1000 daily active users.

I intend to look closer at the Lumberyard game engine as soon as possible, so expect a preview, review or tutorial shortly.

Gamasutra Feature Articles

Hard stats on the development and sales of a 2D survival adventure/RPG on PC

February 09, 2016 12:01 PM

Candid game development costs and sales stats, as well as a breakdown on player behavior, piracy, and lessons for the future: it's all here, served up by Dead in Bermuda devs. ...



Blog: How can eSports go mainstream?

February 09, 2016 09:00 AM

"eSports is essentially a spectator sport. So, for eSports to really explode, focus needs to change from the 'best-of-the-best' to the common competitor." ...

Writing Firewatch, and capturing the beauty of being alone

February 09, 2016 09:00 AM

"People go to Wyoming because they're captivated by the beauty and the aloneness. Growing up there creates a certain feeling inside you that you don't really get rid of for the rest of your life." ...

Geeks3D Forums

Videogames released since 1971

February 09, 2016 08:11 AM

Mobygames stats released on January 15 2016:



Source: http://www.mobygames.com/forums/dga,2/dgb,3/dgm,216724/



Gamasutra Feature Articles

Amazon launches new, free, high-quality game engine: Lumberyard

February 09, 2016 08:01 AM

Built using CryEngine's tech and completely free to download and use, this new engine -- which can currently deploy products on PC, PlayStation 4, and Xbox One -- is powered by Twitch and Amazon Web Services. ...

Video game video hub GameTrailers is shutting down this week

February 09, 2016 01:14 AM

GameTrailers announced today via its Twitter account that the video game video hub is shutting down, over a year after it was sold to Defy Media by Viacom in a layoff-ridden 2014 deal.  ...

Gamasutra Feature Articles

Get a job: Zenimax Online Studios seeks a Sound Designer

February 08, 2016 10:01 PM

Elder Scrolls Online developer Zenimax Online Studios seeks to hire an experienced sound designer to work alongside the team at its studio in Rockville, Maryland. ...



Dofus dev unlocks in-game rewards based on movie ticket sales

February 08, 2016 09:45 PM

How do you get people to go watch your video game movie? If you're Dofus developer Ankama, you take a page from Kickstarter and tie tiers of in-game rewards to movie ticket sales milestones. ...

Top brass resign amid Mad Catz executive shuffle

February 08, 2016 08:56 PM

Change is in the wind at video game peripheral maker Mad Catz, as president and CEO Darren Richardson, senior VP of business affairs Whitney Peterson and company chairman Thomas Brown have resigned. ...

Short vs. long-term progression in game design

February 08, 2016 08:36 PM

How do you keep players interested? "Combining both short and long-term progression provides the best combination of progression models to keep someone engaged in your title." ...

OpenGL

User Interface with Ant Tweak Bar Library Published

February 08, 2016 08:15 PM

The 48th installment in a series of tutorials dedicated to promoting modern OpenGL development, with a focus on version 3.x and beyond. This tutorial demonstrates how to integrate the Ant Tweak Bar library in an OpenGL application in order to create a user interface.

Gamasutra Feature Articles

Don't Miss: Designing for choice and exploration in Firewatch

February 08, 2016 08:12 PM

Campo Santo team members Nels Anderson and Jake Rodkin speak at length about how Firewatch is being designed to tell a story and allow meaningful player exploration/choice -- without combat. ...

State of the Industry: Enough with the skeletons

February 08, 2016 07:23 PM

Over 2,000 devs surveyed in GDC's State of the Industry survey answer the vague question: "Is there anything else you'd like to say about the game industry?" As you can imagine, it gets quite heated. ...

Road to the IGF: Red Hook Studios' Darkest Dungeon

February 08, 2016 06:52 PM

'Developing in Early Access is like working while naked in a transparent cube suspended above Times Square. But Darkest Dungeon is a stronger game for having gone through it.' ...

Respawn looks to buck the 'no campaign' trend with Titanfall 2

February 08, 2016 06:48 PM

Respawn is reportedly trying its hand at a single-player campaign and a TV spin-off for Titanfall 2, according to comments made by lead writer Jesse Stern in an interview with Forbes. ...

Techniques for procedurally generated worlds in Unity

February 08, 2016 06:04 PM

This post gets deep into world generation technique: "A heat map defines the temperature of our generated world. The heat map we are going to create will be based on latitude and height." ...

Q&A: Serious VR design lurks just beneath Job Simulator's goofy premise

February 08, 2016 05:48 PM

Beneath the humor and disembodied white gloves of Owlchemy Labs' Job Simulator 2050 is a virtual reality game that incorporates some seriously solid VR interaction design fundamentals. ...

OpenGL

Learn about Vulkan directly from Khronos in a one hour webinar

February 08, 2016 05:24 PM

Learn about Vulkan, the new graphics and compute API directly from Khronos, the people who are creating it. In this 1-hour session, we will talk about the API, and go into details about the Vulkan SDK from LunarG, and much more. Register today!

Gamasutra Feature Articles

Blog: Why are RPGs so hard to classify?

February 08, 2016 05:23 PM

"I've decided to sit down and rant a bit on why it's so hard to define this genre, and also why it's a genre that sometimes end restricting its games. Some of it will be obvious, but I hope to offer some decent insights." ...



GDC: See how Paradox balances historical accuracy with good game design

February 08, 2016 05:05 PM

Paradox Interactive senior game designer Chris King is going to delve into the tricky balancing act that comes with making games about historical events at GDC 2016. Don't miss it! ...

Daigo 'The Beast' Umehara gets nostalgic on Street Fighter II's 25th anniversary

February 08, 2016 05:02 PM

"Street Fighter II enthralled me just as it did to so many others. If there had been no SF2, I would not have been here today as a pro gamer." ...

Independent VR platform Transport nets $25 million

February 08, 2016 04:58 PM

California-based virtual reality startup Wevr has received $25 million from investors including HTC and Samsung to further develop its independent VR distribution platform, Transport.  ...

Twitch's top 10 for 2015: League of Legends cracks 1 billion hours watched

February 08, 2016 04:57 PM

"Four PC games, League of Legends, Counter-Strike: Global Offensive, DotA 2 and Hearthstone, are literally dwarfing the competition on Twitch." ...

Geeks3D Forums

Sangokushi 13 - Japanese game benchmark

February 08, 2016 04:35 PM

Sangokushi 13

latest iteration of Romance of the Three Kingdoms

Ridiculous clipping bugs and unplayable...

Gamasutra Feature Articles

Blog: Austin's secret histories

February 08, 2016 03:56 PM

"I'm fascinated by all the things that lead up to people making up pastimes, toys and time-wasters generally referred to as 'games.' That includes where they live, who they associate with, how they regard themselves and each other." ...



Final Fantasy Tactics designer's crowdfunded RPG, Unsung Story, put on hold

February 08, 2016 03:13 PM

"For the financial strength of the company we need to focus on a few products in the near term that have the ability to get to a retail release before Unsung Story is able to." ...

Apple rejects The Binding of Isaac due to depictions of violence towards children

February 08, 2016 02:11 PM

An iOS port of psychological-horror shooter The Binding of Isaac: Rebirth has been rejected by Apple due to its depiction of violence towards children.  ...

Game From Scratch

BDX 0.2.3 Released

by Mike@gamefromscratch.com at February 08, 2016 01:27 PM

 

This story comes care of /r/gamedev: BDX has released version 0.2.3.  BDX is a game engine hosted inside Blender, using LibGDX and Java for game programming.  Essentially it enables you to define and create your game in Blender, including complete physics integration, while generating LibGDX code.  I did a pretty in-depth tutorial on working with BDX a while back.

In this release:

Here's a short change-log:

  • Per-pixel sun, point, and spot lighting. As it was before, you can simply create the lights in Blender to have them show up in-game, or spawn them during play.
  • Ability to turn off per-pixel lighting for lower-spec targeted platforms and devices.
  • Improvements to the profiler.
  • GameObjects can now switch the materials used on their mesh. You can specify the name of a material available in the scene in Blender, or you can directly provide a LibGDX material to use, in case you have one custom-made.
  • Various fixes and QOL improvements.

Check it out! We could always use some more feedback and testing.

It’s a cool project, and if you are working with Blender and LibGDX it is certainly something you should check out!



Atomic Game Engine 2016 Road Map Released

by Mike@gamefromscratch.com at February 08, 2016 01:17 PM

 

The road map for the Atomic Game Engine, which we looked at late last year, was just released and highlights upcoming developments for the engine.

2016 Roadmap

DISCLAIMER: As with most roadmaps, this one is subject to change. This is a snapshot of current planning and priorities, things get moved around, opportunities happen, etc. It is also not “complete”

  1. New WebSite - We need a new website, badly. The main page and landing video have not been updated since the initial March 4th Early Access!
  2. New User Experience, documentation and tutorial videos
  3. Improved iOS/Android deployment with support for shipping on App Store/Google Play. We also plan on publishing a mobile iOS/Android example
  4. Continued work on editor asset pipeline, scene editor, etc
  5. WebGL improvements, there is a lot going on currently with WebGL and we need to update the build and provide a means to communicate with the page JavaScript
  6. Script debugging with breakpoints, callstacks, locals, etc, including on device
  7. First class TypeScript support with round trip code editing, compiling, debugging
  8. Basic Oculus Rift support (Q2)
  9. Multiple top level windows for the Atomic Editor
  10. Improvements to the new Chromium WebView API
  11. Examples, examples, examples, including a bigger “full game” example
  12. Animation Editor
  13. Evaluate lightmap generation with Blender cycles
  14. The things that need to happen, or are under NDA, and are not listed on this roadmap :)

In addition to the roadmap, a thorough history of the engine and the company and people behind it is available here.

Gamasutra Feature Articles

Maximizing game YouTube-ability with camera jitter

February 08, 2016 12:01 PM

"We all want to make games played by YouTubers. That's only reasonable, given the role that YouTube has in game publicity in 2016. But what technical and aesthetic choices can we make to get the most out of their play sessions?" ...

Why I trust Valve's judgment on seasonal sales

February 08, 2016 09:02 AM

Defender's Quest developer Lars Doucet on what Valve's getting right: "The games you see on your front page now depend mostly on you. And that's as it should be." ...

c0de517e Rendering et alter

Low-resolution effects with depth-aware upsampling

by DEADC0DE (noreply@blogger.com) at February 07, 2016 04:24 PM

I have to confess, till recently I was never fond of doing half or quarter res effects via a bilateral upsampling step. It's a very popular technique, but every time I tried it I found it caused serious edge artifacts... 
On Fight Night Champion I ended up shipping AO and deferred shadows without any depth aware upsampling (just separating the ring and fighters from the background, and using a bias towards over-shadowing); Space Marines ended up shipping with a bilateral upsampling on AO (but no bilateral blurring or noise) but it still had artifacts. In the end it sort-of worked, via some hacks that were good enough to ship, but that I never really understood.

For Call of Duty Black Ops 3 we needed to compute some effects (volumetric lighting) at quarter resolution or less to respect the performance budgets we had, so depth-aware upsampling was definitely a necessity, and I needed to investigate it a bit more.
A quite extreme example of "god rays" in COD:BO3
I found a solution that is very simple, that I understand quite well, and that works well in practice. I'm sure it's something many other games are doing and many other people discovered (due to its simplicity), but I'm not aware of it being presented publicly, so here it is, my notes on how not to suck at bilateral upsampling:

1) Bilateral weighting doesn't make a lot of sense for upsampling.

The most commonly used bilateral upsampling scheme works by using the same four texels that would be involved in bilinear filtering, but changing their weights by multiplying them by a function of the depth difference between the true surface (high res z-buffer) and their depths (low-res z-buffer).

This method makes little sense, really, because you can have the extreme case where the bilinear weights select only one sample, but that sample is not similar to the surface depth you need at all! Samples that are not detected to be part of the full-res surface should simply be ignored, regardless of how "strongly" bilinear wants to access them...

A better option is to simply -choose- between bilinear filtering or nearest depth point sampling, based on if the low-res samples are part of the high-res surface or not. This can be done in a variety of ways, for example:

- lerp(bilinear_weights, depth_weights, f(depth_discontinuity)) * four_samples
- lerp(bilinear_sample, best_depth_sample, f(depth_discontinuity))
- bilinear_fetch(lerp(bilinear_texcoords, best_depth_texcoords, f(depth_discontinuity)))

Where the weighting function f() is quite "sharp" or even just a step function. The latter scheme is similar to NVIDIA's "nearest depth sampling"; it's the fastest alternative, but in Black Ops 3 I ended up sharply going from bilateral to "depth only" weights if too big a discontinuity is detected in the four bilinear texels.
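To make the selection logic concrete, here is a minimal C++ sketch of that last idea: fall back from plain bilinear filtering to the nearest-depth sample when the 2x2 low-res neighborhood straddles a discontinuity. All names, the buffer layout and the threshold are hypothetical illustrations (the real thing lives in a pixel or compute shader), not the shipped code.

#include <array>
#include <cmath>

// One low-res tap: the half/quarter-res effect value and its low-res depth.
struct Tap {
    float value;
    float depth;
};

// Depth-aware upsample of one full-res pixel from its four low-res neighbors.
// 'bilinearW' are the usual bilinear weights for the 2x2 quad,
// 'fullResDepth' is the full-res depth of the pixel being reconstructed.
float UpsampleDepthAware(const std::array<Tap, 4>& taps,
                         const std::array<float, 4>& bilinearW,
                         float fullResDepth,
                         float depthThreshold) // hypothetical view-space tolerance
{
    float bilinear  = 0.0f;
    int   nearest   = 0;
    float bestDiff  = std::fabs(taps[0].depth - fullResDepth);
    float worstDiff = bestDiff;

    for (int i = 0; i < 4; ++i) {
        bilinear += taps[i].value * bilinearW[i];
        const float diff = std::fabs(taps[i].depth - fullResDepth);
        if (diff < bestDiff)  { bestDiff = diff; nearest = i; }
        if (diff > worstDiff) { worstDiff = diff; }
    }

    // If all four taps lie (roughly) on the same surface as the full-res pixel,
    // plain bilinear filtering is fine; otherwise use the nearest-depth tap only.
    return (worstDiff > depthThreshold) ? taps[nearest].value : bilinear;
}

A softer variant would lerp between the two results with f(depth_discontinuity) instead of using a hard switch, as in the schemes listed above.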

2) Choose the low-res samples to maximise the chances of finding a representative.

It's widely known that a depth buffer can't be downsampled by averaging values: that would result in depths that do not exist in the original buffer, and that are not representative of any surface but are "floating" in between surfaces at edge discontinuities. So either min or max filtering is used, commonly preferring nearest-to-camera samples, with the reasoning that closest surfaces are more important, and thus should be sampled more (McGuire tested various strategies in the context of SSAO, see Table 1 here).

But if we think in terms of the reconstruction filter and its failure cases, it's clear that preferring a single set of depths doesn't make a lot of sense. We want to maximize the chance of finding, among the texels we consider for upsampling, some that represent well the surfaces in the full resolution scene. Effectively, in the downsampling step we're selecting the points at which we want to compute the low-res effect; clearly we want to do that so we distribute samples evenly across surfaces.

A good way of doing this is to choose, for each sample in the downsampled z-buffer, a surface that is different from the ones of its neighbors. There are many ways this could be done, but the simplest is to just alternate min and max downsampling in a checkerboard pattern, making sure that for each 2x2 quad, if we are in a region that has multiple surfaces, at least two of them will be represented in the low-res buffer. 

In theory it's possible to push even more surfaces in a quad, for example we could record the second smallest or second biggest, or the median or any other scheme (even a quasi-random choice) to select a depth (we shouldn't use averages though, as these will generate samples that belong to no surface), but in practice this didn't seem to work great with my upsampling, I guess because it reduces spatial resolution in favour of depth resolution, but your mileage may vary depending on the effect, the upsampling filter and the downsampling ratio.
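As an illustration of the basic checkerboard idea described above, a hypothetical CPU-side sketch (again, not the shipped code) of the 2x depth downsample could alternate min and max selection based on the parity of the destination texel, so every 2x2 low-res quad keeps both a near and a far representative:

#include <algorithm>
#include <cstddef>
#include <vector>

// Downsample a row-major depth buffer by 2x, alternating min and max selection
// in a checkerboard pattern. Width and height are assumed to be even.
std::vector<float> DownsampleDepthCheckerboard(const std::vector<float>& depth,
                                               int width, int height)
{
    const int outW = width / 2;
    const int outH = height / 2;
    std::vector<float> out(static_cast<std::size_t>(outW) * outH);

    for (int y = 0; y < outH; ++y) {
        for (int x = 0; x < outW; ++x) {
            // The 2x2 full-res footprint of this low-res texel.
            const float d00 = depth[(2 * y + 0) * width + (2 * x + 0)];
            const float d01 = depth[(2 * y + 0) * width + (2 * x + 1)];
            const float d10 = depth[(2 * y + 1) * width + (2 * x + 0)];
            const float d11 = depth[(2 * y + 1) * width + (2 * x + 1)];

            const bool takeMin = ((x + y) & 1) == 0; // checkerboard parity
            out[static_cast<std::size_t>(y) * outW + x] = takeMin
                ? std::min({d00, d01, d10, d11})
                : std::max({d00, d01, d10, d11});
        }
    }
    return out;
}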

Some residual issues can be seen sometimes (upper right),
when there is no good point sample in the 2x2 neighborhood.

Further notes.

The nearest-depth upsampling with a min/max checkerboard pattern downsampling worked well enough for Black Ops 3 that no further research was done, but there are still things that could be clearly improved:

- Clustering for depth selection.
A compute shader could do actual depth clustering to try to understand how many surfaces there are in an area, and chose what depths to store and the tradeoffs between depth resolution and screenspace resolution.

- Gradients.
Depth discontinuity in the upsampling step is a very simplistic metric, more information can be used to understand if samples belong to the same surface, like normals, g-buffer attributes and so on.

- Wider filters.
Using a 2x2 quad of samples for the upsampling filter is convenient as it allows to naturally fall back to bilinear if we think the samples are representative of the high-res surface, but there is no reason to limit the search to such neighborhood, wider filters could be used, both for higher-order filtering and to have better chances of finding representative samples.

- Better filtering of the representative depth samples.
There is no reason to revert to point-sampling in presence of discontinuities (or purely depth-weighted sampling), it's still possible to reject samples that are not representative of the surface while weighting the useful ones with a filter that depends on the subtexel  position.
Special cases could be considered for horizontal and vertical edges, where we could do 1d linear interpolation on the axis of the surface. Bart Wronski has something along these lines here (and the idea of baking an UV offset to be reused by different effects also allows in general to use more complex logic, and amortize it among effects).

- "Separable" bilateral filters.
Often when depth-aware upsampling is employed we also use depth-aware (bilateral) filters, typically blurs. These are often done in separate horizontal/vertical passes, even if technically such filters are not separable at all. 
This is particularly a problem with depth-aware filters because the second pass will use values that are not anymore relative to the depths in the low-res depth buffer, but result from a combination of samples from the first pass, done at different depths.

The filter can still look right if we can always correctly reject samples not belonging to the surface at center texel of a filter, because anyway the filtered value is from the surface of the center texel, so doing the second pass using a rejection logic that uses attributes (depth...) at the center of the filtered value sort-of works (it's still a depth of the right surface). 
In practice though that's not always the case, especially if the rejection is done with depth distances only, and it causes visible bleeds in the direction of the second filter pass. A better alternative in these cases (if the surface sample rejection can't be fixed...) is to do separate passes not in an horizontal/vertical fashion but in a staggered grid (e.g. first considering a NxN box filter pass then doing a second pass by sampling every N pixels in horizontal and vertical directions).

Geeks3D Forums

Shadertoy - Elephant

February 07, 2016 02:49 PM

Quote
Elephant

Signed distance field raymarching. Procedural elephants. Or a bunch of ellipsoids and few lines and a couple of quadratic curves. Split it in layers to prevent the compiler from crashing....

c0de517e Rendering et alter

Color grading and excuses

by DEADC0DE (noreply@blogger.com) at February 06, 2016 04:21 PM

I started jotting down some notes for this post a month ago maybe, after watching Bridge of Spies on a plane to New York. An OK movie if you ask me, with very noticeable, heavy-handed color choices and for some reason a heavy barrel distortion in certain scenes. 

Heavy barrel distortion, from the Bridge of Spies trailer. Anamorphic lenses?
I'm quite curious to understand the reasoning behind said distortion, what it's meant to convey, but this is not going to be a post criticizing the overuse of grading; I think that's already something many people are beginning to notice and hopefully avoid. Also I'm not even entirely sure it's really a "problem", it might even just be fatigue.

For decades we didn't have the technology to reproduce colors accurately, so realistic color depiction was the goal to achieve. With digital technology perfect colors are "easy", so we started experimenting with ways to do more, to tweak them and push them to express certain atmospheres/emotions/intentions, but nowadays we get certain schemes that are repeated over and over so mechanically it becomes stale (at least in my opinion). We'll need something different, break the rules, find another evolutionary step to keep pushing the envelope.

Next-NEXT gen? Kinemacolor
What's more interesting to me is of course the perspective of videogame rendering. 

We've been shaping our grading pretty much after the movie pipelines, we like the word "filmic", we strive to reproduce the characteristics and defects of real cameras, lenses, film stocks and so on. 
A surprisingly large number of games, of the most different genres, all run practically identical post-effect pipelines (at least in the broad sense; good implementations are still rare). You'll have your bloom, a "filmic" tone mapping, your color-cube grading, depth of field and motion blur, and maybe vignette and chromatic aberration. Oh, and lens flares, of course... THE question is: why? Why do we do what we do? 

Dying light shows one of the heavier-handed CA in games
One argument that I hear sometimes is that we adopt these devices because they are good, they have so much history and research behind them that we can't ignore. I'm not... too happy with this line of reasoning. 
Sure, I don't doubt that the characteristic curves of modern film emulsions were painstakingly engineered, but still we shouldn't just copy and paste, right? We should know the reasoning that led to these choices and the assumptions made, and check whether these apply to us. 
And I can't believe that these chemical processes fully achieved even the ideal goals their engineers had, real-world cameras have to operate under constraints we don't have.
In fact digital cameras are already quite different than film, and yet if you look at the work of great contemporary photographers, not everybody is rushing to apply film simulation on top of them...

Furthermore, did photography try to emulate paintings? Cross-pollination is -great-, but every media has its own language, its own technical discoveries. We're really the only ones trying so hard to be emulators; Even if you look at CGI animated movies, they seldom employ many effects borrowed from real-world cameras, it's mostly videogames that are obsessed with such techniques.

Notice how little "in your face" post-fx are in a typical Pixar movie...
A better reason someone gave me was the following: games are hard enough, artists are comfortable with a given set of tools, the audience is used to a given visual language, so by not reinventing it we get good enough results, good productivity and scenes that are "readable" from the user perspective.

There is some truth behind this, and lots of honesty; it's a reasoning that can lead to good results if followed carefully. But it turns out that in a lot of cases, in our industry, we don't even apply this line of thinking. And the truth is that more often than not we just copy "ourselves", we copy what someone else did in the industry without too much regard for the details, ending up with a bastard pipeline that doesn't really resemble film or cameras.

When was the last time you saw a movie and you noticed chromatic aberrations? Heavy handed flares and "bloom" (ok, other than in the infamous J.J.Abrams  Star Trek, but hey, he apologized...)? Is the motion blur noticeable? Even film grain is hardly so "in your face", in fact I bet after watching a movie, most of the times, you can't discern if it was shot on film or digitally.
Lots of the defects we simulate are not considered pleasing or artistic, they are aberrations that camera manufacturers try to get rid of, and they became quite versed at it! Hexagonal-shaped bokeh? Maybe on very cheap lenses...


http://www.cs.ubc.ca/labs/imager/tr/2012/PolynomialOptics/
On the other hand lots of other features that -do- matter are completely ignored. Lots of a lens "character" comes from its point spread function, a lens can have a lower contrast but high resolution or the opposite, field curvature can be interesting, out of focus areas don't have a fixed, uniform shape across the image plane (in general all lens aberrations change across it) and so on. We often even leave the choice of antialiasing filters to the user...

Even on the grading side we are sloppy. Are we really sure that our artists would love to work with a movie grading workflow? And how are movies graded anyways? With a constant, uniform color correction applied over the entire image? Or with the same correction applied per environment? Of course not! The grading is done shot by shot, second by second. It's done with masks and rotoscoping, gradients, non-global filters...

A colorist applying a mask
Lots of these tools are not even hard to replicate, if we wanted to; We could for example use stencils to replicate masks, to grade differently skin from sky from other parts of the scene. 
Other things are harder because we don't have shots (well, other than in cinematic sequences), but we could understand how a colorist would work, what an artist could want to express, and try to invent tools that allow a better range of adjustment. Working in worldspace or clipspace maybe, or looking at material attributes, at lighting, and so on.

Ironically, people (including myself at times) are instinctively "against" more creative techniques that would be simple in games, on the grounds that they are too "unnatural", too different from what we think is justified by the real-camera argument; so much so that we pass on opportunities to recreate certain effects that would be quite normal in movies, just because they don't fit exactly the same workflow.

Katana, a look development tool.

Scene color control vs post-effect grading.

I think the endgame though is to find our own ways. Why do we grade and push so much on post effects to begin with? I believe the main reason is that it's so easy: it empowers artists with global control over a scene, and allows them to make large changes with minimal effort. 

If that's the case though, could we think of different ways to make the life of our artists easier? Why can't we allow the same workflows, the same speed, for operations on source assets? With the added benefit of not needlessly breaking physical laws, thus achieving control in a more believable way...


Neutral image in the middle. On the right: warm/cold via grading, on the left a similar effect done editing lights. 
Unlike in movies and photography, for us it's trivial to change the colors of all the lights (or even of all the materials). We can manipulate subsets of these, hierarchically, by semantics, locally in specific areas, by painting over the world, by interpolating between different variants and so on...
Why did we push everything to the last stage of our graphics pipeline? I believe if in photography or movies there was the possibility of changing the world so cheaply, if they had the opportunities we do have, they would exploit them immediately.

Gregory Crewdson

Many of these changes are "easy" as they won't impact the runtime code, they're just smarter ways to organize properties. Many pipelines are even pushing towards parametric material libraries and compositing for texture authoring, which would make even bulk material edits possible without breaking physical models.

We need to think and experiment more. 



P.S. 
A possible concern when thinking of light manipulation is that as the results are more realistic, it might be less appropriate for dynamic changes in game (e.g. transitions between areas). Where grading changes are not perceived as changes in the scene, lighting changes might be, thus potentially creating a more jarring effect.

It might seem I'm very critical of our industry, but there are reasons why we are "behind" other media, I think. Our surface area is huge: engineers and artists have to care about developing their own tools -while using them-, making sure everything works for the player, making sure everything fits on a console... We're great at these things; there's no surprise then that we don't have the same amount of time to spend thinking about game photography. Our core skills are different, the game comes first.



UVic lecture slides

by DEADC0DE (noreply@blogger.com) at February 06, 2016 04:21 PM


An introduction to what is like to be a rendering engineer or researcher in a AAA gamedev production, and why you might like it. Written for a guest lecture to computer graphics undergrads at University of Victoria.

As most people don't love when I put things on scribd, for now this is hosted from my dropbox.

https://dl.dropboxusercontent.com/u/6809780/BLOG_HOSTING/AP%20Guest%20Lecture.pdf

Siggraph 2015 course

by DEADC0DE (noreply@blogger.com) at February 06, 2016 04:21 PM

As some will have noticed, Michal Iwanicki and I were speakers (well, I was much more of a listener, actually) in the physically based shading course this year (thanks again to both Stephens for organizing and inviting us), presenting a talk on approximate models for rendering, while our Activision colleague and rendering lead of Sledgehammer showed some of his studio's work on real-world measurements used in Advanced Warfare.

Before and after :)
If you weren't at Siggraph this year or you missed the course, fear not: we'll publish the course notes soon (I need to do some proofreading and add a bibliography), and the course notes are "the real deal", as in twenty minutes we couldn't do much more than a teaser trailer on stage.

I wanted to write though about the reasons that motivated me to present that material in the course, give some background. Creating approximations might be time consuming, but it's often not that tricky; per se, I don't think it's the most exciting topic to talk about. 
But it is important, and it is important because we are still too often too wrong. Too many times we use models that we don't completely understand, that are exact under assumptions we didn't investigate and for which we don't know what perceptual error they cause.

You can nowadays point your finger at any random real time rendering technique, really look at it closely, compare with ground truth, and you're more likely than not to find fundamental flaws and simple improvements through approximation.

This is a very painful process, but necessary. PBR is like VR, it's an all or nothing technique. You can't just use GGX and call it a day. Your art has to be (perceptually) precise, your shadows, your GI, your post effect, there is a point where everything "snaps" and things just look real, but it's very easy to be just a bit off and ruin the illusion. 
Worse, errors propagate non-locally as artists try to compensate for our mistakes in the rendering pipeline by skewing the assets to try to reach as best as they can a local minimum.

Moreover, we are also... not helped, I fear, by the fact that some of these errors are only ours: we commit them in application, but the theory in many cases is clear. We often have research from the seventies and the eighties that we should just read more carefully. 
For decades in computer graphics we rendered images in gamma space, but there isn't anything for a researcher to publish about linear spaces, and even today we largely ignore what colorspaces really are and what we should use, for example.

We don't challenge the assumptions we work with.

A second issue, I think, is that sometimes it's just neater to work with assumptions than it is to work on approximations. And it is preferable to derive our math exactly via algebraic simplifications; the problem is that when we simplify by imposing an assumption, its effects should be measured precisely.

If we consider constant illumination, and no bounces, we can define ambient occlusion, and it might be an interesting tool. But in reality it doesn't exist, so when is it a reasonable approximation? Then things don't exactly work great, so we tweak the concept and create ambient obscurance, which is better, but to a degree even more arbitrary. Of course this is just an example, but note: we always knew that AO is something odd and arbitrary, it's not a secret, but even in this simple case we don't really know how wrong it is, when it's more wrong, and what could be done to make it measurably better.

You might say that even just finding the errors we are making today and what is needed to bridge the gap, make that final step that separates nice images from actual photorealism, is a non-trivial open problem (*).
It's actually much easier to implement many exciting new rendering features in an engine than to make sure that even a very simple and basic renderer is (again, perceptually) right. And on the other hand, if your goal is photorealism, it's surely better to have a very constrained renderer in very constrained environments that is more accurate, than a much fancier one used with less care.

I was particularly happy at this Siggraph to see that more and more we are aware of the importance of acquired data and ground truth simulations, the importance of being "correct", and there are many researchers working to tackle these problems that might seem even to a degree less sexy than others, but are really important.

In particular, right after our presentations Brent Burley showed, yet again, a perfect mix of empirical observations, data modelling and analytic approximations in his new version of Disney's BRDF, and Luca Fascione did a better job than I could ever do explaining the importance of knowing your domain, knowing your errors, and the continuum of PBR evolution in the industry.

P.S. If you want to start your dive into Siggraph content right, start with Alex "Statix" Evans' amazing presentation in the Advances course: cutting edge technology presented through a journey of different prototypes and ideas. 
Incredibly inspiring, I think also because the technical details were sometimes blurred just enough to let your own imagination run wild (read: I'm not smart enough to understand all the stuff... -sadface-). 
Really happy also to see many teams share insights "early" this year, before their games ship, we really are a great community.

P.S. after the course notes you might get a better sense of why I wrote some of my past posts like:

(*) I loved the open problems course, I think we need it each year and we need to think more about what we really need to solve. This can be a great communication channel between the industry, the hardware manufacturers and academia. Choose wisely...

Design Optimization Landscape

by DEADC0DE (noreply@blogger.com) at February 06, 2016 04:21 PM


  • How consciously do we navigate this?
    • Knowledge vs Prototyping
    • Width vs Depth of exploration
    • Speculation is "fast" to move but with uncertainty
    • Application focuses and finds new constraints, but it's expensive
  • Multidimensional and Multiobjective
  • Fuzzy/Noisy, Changing over time
  • We are all optimizers
    • We keep a model of the design landscape, updated by information (experiments, knowledge). Biased by our psychology
    • We try to "sample" promising areas to find a good solution
    • Similar to Bayesian Optimization (information directed sampling)
Bayesian Optimization. Used in black-box problems that
have a high sampling (evaluation) cost.