Dev Console Part 5

Previously Part 4.

The Console is settling and getting to a point where I think I’m ready to split it out into its own repository and out of ‘under construction’.

It bothered me that my previous AddStaticTypeByString required you to know the assembly that the type came from, so instead of using the System.Type version we grab all loaded assemblies and ask them in sequence if they have a type that matches the name.
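
A minimal sketch of that lookup; the naming is illustrative rather than the actual ConsoleHelper API:

```csharp
using System;

public static class ConsoleHelper
{
    // Ask every loaded assembly, in sequence, for a type matching the name.
    public static Type FindTypeByName(string fullTypeName)
    {
        foreach (var assembly in AppDomain.CurrentDomain.GetAssemblies())
        {
            var type = assembly.GetType(fullTypeName);
            if (type != null)
                return type;
        }
        return null; // no loaded assembly knows this type
    }
}
```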

The big thing I wanted to be able to do is map game data to the console. For me, in Unity, this is typically a ScriptableObject or something a bit fancier. The concept is the same though: there is some object with data that other classes refer to, and that defines common values. For this I needed to refactor my previous ConsoleHelper Add* functions to support object instances. Not a big deal: all the Reflection functions in use want an object, we’d just been giving them null as everything was static previously; now it’s just lots of passing the object down. This also meant that if you pass an object you don’t need to pass a type, and if you pass a type we can assume it is static if we want to.

In a more complex terminal window, hitting tab repeatedly will cycle through all the possible things you could be trying to type. I have only attempted to get this working from within the app, not via the browser. To achieve this I now keep the result[] from a call to the complete function, and if autocomplete is requested again before the partial phrase changes, we move the index forward through the cached result array and show that text. In the Unity Text we replace the text, keeping the current cursor position and setting the cursor end position to the end of the string. There are still some unhandled edge cases given my simplistic approach, like moving the cursor manually not resetting the array of autocompletes.
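
A sketch of that caching; Complete() stands in for the existing complete function, and it assumes the caller keeps handing in the original partial phrase:

```csharp
private string[] cachedCompletions;
private int completionIndex;
private string partialAtCacheTime;

public string NextCompletion(string partial)
{
    // A changed partial phrase invalidates the cache.
    if (cachedCompletions == null || partial != partialAtCacheTime)
    {
        cachedCompletions = Complete(partial);
        completionIndex = 0;
        partialAtCacheTime = partial;
    }

    if (cachedCompletions.Length == 0)
        return partial; // nothing matched; leave the input alone

    // Show the next cached result, wrapping around.
    string next = cachedCompletions[completionIndex];
    completionIndex = (completionIndex + 1) % cachedCompletions.Length;
    return next;
}
```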

I wanted to be able to find a command without knowing where it might be. I thought this would be especially useful in one of the previously discussed situations where users customise their console but others might want to use it; e.g. you know that you can change gravity, but it isn’t where you thought it would be, at UnityEngine.Physics.gravity. The find command then ‘needs’ to be able to traverse every node in the CommandTree, but I didn’t want the find command to be overly aware of how CommandTree works, nor did I want the CommandTree to have a slightly different version of Find that ignores hierarchy and can return more than one item. What solved it was a visitor pattern, a C# closure (first result curiously not from MSDN) and recursion.

We have something like this in CommandTree, where each command passes itself to the visitor and, if the visitor requests it, recurs for all child commands.
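
Roughly this shape, assuming each entry holds its children in a subCommands dictionary (a sketch, not the exact code):

```csharp
// Inside the CommandTree entry: offer ourselves to the visitor, and only
// recurse into children if it asks us to by returning true.
public void Visit(Func<CommandTree, bool> visitor)
{
    if (!visitor(this))
        return;

    foreach (var child in subCommands.Values)
        child.Visit(visitor);
}
```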

and something like this to use it.
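
For example, a find command that collects matches with a closure; Root and FullName are hypothetical names for the tree root and an entry’s full dotted path:

```csharp
[Command("find", "Search every command for a partial name match")]
public static string Find(string searchTerm)
{
    var results = new List<string>();
    var term = searchTerm.ToLower();

    // The closure captures 'results'; returning true keeps the recursion going.
    Console.Root.Visit(command =>
    {
        if (command.FullName.ToLower().Contains(term))
            results.Add(command.FullName);
        return true;
    });

    return string.Join("\n", results.ToArray());
}
```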

This gets the job done in a way I can live with. It also meant that I could remove CUDLR’s previous help command, which built and stored a giant string as commands were added so it could display it later. I didn’t want that giant string to be around always, so now I have an all command that uses the same visitor style.

In the previous post I mentioned that a profile system might be a useful thing people could create; well, the idea stuck, so I started adding it. I now store a ScriptableObject with a bunch of csv TextAssets in it.

Example csv
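
As a guess at the format (the real columns may differ), rows of display name and full type name:

```
Physics,UnityEngine.Physics
Time,UnityEngine.Time
Screen,UnityEngine.Screen
```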

That gets converted to a List of simple entries.
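
Each entry would be a small serializable class along these lines; the field names are guesses:

```csharp
using System;

[Serializable]
public class ConsoleProfileEntry
{
    public string name;     // name to expose in the console
    public string typeName; // full type name to locate and add
}
```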

And here we are in the console, loading that example csv, called Default, to add additional elements.
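
Something like this (the registered command name is a guess):

```
profile.load Default
```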

One other bit I found I needed was an attribute to flag fields, props, methods or classes to be ignored by AddAllToConsole. I created an attribute called CommandIgnore that I can apply to anything, and my console helper variants will skip over it. The primary use I have for this is having static classes or global data that I want in the console, while still being able to refactor those classes in the usual way without worrying about exposing partial or useless functions to the console.
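
A minimal version of the attribute; the skip check then lives in the Add* scanning loops:

```csharp
using System;

// Applicable to anything; it carries no data, its presence is the signal.
[AttributeUsage(AttributeTargets.All)]
public class CommandIgnore : Attribute
{
}

// In the scanning loops, something like:
//   if (member.IsDefined(typeof(CommandIgnore), false)) continue;
```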

Dev Console Part 4

Previously Part 3.

In the previous post I showed CUDLR from within Unity. This is done by some simple uGUI and a script that passes a line of text to CUDLR.Console.Run when the appropriate key is hit. I then added a callback to CUDLR so I could get notified when it wanted to log things, and show them in my GUI elements. I didn’t want to just pipe these through Debug.Log, as they are used for the console communicating with its user and are not things that need to show up in the Debug.Log peeking I have set up, or in a file log if someone wanted to save all Debug.Logs out.

I also made a handful of small changes to CUDLR;

  • It stored items in its CommandTree via CommandAttributes directly; this irked me, as it manually made one that wasn’t actually attached to anything. Some data is duplicated now, but nothing claims to be an attribute when it actually isn’t.
  • It used spaces as both the separator between child elements and the separator between parameters. I switched to the more familiar ‘.’ for child elements.
    • This also meant refactoring the few places that previously relied on being able to regex or split on ‘ ‘, and making a dedicated function for extracting the command location from the parameters.
  • CommandTree had several recursive functions that did similar things: finding the desired command, then running it, auto completing its name, or adding a new command. I refactored these into a more general FindCommand function; this was easier, and required anyway because I also wanted to switch from spaces to ‘.’ between child objects.
  • Its command function signature took string[]. I simplified that to string, as I wanted my auto gen wrappers to do the conversion from a parameter string to object[] for me.
  • Command entries in the tree didn’t know their own name, only the parent object did. I added a field to each entry so it knows what it is.
  • This allowed me to change how commands are stored to always be lower case, with each entry’s own name storage keeping the original casing. That meant I could easily make the auto completion case insensitive.
  • Made CUDLR.Console’s automatic registration of commands use the RegisterCommand path instead of a slightly different method; fewer things to test, as it is now just a big loop that uses the same path everything else does.
  • Made RegisterCommand use the auto generated wrapper we built up in previous posts. This removed the need for CUDLR’s two delegates, one with no params and one with string[].

That’s great, but let’s talk autocomplete. The previous default behaviour in the browser: you hit tab, and it uses the Complete method, which returns an array of all partial matches. If there is only one partial match, it replaces the input box with that text. I wanted a bit more: if I type ‘ti’ and the only partial match is ‘Time’, then run another auto complete on ‘Time’ so it gives me a list of all the sub commands within the holder.

Similarly, if I type ‘phy’ but I have ‘Physics’ and ‘Physics2D’, I don’t want to have to type the ‘sics’. It should be smart enough to complete that part for me. I use something similar to this to determine it.
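
Roughly: extend the input to the longest prefix shared by all partial matches (a sketch):

```csharp
public static string LongestCommonPrefix(string[] matches)
{
    if (matches == null || matches.Length == 0)
        return string.Empty;

    string prefix = matches[0];
    for (int i = 1; i < matches.Length; i++)
    {
        int len = 0;
        while (len < prefix.Length && len < matches[i].Length &&
               char.ToLower(prefix[len]) == char.ToLower(matches[i][len]))
            len++;
        prefix = prefix.Substring(0, len);
    }
    return prefix; // {"Physics", "Physics2D"} -> "Physics"
}
```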

I found some fun consequences of being able to automatically map objects and statics to the console. Much of CUDLR.Console is static, so we can add the console to the console, which means we can run console commands via the console. Not overly useful right now, but I really enjoy that it just works, and it may become useful if we push the console further towards a way of configuring and creating behaviours.

I was also able to get the console to add new types to itself at runtime. I enter this into the console.
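
Something along these lines (the exact registered name is a guess):

```
ConsoleHelper.AddStaticTypeByString UnityEngine.Physics, UnityEngine
```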

Which ends up calling code that is something like this.
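
A sketch of the idea: build the assembly-qualified name, look the type up, then run the same add-statics path used at build time. The helper and log names are illustrative:

```csharp
using System;

public static void AddStaticTypeByString(string typeName, string assemblyName)
{
    // e.g. "UnityEngine.Physics, UnityEngine"
    var type = Type.GetType(typeName + ", " + assemblyName);
    if (type == null)
    {
        Console.Log("Could not find type: " + typeName); // report to the user
        return;
    }

    AddAllStaticsToConsole(type);
}
```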

This is crazy: we can now remotely connect to a running game via the browser and make it expose static data to us (as long as we know the full class name and the assembly it comes from), without us ever having considered at build time that we might want to. It also means that if ports are open you could play some amazing pranks on your colleagues. More practically, it means you could have different sets of console commands exposed by default to different team members; e.g. an artist wants lots of graphics and light settings exposed but doesn’t want to wade through physics or game balance data, and the moment they want access to other things they can add them to their console. It might also help in “Um, can you come look at this weird thing” situations: another team member can come over, add the elements to the console that they are used to, and dig in to find out what is happening. They could even do this remotely; just post your IP in a Slack channel.

In the future I’ll be looking into non-static things, most likely with a focus on data in ScriptableObjects.

Part 5 here.

Dev Console Part 3

Previously Part 2.

Now to get to making binding easy to do. GConsole, CUDLR, etc. all make use of C# attributes to attach additional information to methods. If you’ve used Unity much you have probably seen Range or Tooltip; these are used by Unity editor drawer code to change how things are shown in the inspector. The existing console systems use attributes to give names, descriptions, help text and other data to methods, which the console system can access and report to the user (CUDLR attribute here). The console system can then find all types that have these attributes and add them to its internal list of commands. So that’s understood and kinda solved.

I wanted to be able to add methods, fields and properties to the console without having to attach attributes to them. Partly because, if it isn’t stupid easy to do, I know at some point I’ll stop adding things to the console as a matter of course. The other part is that I want to be able to add elements to the console that I cannot add attributes to, for example the Unity Physics class. A solution involving writing a facade for Unity built-ins just so I can add attributes to them isn’t a good use of anyone’s time. Here’s what I used while determining the method.

Reflection to the rescue again. AddAllStaticsToConsole uses GetMethods with BindingFlags to filter out non public and non static items; we then also filter out all ‘special’ methods, the ones that get auto generated for properties, the get_ and set_ pairs. It uses GetFields and GetProperties in a similar fashion. These all have their own types, but basically we loop through the array of results and try to add each item to the console, provided its types are convertible (we discussed this in Part 2) and we know its name from the reflection information. We refactor this into related sub functions so we can reuse them as this gets more powerful.
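
Condensed, the scan looks something like this; the TryAdd… helpers stand in for the real sub functions:

```csharp
using System;
using System.Reflection;

public static void AddAllStaticsToConsole(Type type)
{
    const BindingFlags flags = BindingFlags.Public | BindingFlags.Static;

    foreach (MethodInfo method in type.GetMethods(flags))
    {
        // Skip the auto generated property accessors (get_/set_).
        if (method.IsSpecialName)
            continue;
        TryAddMethod(type, method);
    }

    foreach (FieldInfo field in type.GetFields(flags))
        TryAddField(type, field);

    foreach (PropertyInfo prop in type.GetProperties(flags))
        TryAddProperty(type, prop);
}
```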

Moment of truth.

At this point, I’ve also switched from GConsole to CUDLR. CUDLR is built to work remotely via the browser, and it made more sense to add in-game access to CUDLR than to make GConsole usable from the browser. CUDLR also already had the concept of a hierarchy, which I wanted. I ended up making a number of changes and adding some features to CUDLR, which I’ll go into in the next part.

Part 4 here.

Dev Console Part 2

Part 1 here.

Working with GConsole, I want to be able to wrap existing functions more automatically.

 

That’s how we get things in the console. In GConsole, when you eval the input string, it matches the first part against all ‘commands’ that have been registered. If it finds a match it calls the ‘method’ with the substring of the input string from after the command name to the end of the string. So for ease of use I wanted to be able to wrap an arbitrary function in a Func<string,string>.

That’s a few separate tasks

  • Converting string to type
  • Determining number of parameters
  • Knowing param types in the first place

Determining the number of params. To start with, during testing, I simply used .Split(“,”) on the input string; this meant that as long as params were comma separated they were treated as different. I knew this was just to get other things tested out, as things like sentences within quotes and Vector3s etc. want commas in them without becoming a separate param. Regex was the solution as things got more complicated.
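
Something in this spirit (not the exact pattern): quoted strings and parenthesised vectors survive as single params, everything else splits on commas:

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

static string[] SplitParams(string input)
{
    // Match a quoted string, a (...) group, or a run of non-commas.
    var matches = Regex.Matches(input, @"\s*(""[^""]*""|\([^)]*\)|[^,]+)");
    var results = new List<string>();
    foreach (Match m in matches)
        results.Add(m.Groups[1].Value.Trim());
    return results.ToArray();
}

// SplitParams("\"hello, world\", (1, 2, 3), 5")
//   -> ["\"hello, world\"", "(1, 2, 3)", "5"]
```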

 

 

This will hold out for a while, but it’s not going to handle object hierarchies, like JSON, should the console get to that level of complexity in the future.

 

Converting strings to types, assuming we know what the type should be. ints and floats etc. have built-in Parse functions; if it’s a string, it already is one. For the immediate future, vectors were the other thing I really wanted, and they just require breaking up the string and using multiple float Parses. Now we need a way to access these in a way that makes sense for our use case. For me it was a lookup, keyed by Type and storing a Func<string, object>. When these are eventually used in an automated fashion, we put the conversion inside try/catch blocks, so if the conversion fails we report that to the user via the console output and don’t call the function.
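
A sketch of that lookup, covering the types mentioned (the Vector3 parsing is simplified):

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

static readonly Dictionary<Type, Func<string, object>> Converters =
    new Dictionary<Type, Func<string, object>>
    {
        { typeof(int),    s => int.Parse(s) },
        { typeof(float),  s => float.Parse(s) },
        { typeof(string), s => s },
        { typeof(Vector3), s =>
            {
                // "(1, 2, 3)" -> break the string up, float.Parse each part
                var parts = s.Trim(' ', '(', ')').Split(',');
                return new Vector3(
                    float.Parse(parts[0]),
                    float.Parse(parts[1]),
                    float.Parse(parts[2]));
            }
        },
    };
```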

 

Knowing parameter types in the first place. Reflection is very powerful; in fact we’ll end up using it for far more than just this. We get the MethodInfo out of the type. That can give us the attributes applied to the method, which GConsole and CUDLR etc. already use to mark up classes or methods with the names they should be stored under in the console, as well as description text to display to the user. What I need right now, though, are the parameters of the method. Then we get something like this.
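
Putting the pieces together, something like this; SomeClass and SomeMethod are placeholders, and SplitParams/Converters come from the sketches above:

```csharp
using System;
using System.Reflection;

MethodInfo method = typeof(SomeClass).GetMethod("SomeMethod");
ParameterInfo[] parameters = method.GetParameters();

// Wrap the method in the Func<string, string> shape GConsole wants.
Func<string, string> wrapper = input =>
{
    string[] raw = SplitParams(input);
    var args = new object[parameters.Length];
    for (int i = 0; i < parameters.Length; i++)
    {
        // Look up the converter for each declared parameter type.
        args[i] = Converters[parameters[i].ParameterType](raw[i]);
    }
    object result = method.Invoke(null, args); // null target: static method
    return result != null ? result.ToString() : "done";
};
```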

 

 

The next step will be further automating binding and, now that we’ve proved out some concepts, moving to CUDLR.

Part 3 here.

Dev Console Part 1

So I was working on a project that targets mobile, and I’d run into a problem that didn’t replicate in the editor or on Android, only on iOS. I really just wanted to put a bunch of logs in and see what was happening; I wanted an in game dev console. If you aren’t sure what I mean by that: if you’ve played an id or Valve game, you’ve probably enabled the developer console at some point, seen logs pop up in the top left, or hit ` to change maps or enable net_graph.

Similarly, Bethesda games tend to have powerful in game consoles that can execute the same sort of commands as the scripting language. Which leads to amazing things like this Monster Factory Fallout4.

So I went hunting, surely this is something people have already attempted, if not solved.

https://github.com/mminer/consolation
https://github.com/Wenzil/UnityConsole
https://github.com/gzuidhof/GConsole
https://github.com/hecomi/uREPL
https://github.com/proletariatgames/CUDLR
https://github.com/SpaceMadness/lunar-unity-console
Yep, people have tried this before.

To fix my problem I just needed to see logs, so I went with something very similar to Consolation. It uses the old immediate mode Unity GUI calls, which I didn’t care for; pixel based sizes and GUI Skins are not fun to work with. The idea is the same though: hook up the Unity log callback and append the strings to an auto scrolling UI.Text. I also made it fade out nicely with LeanTween.
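
The shape of that hook, in current Unity API terms (the post predates some renames; Application.logMessageReceived is today’s version of the callback):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class LogPeek : MonoBehaviour
{
    public Text logText; // the auto scrolling UI.Text

    void OnEnable()  { Application.logMessageReceived += HandleLog; }
    void OnDisable() { Application.logMessageReceived -= HandleLog; }

    void HandleLog(string message, string stackTrace, LogType type)
    {
        logText.text += "\n" + message;
        // fade-out handling (e.g. via LeanTween) would kick off here
    }
}
```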

It did let me find my problem: a file and directory path formatting problem that meant a file was failing to be found on iOS. Fixed.

I left the dev console enabled and found that I like having the logs visible during development. It makes me feel more aware of how the game is running and more connected to the code that I wrote, rather than it feeling like a disconnected artifact of a build process. For this particular project I had already started building a developer-only panel that gives run-time access to a number of slider values, player resets and unlock shortcuts. So the logical step was to make this an actual console, not just a way of seeing logs, and move across the functionality I had been manually building and adding to that panel. For this I looked to GConsole, as it was simple to integrate and small enough that I could read all the code myself if it didn’t meet my needs. I quickly found that I wanted more from it.

I want to keep working on this and add more features to it; none of the consoles I found quite does all the things that I want.

So here are some non-ordered requirements;

  • See logs
    • preview as they come in
    • expand console to scroll back through them
  • Run commands, eg quit app, load a scene, give player xp
  • Inspect and change vars, eg player’s hp, gravity, spawn timer
  • Inspect and change cl vars, eg resolution, vsync
  • Get snapshots of data, eg. GameObject hierarchy, Active enemies
  • Not have to reinvent all functionality that exists elsewhere just for the console
    • This is crucial. If functionality that already exists has to be manually wrapped or duplicated for the console to access it, then it just doesn’t get done; the cost of getting that functionality wrapped and configurable by the console, vs adding new features or bug fixing or polishing, just doesn’t hold up.

Some examples of what we would like the console input and outputs to look like
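
Along these lines (illustrative only, not actual commands):

```
> Physics.gravity
(0.0, -9.8, 0.0)
> Physics.gravity (0, -30, 0)
> Player.hp 100
> Scene.Load MainMenu
```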

GConsole has a simple to understand code base, part of how it achieves this is by only accepting Func<string, string> as methods. This pushes the developer to wrap existing functionality in a function that takes a single string that could be anything from the user. I know that I’d get sick of writing string extraction and conversion code and wrapping existing functionality.

CUDLR is awesome. It operates like a slim http server in your game, for which you write custom routes (routes are a common web concept). It’s designed so you can remotely debug and see the console via the browser; locally, or if you know the IP, you can get the console logs, send commands and fetch routes. But it ships with no built-in way of doing that from within the game itself, which I’d want when I’m on a phone or tablet. CUDLR also still gives a string[] as the parameter to its commands and would require custom wrapping of existing functions.

I’m going to keep experimenting and pushing this forward. Should you wish to see some in-progress code, you can check out https://bitbucket.org/steve_halliwell/aid-common; console stuff is currently in UnderConstruction/ConsoleHelper.

Part 2 here.

Studio W01 – Feedback and other core mechanics

A busy week and no signs of slowing down. As promised this week we’ll look at rituals, brainstorming, pitching, feedback mechanisms and a quick look at the response to studio orientation.

Orientation

Many of my students are already familiar with a lot of the basics and processes we will use in studio as they have been integrated into Production 1 and other previous trimester units. Specifically they already knew what KPIs were, and that we would use Google Drive for all work (WIP and final).

They had heard about LOs/Baseline but not really understood them; they saw them as 23 (your number may vary) assessments that must be completed, which is entirely wrong. They are a minimum threshold of quantity and quality that must be met in a wide range of areas. They must be met before the end of trimester to be eligible for a grade higher than a pass conceded, but they are not themselves tasks or assignments. They can be met at any time and in any order; they just need to be observed by the facilitator and evidenced by the student.

The idea that almost everything is group work but all marked individually had to be explained many times in different ways. The project may fail or do poorly; in many respects this does not matter, what does matter is the work that went into it. “How will you know what belongs to who” came up; we explained that Google Drive has track changes. We also keep track of what each person is working on in class, via their project management, task reporting and the evidence they post on their personal blogs.

Holistic assessment for the final grade seemed to daze them at first. Explaining what an HD requires cleared this up. We’ll discuss this topic in more detail at a later date on this blog.

We got all their blog addresses shared among them and gave them all their own Google Drive folder shared from us. We are the owner of all Google Drive folders and can see everything they are adding and working on.


Rituals

These will be an ongoing topic throughout the blog series, this week we’ve used;

  • Openers/Stand ups
    • Normally done only on the first session of the week, it’s a chance to identify any problems that have occurred between classes, and anything ahead or behind of schedule.
    • We also open this up to talk about what has been happening in industry recently, this could be a release or an interview or a company opening/closing, tech announced etc.
  • Close outs
    • The opposite of Openers, normally done at the end of the last session of the week
    • What is everyone working on between now and next session
    • Is there already something that they have realised they need help with or would like more info about next week
    • What are they blogging about this week, sometimes we assign them (more on this in week 2), sometimes they have to nominate something
  • Class room behaviour
    • Facilitators wear many hats, we are mentor, teacher, team lead, co-worker, producer, executive producer.
    • If anyone says a word or uses a phrase or we’re in a context you don’t understand you MUST let us know. It’s OK to not understand something, you’re here to learn, that’s kinda the point. What isn’t OK is to say nothing and reveal that you’ve been lost and doing nothing for weeks.

Feedback Mechanisms

We want to promote feedback from and to all involved. For pitches and presentations of any kind, we go through the following stages:

Clarifying 

A short question that has a short answer; yes/no questions for sure. It is also intended to get small bits of information that were missed or needed a tiny bit more than was given during the presentation. The response should never be more than one short sentence. These are to fill in gaps of understanding for the person who listened to the presentation. If the answer is unknown that is fine; a simple ‘I don’t know yet’ will do, do NOT design on the spot. Similarly, if you have an answer but it requires lots of explaining, note it down; it means the presentation needs work, the way that you explain the thing isn’t quite there yet, but you’ll get there next time. This section is purely about providing information to the questioner if it exists; if it does not exist, simply state that it is not known yet. It is not a section for defending your current decision or explaining your process.

Probing

During presentations we write these down; these are deep questions that seek to prompt new thoughts or deeper understandings in a certain area.

Often during a presentation an idea pops into your head that you want to tell the presenter, so they can incorporate it. That’s good; it’s better than you tuning out. However, if you say ‘Hey, your game needs a double jump’, it is easy for them to either dismiss it or take it on board without really considering it. That’s not how we make the concept stronger, especially when the 20 people in the room all want to add something. So instead you need to figure out why your brain said ‘it needs a double jump’; maybe you were thinking about the level layout they proposed and thought it could make more use of the height, given the currently proposed camera position. So you would write down a probing question about camera location as it relates to vertical movement, or ‘how important is verticality to your game?’

I tend to think about probing questions as adding new nodes to the presenter’s mind map. It gives them a new thing to build off of and cross link with their existing ideas, both the ones they’ve put in the concept presented and the ones they’ve ruled out or not used yet. You need to be aware that you aren’t simply giving the presenter advice in disguise; don’t try to trick them into embedding your idea, that’s not what we want here.

Warm

Specific things about the concept, or whatever it was that was presented, that you liked. These are things that you are essentially voting to keep in the concept. They could be things that you feel work well, or suit the concept, or speak to you personally. The important part here is we want the why behind the positives;

  • so they can be considered in future projects
  • it can be reinforced for the presenter
  • so all in attendance hear why that part works for you
  • it may not be for the reasons they intended, they may not have intended it at all.

Further, if it does turn out that the thing you like, or something connected to it, does need to change, the reasoning behind it can be considered and hopefully integrated in some other way. These are normally a short one or two sentences. Make sure you don’t start telling the presenter what to do with it, at least not yet; just say specifically what it is and why it works.

We don’t tend to allow the presenter to respond during warm feedback, merely absorb and make notes.

Cool

This is the opposite of warm in that it should be specific things that:

  • don’t sit right with you
  • are poor
  • need more attention, or perhaps need to be dropped entirely
  • don’t mesh or seem irrelevant.

This will also include things that are entirely necessary but aren’t there yet; they need more attention in the next draft. Be specific about what isn’t where it should be and why. It may not be that it needs to be changed or removed at all; the feedback can let the presenter see that they need to change the way they explain the thing, rather than change the thing itself.

We don’t tend to allow the presenter to respond during cool feedback, merely absorb and make notes. Even if some cool feedback is due to a misunderstanding, they need to hear it and acknowledge it, not fight the person giving the feedback. We’ve found that allowing the presenter to defend against cool feedback gives the wrong impression. It leads to fewer people wanting to give feedback, because they think they might be in for a fight, and it makes the presenter brush off the feedback if, in their mind, they can argue for why it doesn’t matter.


Brainstorming

The conception phase doesn’t have any explicit LOs assigned to it, but it is implicit in the process. You need ideas, lots of them, and then you work them: pull them apart, put them back together in new ways, pitch concepts to others and use the feedback mechanisms. The intent is to get to a best/better version of the idea, and to reach the obvious realisations as soon as possible, rather than making something just because you need to make something.

We recommend mind maps, word associations, and solo and group brainstorming. We also recommend separating the pure generation of ideas/concepts/sub parts from determining whether they are feasible, cohesive or any good at all in the first place. We don’t want to kill off a line of thought just because at that moment it seems like it doesn’t match the other ideas on the page; the things that spring from that idea, or its siblings, might match, and further, the core idea may shift over time such that it ends up matching. We call these two phases expansion and contraction.

Contraction is where you start trying to join different parts of the mind map together to form a cohesive idea. It’s where you can start ruling things out for not fitting, or for being too large in scope for the time permitted. It is also where you start to see a version 2 list forming: a list of all the cool ideas that you wish you could implement but cannot, due to time or cost or not matching, etc.


Pitch limits

Pitching is a tough thing to do and very easy to do poorly. Aside from the limits there are a few pointers;

  • don’t be a snake oil salesman
  • don’t just read a prepared speech
  • be excited; if you aren’t excited about the concept then why should anyone else be?

We put limits on how pitches can be done to avoid some of the more obvious short falls.

  • Time limits
    • You can’t waffle on about every little detail you’ve constructed for your world lore in your Tetris clone in 3 minutes so you need to figure out what is core to your concept and how best to deliver it.
    • This also helps with concepting, if you can’t narrow the core down to a clear few minutes then it needs more time to bake; maybe you don’t know what the core actually is yet.
  • No text on slides, only a title
    • It’s too easy to put your dot point talking notes or entire paragraphs on a slide. This is terrible; it’s not a presentation anymore, it’s a pamphlet in the wrong medium. It creates a disconnect between speaker and audience, and is more often than not boring, as the audience reads faster than the presenter speaks.
    • Images, graphs, diagrams etc. to support or illustrate talking points are far more engaging (multimodal).
  • Limited number of slides
    • Further assists in cutting down on chaff.

 

Next week monitoring student progress, play testing, blogs and brief 2.

Studio W00 – Briefs and Processes Matter

In this blog series we’ll be discussing the studio units I run at SAE. Discussion will include:

  • interpretations
  • planning and executing
  • tips and tricks
  • anything else that comes up.

The plan is for a blog to go up each week. At times it may seem a bit cagey; as this is public and available to students I don’t want to spoil any of their surprises.

Let’s dive in.

At this point in time, the big milestones are locked in; we know:

  • when projects start and end
  • when they require testing, and
  • when they will have design interventions, etc.

Project structure

The basic structure of any non-trivial project we do is:

  • Brief
    • Creative limitations and timelines
  • Primer
    • Class discussion planned around the key topics we expect them to tackle with the brief
  • Groups
    • If it’s group work we assign them into groups in accordance with our policy
  • Brainstorm
    • Fairly unstructured, beyond this: there should be two phases
      • Expansion; get all ideas out of your head and onto the page. Doesn’t matter if they seem good or not, or too big or whatever, we want it all. Mind maps recommended
      • Contraction; which things mesh together, which don’t, which make sense, which are out of scope but what’s cool about them?
  • Concepting
    • These should be rough. The type will be dependent on the discipline, but might be:
      • Sketches
      • Thumbnails
      • Storyboards
      • Mock-ups
      • Rough high concept doc
  • Pitch
    • Feedback
  • Refine, take the feedback and use it to polish, add to and remove from the concept
  • Plan
    • HCD, GDD, TDD, Task Breakdown and schedule, Questionnaire, etc.
  • Design Intervention
    • Why have you done things the way you have, are they the best ways, the only ways?
  • Create
  • Playtest: getting the first complete draft of the thing in front of people who aren’t creating it and who aren’t already intimately aware of its development.
    • Questionnaires answered
  • Pivot
    • What is working, what isn’t, what is the project actually about now that it’s met the public, what can we cut?
  • Update Plan
  • Create
  • Present
    • Noting changes, probably with more playtesting to confirm their effects.
  • Wrap
  • Post-mortem

Project Briefs

Briefs are a tricky beast: they define the limits of what a project can become, but not what it actually is. The team will determine what exactly the thing is; the creative limitations serve as the beginning of the mind map. The limits need enough ‘air’ in them that teams don’t all end up in exactly the same place, but not so much ‘air’ that the team spends time searching for the ‘right’ concept. We want teams to have individual ownership of the thing, to feel that theirs is a unique endeavour; there should not be an obvious ‘answer’. The brief should provide a challenge to be overcome by the team or the individual (humans naturally enjoy this); it’s a key part of why the projects succeed.

Elements of a brief:

  • Goal
  • Creative Limits
    • Could be limits in terms of time, style/genre, technology used, or group size. It could even specify what the deliverable is: a mechanic, an asset or a scene. E.g. a 3d game, where no player or NPC can be injured or hurt, set in an interior environment, using a third person camera set up. You can find more, simplified versions of previous briefs by checking the briefs section here.
  • Deliverables
  • Milestones

Shared learning experiences

The other big win for all the projects being different but also the same is that the teams can share learning experiences that are all slightly different, each with different areas of specific interest. It also allows for cross-pollination, but doesn’t end up with all the projects homogenising. Here’s an example of our first brief.

We’ve also got a pretty good idea of what the two other briefs for the unit will be, to the same amount of detail as brief 1, but those will have to stay secret for now.

Orienting students to the Studio Model

Orienting students to the potentially vastly different environment of studio is part of what happens in week 1. We have to talk through a bunch of big things:

Students are far more responsible for their learning than ever before. Explain the following things:

  • We’re here to facilitate: we create the environment, the support and the assistance so they can learn, but ultimately it’s far more up to them. We are not going to tell them facts and expect them to be repeated back to us in a few weeks’ time.
  • We let them know that it’s okay to not know the answer to things; that’s kinda the whole point of education. If they don’t know a word or phrase or the context, they have to say something. If they stay silent then we assume they’ve got it and will move on to the next thing.
  • We do a LOT of things flipped, we want to spend time figuring out how to use things effectively and creatively not get stuck on basic concepts. We let them know that we expect them to do lots of basic knowledge research outside of class.

Explain how projects work:

  • They start from day 1. The structure isn’t ‘learn a bunch of stuff, then try to use it later in a project’. We will be asking them to do and make things that are beyond their current skill.
  • Everything is marked individually though; it doesn’t matter how good the thing is at the end, it matters far more what they contributed to it and what they learned in the process.
  • They will exhibit to the public, in one if not many ways during and at the end of trimester.
    • For this unit, this also means play-testing each project, usually multiple times.

How grading works

  • Learning Outcomes are what they need to meet; they describe the quantity and quality of a skill or aspect required by the unit. They can be met at any point; we need to see them evidenced on the public facing blog.
  • We talk to them briefly about KPIs; hopefully these are being renamed soon. Basically it’s a collection of soft, hireability skills that we want students to be aware of and actively improve upon over the course of their degree.
    • We flag that they need to read these carefully and that there will be more info about the process in week 5.
  • The holistic, final grade is based on how far above the minimum their work is, in both quantity and quality, over the trimester.
    • If all they’ve done is just meet the baseline descriptors (not succeeded or tried to exceed them), done no extra work/projects, made no effort to do so, and not started executing a plan to improve soft skills, then they will not get better than a Pass for the unit.

So the overall structure of Studio 1 is well understood by us at this point, and has enough flexibility for us to move things around to suit the cohort and fit in structured content if needed. Students have a very clear picture of what the immediate schedule is for the current project. Beyond that, they know about the big things, like when projects are intended to start and end, but these are intentionally vague so there is slack built into the schedule.

Our projects

Project 1 is solo and only 1 week long. It’s a good framing device for acclimatising students to what the studio workload is like and to the content of the unit itself, and it gives them a chance to make a decent chunk of progress on a wide variety of LOs (Learning Outcomes). It also assists us with determining groups for the upcoming group projects, so we can take into account students’ strengths, preferences, existing relationships and areas of interest.

Project 2 will focus on dynamics, will be in groups, and will last 4-5 weeks with play testing in the middle. Project 3 will be the same, but focused on aesthetics.

What’s next?

Next week we’ll talk about rituals, pitches, feedback mechanisms, brainstorming, and probably more.

On Post-Mortems

The Goal

To improve future projects, for yourself and others. This goes beyond just not making the same mistakes again; it’s also about finding optimisations and efficiencies and, on the flip side, about what was really good about the project and how we can ensure those good things occur again, more frequently and more reliably. You’ll propose changes to process, policy, behaviour, culture, technology, workflows and pipelines, all with the goal of minimising the undesirable traits and maximising the positive traits of the project (sprint, milestone, whatever) that just concluded.

Mind set

If you’re having trouble nailing down how this should play out, try these mindsets.

It’s part coroner in CSI: stating observations, explaining circumstances and investigating evidence. This is what happened and why it happened the way it did; what allowed for and caused this trait to manifest.

“Yes the impact killed them but without the preceding 20 story fall they’d be fine. Also this was no suicide, the heavy, pre-mortem bruising on the torso and lower back, indicate they were pushed backwards into and over the railing of the building.”

 

It’s part mad scientist, making notes for the next round of tests. “The subject did exhibit the desired increased strength and reflexes; the increases in agent Y and slight alterations in the chemical formula from the previous round of testing have had their desired effect. Unfortunately the… overheating problem is still resulting in brain damage and spontaneous cardiac arrest during sustained exertion. We cannot lower the testing period any more without falling outside of the given parameters; we either need to develop some sort of cooling harness for the subjects, alter the base formula, or introduce some sort of temperature stabilizer to the mixture itself.” This comes in when the causation of the trait you are examining is unclear, so you are creating the boundaries for your proposal.

Not a dev diary

They are useful, especially as a first step to reflection, but do not fall into the trap of writing a dev diary instead of a post mortem. There are a couple of big differences.

A dev diary is a record of things that happened: could be added features, could be bugs found/fixed, could be anything really. It merely records them. Some of this will definitely overlap with a post mortem; you cannot make recommendations for the future without explaining the circumstance/event and the reasoning behind said recommendation.

 

The other part of the difference is the distinction between data and knowledge. If you are not familiar with this distinction, check out the links below. The super condensed version: data is just an item or event, something which can be observed/recorded. Information is data collected and categorised/organised in some way that makes it useful. Knowledge is more meta; it is the relationships, insights or understanding taken from the information.

http://www.cognitivedesignsolutions.com/KM/Understanding.htm

http://www.informationisbeautiful.net/2010/data-information-knowledge-wisdom/

A dev diary will tend towards data and information; a post mortem seeks to grow, foster and pass on knowledge or wisdom.

 

I’m going to get a tad academic here for a moment, but don’t worry these concepts are also very applicable to games, the obvious ones being tutorials and skill atoms.

Consider the cognitive dimensions being used/displayed. A post mortem requires not just recollection and regurgitation of information. It requires reflection on yourself, the project as a product, and the processes/procedures used during the project. It requires you to analyse that reflection for cause and effect: why, and what allowed/forced it to be that way. Then you propose (create) solutions/recommendations based on those reflections and their causes/ramifications.

http://www.celt.iastate.edu/teaching-resources/effective-practice/revised-blooms-taxonomy/

 

A dev diary is remembering and understanding. A post mortem sits across a broad range of these areas. It will most likely require application of conceptual knowledge of related areas to propose a solution, possibly even creation; evaluation of known things and hypothetical things; reflection and self-knowledge. In short, pretty much every cell in the grid could be used in a non-trivial post-mortem. If you are doing a post mortem as a student, the person grading it will be looking for many if not all of those things on show in your writing (or conversations or vlogs or whatever your institute accepts as a submission).

A contrived example

Let’s run through a simple scenario, falling over in the shower, in a couple of different ways, to try and highlight this difference.

 

Example X

Issue: Fell during morning shower and injured left knee and left wrist.

Circumstance: Morning shower is part of routine.

Cause: Shower was slippery.

Recommendation: Be more cautious in the shower.

 

Example Y

Issue: Fell during morning shower and injured left knee and left wrist.

Circumstance: Shower was more slippery than usual.

Cause: After further investigation it was found that the shower had been cleaned very recently, by a third party, making the surface unexpectedly slippery.

Recommendation: Be more cautious in the shower and check if the surface has been recently cleaned.

 

Example Z

Issue: Fell during morning shower and injured left knee and left wrist.

Circumstance: Shower was more slippery than usual, it had been cleaned very recently making the surface unexpectedly slippery.

Cause: Poor communication, scheduling and signposting. Communication is a two way street; more pro-active communication is also required.

Recommendation: If cleaning is not already on a public schedule it should be, so all parties can be aware if they choose to be. When cleaning is happening or has recently occurred, there should be a visual signpost, a post-it note or similar, with a time stamp. This will be a better visual cue than the cleaning itself, which may go unnoticed. It will also cover the eventuality that a cleaning occurs off schedule, or that parties are not present during the announcement of the commencement of cleaning.

 

Example X falls into the ‘don’t work with anyone named Bob’ category: it states that something happened and that it was undesirable. It doesn’t explain how that came to pass or how to manage/avoid it in the future.

Example Y spends time explaining the surrounding events, and its analysis and proposal just dig further in. It ultimately still doesn’t get to the root of the problem; it’s analysing symptoms of something we might actually be able to improve or fix.

Example Z states the shower as the event that displayed the problem, then analyses it to discover a cause underneath. You can think of this as being similar to ‘Why is the kettle boiling?’ Is it because electrons are being forced through a metal coil with high resistance, resulting in heat which is then transferred to the water, or is it because you wanted a cup of tea? In a post mortem you probably want to state both: how it happened and why it happened, and then propose management of it for future projects, discussing whether your proposals target the how or the why of the event.

Check out others’ post mortems:

http://www.pixelprospector.com/the-big-list-of-postmortems/

http://www.gamasutra.com/features/postmortem/

Moving projects to Unity 5

After Unity 5 released I decided to upgrade a few of my older projects and see what happened. So I opened up SourceTree and made sure there was a commit to safely rollback to if everything exploded.

JSD

Just Shmup Dammit. Unity removed the shortcut properties .rigidbody, .audio, etc., as they were performance traps: they look like direct variable access, but they are not. I’ve already talked about this in a previous performance post. This, and one instance of a tag compare, were the only things that changed in the automatic upgrade process. The tag compare was a compile error; all that needed to be done was remove the quotes around a variable name.
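
Reconstructing the flavour of those changes (not the actual diff):

```csharp
// The .rigidbody shortcut looked like a field access but was not; the
// auto upgrade makes the real cost visible:
//   before: other.rigidbody.AddForce(dir);
other.GetComponent<Rigidbody>().AddForce(dir);

// The tag compare fix was removing quotes that had ended up around a
// variable name:
//   before: if (other.CompareTag("someTagVariable"))
if (other.CompareTag(someTagVariable))
{
    // ...
}
```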

Everything then ran as before.

JGDLib

Unity updated the scripts, highlighting a few places where I had been lazy and had multiple .rigidbody accesses in the same block. Other than that, all test scenes ran as before.

Project Red-Room

Well and truly still in its early stages, this is a first person exploration game. It obviously required a first person controller of some sort, and it makes heavy use of ProCore’s ProBuilder. This asset required a new version to function correctly in Unity 5, which meant a fair bit of back n forth and lots of discarding local changes. That was only due to a bug in the version I was using at the time, which I assume has been fixed since: it was only converting ProBuilder prefabs if they were in the currently active scene when I imported the newer version of the asset. This might be down to how Unity 5 changed prefab storage in scenes.

I had been piggybacking off of Unity’s first person controller and using many of Unity’s image effects. All of these have changed; at the very least, in 5 they are all C# now. That’s great, but it meant redoing some settings to update to them. The newer first person controller makes more sense, to me at least; it’s 1 script, not 3 interacting js behaviours. However, it has lost some functionality, such as handling moving platforms and easy ways to toggle off features, so these had to be added back into the newer version.

PhysX 2 to 3 changed a bit. My door prefabs were extremely broken; they were having the physics freakout we all love with ragdolls. Hinges needed new settings, so the door wasn’t trying to be pulled inside the doorframe/wall next to it. The other change that got me was the door frame itself: previously a single solid object, it was no longer allowed the way I had set it up, as it was not a convex shape. Altering the door prefab to use 2 boxes and 2 capsules instead of the door frame mesh itself solved this issue.

Certain shaders now seem to be much slower than previously. A grid of 100+ characters was causing frames to drop to the teens; I’m fairly certain it did not do that before. Fortunately, in this case there was no reason for them to be using a special shader, so changing their material to use the new Unity 5 Standard shader solved this problem, putting the fps in the same area back up to well above 60.

Fog has changed. It, along with many other settings, now lives in the Lighting window, and it also requires that the Global Fog image effect be added to the camera.

How the podcast gets to you, Part 2

In part 1 I touched on the website and the recording process; this time, editing and processes.

Editing

After recording I have 2-ish gig of wav files on the MacBook Air. I take this home after recording finishes and copy all the raw data into a new ISO standard date format folder on my NAS. I won’t edit that night, as editing takes up to an hour and it’s normally after midnight at this point.

When it comes time to actually edit, I get all the raw wavs onto my PC, copied from the NAS. As mentioned previously, these are in parts, broken up so I know where to edit; I just have to cut top n tail. I record all the warm up and prep time. Why? Well, some of it may be relevant to future discussions, but really it’s because sometimes it’s quite funny and I want to make sure we have it for future use. When recording, we all simultaneously clap once to mark the start and end of the ‘for real’ parts of the recording; this makes it easier to spot in editing and gives us a point of mentality shift, from warming up to radio time.

Steps for editing in Adobe Audition

  1. Find a sample of background noise only, normally easy to find immediately before the clap. Sample it and apply removal to the entire file.
  2. Find the starting clap, select from the beginning of the file to just after the clap, save the selection as _pre and then delete it.
  3. Find the ending clap, select it and delete it. No need to save this, as it is normally only seconds long and is just a recording of me getting up to stop the recording and restart it.
  4. Depending on the recording, at this point I might use the DePopper, DeClipper and DeEsser.
  5. Now I have a de-noised sample of just the podcast content, but it’s all over the place: Aaron is super loud, Tony and I are quiet. I have a favourite set up for doing a bunch of processing in 1 button. It does:
    • Normalisation
    • EQ – a very slight boost to low and high freq, makes voices sound fuller
    • Dynamics processing – making quiet things louder
  6. After those are complete, it’s time for Match Volume. Loudness is actually quite tricky; I’ll leave it up to you, dear reader, to look up LUFS. After some research I found the loudness for most podcasts is -18 LUFS, so we use that too.

After all that, I have 1 section ready, normally the discussion of 1 game. Repeat for each section, normally 4: 1 for each game, plus the wrap up. Most of those steps are very quick; Match Volume, Noise Removal and the De* ones can take minutes each, as they are more intensive.

Now comes the merge. I start a multitrack session, place each clip one after the other in order, and then export the multitrack session to a file, a wav.

With this giant wav file of the entire podcast, I now run the delete-silence process over it. I have it set up to look for silences longer than 1 second and reduce them to half a second. This removes the gaps between my top n tail cuts and any long pauses in the conversation where one of us, normally me, is collecting thoughts before responding.

Now export as mp3; 96kbps is more than enough for our voices. Unfortunately, Audition doesn’t seem to allow me to edit the ID3 tag, so I have a separate program for editing the mp3 tags. Then the edited wavs and the mp3 are copied back to the NAS in their respective folders, and the mp3, which is normally around 100meg, is uploaded via ftp to our hosting. Given my woeful upload speed, this takes about 1.5hrs.

Processes

We don’t currently have a LOT of stuff that we have to do each week to get this thing out the door, but even with so little we’ve got some very clear processes we go through each week. We have a shared Google Drive that houses business related documents, the random rolls, and a folder for each of us to take notes about the game we play that week; we don’t peek in each other’s folders.

Everything we post via WordPress is cross posted to our Twitter and our FB page. This is normally a podcast+post on Sunday night and a mid week post revealing the games for the upcoming week. For each of these we use the policy of ‘eyes on’: we start the post, save it as a draft and notify the others that they need to check it. By that we mean they’ll read over it and make changes as required. This catches spelling mistakes and means that nothing is first draft. If I start the post, then both Tony and Aaron will have made changes and bettered it by the time I look at it again before posting. It also greatly helps in keeping a unified voice for ‘The1st10minutes’, so things don’t obviously feel like Steve wrote this one as opposed to Tony, etc.

For actual content, we know what games each of us are playing but we stop each other from talking about it before we go to record, thus “Save it for the podcast, buddy!”

We start a timer for 10 minutes from the moment we hit New Game in whatever it is we are playing. Failing bizarre circumstance, we play it as the developer intended. When the timer goes off, we stop playing and start taking notes. Normally we use OBS to record the play through, so we can scrub back through it if need be to refresh our memory and fact check. As for notes, we keep developer, year of release, platforms and publisher; apart from that, we all take notes a little bit differently. I scribble down talking points: things that I want to mention, how long it took until I was in control, or things that jump to mind on reflection.

That’s about it for now, I’m sure there’ll be more soon.

How the podcast gets to you, Part 1

I thought it was probably about time to talk about the podcast, The1st10Minutes. We started recording late last year and it sort of just fell to me to be the guy that does most of the back end stuff. That includes recording, editing and the website.

Website

This is probably the easiest part now. I bought the domain, an easy $20 a year via GoDaddy; they already have all of my other domains. Web hosting, also easy: I already use HostGator, it’s under $10 a month for unlimited bandwidth and I don’t need much more. It only has to serve simple webpages (WordPress), rss feeds and files (mp3s).

I’ve used WordPress for a while now. If there’s a simple dynamic website task you want to do, there is probably already a plugin out there that can do it for you; e.g. the Seriously Simple Podcasting plugin, where we give it some info and it generates the iTunes compatible rss feed so we can be in iTunes, SNAP to auto crosspost to FB and Twitter, and Disqus to handle comments. Finding a theme took longer than I would have liked, and I ended up just purchasing one for about $40. It’s clean and is intended for podcasts, so it doesn’t look like a bog roll. Unfortunately, it keeps breaking; well, plugins that it supports have changed and it’s struggling to update in a reasonable timeframe, so much so that I’ve disabled it right now until it calms down.

Uploading is annoying. The upload speed from my apartment in the city is pretty terrible, ~20kBps, and the podcast, normally 2.5hrs long as a mono 96kbps mp3, is around 100meg. So I have to ftp the file to our hosting, both because the hosting has a php upload limit and because at that speed it normally takes over an hour to upload.

The initial setup time for website, social accounts and figuring out the process for getting into iTunes and testing the system was probably a day (<8hrs). On an average week maintenance of the website and populating it with content probably accounts for around an hour.

Recording

Apart from Ep 001, which was recorded in a quiet but small room, we’ve been recording in the theatrette here at work. It’s a larger space with adequate dampening; background noise is low and consistent (very important), and while there are some echoes they are not unpleasant. Echoes I can’t do anything about, but background noise I can remove in editing, especially when it is quieter than the speakers and consistent throughout.

We record after hours. The podcast is normally between 2-3hrs, but it takes us from 6pm to about 11pm to record. Some of this is dead time, but we need breaks; talking nonstop is tiresome, thirsty work. We don’t have the most ergonomic set up, either; we all huddle around 1 microphone (a Blue Yeti) right now, so we need breaks frequently.

I record all of this via the usb mic into a MacBook Air, recording directly into Adobe Audition. We stop and restart recording at each break, so it is easy to trim later and doesn’t require hunting for problems or me adding in marks/cues as it records. This results in about 2 gig of wav data.

 

Up next time, editing and processes.

Unity and getting The Player

So I saw something very much like
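
Something in this shape, reconstructed from the discussion below rather than the original snippet:

```csharp
using UnityEngine;

public class Watcher : MonoBehaviour
{
    void Update()
    {
        // find the player, fetch its component, check a flag... every frame
        if (GameObject.Find("Player").GetComponent<Player>().someBool)
        {
            // do the thing
        }
    }
}
```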

A few things first: if it’s performant then it’s ok, sure. I don’t have the Unity source code, so I can only infer things from what they have talked about.

Let’s pseudo-code what this is probably trying to do.
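
Something like:

```
get the player
if the player's someBool is set
    do the thing
```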

 

Now let’s write the pseudo-code for what is actually happening.
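
Something like:

```
every frame:
    walk the scene comparing every GameObject's name against "Player"
        until one matches
    walk that object's components until one matches type Player
    read someBool
    throw both references away and repeat next frame
```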

That’s pretty expensive and not what we actually wanted.

Let’s talk about the quick n dirty way to make this a bit more like what the original pseudo intended. The original code indicates that there is only 1 player and we want access to it somewhere else; that sounds a lot like the exact conditions for a singleton. For a MonoBehaviour we can do this in a number of ways; we’ll discuss more robust solutions another time. The least robust but quickest is:

In the Player.cs
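
A sketch of that quickest version:

```csharp
using UnityEngine;

public class Player : MonoBehaviour
{
    // Quick n dirty singleton access; no duplicate-instance handling at all.
    public static Player Instance { get; private set; }

    public bool someBool;

    void Awake()
    {
        Instance = this;
    }
}
```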

 

and then the orig code becomes
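
Along the lines of:

```csharp
void Update()
{
    // one static lookup; no scene search, no component search
    if (Player.Instance.someBool)
    {
        // do the thing
    }
}
```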

 

Further

 

If somebool is a property rather than a field, or has knock-on effects, we could perhaps go even further and look at using the OnBecameVisible and OnBecameInvisible messages.
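
For instance, if someBool really means ‘the player is visible’, the renderer messages can maintain the flag instead of anyone computing it on demand; a sketch under that assumption:

```csharp
using UnityEngine;

// Requires a Renderer on the same GameObject for these messages to fire.
public class Player : MonoBehaviour
{
    public static Player Instance { get; private set; }

    public bool someBool { get; private set; }

    void Awake() { Instance = this; }

    void OnBecameVisible()   { someBool = true; }
    void OnBecameInvisible() { someBool = false; }
}
```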

Shader updates and inspectors

It’s been a busy few weeks. I’ve been knocked out a bit with a cold/flu thing and PAX. So there’s not as much to report as I would like but here’s the highlights.

PAX was great. Being in the bigger venue meant we could actually get to panels without waiting multiple hours in the cold and rain. Got to see lots of panels with indie devs and talk to them on the show floor. Numerous people told me GCAP is probably what I actually wanted to go to, and they’re not wrong. It just wasn’t going to work out this year. Next year, hopefully.

Shaders

updated substances and all the maps shader

I’ve updated the shader in a bunch of ways. It’s more optimised: previously it was using a separate texture for each map, now it’s all packed into channels. Considerably less total GPU RAM usage and it runs faster; a few things that were previously floatN are now just single floats, and things that used to be separate samples are now just swizzles. This simple sample scene is running comfortably at hundreds of frames even on my Mac Air.
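A minimal sketch of the packing idea, as a hypothetical editor utility (channel assignments here are illustrative, not my actual tool): merge greyscale maps into one texture’s channels so the shader does a single sample and swizzles.

    using UnityEngine;

    public static class MapPacker
    {
        public static Texture2D Pack(Texture2D ao, Texture2D height, Texture2D metal)
        {
            var packed = new Texture2D(ao.width, ao.height, TextureFormat.RGBA32, true);
            for (int y = 0; y < ao.height; y++)
            {
                for (int x = 0; x < ao.width; x++)
                {
                    packed.SetPixel(x, y, new Color(
                        ao.GetPixel(x, y).r,     // R: ambient occlusion
                        height.GetPixel(x, y).r, // G: height
                        metal.GetPixel(x, y).r,  // B: metallic/spec mask
                        1f));
                }
            }
            packed.Apply();
            return packed;
        }
    }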

ao sample grey scale

Grey AO

ao sample colour

Coloured AO

I was talking to Tony at work about what I’d been messing with and he mentioned that it really bugged him that AO was always greyscale. So I have a first draft of coloured AO in the shader and a node to generate it from diffuse, AO and height in Substance Designer. The early results are interesting; it’s kind of a very local faked GI or PRT. In any event I like the way it looks, it makes everything appear much richer instead of washed out.

custom material inspector

Next up was a custom material inspector for the shader, just so it can hide properties that aren’t being used. E.g. here the lightramp textures are not shown as they are not turned on, same with occlusion strength. I also wanted to have both sliders and numbers showing for float ranges.
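A minimal sketch of that idea using the ShaderGUI API from newer Unity versions (the property names here are made up):

    using UnityEditor;
    using UnityEngine;

    public class RampShaderGUI : ShaderGUI
    {
        public override void OnGUI(MaterialEditor editor, MaterialProperty[] props)
        {
            MaterialProperty useRamp = FindProperty("_UseLightRamp", props);
            editor.ShaderProperty(useRamp, "Use Light Ramp");

            // Hide the ramp texture while the feature is off.
            if (useRamp.floatValue > 0f)
                editor.TextureProperty(FindProperty("_LightRamp", props), "Light Ramp");

            // Range properties draw as a slider plus a number field.
            editor.RangeProperty(FindProperty("_OcclusionStrength", props), "Occlusion Strength");
        }
    }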

I also created a simple light ramp and gradient ramp texture creator (the yellow and blue ball above is using a dual colour ramp shader) so I didn’t have to keep going back and forth to Photoshop etc. to create ramps. This is done by creating a custom editor window.

texture ramp creator
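A bare-bones sketch of such a window (hypothetical; the GradientField call assumes a newer Unity version):

    using UnityEditor;
    using UnityEngine;

    public class RampCreatorWindow : EditorWindow
    {
        Gradient gradient = new Gradient();

        [MenuItem("Window/Ramp Creator")]
        static void Open()
        {
            GetWindow<RampCreatorWindow>("Ramp Creator");
        }

        void OnGUI()
        {
            gradient = EditorGUILayout.GradientField("Ramp", gradient);
            if (GUILayout.Button("Save Ramp"))
            {
                // Bake the gradient into a 256x1 texture and write it to the project.
                var tex = new Texture2D(256, 1, TextureFormat.RGBA32, false);
                for (int x = 0; x < 256; x++)
                    tex.SetPixel(x, 0, gradient.Evaluate(x / 255f));
                tex.Apply();
                System.IO.File.WriteAllBytes("Assets/LightRamp.png", tex.EncodeToPNG());
                AssetDatabase.Refresh();
            }
        }
    }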

 

Generators

I wanted to be able to chain together and merge different patterns and randoms etc. to generate dynamic anim-curve-style data, specifically so I would not have to create custom animations for things like moving/flickering lights in the scene I’m working on. My first attempt worked great: the code made sense and was small and easy to maintain and extend. Unfortunately the way I was using inheritance didn’t play nice with the Unity inspector. So I redeveloped it as flat (and right now just a single class). The code is not as nice (it’s not soul code anymore) but it is entirely configurable from within the Unity inspector.

custom generator inspector

This required both a custom inspector for the GenerateDriver class and a PropertyDrawer for the Generator class. I’m particularly fond of the real-time preview. The head bob below is a few of these generators moving different local position axes, lerping between idle and moving bob based on velocity, ~10 lines.

headbob_generator
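A hypothetical flattened version of that head bob, sine generators on the local axes blended by speed (not the actual generator code; field values are made up):

    using UnityEngine;

    public class HeadBob : MonoBehaviour
    {
        public float idleAmp = 0.01f, moveAmp = 0.05f, freq = 8f;
        public Rigidbody body; // whatever provides a velocity

        void Update()
        {
            // Lerp the bob amplitude between idle and moving based on speed.
            float blend = Mathf.Clamp01(body.velocity.magnitude / 5f);
            float amp = Mathf.Lerp(idleAmp, moveAmp, blend);
            transform.localPosition = new Vector3(
                Mathf.Sin(Time.time * freq * 0.5f) * amp,     // side-to-side sway
                Mathf.Abs(Mathf.Sin(Time.time * freq)) * amp, // vertical bob
                0f);
        }
    }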

 

Finally here is some very early stuff I’ve been working on for the Red Room. Mostly just upping my proficiency with Substance Designer.

redroomwall_preview

redroomfloor_preview

 

Parallax Offset

So sometime last week, while playing with Substance Designer for a project I’m working on, I put some height maps into Unity’s Parallax Bump Spec shader. This sent me down an interesting rabbit hole. First off, Unity’s parallax is an interesting hack/shortcut.

 

No parallax

normalmaponly

Unity’s parallax

unity para

That’s it. Very cheap, and for low heights the effect is still impressive. But at large heights and on longer objects the textures begin to slide; things start to look like they’re made out of chrome and their texture is actually the environment map. Why? Well, it’s not how parallax offset mapping/parallax occlusion mapping/steep parallax mapping/relief mapping etc. work. They all rely on some form of ray marching through texels of a height field/map. Basically we have some starting point, the UV being rendered, and we move in the view direction across the surface of the texture incrementally; the distance per step is determined by the viewing angle. Each step samples the height, and if the ray has dropped below the sampled height, we’ve intersected the height field surface. When it can be afforded, a binary search is then performed between the previous and current sample to find a more accurate height for this frag, which greatly reduces height banding. This gives us a new ‘depth’ for the frag and a UV offset to sample all the other maps with. http://graphics.cs.brown.edu/games/SteepParallax/ and http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/a-closer-look-at-parallax-occlusion-mapping-r3262 give good overviews.
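A CPU-side C# sketch of that march, over a float[,] height field rather than a texture (names, signs and step counts are illustrative; the real thing lives in the shader):

    using UnityEngine;

    public static class ParallaxSketch
    {
        // March from the rendered UV along the view direction until the ray
        // dips below the height field; return the offset UV.
        public static Vector2 OffsetUV(float[,] height, Vector2 uv,
                                       Vector3 viewDirTS, float heightScale, int steps)
        {
            Vector2 duv = new Vector2(viewDirTS.x, viewDirTS.y) / viewDirTS.z
                          * (heightScale / steps);
            float stepDepth = 1f / steps;
            float rayDepth = 0f;

            for (int i = 0; i < steps; i++)
            {
                // Height is stored 0..1; treat (1 - height) as depth below the top.
                float surfaceDepth = 1f - Sample(height, uv);
                if (rayDepth >= surfaceDepth)
                    break; // intersected; a binary search between the last two
                           // samples would refine this and reduce banding
                uv += duv;
                rayDepth += stepDepth;
            }
            return uv; // sample all the other maps with this offset UV
        }

        static float Sample(float[,] h, Vector2 uv)
        {
            int x = Mathf.Clamp((int)(uv.x * (h.GetLength(0) - 1)), 0, h.GetLength(0) - 1);
            int y = Mathf.Clamp((int)(uv.y * (h.GetLength(1) - 1)), 0, h.GetLength(1) - 1);
            return h[x, y];
        }
    }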

Parallax Offset

para

Well, occlusion had to come next, didn’t it? It’s expensive. It uses the same principle: once you find the offset, you use the same ray marching idea to move toward the light, along the light direction, over the height field. If you make it out without dipping below the height, the point isn’t occluded; if you do hit something, the light doesn’t reach you.
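A matching sketch for the shadow march, as a method to sit alongside the one above (it reuses the same Sample helper):

    // March from the found surface point toward the light; blocked if we
    // ever end up under the height field on the way out.
    public static bool Occluded(float[,] height, Vector2 uv,
                                Vector3 lightDirTS, float heightScale, int steps)
    {
        Vector2 duv = new Vector2(lightDirTS.x, lightDirTS.y) / lightDirTS.z
                      * (heightScale / steps);
        float rayHeight = Sample(height, uv);    // start on the surface
        float stepUp = (1f - rayHeight) / steps; // climb out toward height 1

        for (int i = 0; i < steps; i++)
        {
            uv += duv;
            rayHeight += stepUp;
            if (Sample(height, uv) > rayHeight)
                return true; // dipped below the field: the light is blocked
        }
        return false; // made it out; not occluded
    }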

Smoothing these out was needed; I wanted to find a cheap way to remove the misses and the aliasing. My current method is to use lower mips. One of the recommendations for height map aliasing is smoother maps in the first place, which I have also played with a bit in my demo scene. Mips will smooth for me automatically (they’re already there, I just needed to use them) and will still allow extreme close-ups to keep their detail. Unfortunately it was not as easy as tex2Dlod; Unity surface shaders don’t seem to support it. But they do support tex2Dgrad. This worked: much of the aliasing and sparkling was gone, though a very sharp edge remained. To smooth it (or rather make it less regular), compare the sample to that of an even lower LOD and fade based on the difference between the two.

Parallax Occlusion

para occ

The solution is not overly accurate but it is smooth, consistent and predictable. There is little to no sliding, popping or sparkling. It only works in DX11 mode in Unity, though. I wanted to see if it was possible using only Unity’s surface shaders instead of writing GLSL or Cg from scratch; it is, but it means changing renderer causes it to fail, due to using tex2Dgrad inside control flow statements: X6077. This strikes me as odd as I’m fairly sure what I’m doing is legal, especially since Unity graphics emulation thinks it’ll work on OpenGL ES 2.0. It might purely be that Unity’s conversion from surface shader to Cg is renderer dependent and is stumbling somewhere.

Some of the problems with this method:

  • Everything is pushed in (away from the surface normal), so things shrink.
  • For great heights, texels on slopes get pulled in ways they were never intended to; this could be remedied with tangent space tri-planar mapping, or at least some noise over the stretch.
  • It’s expensive. Some quick numbers out of the Unity viewport: parallax occlusion is ~450fps, parallax without occlusion is ~650fps, and Unity’s offset is 2000-4000fps. That’s on a GTX 670 in a window that is about 1600×900.

After much fiddling with tuning parameters, like min and max steps, I was happy… for a time. I wanted to see what else I could push with the other map types out of Substance. Ambient Occlusion was the next map of interest. I’ve previously used AO in Substance as an information source for the diffuse map. I tried mixing AO with both depth and nDotL (normal · light). The results, again, are not accurate at all, but the effect was surprisingly good, to me at least. It gave fake soft shadows across the surface.

Directional Ambient Occlusion

amb dir only

Final result.

all

This is by no means the best, most complete or most performant way to accomplish these tasks. Displacement mapping would most likely be much faster and produce smoother results.

 

Different angle

Normal map only

none skew

 

Unity Parallax

unity skew

 

Parallax Occlusion with Directional Ambient

para all skew

 

Thoughts on 10 year game

We gave the Studio 1 games development students a quick exercise to get them into the swing of things: the theme of the unit, the skills they will be aiming to develop, and a bit of a rude awakening as to how much is expected of them from this point onwards.

 

The brief was: come up with a concept in 1 day that could be played for 10 years and that can be prototyped in 1 week. Students struggled with the idea of 10 years; this was expected and intended. I expected them to be looking at the basics of game design, which they should all be familiar with by this point. This should include Theory of Fun, player types, 4 keys 2 fun, modes of play, etc. Then determine which of these most easily lends itself to longevity. Many students, without seeming to realise it, focused on novelty. They were very averse to their game ever feeling the same from one play to the next. I’d partially attribute this to the semi-recent resurgence of rogue-likes or PDLs (Procedural Death Labyrinths).

 

Anyway, the point of this post was to briefly talk about the tactic I would take with this brief. Many of the games I played way too much of are not rogue-likes: Tetris, Doom, Mario 1 and 3, Goldeneye, Counter-Strike, Half-Life (1, 2 and Ep 2), Sim City 2000, the Batman Arkham series, Picross and so on, but you get the idea. None of these fall easily into the chess basket, games of skill built around competition where there is always more to learn and always a challenger to face. In CS I spent more time playing against bots than people, partially because I’ve never loved pure competition, partially due to a poor internet connection at the time. They are not Poker or Solitaire either. To me they all have fairly well defined systems, they are predictable and, most important I think, they allow zen: when you can play the game without actively thinking about it. Like playing a song on guitar or piano you learned many years ago, or how you can prepare one of your favourite meals while still holding a conversation, or the weird and kind of terrifying moment when you pull into your driveway but don’t really remember driving home. They’re games that are really good at flow.

 

For me this state doesn’t require a strong hour-to-hour gameplay loop. It requires that the second-to-second loop is physically and mentally taxing enough to trap my brain, and then a minute-to-minute loop that varies the stress level so players don’t become so overwhelmed that they have to stop or so detached that they become bored. Some sort of rhythm mechanics would be my starting point.

 

Proportional Navigation

Another in the line of mesmerizing demos. Check it out here. At some point soon this will replace the current naive tracking the missiles do in Just Shmup Dammit and eventually become part of JGDLib. I was prompted to look into it after some of the students chose to look into missile swarms a la the Itano Circus. They mostly ended up with variations of flocking algorithms. This is not that; it’s what’s actually used in many real missiles and apparently a few of the more simulation-based games.
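For the curious, a rough discrete form of the idea (one common statement is: commanded acceleration = N × closing velocity × line-of-sight rotation rate). This is a sketch with made-up tuning, not the demo’s code:

    using UnityEngine;

    public class ProNavMissile : MonoBehaviour
    {
        public Transform target;
        public float navConstant = 4f; // N, typically 3-5
        public float speed = 20f;

        Vector3 prevLos;

        void Start()
        {
            prevLos = target.position - transform.position;
        }

        void FixedUpdate()
        {
            Vector3 los = target.position - transform.position;

            // Rotation rate of the line of sight, and closing speed, this step.
            Vector3 losRate = Vector3.Cross(prevLos.normalized, los.normalized)
                              / Time.fixedDeltaTime;
            float closing = (prevLos.magnitude - los.magnitude) / Time.fixedDeltaTime;

            // Steer perpendicular to our heading, proportional to the LOS rate.
            Vector3 accel = navConstant * closing
                            * Vector3.Cross(losRate, transform.forward);

            Vector3 vel = transform.forward * speed + accel * Time.fixedDeltaTime;
            transform.rotation = Quaternion.LookRotation(vel.normalized);
            transform.position += transform.forward * speed * Time.fixedDeltaTime;

            prevLos = los;
        }
    }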

Readying Something For The Assetstore – Part 4

So remember when I said I was going to feature freeze JGDLib, well I lied. After watching the Unite keynote, I decided I needed to celebrate the incoming PBS pipeline with a collection of entirely unrealistic, definitely not physically based shaders.

Check it out here

These now require a little bit of documentation, then it will be feature freeze and commence scrounging for some testers.

Readying Something For The Assetstore – Part 3

Documentation is complete and I’m feature freezing JGDLib at this point so I actually push ahead with its release. Without doing this, I fear I’ll fall into the programmer trap of continually finding new awesome things to add to the library but never releasing it. No one benefits from that.

So what happens now? Quite a lot actually. I registered my company name and want to put this out under that, so its website needs to be up and running (a topic for another blog). I need to do some closed testing with users, most likely work colleagues. The focus will be usability of the software itself, finding weaknesses in the documentation, and bug hunting. My main concern right now is that I know how to use these because I wrote them, but the lib needs to be so obvious, self-explaining and well documented that I’m not flooded with support emails from people who have paid for it post release.

On a side note, the last thing I added to the lib was a projectile solver. It solves the launch angle for a constant speed projectile to hit a moving target. It handles either, both or neither falling under gravity. It’s kinda mesmerizing.

Fixing Mono Auto-Formatting

So at work, Unity and Mono seem quite unhappy with the lack of admin rights. The most annoying consequence of that is Mono defaulting to a crazy auto-formatting. With a project open in Mono we can fix this by changing two settings.

Go to Project->Solution Options. In Code Formatting, change the C# source code policy to Microsoft Visual Studio.

Then the same setting in Mono itself: Tools->Options. Again in the source code section, Code Formatting, C# to Microsoft Visual Studio.

While you’re there, I also normally adjust the settings under Behaviour.

Readying something for the AssetStore – Part 2

So I think I’ve decided to charge something for what I am now calling JGDLib, Just Games Dammit being the business name I registered. I came to this conclusion for a number of reasons:

  • It’ll be great to see any money come from this effort
  • I’ll feel obligated to respond to support emails/requests so at least if it costs money I’ll feel like that support is already paid for
  • A paid asset will almost certainly lower the number of users, which makes me less anxious

Also, good doco will have to be my saviour here. So this is what I’ve completed so far. I’ll also need to write up the rationale, class explanations and usage for Object Pools, Easing and Camera Shake. I’ll link to these in the asset store listing, so I’m envisioning that the clear use cases and usage methods will stop people from buying it thinking it’s something it isn’t.

Unity Optimisation Basics Part 3

Tags

Which of the following should execute faster?
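(The snippet itself isn’t preserved; my guess at its shape is four ways of asking the same question.)

    using UnityEngine;

    public class TagTest : MonoBehaviour
    {
        public GameObject go;

        void Update()
        {
            // Each of these answers the same question; only the last avoids
            // the string alloc from the .tag property getter.
            bool a = go.tag == "Enemy";
            bool b = go.tag.Equals("Enemy");
            bool c = string.Equals(go.tag, "Enemy");
            bool d = go.CompareTag("Enemy");
        }
    }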

If you guessed any of them, you are wrong. That’s right: calling them 10000 times a second, there is no single method that consistently executes faster than the others. (Not entirely true, but I’m making a point here.)

So the compiler optimised out the literal string for us; that’s good to know. Let’s look at the profile results.

tagtest

J’accuse. Everything is allocing but CompareTag, the method in GameObject that is recommended by Unity as the method to, you guessed it, compare tags. Wherever at all possible use CompareTag; the GC is your enemy. It’s an interesting situation. The alloc comes from get_tag(), which is triggered by the .tag property; it could be a call into Unity native code that results in a Mono string being created, or just down to the immutable nature of strings in Mono. I know I’m guilty of it too, but if you can find a .tag anywhere in your code base, do your darnedest to get rid of it.

Strings

Speaking of strings: string concat, string.Format or StringBuilder for the simple wave counter in JSD?
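The three candidates, roughly (the wave counter naming is an assumption):

    using System.Text;

    public static class WaveLabel
    {
        static readonly StringBuilder builder = new StringBuilder();

        public static string Concat(int wave, int total)
        {
            return "Wave " + wave + " of " + total;
        }

        public static string Format(int wave, int total)
        {
            return string.Format("Wave {0} of {1}", wave, total);
        }

        public static string Builder(int wave, int total)
        {
            builder.Length = 0; // reuse one builder; old Mono lacks Clear()
            return builder.Append("Wave ").Append(wave)
                          .Append(" of ").Append(total).ToString();
        }
    }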

Format looks nicer, kinda, probably a lot if you like printf.

stringtest

Well, crap. Format is out; it’s the slowest and the most trashy. StringBuilder vs regular concatenation is a bit trickier. For my purposes I can avoid calling the function except when the current wave actually changes, so less than once a second; keeping the alloc as low as possible suits me.

 

Unity Optimisation Basics Part 2

Math is hard

Currently in JSD, the seeking missiles are taking up WAY more CPU per frame than I would have anticipated: as much as 30% of the total CPU load per frame when there are about 40-60 of them active. This is pretty much all they’re doing:
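(A hypothetical reconstruction of the per-frame seek; the real snippet isn’t shown.)

    using UnityEngine;

    public class SeekingMissile : MonoBehaviour
    {
        public Transform target;
        public float speed = 10f, turnSpeed = 4f;

        void Update()
        {
            // GameObject/Transform null compare every frame.
            if (target == null || !target.gameObject.activeInHierarchy)
                return; // reacquire elsewhere

            // Smooth rotate toward the target, then move forward.
            Quaternion look = Quaternion.LookRotation(target.position - transform.position);
            transform.rotation = Quaternion.Slerp(transform.rotation, look,
                                                  turnSpeed * Time.deltaTime);
            transform.Translate(Vector3.forward * speed * Time.deltaTime);
        }
    }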

Part of the problem is that GameObject compares aren’t trivial, see here. The Translate and smooth rotate are expensive. For almost all of the other objects in the game these happen during enable and never again. If I made the missile a ‘plasma ball’ then I could simplify the maths a bit here by not needing to update the orientation of the object.

SetActive()

I was doing a little bit of hunting around to find more on the cost of GameObject.Activate, which gets hit in the profiler for SetActive(true), and I found this. Needless to say I had to test this theory that enable might be better than active. So let’s dig in.

This section is 3 methods running simultaneously so you can visualise the ratio of performance they require.

First up: Instantiate and Destroy 100 simple boxes with a RigidBody, every frame.

all_instHiLi

all_destHiLi

I’ve highlighted the Destroy here too. It’s separate, as Destroy seems to add the item to a list Unity keeps internally and handles next frame (as I understand it). Most of the memory here seems to be in native land, not in GC land, but we are still generating some garbage. It’s also responsible for a big chunk of the ‘Other’ time, which I can only assume is Unity native code creating the GO and components, and serialising/deserialising from the prefab.

Next: deactivate 100 in a list, activate the next 100 in the list, oscillating each frame. This is the basics of an Object Pool.

all_activateHiLi

Not a crazy amount faster, but it is faster, and there is no Destroy and no GC to speak of.

OK, let’s try this enable/disable trick.

all_endisHiLi

:O

 

Let’s see them in isolation.

Instantiate and Destroy – 9.3ms to 10.3ms

solo_instNoGC

Inst without the GC showing

solo_instGC

Inst WITH the GC showing. Ouch.

 

Activate/Deactivate – 6.7ms to 8.9ms

solo_act

Not as good as we’d like, but definitely better, and no GC.

 

Enable/Disable on collider and renderer – 2.5ms to 3.0ms

solo_endisMinimal

A little bit of GC, but this is insanely good. What’s going on?

Enable/Disable Generalised – 8.3ms to 15ms

Here I add the 2D variants, audio and animator components to the mix to create a more general solution (no change to the object being made, just to the enable/disable code), since activate/deactivate and instantiate/destroy would handle any combo and any number of components. In the real world it would most likely also need any scripts you’ve added to the object to be disabled.

solo_endisGeneral

The GC is going crazy due to all of the GetComponent calls; remember that .rigidbody or .audio etc. are properties that basically end up calling GetComponent.

OK, so once we try to extend the enable/disable trick to a more general solution it becomes worse than just instantiating and destroying. It is a good trick to remember though; if you have a very simple object it is a far more surgical method.

For the sake of science I tested this same setup for all modes with 10 and 500 cubes. The physics engine didn’t much care for 500 each; it actually made GameObject.Activate way more expensive than anything else, but the test was running at fractions of an fps due to way too many overlapping cubes.

With 10, it tells a different story.

all_10obj

Yes, the order of best is the same: Inst about 0.8ms, Act about 0.7ms, Enab about 0.4ms. But those spikes, those spikes are where the Enable/Disable takes 7ms, yes SEVEN, up from 0.4. It’s entirely down to it triggering a GC collect.

What if it only ever asks for things that do exist? I tried it with a few empty component scripts I created: no more GC. Remove them from the prefab and the GC comes back. So if it only ever looks for components that do exist, then it’s faster with no downsides. It’s worth investigating a custom per-prefab pool that uses this knowledge.

Well, now I have to test what a ‘find all Behaviours’ will do… Well, that kinda sucks. Collider and Renderer are not Behaviours, so you need to special case them. Something like:
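(My reconstruction; the original snippet isn’t preserved.)

    using UnityEngine;

    public static class ToggleAll
    {
        public static void SetLive(GameObject go, bool live)
        {
            // GetComponents allocs a new array each call, hence the spikes.
            foreach (Behaviour b in go.GetComponents<Behaviour>())
                b.enabled = live;

            // Collider and Renderer derive from Component, not Behaviour,
            // so they need their own passes.
            foreach (Collider c in go.GetComponents<Collider>())
                c.enabled = live;
            foreach (Renderer r in go.GetComponents<Renderer>())
                r.enabled = live;
        }
    }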

But that totally works and is nicer to look at, kinda. It runs at 0.3ms average, but those spikes don’t go away, due to us now using GetComponents, which allocs an array.

So, not SetActive?

Well, no. It’s a general solution that handles all cases, is mostly unintrusive to your codebase, and can easily be written to never incur any GC alloc. Moreover, its worst case is far more predictable and acceptable than the others in their general usage. Given that the GC is mostly out of our control and Unity doesn’t provide a no-alloc way of fetching components, we cannot control the perf spikes due to the garbage being created. So unless you have a very specific set of components that you know exist and can access directly or already have around, you probably don’t want to enable/disable. All that being said, I’ve added ‘intrusive enable/disable pool object’ to my todo list.

Don’t do things you don’t need to

This isn’t at all Unity specific, it’s just general programming. What’s the best way of optimising ‘this’ code? Not doing it at all. For example, in some of my testing I wrote this:
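(A hypothetical version of its shape; the null check and the use each call GetComponent.)

    void Toggle(GameObject go, bool live)
    {
        if (go.GetComponent<Rigidbody>() != null)
            go.GetComponent<Rigidbody>().isKinematic = !live;

        if (go.GetComponent<Renderer>() != null)
            go.GetComponent<Renderer>().enabled = live;
    }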

Works and looks fine. But it’s (potentially) doing twice as many GetComponents as it needs to. Changing it to this
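(same sketch, fetching once and reusing):

    void Toggle(GameObject go, bool live)
    {
        var body = go.GetComponent<Rigidbody>(); // fetch once, reuse
        if (body != null)
            body.isKinematic = !live;

        var rend = go.GetComponent<Renderer>();
        if (rend != null)
            rend.enabled = live;
    }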

gave a non-trivial performance improvement. It doesn’t lower GC alloc, as that seems to only come from requesting a component that does not exist. It’s the same reason Unity recommends that you grab local references to any and all components you need to access in your scripts during Start or OnEnable, instead of during Update, FixedUpdate, etc.

Readying something for the AssetStore – Part 1

So I decided that since I often want the same collection of features in all my Unity projects/protos, I should bundle all of that together into a lib and make it available on the asset store. These things, Splines, Easing, Pools, live CSV data and a few others, are all things that I’ve written partially because I wanted to or because things on the assetstore didn’t quite have the features I wanted. Bits and pieces of it I’ve already given to students on several occasions, and the process is quite odd.

On one hand I think these are very useful and in some cases more feature rich than their competitors on the AssetStore right now, so I could charge money for this lib. At the same time, they make sense to me, not necessarily to everyone in the world, and especially not to the non-programming users of Unity, of which there are many. I don’t feel I have the time to answer lots of support requests. So do I charge lots for it, so not many people pick it up on a whim, or nothing (or not much), so people don’t expect a high level of support? To further complicate this, I’d love to be able to just tell the students to go get my lib from the asset store, but if it’s expensive then I cannot really do this; they don’t have money and it’s a conflict of interest.

While I’m on the topic: code libs are not easy to demo in the AssetStore package setup. They don’t have cool screenshots. The best solution I’ve seen others come up with is to create basic feature tutorials as screenshots or captures of the editor, with arrows and text everywhere explaining what you’ve added and how easy it is to use to do x, y, z now.

I plan to discuss these little systems in detail and my journey of getting them onto the store in future posts.

Unity Optimisation Basics Part 1

I recently decided to run the Unity profiler over JSD and see what could be improved; this is just a quick review of my findings.

Searching is bad.

I already avoid all Find and GetComponent calls whenever I can, so there weren’t many of them to find. I don’t recall if it was AltDevBlog or the Unity Blog itself or a Unite or Learn video, but it doesn’t matter: GameObject.Find*, Object.Find* and GetComponent are all expensive, and wherever it was, it explained that this is simply because Unity does a linear search through the collection to find them. Makes sense in a way; even if they were O(log n) it would still be expensive to do them all the time, and it would put extra constraints on the internal storage (it couldn’t be cache sensitive or lazy).

Collision layers, use them.

JSD has a lot of trigger rigidbodies. I had set it up so bullets didn’t overlap with themselves and never gave it another thought, until I saw 126 calls to OnTriggerEnter in 1 frame: lots of bullets overlapping enemies, each doing a few checks and throwing the event away. Changing bullets to be in either an ‘only collides with player’ layer or an ‘only collides with enemies’ layer reduced this down to much more acceptable levels. This also drastically reduced the number of contact pairs the physics engine has to create and maintain, which greatly lowered the overall, and consistently high, amount of CPU time spent in the physics engine.

yield rarely

The bullet spawners still had a yield in them from an earlier debugging attempt to prevent infinite loops during testing. This seemingly harmless line
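(something of this shape; the exact line isn’t preserved, and the names are placeholders)

    if (++iterations > sanityLimit)
        yield return null; // debug guard against an infinite spawn loop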

was causing an additional 10% overhead on that function call. This again is not surprising; if you haven’t already, have a look at http://www.altdev.co/2011/07/07/unity3d-coroutines-in-detail/. It’s not an inconsiderable amount of work to make that co-routine happen, even if the line with the yield never gets hit.

Pools are not a silver bullet

I use object pools (of my own creation) to avoid trashing memory, which is mandatory on mobile if not everywhere. You should avoid allocs during gameplay like the plague. I found however that activating objects (SetActive(true)) is still quite expensive.

Surprisingly, splines are not using much at all; neither is overall rendering.

Unity in a Repo

I’ve thrown together a quick little Google Doc on how I use source control and Unity together. This isn’t complex, but a quick google showed that a simple guide was hard to locate.

It deals much more with how I use Unity in a repo; it is not meant to be a guide for how you MUST use it, just a way that works for me. It is not really a discussion or explanation of how source control works; there are plenty of other places to understand that.

View it here

Just Asteroid Dammit Core

Didn’t get to spend as much time on this as I would have liked, but we do have a ship, and asteroids and sub-asteroids spawning and floating around with the 2D physics system.

I made them all super bouncy by adding a PhysicsMaterial2D, as the default material was a little flat. The trick for my wall wrap is boxes on the edges of the screen using OnTriggerExit2D, which flip an object’s position based on a scale, and only if its velocity passes a dot product test. The dot product ensures that objects are trying to leave the screen, not enter it from that side; without it, objects would flicker back and forth between sides.
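A sketch of one edge box’s script (the field values are assumptions):

    using UnityEngine;

    public class WrapEdge : MonoBehaviour
    {
        public Vector2 outwardNormal;                        // e.g. (1, 0) on the right edge
        public Vector3 flipScale = new Vector3(-1f, 1f, 1f); // mirror x to the other side

        void OnTriggerExit2D(Collider2D other)
        {
            Rigidbody2D body = other.attachedRigidbody;
            // Only wrap objects actually heading off-screen on this side.
            if (body != null && Vector2.Dot(body.velocity, outwardNormal) > 0f)
                other.transform.position = Vector3.Scale(other.transform.position, flipScale);
        }
    }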

Try it here

Basic sort algorithms

I have an exam coming up for my Grad Cert. It covers a bunch of data structures and algorithms as well as Java.

I don’t know about you, but what I remember about the internal workings of sorting algorithms is minimal, because they don’t tend to affect my day-to-day life. I know which ones I might want to use for certain purposes, but generally it’s .Sort or sort(it, end) etc., as the languages and platforms I use tend to have well tested and pretty optimal solutions built in, these normally being QuickSort or IntroSort.

So in order to better commit these to memory I implemented a bunch of them, not in Java of course (:D), but in C++. You can check it out if you like on my Bitbucket profile page.

Refs:
Wikipedia, obviously.
Hungarian sorting dances, because they are still one of the best visualisations of the algs out there.
http://www.sorting-algorithms.com/ is a newer one that is also quite good, as it gives very high level overviews of the algs and it is interesting to watch them race.

Just Invaders Dammit Core

Next up was space invaders.

Try it here

With this one there are a few little tricks to be learned. Don’t want the player to go off the screen? Well, there are heaps of ways to get that to work, but for a core gameplay prototype, just put a clamp in there and dial it in. In this case the player’s transform x is not allowed to go beyond -2.9 to 2.9.
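The whole trick, using the values above:

    using UnityEngine;

    public class PlayerClamp : MonoBehaviour
    {
        void LateUpdate()
        {
            Vector3 p = transform.position;
            p.x = Mathf.Clamp(p.x, -2.9f, 2.9f); // dialled in by eye
            transform.position = p;
        }
    }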

Want a grid of enemies? Well, generate them. The enemy manager takes a starting point transform, how many rows and cols you want, and how far apart they should be; a sketch of this is below. In this case that seemed easier than copy-pasting GOs. It also lines up nicely, as we then child all those enemies to the enemy manager and it controls their movement and uses their colliders to determine when to change direction. It also chooses which enemy should fire.
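(A sketch of the grid generation; member names are assumptions.)

    using UnityEngine;

    public class EnemyManager : MonoBehaviour
    {
        public Transform enemyPrefab, start;
        public int rows = 4, cols = 8;
        public Vector2 spacing = new Vector2(1f, 0.8f);

        void Start()
        {
            for (int row = 0; row < rows; row++)
            {
                for (int col = 0; col < cols; col++)
                {
                    Transform e = (Transform)Instantiate(enemyPrefab,
                        start.position + new Vector3(col * spacing.x, -row * spacing.y, 0f),
                        Quaternion.identity);
                    e.parent = transform; // the manager then drives the whole block
                }
            }
        }
    }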

I’m using triggers on constrained rigidbodies on almost everything here.

Just Flap Dammit Core

I tasked my new Scripting for Game Development students with creating a core prototype of Flappy Bird. It doesn’t take too long to get the core running, and it’s well under 100 lines of code.

You can try mine here

For me, the trick at this point to getting it to feel even remotely good is totally bullshit physics values. Right now the bird is .5m, gravity is -16m/s/s, mass is 1kg with a drag of .4, and the impulse strength is ~10N.

Update: With the addition of about 15 more lines we now have personal bests to beat.

Try out version 0.2 here

JSD PA4

Updated JSD, or will have in a few hours according to the Android dev console. The game now pulls data from live Google spreadsheets and uses NGUI for a much slicker GUI. You can now also incrementally refund any upgrades purchased, instead of just refunding everything ever.

jsdpa4 new gui 1

JSDPA4 new gui 2

Shaky Cam WIP 2

Threw together a quick demo to show the variance possible in the shaky cam setup I described earlier. It sends the same shake vectors to all cameras; the different results are due to different spring constants and different rigidbody linear dampening values.

Try it here

Totally not peggle – Part 3

Playable here

Didn’t get to spend as much time on it tonight as the last two, but managed to get camera shake and the beginnings of some basic emotion in. Some pegs spawn with eyes; right now all they do is ‘panic’ (pupils shrink) and look at a ball if one is close. Otherwise they just blink randomly and look around. Looks cool though.

Shaky cam

Build here

A quick job of putting together a 3-degrees-of-freedom spring with linear limits. I tried using the built-in springs but they were a little crazy. At the moment it still relies on the rigidbody to maintain velocity and dampening; I’m not sure I like that, so I’ll probably move that out so it simulates itself in isolation from whatever the physics engine may want to do.

Totally not peggle – Part 2

Playable here

Completed this after work, apart from a short break here and there (for dinner and such). Lots of basic juice has been added. Music is from Incompetech. Sound effects were created in Bfxr. Easing is done using a little easing utility I wrote a while back. I’m pretty happy with it for what is, all up, only about a day’s worth of work.

The big things I haven’t gotten to yet are cracks in the pegs, a proper menu and world map, camera shake and touch friendly input. Perhaps tomorrow.

Totally not peggle – Part 1

Playable here

Something I threw together after work today. Made with Unity using 4.3’s new 2D sprites and NGUI. Box2D (which is what the new 2D physics system in Unity is) is SOOO much better at this kind of interaction than PhysX was.

Think I’ll add more to it later tonight and tomorrow after work.

 

UPDATE: Didn’t end up doing any more last night. OK, so HERE is the same thing but with an auto generated grid level. Now to juice it.

Just Shmup Dammit PA3

Just pushed out the Alpha 3 update last night. I had to rewrite the upgrade storage system because it was driving me crazy; now it’s basically a readonly Dict<string, double[]> stored for lookup. Player stats then look at this and store a local cached copy appropriate for their level, and only look at it again when they level up a stat. Much easier to work with.

2013-11-27 10.15.17

Also got rid of the intuition numbers and am now pulling values out of calculated spreadsheet data. And moved to Unity 4.3, so now all the bullets are sprites, not just quads. It didn’t really seem to change the median performance, but it may have smoothed it out a bit.

Missiles now indicate their target enemy with a blue laser sight, and they slow down over time, which leads into the 2 new upgrades: missile speed and missile agility.

Slomo indicator: an hourglass on the left, so that you can tell at what speed and whether you have slomo activated. Also smoothed out the timescale changes so it doesn’t jump from 1 to .6 in 1 frame; now it lerps between them.

On the horizon: Facebook integration, analytics, more meshes, more textures, more bullet patterns, landscape support.

Just Shmup Dammit: Presentations

The other CSU subject I was undertaking this semester was Mobile Application Development. In this subject we were to present an idea, explain the platform it was targeting and why, and give an overview of how we were going to implement it.

jsd_ps2_clipped

The second was essentially a progress milestone, showing off what you had accomplished in the short time: ~12 weeks, but only about 8 hours a week. I managed to get a small shmup-upgrader made with Unity3D completed in this time. It is not at alpha yet, but has a mostly complete game loop, with all original assets created by me in Blender, Photoshop and Substance Designer.

These presentations are available here and the game (in its second Pre-Alpha release) is on the Google Play here.

Just Another Card Game: physical prototype

The last project-style assessment the Games subject had was a physical game prototype and high concept document. Since making a physical (paper) prototype was mandated, I chose to create a card game. This was spurred on primarily by playing SolForge on iOS and seeing others play Magic: Duels of the Planeswalkers, HearthStone etc.

card game play test

I approached this by creating a game that is a chaotic collision of SolForge’s combat, deck building (a la Dominion) and the joy of being a jerk in games like Uno.

The brainstorming started with distilling down the rules a player would have to know, so removing things like action and buy limits, and then designing 3 unique card types. Monsters are the obvious one; the other two alter the state of play for players. A Duration card is like a weather or headline card from Arkham Horror: it changes or adds new rules to everyone’s turn. An Instant card, as the name implies, is instant and can be played by any player at any time for free. The goal of these cards is to make the simple rule set always seem fresh, and to keep all players’ attention on the game and avoid the “let me know when it’s my turn again” problem many turn based games face.

Next was a long list of all the possible effects, giving them approximate utility to players and thus deriving each card’s cost and relative worth. The initial list was well over 100 different effect types; this was found to be way too many, and some were far too similar.

I’ve tidied up and shared the folder with the draft cards, the pdf that was printed out, questionnaire results, rules and the High Concept Doc that was submitted.

 

View Files

Far Cry 3 game overview/deconstruction

I’ve tidied up the folder containing all the work I did for one of the earlier assessments for my CSU Games subject. The goal was to choose an AAA game and identify and analyse its basic elements: dramatic, systematic and gameplay.

fc3fireArmPS

This was done primarily by following the prescribed text ‘Game Design Workshop: A Playcentric Approach‘ and also some other basic ideas: concept docs, Caillois‘s modes of play, an extended version of Bartle’s player types, Koster’s Theory of Fun and an attempt at using Dan Cook’s Skill Atoms. I was fairly pleased with the result.

 

View Files

CSU Semester Complete

This semester of CSU has wrapped up and I’m pretty happy with my results: HDs in both subjects 😀 (Games 1 and Mobile Application Development). I’ll be posting up some of my work for these in Portfolio shortly.

Just Shmup Dammit PA2

A friend managed to clear the ‘last’ level of the enemy waves, so I felt obliged to push some of the minor work I’ve done since A1 went up.

A2:
added versioning to upgrade data and splash
changed missile launch pattern
player grace period after hit
clamp inside screen
vibrate on dmg

On the horizon:
All new progression based on spreadsheets not gut instinct
more upgradable stuff
More waves
better spawning
More Textures
More feedback

AAA Games

This is an adaptation of an assessment completed for ITC467 Games 1 in Graduate Certificate in Mobile Applications Development at CSU.

Triple A is not a grading of quality, an expectation of the delivered product, or even about the nature of the product being delivered. It is a credit rating. It is earned by having a strong credit history, generally also with some measure of liquid or solid assets, and a likelihood of being able to repay a debt. For this reason it is confusing when someone says they only play triple A games.

To understand this we need to dig through a few layers. A triple A studio is capable of spending very large sums of money. This normally translates into high visual fidelity with all the latest bells and whistles, as this is an easy thing to invest large quantities of money into. This large investment necessitates a large volume of sales. Why is this the case? In different circumstances, a product that costs more to produce could simply be sold at a higher price to the consumer; the phrase ‘you get what you pay for’ comes to mind. Games, however, follow a fairly rigid pricing model. This may be due in part or in whole to consumer expectations, saturation points (if it was priced higher, total sales would be lower due to consumers refusing the price point, and total profit would be lower), effort barriers and, perhaps most restrictive, platform holder rules. The price point is locked, so all that can vary is sales numbers. This leaves AAA games in the same situation as Hollywood big budget movies: they need to be blockbuster, runaway successes to be profitable. For example, Hitman Absolution sold millions of copies but fell short of expectations, only 3.6 of 5 million.

Just like big Hollywood productions, these games are then required to possess high mass market appeal. This tends to lead to known quantities for AAA games: genres and themes that are known to have high sales, predominantly based on what sold well last year. This also explains the high rate of ‘sequelisation’ in the games industry. Getting the base game working tends to represent a considerable cost, but then adding 10% more (new weapons, enemy types, textures) and all new level geometry for the sequel is, by comparison, quite cheap. This is also reinforced by the study that found sequels tend to fare better than originals or untested new IPs.

With this in mind we find that Far Cry 3 by Ubisoft Montreal is the quintessential AAA game. It was released in late 2012 on all current generation platforms: PC, X360 and PS3. It has sold over 4.5 million copies and is essentially an amalgamation of Assassin’s Creed, another Ubisoft property (which is annualised and sells millions of copies), and the First Person Shooter genre, which is arguably the most popular genre in gaming, with annualised franchises selling tens of millions of copies every year and setting and then breaking their own records for sales in 1 day, 5.6 million copies. It is worth noting that Far Cry 3 is an unrelated sequel to Far Cry 2, also by Ubisoft Montreal, released in 2008, which sold over 2.9 million copies. Like a Hollywood action film, it is a visual fiesta: island paradise with explosions and violence. Motion captured and voice acted cinematics make use of all the available DX11 technology. A story about an unlikely American everyman who is now the only one who can save the day. Its story and mechanics are designed so that no one could be offended by them.

 

Love Thy Spreadsheet

Spreadsheets are great. Well, maybe not great, but a valuable multi-tool. On any given day I’m probably using at least 2 spreadsheets. In particular I’m talking about Google Spreadsheets. A single, backed up place that holds my tax claimables, accessible on my phone, is amazing.

Any iterative or step-based equation I need (or just want) is easy to calc. Given the multidimensional nature of spreadsheets, multi-variable equations are also easy to play with, for example solving how much money you need in the bank before you never have to work again, taking into account yearly expenditure, inflation and interest rates.

For grading assessments, Google Spreadsheets allows me to write custom functions in JS like this;
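A tiny hypothetical example of such a function (Apps Script; the real marking functions aren’t shown, and the grade scale here is made up):

    // Convert a section's letter grade to a numeric mark (hypothetical scale).
    function GRADETOMARK(grade) {
      var scale = { 'HD': 100, 'DI': 80, 'CR': 70, 'PS': 55, 'FL': 25 };
      return scale[String(grade).toUpperCase()];
    }

Custom functions like this can then be called straight from a cell, e.g. =GRADETOMARK(B2).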

which allows things like:

asMarkSample

Using grades per section currently feels much more natural than a Likert scale or a number out of 100, which end up being too limiting, too open, or tend towards odd clumping. Also, the ease of putting basic statistics on the grades as they emerge is a very useful self moderator and an easy way to compare marking between multiple graders. That can be done in Excel or OpenOffice Calc or Google Spreadsheets, but the following is a bit unique to Spreadsheets.

gradeSummary

First, the squiggly. It’s a SparkLine that draws the frequency of certain grades. Here’s the code in the cell;

and the code for my function;

Below that is just a FREQUENCY against grade cutoffs. Spreadsheets automatically extends multiple results from a function down the column.

FYI: I love that Spreadsheets just gives you JavaScript arrays as the data and will accept them back as such.