Tuesday 1 May 2018

DESI1212 2017-18 Games Incubator - Evaluation



As the team's character designer, modeler and animator, I handled almost all character-related development, including implementing the assets I created and integrating them with gameplay systems.

Roles:

Character Designer:

Designing the characters for this game provided its own unique challenges and quirks. I decided very quickly that, despite our low-poly aesthetic, I wanted characters to have exaggerated and interesting designs and animations.


The original idea for our sole character in the final release, Big Boy, was always a top-heavy character with very few features. The focus of this character was on having a strong silhouette through body proportions alone, especially when compared to characters that might follow. At the time I did not know whether we would have a class-based system where characters had different stats, or whether players would be assigned a character based on their team, so I gave Big Boy a silhouette distinctive enough that his appearance alone would distinguish him from future characters.



As mentioned previously, focus was placed on the silhouette and posing of the character. I wanted our characters to always display interesting poses and strong silhouettes that differentiated them from each other. The 12 principles of animation would play a large role in achieving these exaggerated animations.

Originally we intended to have multiple player characters, in keeping with our game's concept of multiple factions forced to fight. However, due to time management problems and the priority of getting Big Boy working correctly first, many of the characters never made it beyond initial concept art.





While Big Boy was the only character to make it into the final submission of the game, of the characters I drew concept art for he was also the one that came closest to achieving the goals I set for our character designs.

Character Modeler:

Our team agreed on a low-poly aesthetic relatively quickly, and this immediately presented a challenge: how could my goals for interesting silhouettes and dynamic poses be achieved on character models that, by design, have far fewer polygons than most video game characters, especially those designed for exaggerated posing and articulation? One solution was to model the characters in such a way that their topology created interesting silhouettes. While this isn't as visible on Big Boy, the female human character exhibits this approach to design.


















The first model of Big Boy, as seen above, was merely a test skeletal mesh for the coders to build character-related systems with and to test attaching weapons. This model was very close to the final one, and its main use at this stage was to test whether my goals for the character were achievable. While the rigging wasn't as thorough as it would be later, it showed that even a featureless, low-poly character like this can convey personality and provide a powerful silhouette through posing and animation alone.

The final version of Big Boy has finger joints and a joint for attaching weapons to his hands.
The second and only other character to make it beyond concept art, the human character, posed its own challenge. I designed the character with realistic human proportions, but also considered exaggerating the lower body and giving them a slimmer appearance as a direct opposite to Big Boy's design, which exaggerates the upper body.

The first pass of the character model, with an early attempt at a visible head.
The same model with UV mapping applied for team colours.
A further refined version of the model.



The final version of the model, with updated boots and a head.
A view of the topology of the model from the front orthographic camera.



As mentioned previously, techniques were used to achieve a defined silhouette and shape the model in interesting, more organic ways:


By rotating the edge ring of the character's clothing it is possible to achieve more interesting-looking clothing folds without using any additional polygons. With slight exaggeration it becomes easier to tell the difference in material between the items of clothing the character is wearing, and to give the characters a stylised appearance despite the low polygon count.







For comparison, the image on the left shows the same sleeve with the edges rotated back to how they would have been when the arm was first created. Manipulating topology in such ways is crucial for more organic and natural character modelling at such low polygon counts.






I knew from the very beginning of this project that I wanted our characters to have colour customisation, so they were designed so that colours would not clash too often, with colour options instead affecting entire layers or sections of clothing. This meant character colours would be far more readable at any distance, as the surface area a single colour affects would be large enough to be seen.

For character customisation to work, I had to write a shader that uses a mask to apply colours defined by the user to certain areas of the model according to the UV map.

The Albedo texture. This won't be seen in game and is simply used as a default.
The mask texture.
The material. Using the generic ColorMask parameters the 3 different sections of the texture can be coloured freely.
The strength of this shader is that, because it only requires three generic colour inputs, we can let players colour their characters freely, or assign presets if need be. When implementing it we were not certain whether we would include any team game modes, but if we did, it would simply be a matter of assigning players colours based on their team, or offering set shades of a team colour so players still have a choice while teams remain easy to tell apart.
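As a rough illustration of how a player's chosen colours could be fed into this material from script, here is a hedged sketch; the property names are placeholders rather than the exact ColorMask parameters used in the project:

using UnityEngine;

// Illustrative sketch only: applies three player-chosen colours to the
// mask-based customisation material. Property names are placeholders.
public class CharacterColourCustomiser : MonoBehaviour
{
    [SerializeField] private Renderer characterRenderer;

    public void ApplyColours(Color primary, Color secondary, Color tertiary)
    {
        // A MaterialPropertyBlock lets each player have their own colours
        // without creating duplicate material instances.
        var block = new MaterialPropertyBlock();
        characterRenderer.GetPropertyBlock(block);
        block.SetColor("_Colour1", primary);   // first masked clothing section
        block.SetColor("_Colour2", secondary); // second masked clothing section
        block.SetColor("_Colour3", tertiary);  // third masked clothing section
        characterRenderer.SetPropertyBlock(block);
    }
}

In a team mode, the same method could simply be called with preset team colours instead of player-picked ones.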






A model for one of the Overlord characters that would be featured in our game. Unlike the player characters, this model features blend shapes for facial animation. Thanks to blend shapes we can easily animate the model using preset vertex movements, both inside Maya and within Unity itself. Only one Overlord was completed in time for the submission.
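For a rough idea of what driving a blend shape from a script looks like inside Unity, here is a hedged sketch; the component, shape index and timing are purely illustrative and not taken from the project:

using UnityEngine;

// Illustrative only: animates one facial blend shape on a skinned mesh.
// The blend shape index is a placeholder.
public class OverlordFaceAnimator : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer faceRenderer;
    [SerializeField] private int mouthShapeIndex = 0;

    private void Update()
    {
        // Blend shape weights in Unity range from 0 to 100.
        float weight = Mathf.PingPong(Time.time * 100f, 100f);
        faceRenderer.SetBlendShapeWeight(mouthShapeIndex, weight);
    }
}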

Animator:

As the team's animator I was responsible for all character animation and the implementation of those animations. Because the game is a third-person multiplayer shooter, a lot of work was required to ensure that characters were fun to play as and against. In a multiplayer environment it was important that characters provided all the information players would need from their silhouette and posing alone.

For instance, it should be easy to tell which weapon the player character currently has equipped from all angles. To achieve this I made sure that the different planned weapon-specific animations showed what kind of weapon a character was using through the silhouette alone.

Heavy weapons would be held over the shoulder, providing a much more top-heavy silhouette to the character.

Medium weapons, such as assault rifles, would be held at hip-height, around the middle of the character, reinforcing their status as medium-threat weapons.
Ultimately, due to time constraints and animation controller issues, we only managed to implement the medium weapon animations.

Animating and implementing character movement animations was a very complex task, specifically when it came to combining multiple animations together. The character was first animated with a set of basic unarmed movement animations for four directions: forward, backward, left and right.




Crouch Walk Forward
Crouch Idle

Jump Start

Jump Idle
Jump Land

Assault Rifle Idle

Assault Rifle Run Forward

Assault Rifle Fire
Using an animation controller in Unity, I designated which animations should play in different situations and how. For example, all directional movement is handled by a 'blend tree'. A blend tree allows us to define an array of animations that play depending on the values of the parameters they are assigned to, and it then smoothly blends between them based on those values.


The node graph for the base layer.
 As seen above, the base layer contains nodes for standing movement, crouching movement and jump animation states. The animation controller can transition to these different states when specific criteria are met.

The parameters tab shows all of the parameters I have set up for this animation controller.
The parameters 'Horizontal' and 'Vertical' are used for player movement and refer to the movement direction the player is inputting on their control device, whether that's a keyboard or a controller. These values are set via code in the player movement script and updated when needed.
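As a minimal sketch, the movement script's side of this could look something like the following, assuming Unity's default 'Horizontal' and 'Vertical' input axes (the project's actual code may differ in detail):

using UnityEngine;

// Illustrative sketch: feeds the player's movement input into the animation
// controller parameters described above. Both axes run from -1 to 1,
// matching the blend tree's motion field.
[RequireComponent(typeof(Animator))]
public class PlayerAnimationInput : MonoBehaviour
{
    private Animator animator;

    private void Awake()
    {
        animator = GetComponent<Animator>();
    }

    private void Update()
    {
        animator.SetFloat("Horizontal", Input.GetAxis("Horizontal"));
        animator.SetFloat("Vertical", Input.GetAxis("Vertical"));
    }
}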


Visualised on the right is the blend tree motion field. The blend tree blends animations based on the current horizontal and vertical values, with a maximum of 1 and a minimum of -1. For example, if the vertical parameter is 0.5 and the horizontal 0, the blend tree will blend the idle animation and the forward run animation equally. Likewise, if both vertical and horizontal were 1, that would be the upper-right of the field, meaning forward and right, and it would produce a blend of the forward run and right run animations, giving us a diagonal run cycle.

'isCrouching' is self-explanatory: it causes the movement blend tree node to transition to the crouched animation blend tree, and that node transitions back when the player is no longer crouching. 'HoldType' is used to determine which weapon the player is currently using. By setting this variable in code we can switch between animation sets via transitions, allowing for multiple weapon animation sets.
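A small, hedged sketch of how those two parameters might be driven from the gameplay scripts; I am assuming 'isCrouching' is a bool and 'HoldType' an integer, which may not match the project exactly:

using UnityEngine;

// Illustrative only: switches the crouch state and the weapon hold type.
public class PlayerStanceAnimation : MonoBehaviour
{
    [SerializeField] private Animator animator;

    public void SetCrouching(bool crouching)
    {
        animator.SetBool("isCrouching", crouching);
    }

    public void SetWeapon(int holdType)
    {
        // Hypothetical mapping: 0 = unarmed, 1 = medium weapon, 2 = heavy weapon.
        animator.SetInteger("HoldType", holdType);
    }
}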



Using the layer system we can override animations easily. By defining a mask we can set which joints are affected by a layer, meaning we can make the player character use the upper-body animations from the medium weapon set while the lower body continues to use the default unarmed animations. By ticking the 'sync' checkbox the layer copies the node structure of the specified layer, meaning we only need to set up one set of nodes and then use the syncing option to swap in the weapon-specific animations when needed.

With multiplayer video games it is important to communicate not only which weapon a player is holding at any time but also where they are aiming, especially in a third-person shooter. To do this we can use the blend tree system and update an aim angle parameter that allows the blend tree to blend between the two aiming poses.

Blending between the two poses.
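As a hedged sketch, the aim parameter could be updated each frame from the camera pitch roughly like this; the parameter name 'AimAngle', the -1 to 1 range and the pitch source are my assumptions rather than the project's exact setup:

using UnityEngine;

// Illustrative sketch: converts the aiming camera's pitch into a signed value
// so the blend tree can blend between the aim-up and aim-down poses.
public class PlayerAimAnimation : MonoBehaviour
{
    [SerializeField] private Animator animator;
    [SerializeField] private Transform aimCamera;
    [SerializeField] private float maxPitch = 60f;

    private void Update()
    {
        // localEulerAngles returns 0-360, so remap to a signed angle first.
        float pitch = aimCamera.localEulerAngles.x;
        if (pitch > 180f) pitch -= 360f;

        // Normalise into the -1 to 1 range the blend tree expects.
        animator.SetFloat("AimAngle", Mathf.Clamp(-pitch / maxPitch, -1f, 1f));
    }
}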

Evaluation:

Overall, I believe this project could have gone much further, and my work in the final game is not what I wish it were. A lot of this stems from a lack of familiarity with Unity, along with some problems in Unity that were too time-consuming to realistically overcome near the end of the project. One such problem was the default movement animations and the weapon movement animations not synchronising as desired, caused by a difference in duration between the default running animations and the medium weapon animations.



Both of these animations are exactly the same length in Maya, yet in Unity they are different lengths. Had I managed my time more efficiently I would have been able to spend more time trying to solve this issue, but unfortunately, despite all my attempts, I could not. This, along with other problems caused by my lack of knowledge of Unity, made even tasks that would have been basic for me in Unreal Engine 4 take far longer than I anticipated. While I ultimately learned a lot from this and am now far more knowledgeable about Unity than I was before, I wish I had allocated my time more efficiently so I could properly implement all of the features and content I sought to include.

If I were to continue working on this project, my first priority would be fixing the animation controller sync problems that currently cause the player to animate incorrectly. As a lot of the unresolved problems and unfinished aspirations come down to time management, being given more time or the chance to work on the project again would let me properly realise all of the work I aimed to finish.

Thursday 19 April 2018

CINE1102 2017-18 Animation Studio: Assignment 2

CINE1102 2017-18 Animation Studio: Assignment 2 - Virtual Reality Project

Intro:

For this project we were tasked with creating and delivering a virtual reality experience.

We decided to work as a group for this project, aiming to produce a piece of work that would let each of us focus on our own strengths and build our team skills. Because this project stemmed from my original alternate reality/virtual reality concept, I decided to work as the team's gameplay designer: I had experience with Unreal Engine 4 and wanted to challenge myself by developing a virtual reality game.

2.1 Role -  Gameplay Designer:

A majority of my focus on this project was on designing and coding the player character and the interaction mechanics. With no experience in virtual reality development but a lot of experience with Unreal Engine 4, this project was the perfect opportunity for me to develop new skills and increase my familiarity with Unreal Engine 4's blueprint system.

From the beginning I wanted anything interactive in the environment to be as modular as possible, allowing the level designers to easily customise the base interactable blueprint into new objects with unique actions when interacted with. The benefit of this system is that not only is it extremely easy to quickly populate the environment with interactive objects, but all of the code for picking them up and interacting with them is already handled by the base. This means no code is actually required in the child blueprint itself for the object to be picked up, and defining what happens when the player pulls the trigger is as simple as coding from the event the base blueprint fires when the trigger is pulled.

2.2 Interactable Objects:

Interactable objects featured in the project, the first four being various base blueprints and the others being children of those bases, excluding 'Interactable Interface', which is used to allow the player pawn and the bases to interact.
Simply put, by creating the initial, basic 'base' blueprint, we can create 'child' blueprints from it that inherit the code from the parent.
The blueprint for the Base Interactable blueprint.
The base interactable blueprint contains all of the code for what happens when the blueprint actor is grabbed via the controller grip buttons. Since this outcome will be the same for all interactive objects that can be picked up, we put this code in this blueprint and leave object-specific actions to the child blueprints.

The graph for a child blueprint, this specific one being a flashlight prop.
By triggering the 'Action' event that is fired when the player pulls the trigger, specific code can be run. As shown above, code specific to an object can be added into the child blueprint, meaning each child blueprint can have unique code.

By default, interactables are attached to the hand at the rotation and position they were at relative to the player's hand, which allows for a slightly more realistic approach to holding objects. For instances where objects should be held in a certain way, I added the ability to force objects to snap to a specific rotation relative to the hand. This works well for weapons and other objects that should aim ahead of the player's hands at all times, and ensures the player doesn't have to spend extra time rotating the object into a suitable position.


Using these comprehensive settings in the 'class defaults' section members of the team can easily change how each actor behaves:
  • Snap: Whether the object snaps to the player's hands at a default rotation.
  • Hold Location: Unused.
  • Use Alternate Hold: Determines if the actor uses its default position when snapped or the values specified in the next two variables.
  • Hold Transform: The position the object snaps to, relative to the hand.
  • Hold Angle: The angle the object snaps to, relative to the hand.
2.3 Player Blueprint:

Coding the player character was a very demanding task, as it went through many iterations, with the final one being almost completely original code. Originally we intended to allow players to choose between teleportation and traditional locomotion; however, we decided to stick with one locomotion method, the traditional system.

The movement code for the player.
Due to the default virtual reality pawn using teleportation locomotion, I had to code a traditional movement system using the thumb-sticks of the left controller.

The turning code.
Due to difficulties with testing and playing while seated, I decided to implement the ability to rotate the player using the right-hand controller, allowing the player to stand still and play without having to worry about the headset cord.

Using a 'blueprint interface' we can make blueprint actors communicate. As seen in the above image the player actor detects any children of the blueprint interactable base and triggers the 'Interact event' that is sent to the detected actor. If the player is already holding the object it will instead drop the actor.



2.4 Level Interactables:

Code for detecting the player's hand and setting 'hint' text to visible.
Unlike the interactive objects that can be picked up, level interactables remain stationary and cannot be manipulated beyond being activated. This is used for lights and other level objects that can be activated.

A fan blueprint that is a child of the base level interactive blueprint. The wireframe collision box is used to detect the player's hands and allow for the activation of the object.

Upon receiving the activation input, the fan model will begin playing a timeline animation.
Code for grabbing.
Because the system is set up this way, it is incredibly simple for team members to quickly create child blueprints that inherit the parent code; all that is required of the team member is to build on the already established events.


Level interactable blueprints feature one class variable: 'Touch to Interact'. Enabling this makes the blueprint activate when the player's hand touches the detection collision box in the blueprint. This can be used to simulate the feeling of the player physically pressing a button, instead of having them press the trigger to activate it.

3.1 Character Rigging and Animation

During this project I was also the team's sole animator and character rigger, responsible for rigging and then animating characters for the animated hologram sequences featured in the game. This project was my first time rigging and animating characters sculpted in ZBrush, so it was quite challenging adjusting my rigging techniques to the topology found in sculpted character meshes.

One routine I made sure to follow was keeping the structure of the character skeletons and the naming conventions used for their joints consistent. This theoretically allows us to reuse animations across the cast of characters using animation retargeting if need be.

The mother.
The father.
The rat.
The mother character has blend shapes for talking, allowing easy animation of dialogue without having to use alternative animation methods such as face joints. Blend shapes can be easily controlled in Unreal Engine 4 and this was a large deciding factor in using them instead of face joints.


In addition to the facial blend shapes for emotion and lip-syncing, I included blend shapes for hologram distortion effects, to make the holograms in the game more visually interesting. Thanks to Unreal Engine 4's animation editor the motions for characters can be animated in Maya and the blend shapes can be controlled and added to the animation in Unreal Engine itself.

The original 'talking' animation for the mother, viewed in Autodesk Maya.
The final version viewed in Unreal Engine after using the software's animation editing tools to animate the distortion flexes.

Animating the distortion effects this way gives the animator more control over the effect than using shader effects to achieve a similar result. It also allows us to purposefully distort the first and last frames of the animation to create a seamless loop transition.



I was also responsible for the rigging and animation of the hands that would represent the player's controllers, instead of the default Oculus Touch model that is used in the final version of the game. We intended for the hands to play context-sensitive animations, such as forming a pointing gesture when near an interactive object, or closing when holding an object.

4 Evaluation:

Our initial ideas for the game did not pan out as we had hoped, due to many problems involving time management and our production pipeline. The specific reasons for the state of the game largely come down to poor time management, as a majority of our work was left until near the end of development. With uncertainty over what we could realistically achieve in the time available, and next to no limitations on what we could create, a lot of time was spent figuring out what we would actually make. Because none of us had experience developing for virtual reality and we had limited access to a virtual reality headset, testing or prototyping ideas for ourselves was extremely difficult.

Another issue with our project was a lack of communication and a lack of game development experience among some team members. As the gameplay designer I should have ensured that our team members were producing assets as efficiently as possible; however, I was so focused on the virtual reality side of development that I failed to provide the team with guidelines for how our assets would be produced and implemented. Because of this, many assets have inconsistent quality, and certain models were imported as separate scenes, meaning objects like books on bookshelves were exported far away from the scene origin. This inconsistency stems from differing approaches to the game modelling pipeline, born of inexperience with video game development, and could have been corrected by enforcing a strict approach to how we handled assets.

This project was a valuable learning experience, as I now know what I can do to improve my ability to work with teams and to help my team members by establishing guidelines for our development pipeline. It has also taught me that I should always experiment with ideas, even if there's a chance they may not make it into the final game, as I may discover or develop something that could make a huge improvement.

If I were given another chance to develop this project, my first priority would be making sure our team understood the virtual reality development pipeline, and I would focus on finishing all of my gameplay-related coding sooner so the team had as much time as possible to become familiar with it. While the modular interactable blueprint system I coded was useful, we didn't manage our time properly, so we did not make many objects using it, and in the end it was mostly used to replace level blueprint code with actor blueprints. We did not realise the project we set out to create, but I definitely learned a lot about Unreal Engine 4 and virtual reality.