| Publication Type | honors thesis |
| School or College | Campus Advising Solutions |
| Department | Computer Science |
| Faculty Mentor | Fernando Rodriguez |
| Creator | Bhatia, Jaithra |
| Title | A comparison of artificial intelligence techniques used for implementing non-player characters within video games |
| Date | 2022 |
| Description | If you have played video games, you have interacted with artificial intelligence (AI). AI is often used in such games to drive Non-Player Characters (NPCs) such as enemy bots, game merchants, and even wildlife, helping to create immersion for players and to replicate realism through interactions within these fantasy worlds. Whether you are a fan of racing games like Forza, strategy games like Sid Meier's Civilization, or shooters like Black Ops, you will always find elements within these games controlled by AI. This thesis discusses how smart these AI algorithms and techniques have become over time. We further discuss the approach used in Splinters of Regret, a game created by ShatterBox Studios and set in an atmosphere like that of a medieval dungeon, where AI is used to create smart enemy bots that give players a more immersive environment. In this paper, we present different AI architectures used in the video game industry and examine which is best suited for Splinters of Regret. We conclude with a set of criteria to help game designers evaluate the potential AI architectures they could use when creating games. |
| Type | Text |
| Publisher | University of Utah |
| Subject | video game artificial intelligence; non-player character behavior; AI architectures in game design |
| Language | eng |
| Rights Management | © Jaithra Bhatia |
| Format Medium | application/pdf |
| Permissions Reference URL | https://collections.lib.utah.edu/ark:/87278/s6yac1dj |
| ARK | ark:/87278/s6kkses4 |
| Setname | ir_htoa |
| ID | 2889131 |
| OCR Text | A Comparison of Artificial Intelligence Techniques used for Implementing Non-Player Characters within Video Games, by Jaithra Bhatia. A Senior Honors Thesis Submitted to the Faculty of The University of Utah In Partial Fulfillment of the Requirements for the Honors Degree in Bachelor of Science In Computer Science. Approved: Fernando Rodriguez, Thesis Faculty Supervisor; Mary Hall, Director of the School of Computing; Thomas Henderson, Honors Faculty Advisor; Sylvia D. Torti, PhD, Dean, Honors College. May 2022. Copyright © 2022. All Rights Reserved. Abstract: If you have played video games, you have interacted with artificial intelligence (AI). AI is often used in such games to drive Non-Player Characters (NPCs) such as enemy bots, game merchants, and even wildlife, helping to create immersion for players and to replicate realism through interactions within these fantasy worlds. Whether you are a fan of racing games like Forza, strategy games like Sid Meier's Civilization, or shooters like Black Ops, you will always find elements within these games controlled by AI. This thesis discusses how smart these AI algorithms and techniques have become over time. We further discuss the approach used in Splinters of Regret, a game created by ShatterBox Studios and set in an atmosphere like that of a medieval dungeon, where AI is used to create smart enemy bots that give players a more immersive environment. In this paper, we present different AI architectures used in the video game industry and examine which is best suited for Splinters of Regret. We conclude with a set of criteria to help game designers evaluate the potential AI architectures they could use when creating games. 
TABLE OF CONTENTS: Abstract; Introduction; Methods of Implementing AI for NPCs; Splinters of Regret; Results; Discussion; References. INTRODUCTION Artificial intelligence is the process that enables computer programs and machines to think, learn, and solve problems like the human brain.1 AI is intelligence that is not acquired naturally as in humans; instead, it analyzes vast amounts of data and acts or reacts by applying various learning algorithms to the collected data to make logical decisions. To achieve more realistic and human-like outcomes, Machine Learning (ML), a branch of AI, applies algorithms to data to learn from it and then analyzes that data to execute intelligent actions. Facebook's picture identification, Amazon's buying suggestions, Apple's Siri, and Netflix's tailored video streaming service are just a few instances of AI in action.2 Another example of AI can be found in video games. One of the first recorded applications of AI in games was the development of a self-learning checkers program by Arthur Lee Samuel, an American pioneer in the fields of computer games and AI. "Since then, AI has come a long way, from IBM's Deep Blue, which defeated world chess champion Garry Kasparov on May 11, 1997, to Google's AlphaGo, which defeated the world's top human Go player."2 Implementing NPCs that exhibit human-like behavior has become a common application of AI in games. Games that do not employ AI can become tedious after a while,3 as predictable behavior makes it easier to defeat the enemies. Video games immerse you by pitting you against NPCs who react in unanticipated ways and surprise you. For example, in a first-person shooter, enemies can examine their surroundings to determine what is critical to their survival or what will jeopardize their prospects of victory. What if they could learn from their mistakes, discover and recognize noises and patterns, interact with one another, and navigate in unexpected directions? 
Although players have played similar games before, new experiences keep them interested and entice them to return. More experienced AI game design companies can assist players by making games that can be played for a long time. First-person shooters typically have an opponent AI that is just smart enough to challenge the player as they shoot everything that moves. The player is generally the hunter, and the swarms of bots running around the screen are the prey. The roles are reversed in games like Alien: Isolation. The free-roaming alien Xenomorph is in a position of power, and the player, robbed of their power, experiences what it is like to be pursued. You have a gun, but using it attracts the all-powerful, unkillable Xenomorph.4 In real-time strategy games, AI is also employed for pathfinding (finding the shortest path between two points).5 NPCs can use pathfinding to study topography and consumable items before moving from one spot on the map to another. Thanks to artificial intelligence, foes can navigate a dynamic environment safely and without colliding with other objects. It also allows NPCs to coordinate with one another, enabling group navigation. Designers use specific AI architectures to make NPCs appear intelligent and to build more immersive environments for players. Three AI architectures are commonly used in games: Finite State Machines, Behavior Trees, and Monte Carlo Tree Search. In the next section of the thesis, we look into the implementations of these AI architectures, examine which technique works best for Splinters of Regret, and define criteria to help game designers select the most appropriate technique. The purpose of these criteria is to help a game designer arrive at the best solution and implement it in a way that improves gameplay experiences. 
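The pathfinding idea mentioned above (finding the shortest path between two points) can be illustrated with a breadth-first search over a tile grid. This is a minimal sketch, not code from the thesis; the grid encoding (0 = walkable, 1 = wall) is an assumption made for the example:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 2D grid; 0 = walkable, 1 = wall.

    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the chain of predecessors back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None
```

Production games typically use A* with a heuristic instead of plain BFS, but the frontier-and-predecessor structure is the same.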
METHODS OF IMPLEMENTING AI FOR NON-PLAYABLE CHARACTERS Finite State Machines A Finite State Machine (FSM) can be viewed as a computational model that transitions between different defined states and exists in only one state at a time. To put it another way, each state is distinct from the others and mutually exclusive, resulting in a finite number of possible states. Inputs signal when the machine is ready to transition from one state to the next in a process-defined transition.6 To understand FSMs better, consider traffic lights, one of the most well-known examples. A traffic light cycles between three states (green, yellow, or red), but only one state can be active at a time. When the motion sensors on the traffic light cannot detect cars passing through for a few seconds, the color changes from green to yellow, then yellow to red. It will eventually become green again, completing a cycle. An FSM is the implementation of such cycles. Because of their inherent simplicity and predictability, FSMs are widely employed in the video game industry to provide simple but functional AI.7 They are frequently used for non-playable characters in action and RPG games, for example. A simple AI model is created so that a given NPC (typically an opponent) can only choose one of several behaviors, such as attack, flee, defend, detect, and so on. FSMs can also represent UI and control methods in platforming games. In the Super Mario Bros. series of games, players can collect power-ups that give them a new ability or bonus. For example, the Fire Flower transitions Mario from his normal state to a state that allows him to throw fireballs until Mario is hit by an enemy. In an FSM, a designer must experiment with and test all conceivable situations that an AI may encounter within the game and then assign a specific reaction to each one. 
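The FSM model described here can be sketched as a transition table keyed by (state, event) pairs. This is an illustrative example, not code from the thesis; the state and event names (wander, attack, flee, player_spotted, and so on) are invented for the sketch:

```python
# Hypothetical enemy FSM: every legal transition is one (state, event) entry.
TRANSITIONS = {
    ("wander", "player_spotted"): "attack",
    ("attack", "player_lost"):    "wander",
    ("attack", "low_health"):     "flee",
    ("flee",   "healed"):         "wander",
}

class EnemyFSM:
    def __init__(self):
        self.state = "wander"   # exactly one state is active at any time

    def handle(self, event):
        # Events with no entry in the table leave the state unchanged,
        # which keeps the machine's behavior fully enumerable.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Because every reaction is a row in the table, a designer can audit all conceivable situations at a glance, which is exactly the strength (and, later, the predictability drawback) the text describes.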
In other words, an FSM AI should be able to instantaneously respond to the player's movement and execute its pre-programmed behavior. When an enemy detects the player in stealth action games like Assassin's Creed or Aragami 2, the enemy usually has three states of awareness: an Idle state, where it walks along a path specified for it; an Alerted state, where it has noticed some player movement and will move towards it; and a Detected state, where it has spotted the player and will engage in combat.8 If the player does not find a hiding spot in a safe area within the few seconds in which the NPC is alerted, the NPC will switch to the Detected state. This is just one of the simple ways FSM AI is developed for stealth games, and you will undoubtedly find it at the heart of even more complicated and intelligent behaviors. The designer is given control over the logic that triggers these transitions between states, as well as over what the NPC should do when it reaches each of these states. A similar principle is employed in other game aspects such as the pause menu, which cycles among pausing, loading, playing, and exiting the game. FSMs are also used to transition between game modes in games that offer many of them, such as Minecraft, with its Survival, Creative, Adventure, Spectator, and Hardcore modes. An FSM system can be used to create practically every feature within a game that has a fixed set of states. If a game has a weather system, day/night cycles, environmental bonuses (i.e., buffs) and penalties (i.e., debuffs), and so on, they all fall under the classification of fixed states, and an FSM is most likely regulating each of them. Consider an example from Cuphead, a run-and-gun video game by Studio MDHR, which has three states for each boss battle. 
Once the boss has reached a certain health threshold, it will change its state to a more aggressive mode and start using objects from the environment at its disposal, making the fight extremely hard but rewarding once you defeat the boss. Figure 1: FSM example. Any implementation of an FSM will need to consist of fixed states and the transitions taking place between them. A simplified chart of an FSM (Figure 1) represents the fixed states and transitions for an NPC in a combat game. We take the initial state of this NPC to be the Wander state. On the map provided, the NPC will wander along a pre-defined path until it spots a player. Once it does, it transitions into an Attack state. For a very basic enemy, these two states would generally be enough. The diagram shows how easy it is to expand the enemy AI further by introducing two more states: evading player attacks, and finding aid such as health packs around the map if the enemy's health reaches a critical stage. The predictability of the FSM layout is an obvious drawback. Because all NPC actions are pre-programmed, playing against an NPC with an FSM model for an extended period may lead the user to lose interest.9 If you were to play a shooter game like Call of Duty, with enough experience you would be able to predict when and where the enemy AI bots are going to appear, which makes the game too easy to beat and hurts player retention. A model such as a Behavior Tree can provide more flexibility and would help avoid or reduce these issues. Behavior Trees A behavior tree, as opposed to a Finite State Machine or other AI programming methodologies, is a tree of hierarchical nodes that regulates the decision-making flow of an AI entity.10 A behavior tree's main architecture is made up of composite, decorator, and leaf nodes. 
There are numerous types of utility nodes at the tree's extremities, known as leaves, that influence the AI's approach to the appropriate command sequences for the circumstance. Figure 2: Layout of Behavior Trees. A composite node is capable of having one or more child nodes. Depending on the composite node, one or more of these child nodes will be processed in either first-to-last or random order. The processing of these child nodes is regarded as complete when they deliver a success or failure result to the parent node. The Sequence is the most commonly used composite node; it is designed to run each child one at a time, reporting failure if any one child fails and success if all succeed. A decorator node can have only one child node at a time, unlike the composite nodes. The function of a decorator node differs based on its kind: it can modify the state the child node returns, terminate the child's processing, or repeat it. The Inverter is a common example of a decorator, since it simply inverts the result of the child. If the child succeeds, the parent receives failure; if the child fails, the parent receives success. Figure 3 shows the basic structure of composite and decorator nodes. Figure 3: Behavior Tree Architecture. Leaf nodes are the lowest-level nodes, and they are unable to have children. They are the most powerful nodes because they are devised and implemented by the creator to execute the game- or character-specific activities needed to make the tree fulfill AI jobs. An NPC performing an Idle activity in a stealth-based game is an example of this. An Idle leaf node can be seen as a command that an NPC performs by wandering in a specified location; the node can return a success state when there are no obstructions and a failure state when the player enters the vision of the NPC. 
Because leaf nodes are defined by the designers, they can be quite expressive when placed in conjunction with composite and decorator nodes, allowing you to design rather strong and complex behavior trees capable of quite intricate, layered behavior.9 Composites and decorators are employed as programming logic, while leaf nodes perform game-specific actions and are employed as functions that perform tasks for your AI characters or test their status or situation. Parameters can be used to specify these leaf nodes. The Idle leaf node, for example, may take a parameter specifying which coordinates the AI bot should travel to. These parameters are usually stored in the context of the tree so that the AI bot can obtain these values and decide how to use them. For example, a 'HealthPackLocation' node may detect a walking location and record it in a variable, which the Idle node might then use as the destination where the bot can regain its health. This sharing of values across nodes provides extraordinary possibilities for creating complex behavior trees and thus smarter AI. Another type of leaf node is one that calls another behavior tree, passing the existing tree's data context to the called tree.9 The simplest form of composite node found within behavior trees is the sequence node. This node visits each child node in turn, starting with the first and working its way down the list. If a child node fails, the parent is notified immediately. Only when all of the child nodes return success will the sequence report success to its parent. It is vital to note that the node types in behavior trees can be used in a variety of ways. The most apparent application of sequences is to specify a set of tasks that must be accomplished in their entirety, and where the failure of one task renders the rest of the sequence redundant. 
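A minimal sketch of the node types described so far — a Sequence composite, an Inverter decorator, and a condition leaf reading from a shared context — might look like the following. The class names, the dict-based blackboard, and the key names are illustrative assumptions, not taken from any particular engine:

```python
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Composite: runs children in order; fails fast, succeeds if all succeed."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE      # parent is notified immediately
        return SUCCESS

class Inverter:
    """Decorator: has exactly one child and flips its result."""
    def __init__(self, child):
        self.child = child
    def tick(self, blackboard):
        return FAILURE if self.child.tick(blackboard) == SUCCESS else SUCCESS

class Condition:
    """Leaf: succeeds when a key in the shared context is truthy."""
    def __init__(self, key):
        self.key = key
    def tick(self, blackboard):
        return SUCCESS if blackboard.get(self.key) else FAILURE

# A patrol branch: keep patrolling only while the player is NOT visible
# and the path ahead is clear.
patrol = Sequence(Inverter(Condition("player_visible")),
                  Condition("path_clear"))
```

The `blackboard` dict plays the role of the shared tree context described above: any node can read values that another node recorded.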
Selector nodes are a type of composite node that functions in the opposite way to sequence nodes. If any of its children succeeds, a selector will return success and will not process any further children. In a logical setting, a sequence acts as an AND function that only returns success to the parent when all the children succeed, while a selector acts as an OR function: if any one child returns success, the parent receives success as well. A selector starts processing the first child; if that fails, it processes the second, and if that fails, the third, until it reaches a success, at which point it instantly reports success. It fails only if all of the children fail. Like nested if statements, a selector can check numerous criteria to see whether any one of them holds. Its key value is the ability to portray a number of possible courses of action in order of priority, from most advantageous to least advantageous, and to return success if any of those courses of action succeeds.10 The options are endless, and selectors may be used to swiftly build powerful AI behaviors. Decorator nodes consist of the following types:10 ● Inverter: discussed previously; it simply reverses the result of its child node. Failure transforms into success, and success transforms into failure. Conditional tests are the most widely used application of this type of node. ● Succeeder: a Succeeder will always return success, regardless of what the child node returned. These are useful when you need to process a branch of a tree but don't want a failure to stop processing of the sequence on which the branch is placed, because the failure is foreseeable or anticipated. ● Repeater: when the child node of a repeater returns a result, it is reprocessed. These are usually used near the base of the tree to keep it running continually. 
Repeaters can also choose to run their child a specific number of times before returning to their parent. The complexities of leaf nodes depend on how the behavior tree is implemented. Leaf nodes generally provide two functions: initialization and processing. The initialization function is used to start the action that the node represents and to initialize the node; using our Idle example, it would retrieve the parameters and perhaps start the pathfinding process. The processing function runs on every tick (0.05 seconds) of the behavior tree while the node is processing. This function's processing ends if it returns success or failure, and the result is given to its parent. If it returns neither (i.e., it is still running), it is reprocessed on the following tick, and again until a success or failure is returned. The main issue with behavior trees is that, while the approach improves behavior organization, it does not give a model for enhancing decision-making. The decision-making is restricted to conditional nodes, with no indication of how decisions to invoke different subtrees are made. Monte Carlo Tree Search is able to build up better decision-making.10 Monte Carlo Tree Search Monte Carlo tree search (MCTS) is a more complex algorithm that can find the best move from any state on a game decision tree. Games can be modeled with a game tree representation, with nodes representing states and edges representing moves. In Figure 4, which describes a game tree for Tic-Tac-Toe, the root node, which is also the starting state of the tree, shows the board in an empty state. After a player performs the initial move, we have three possible states at the next level (level 1) of the tree (only three moves are shown for clarity). The states that are feasible after O reacts are on the level below. 
This continues all the way until the end of the game.11 Figure 4: Tic-Tac-Toe game tree. In a game tree like this (Figure 4), there are numerous approaches to finding the optimum move. The simplest is to search every path in the tree and choose the move that always provides the best result, even against an optimal opponent. Minimax trees, a strategy used in AI to minimize the maximum possible loss for a player, accomplish this behavior. However, because the game tree is so extensive in many complicated games, this is not viable.11 MCTS represents a problem-solving approach based on random trials. Tree search of this general kind powered Deep Blue, the first computer program to defeat the reigning human chess champion, in 1997 (though Deep Blue itself relied on exhaustive minimax-style search rather than random sampling).7 Such a system examines, at each moment in the game, all conceivable moves, then all possible human player responses, and finally all possible replies to those. You may envisage these actions branching out like limbs from a stem; as a result, this structure is referred to as a "search tree." This procedure needs to be repeated many times for a system like Deep Blue to compute the payoff and determine which branch to take. Game AI is meant to replicate realism to the maximum extent possible (i.e., to be 'human-like') in environments as complex and realistic as those seen in video games. The most significant component when integrating such AI in video games is likely the implementation of an evaluation function that can accurately rate the quality of the newly generated game tree. Because of the complexity of video games, determining an appropriate evaluation function that does not also have a high time complexity can be difficult. In video games, AI created with MCTS can compute hundreds of different moves and select the one with the highest reward. Many strategy games have employed a similar approach. 
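The exhaustive minimax idea contrasted with MCTS above fits in a few lines. This is a generic sketch on a made-up tree, not an engine implementation: a leaf is a number (the payoff to the maximizing player) and an internal node is a list of child subtrees:

```python
def minimax(node, maximizing):
    """Exhaustively search a game tree, minimizing the maximum possible loss.

    A leaf is a numeric payoff for the maximizing player; an internal node
    is a list of child subtrees, with players alternating at each level.
    """
    if isinstance(node, (int, float)):
        return node                      # terminal position: return its payoff
    results = [minimax(child, not maximizing) for child in node]
    return max(results) if maximizing else min(results)
```

Because every path is visited, the cost grows with the full branching factor raised to the tree depth, which is exactly why the text says this becomes unviable for complicated games and motivates sampling-based MCTS.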
However, because there are considerably more possible moves than in chess, it is difficult to consider them all. Instead, in these games, MCTS samples a subset of the available moves at random. As a result, the outcomes are far less predictable for human participants. It is not feasible, for example, to pre-program all AI moves in Civilization, a game in which people compete to build a civilization against AI that attempts to accomplish the same thing. Rather than acting only on the current status, as an FSM does, MCTS-based AI also assesses possible future stages. Figure 5 depicts a simplified flow of how MCTS can be employed in turn-based strategy games. Open-world games with complex AI, such as Sid Meier's Civilization VI, use MCTS to generate varied AI behaviors in each round.12 The way a situation grows in complexity in these games is never pre-planned, resulting in a unique gaming experience for human players each time. MCTS constructs its search tree from the ground up during the simulations, node by node, to store the statistical data obtained from these simulations. The search tree is similar to the game tree, but its nodes also include the statistical data needed for MCTS to make intelligent decisions. As seen in Figure 5, MCTS is often divided into four phases. In the Selection phase, existing data is used to choose successive child nodes down to the end of the search tree. In the Expansion phase, a node is added to the search tree. In the Simulation phase, a simulated game is played out to decide the winner. Finally, in the Backpropagation phase, all nodes along the chosen path are refreshed with new data from the simulated game. This four-step process is continued until enough evidence is gathered to make an educated decision.12 Figure 5: Phases of MCTS. Phase 1: Selection We descend through the search tree starting from the root node by (1) selecting a move from the list of legal moves available and (2) advancing to the matching child node, repeatedly. 
This process continues until we reach a node in which not all of the permissible moves have yet been added to the search tree. We want both to explore new moves in order to learn, and to exploit current information in order to follow well-established, effective lines. To achieve these two objectives, we pick child nodes using a selection function that balances exploration and exploitation. Phase 2: Expansion When the selection is finished, the search tree has at least one unvisited move. We choose one unvisited move at random and create the child node that corresponds to it (bolded in the diagram). We expand the search tree by adding this node as a child of the last node picked in the selection stage. Although some implementations choose to add numerous nodes to the tree per simulation, adding only one node per simulation is the most memory-efficient option. The search tree can get rather enormous, especially in games with a lot of branching. Phase 3: Simulation Starting from the newly created node, moves are chosen at random and the game state is repeatedly advanced. This cycle is continued until the game is over and a winner has been determined. No new nodes are created during this phase. It is vital to remember at this point that nodes in the search tree correspond to nodes in the game tree. During this phase, we simply follow the game rules to (1) identify all allowed moves in the current game state, (2) select one legal move at random, and (3) advance the game state. This phase ends when the game reaches its completion. Phase 4: Backpropagation Following the simulation, the statistics for all visited nodes are updated. Each visited node's simulation count is incremented, and its win total may be increased depending on who won. The win statistic is credited from the perspective of the player who moved into each node, because each node's statistics inform its parent node's decision, not its own. 
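The four phases just described can be sketched end-to-end on a toy game. The game here is an invented Nim-style example (players alternately remove one or two objects from a pile; whoever takes the last object wins), chosen only so the sketch is self-contained; the UCT selection formula is the standard exploration/exploitation balance:

```python
import math
import random

def legal_moves(pile):
    """Moves legal in a given state: take 1 or 2 objects, if available."""
    return [m for m in (1, 2) if m <= pile]

class Node:
    """One search-tree node: a game state plus the statistics MCTS needs."""
    def __init__(self, state, to_move, parent=None, move=None):
        self.state, self.to_move = state, to_move
        self.parent, self.move = parent, move
        self.children, self.untried = [], legal_moves(state)
        self.visits, self.wins = 0, 0.0

def uct_child(node, c=1.4):
    # Balance exploitation (win rate) against exploration (rarely tried moves).
    return max(node.children, key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(pile, to_move, iterations=2000, seed=0):
    rng = random.Random(seed)
    root = Node(pile, to_move)
    for _ in range(iterations):
        node = root
        # Phase 1: Selection -- descend while the node is fully expanded.
        while not node.untried and node.children:
            node = uct_child(node)
        # Phase 2: Expansion -- add one unvisited move as a new child node.
        if node.untried:
            move = node.untried.pop(rng.randrange(len(node.untried)))
            node = Node(node.state - move, 1 - node.to_move, node, move)
            node.parent.children.append(node)
        # Phase 3: Simulation -- random playout to the end of the game.
        state, player = node.state, node.to_move
        while state > 0:
            state -= rng.choice(legal_moves(state))
            player = 1 - player
        winner = 1 - player          # the player who took the last object
        # Phase 4: Backpropagation -- update statistics along the path.
        while node is not None:
            node.visits += 1
            # Credit the player who moved INTO this node, since a node's
            # statistics inform its parent's decision, not its own.
            if node.parent is not None and winner == node.parent.to_move:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda ch: ch.visits).move
```

In this game, piles that are multiples of three are losing positions, so from a pile of 4 the search converges on taking 1 (leaving 3 for the opponent).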
It is worth noting that, for two-player games, MCTS determines the ideal moves along a path for each player independently. Initially, AI designers worked hard to make non-human characters behave intelligently, but these characters lacked one essential feature: the ability to learn. In most video games, NPC behavior patterns are programmed, and nothing is learned from the user.12 The NPC does not change in reaction to the actions of human players. Most NPCs lack learning capability not just because it is difficult to train a computer to learn, but also because most designers avoid unexpected NPC behavior that may negatively impact human players' experience. Following the breakthrough of AlphaGo, an AI system that has observed millions of historical Go matches and continues to learn to perceive the game better,7 the question progressed to whether AIs could also beat human players in real-time strategy (RTS) games. Games such as StarCraft, WarCraft, and FIFA have been able to implement such AI.7 In terms of the number of potential options and the number of troops to manage, RTS games are substantially more challenging than turn-based games like Go. In RTS games, AI has major advantages over human players, such as the ability to multitask and to react with remarkable speed. Indeed, in certain games, AI designers have had to intentionally reduce an AI's capabilities in order to improve the experience of human players.7 When learning abilities are included in-game and made the major emphasis, game designers lose total control over the gaming experience, and this method becomes less popular with creators.9 In a shooting game, for example, a human player may repeatedly appear in the same location, and a learning AI will gradually come to assault that location without first scouting it. Players can likewise exploit the AI's memory to avoid confrontation or to ambush it. Such dynamics are outside the designer's control, making it difficult to construct an intuitive AI system. 
We will now look at the game Splinters of Regret and the choice of AI architecture used within it. SPLINTERS OF REGRET As a Bachelor of Science student in the Entertainment Arts and Engineering program, I participated in a senior Capstone project with the goal of publishing a finished game product. The game I worked on was Splinters of Regret, developed by the team Shatterbox Studios. Splinters of Regret is an action game in which you play as a person imprisoned for the sinful acts they have committed in life. Figure 6: Splinters of Regret cover image. In this land of regrets, players face manifestations of other people's regrets as they make their way through the levels. Splinters of Regret is a bullet hell game, in which hordes of mobs attack you from every angle, and players need to learn how to avoid these attacks while at the same time defeating the mobs. The game relies on a lot of on-screen movement, which requires players to develop better reaction times and hand-eye coordination. As a developer on the team, my first tasks were to develop mobs for the game. I ended up developing two mobs: a long-ranged mob that shoots laser beams toward the player, and a heavy-type mob that produces shockwaves with its smashes. Figure 7: Laser bot shooting lasers. RESULTS Before choosing from the three AI architectures discussed in this paper, we assess how each one would fit in the context of Splinters of Regret. Looking first at FSMs, this simplistic design would be implemented with mainly two states, Evade and Attack. Each time the enemy bot attacks using its ability, it moves from its current location to make it harder for the player to hit. While this would serve the purpose of the simple bot we want to create for the game, it seems a bit too simplistic, which would make the player's experience less immersive.10 Next, consider MCTS, the most complex of the three designs. 
While implementing such a design would allow us to create smarter enemy bots, able to make optimal decisions about where to move to evade the player and where to aim to land a hit, it would make the implementation overly complex, and the computational cost would vastly increase as well. MCTS could even have the opposite of the intended effect within the game, since the bots would sometimes take a while to choose the right branch from the tree, slowing their movements down in a fast-paced action game. Since we want to spawn quite a lot of these bots within a level, each extra bot on the map would further multiply the computational cost. As a result, Behavior Trees were chosen as the AI structure for this capability, since they offer the right amount of complexity needed to implement a simple enemy bot for the game. Behavior Trees in Unreal Engine 4 (UE4) are created in a visual fashion similar to Blueprints (a visual means of modeling the functionality of in-game objects), by generating a sequence of nodes and associating them with functionality in a Behavior Tree graph. While a Behavior Tree implements the logic, a separate asset known as a Blackboard is used to store the information that the Behavior Tree needs to make smart decisions. A typical workflow is to create a Blackboard, add some Blackboard Keys, and then establish a Behavior Tree that uses the Blackboard component.13 Figure 8: A section of a behavior tree for the laser bot. Figure 8 is a snippet from one of the branches of the laser bot's behavior tree. Behavior Trees in UE4 execute their logic from left to right and top to bottom. The numerical sequence of operations is shown in the upper-right corner of the graph's nodes. The blue node in Figure 9 is a Decorator (or a Conditional in other Behavior Tree systems). It is linked to a Composite node and checks whether a Blackboard Key is true or false. 
Whether the Decorator's condition passes determines whether the rest of that branch executes. The purple nodes are Task nodes, which represent the actions the AI can perform.

Figure 9: Decorator node

Unlike earlier Behavior Tree systems, UE4 Behavior Trees are event-driven, which avoids doing extra work every frame. Rather than constantly checking whether any relevant changes have occurred, the Behavior Tree passively listens for "events" that can trigger modifications to the tree. In the example below, an event is used to update the Blackboard Key Can See Player. An event-driven design improves both performance and ease of debugging: the algorithm is much faster because it does not have to traverse the entire tree every tick. Rather than constantly asking, "Are we there yet?", we simply wait until we are prodded and told, "We are there!"

Standard Behavior Trees usually include a Parallel composite node to manage concurrent behaviors: a Parallel node begins executing all of its children at the same time, and special rules govern what happens when one or more of those children finish. Instead of complex Parallel nodes, UE4 Behavior Trees achieve the same kinds of behaviors with Simple Parallel nodes, a special node class called Services, and the Observer Aborts property on Decorators.

DISCUSSION

Choosing when to use a particular AI design within a game can be quite difficult given how many design types are available. By working on Splinters of Regret, I feel I have gained a better grasp of which aspects of a game call for which kind of AI design. For a small game such as our student project, I would always advise designing AI characters with Behavior Trees: they are easier to implement than MCST and provide better decision-making for the enemy characters. I would suggest using FSMs for more basic functions within the game, such as a menu system.
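To make the menu suggestion concrete, a UI state machine can often be reduced to a simple transition table. The states and inputs below are hypothetical, chosen only to resemble a typical pause menu:

```python
# A pause-menu FSM as a transition table: (state, input) -> next state.
# States and inputs are illustrative, not taken from any real game's code.

TRANSITIONS = {
    ("Playing", "pause"): "PauseMenu",
    ("PauseMenu", "resume"): "Playing",
    ("PauseMenu", "open_audio"): "AudioSettings",
    ("PauseMenu", "open_graphics"): "GraphicsSettings",
    ("AudioSettings", "back"): "PauseMenu",
    ("GraphicsSettings", "back"): "PauseMenu",
}

def step(state, event):
    """Return the next state; inputs invalid in this state are ignored."""
    return TRANSITIONS.get((state, event), state)

state = "Playing"
for event in ["pause", "open_audio", "back", "resume"]:
    state = step(state, event)
print(state)  # Playing
```

Because a menu has a small, fixed set of screens and valid inputs, the full behavior fits in one table, which is exactly the situation where an FSM's simplicity is a strength rather than a limitation.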
I ended up implementing an FSM design when I was in charge of creating the pause system, which has states such as Audio Settings and Graphics Settings, along with transitions between those states. As for MCST, I mainly see it being used in AAA games (games developed by large game corporations), since such games already have a large audience base, need to perform well, and must keep that audience satisfied with the gameplay.9 Large studios also have the resources to implement complex MCST designs efficiently, compared to smaller indie teams. Beyond that, MCST is well suited to RTS games, since that genre requires complex AI designs to challenge the players.

Making a game is not easy, especially when it comes to implementing smart AI bots. But at the end of the day, if you can develop a game that keeps players interested, then you have done a good job of developing a fun game.

REFERENCES

1. "What Is Artificial Intelligence? How Does AI Work?" Built In, https://builtin.com/artificial-intelligence. Accessed 1 May 2022.
2. "AI in Game Development - a New Era of Smart Video Games." Logic Simplified, 20 Oct. 2021, https://logicsimplified.com/newgames/artificial-intelligence-is-bringing-a-new-era-of-smart-video-games/.
3. "The Placebo Effect in Digital Games: Phantom Perception of Adaptive Artificial Intelligence." 6 Oct. 2015, https://www.researchgate.net/publication/282613797_The_Placebo_Effect_in_Digital_Games_Phantom_Perception_of_Adaptive_Artificial_Intelligence. Accessed 18 Apr. 2022.
4. "7 Examples of Game AI That Every Developer Should Study." Game Developer, 5 Apr. 2016, https://www.gamedeveloper.com/design/7-examples-of-game-ai-that-every-developer-should-study. Accessed 18 Apr. 2022.
5. "Pathfinding Algorithms: The Four Pillars." Medium, 20 Jan. 2020, https://medium.com/@urna.hybesis/pathfinding-algorithms-the-four-pillars-1ebad85d4c6b.
Accessed 1 May 2022.
6. "What Is a Finite State Machine?" Medium, 10 Mar. 2018, https://medium.com/@mlbors/what-is-a-finite-state-machine-6d8dec727e2c. Accessed 15 Apr. 2022.
7. Chong, Daniel, et al. "AI in Video Games: Toward a More Intelligent Game." Science in the News, 28 Aug. 2017, https://sitn.hms.harvard.edu/flash/2017/ai-video-games-toward-intelligent-game/.
8. "Stealth Game Design." Game Developer, 16 May 2014, https://www.gamedeveloper.com/design/stealth-game-design. Accessed 1 May 2022.
9. "AI & NPC in Games." International Journal of Computer ..., http://ijcert.org/ems/ijcert_papers/V3I307.pdf. Accessed 1 May 2022.
10. Simpson, Chris. "Behavior Trees for AI: How They Work." Game Developer, 18 July 2014, https://www.gamedeveloper.com/programming/behavior-trees-for-ai-how-they-work.
11. Liu, Michael. "General Game-Playing with Monte Carlo Tree Search." Medium, 13 July 2018, https://medium.com/@quasimik/monte-carlo-tree-search-applied-to-letterpress-34f41c86e238.
12. "MCTS Pruning in Turn-Based Strategy Games." CEUR-WS, http://ceur-ws.org/Vol-2862/paper27.pdf. Accessed 18 Apr. 2022.
13. "Behavior Tree Overview." Unreal Engine Documentation, https://docs.unrealengine.com/4.27/en-US/InteractiveExperiences/ArtificialIntelligence/BehaviorTrees/BehaviorTreesOverview. Accessed 18 Apr. 2022.

Figure 1. Chong, Daniel, et al. "AI in Video Games: Toward a More Intelligent Game." Science in the News, 28 Aug. 2017, https://sitn.hms.harvard.edu/flash/2017/ai-video-games-toward-intelligent-game/.
Figure 2. Simpson, Chris. "Behavior Trees for AI: How They Work." Game Developer, 18 July 2014, https://www.gamedeveloper.com/programming/behavior-trees-for-ai-how-they-work.
Figure 3. Simpson, Chris. "Behavior Trees for AI: How They Work." Game Developer, 18 July 2014, https://www.gamedeveloper.com/programming/behavior-trees-for-ai-how-they-work.
Figure 4. Liu, Michael.
"General Game-Playing with Monte Carlo Tree Search." Medium, 13 July 2018, https://medium.com/@quasimik/monte-carlo-tree-search-applied-to-letterpress-34f41c86e238.
Figure 5. Liu, Michael. "General Game-Playing with Monte Carlo Tree Search." Medium, 13 July 2018, https://medium.com/@quasimik/monte-carlo-tree-search-applied-to-letterpress-34f41c86e238.

Chaslot, Guillaume, et al. "Monte-Carlo Tree Search: A New Framework for Game AI." https://www.aaai.org/Papers/AIIDE/2008/AIIDE08-036.pdf.
"Artificial Intelligence 3: AI in Games Development." Game, Newcastle University, https://research.ncl.ac.uk/game/mastersdegree/gametechnologies/previousinformation/artificialintelligence3aiingamesdevelopment/.
Bevilacqua, Fernando. "Finite-State Machines: Theory and Implementation." Game Development Envato Tuts+, 24 Oct. 2013, https://gamedevelopment.tutsplus.com/tutorials/finite-state-machines-theory-and-implementation--gamedev-11867.
Buttice, Claudio. "Finite State Machine: How It Has Affected Your Gaming for over 40 Years." Techopedia, 20 Sept. 2019, https://www.techopedia.com/finite-state-machine-how-it-has-affected-your-gamingfor-over-40-years/2/33996.
Haytam, Zanid. "Behavior Trees - Introduction." Awaiting Bits, 7 Jan. 2020, https://blog.zhaytam.com/2020/01/07/behavior-trees-introduction/.
"What Is a Behavior Tree?" Opsive, 5 Sept. 2018, https://opsive.com/support/documentation/behavior-designer/what-is-a-behavior-tree/.
Scheide, Emily, et al. "Behavior Tree Learning for Robotic Task Planning through Monte Carlo DAG Search over a Formal Grammar." 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, https://doi.org/10.1109/icra48506.2021.9561027.
Hanish, A.A., and T.S. Dillon.
"Object-Oriented Behaviour Modelling for Real-Time Design." Proceedings Third International Workshop on Object-Oriented Real-Time Dependable Systems, https://doi.org/10.1109/words.1997.609928.

Name of Candidate: Jaithra Bhatia
Date of Submission: 05/05/2022 |
| Reference URL | https://collections.lib.utah.edu/ark:/87278/s6kkses4 |



