Abstract
Pose estimation is a computer vision technique for predicting and tracking the location of a
person or an object. This is done by analysing the pose and orientation of the person or object
together. It draws on model analysis, image processing, digital signal processing, artificial
intelligence, and other related areas.
In high-tech domains including word processing, office automation, machine translation, and
real-time surveillance systems, it has great application value and theoretical relevance. As a
result, computer vision in natural settings has taken on a whole new meaning. Text
detection and recognition algorithms were formerly reliant mostly on hand-crafted
features and traditional image processing techniques.
Advances in computer vision such as image classification, object recognition, and semantic
segmentation have been made in recent years, thanks to the rapid development of increasingly
powerful learning algorithms.
As a result, researchers seeking to train advanced AI/ML models are frequently limited by both
the quality and quantity of the available data. For example, to demonstrate that a vehicle can
drive itself, you need many kilometres of human driving data. Using simulation, however,
researchers can construct a large number of training examples and experiment with creative
models, much as humans do in their own minds.
INTRODUCTION
In today’s world, AI is gradually being used for more technological purposes that can
assist us humans in almost all our daily needs, whether in automation companies like
Tesla, which use AI and stored user data to assist their customers, or in the various
other companies that follow the trend.
It was this very scenario that led to the idea of creating an environment where we teach
and train an AI to understand and make use of various situations around itself. We will
be utilizing the power of strong and complex Deep Learning models, trained on a
reasonably large dataset of staged environments, to make predictions about how the
species would react in different situations.
The problem statement involves researching, studying, and selecting the most optimal
way to create a virtual species which is self-aware and can sustain itself under various
environments and situations.
Supervised deep learning can help with computer vision and various aspects of
natural language processing, for example. Deep learning is becoming increasingly
crucial in sensitive applications like cancer detection. It is also particularly
useful (with some caveats) for other applications, such as analyzing the vast
amounts of content posted on social networking sites every day.
According to academics, deep reinforcement learning has yielded outstanding
outcomes in games and simulations. In recent years, reinforcement learning has
mastered a variety of games previously thought to be beyond the reach of artificial
intelligence. AI programmes have already defeated human world champions in
StarCraft 2, Dota, and the ancient Chinese board game Go.
It's worth noting, though, that AI programmes learn to solve problems in a fundamentally
different way than humans do. This is because the reinforcement learning agent starts
with a blank slate and limited capabilities. The AI is left to learn on its own, by trial
and error, how to get the most out of its environment.
When the problem space is simple and you have adequate processing power to execute a
number of trial-and-error sessions, this model has a good chance of working.
Usually, however, reinforcement learning agents need an enormous number of sessions
to master a single game. Because of the high costs, reinforcement learning research has
largely been restricted to the research labs of wealthy tech companies.
This is primarily executed with the aid of a Machine Learning model, more specifically one
from a sub-field of ML called Deep Learning, which uses many more hidden layers and
requires far more resources than traditional Machine Learning.
A Deep Learning model is typically trained via Supervised Learning, i.e. using hundreds of
thousands of sample images with their ground-truth labels, categorized into the classes we
ultimately want to predict. All of these images are fed to the model, and an optimization
algorithm such as Gradient Descent or Adam updates the model’s parameters so that it
captures the input-to-output mapping; when new inputs arrive in the future, the model
applies the knowledge it has gained from past examples to make predictions.
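The training loop described above can be sketched with a minimal, self-contained example. This is a toy 1-D logistic-regression "model" trained by plain gradient descent; the data, rule, and hyperparameters are illustrative and not the project's actual dataset or network:

```python
import numpy as np

# Toy supervised-learning setup (illustrative, not the project's data):
# learn the rule "y = 1 when x > 0.5" from labelled examples.
rng = np.random.default_rng(0)
x = rng.random(200)                      # inputs
y = (x > 0.5).astype(float)              # ground-truth labels

w, b, lr = 0.0, 0.0, 1.0                 # parameters and learning rate
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # model's predictions
    w -= lr * np.mean((p - y) * x)           # gradient-descent update
    b -= lr * np.mean(p - y)                 # (gradient of the log-loss)

# The trained parameters now capture the input-to-output mapping.
preds = (1.0 / (1.0 + np.exp(-(w * x + b))) > 0.5)
accuracy = float(np.mean(preds == y))
print(round(accuracy, 2))
```

Adam follows the same outline but rescales each update using running averages of the gradient and its square.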
One of the notable and renowned Deep Learning techniques used for the task of
image classification is the Convolutional Neural Network. CNNs are a
form of Neural Network created specifically for capturing and processing pictures and
photographs. They contain kernels, square matrices of learnable parameters, which are
convolved with the current feature map to move the network forward.
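To make the convolution operation concrete, here is a minimal NumPy sketch of a single kernel being slid over an image. The 5×5 image and 3×3 kernel are illustrative; real CNN layers also add padding, strides, many channels, and learned kernel values:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with a square kernel.
    (Strictly this is cross-correlation, which is what most deep learning
    frameworks actually compute under the name 'convolution'.)"""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each output value is the weighted sum of one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge-detecting kernel applied to a flat image of ones gives
# zero response everywhere, since there are no edges to detect.
image = np.ones((5, 5))
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])
print(convolve2d(image, kernel))  # 3x3 array of zeros
```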
Objective:
To create a virtual species that, much like humans, is aware of its surroundings and
actions. We proceed by first making the basic Environment, i.e., the simulation those
entities would be in, and then proceeding with the entities themselves. For the
environment we would be utilizing Unity or Unreal Engine 4/5.
Project Scope
To create a virtual species that is aware of its surroundings and actions in the same way that
humans are. We begin by creating the basic Environment, which is the simulation that those
entities will be in, before moving on to the entities themselves. We would use Unity or Unreal
Engine 4/5 to create the environment.
A huge amount of data is required when the aim is to design a network capable of classifying
between all the categories with state-of-the-art accuracy. Furthermore, since the model
is also to be deployed, research needs to be done into how to design the model most
efficiently so as to minimize its size and inference time (i.e., the time that it takes to make a
prediction given a new input).
Feasibility Study
The most crucial phase in every project is establishing whether or not the concept is viable.
This is what a Feasibility Study is for. The feasibility study is used to evaluate a proposed
project's inadequacies and prospective capabilities, as well as to recommend activities that
will help the project realize its goals. The type and content of feasibility studies are largely
determined by the areas in which the initiatives in question are carried out. The ability of a
concept to succeed or fail is what is referred to as "feasibility."
It establishes whether the product/service under development has a realistic market, what the
investment requirements are and where to get the funds, whether and where the technical
know-how to turn the idea into a physical product is accessible, and so on. It's a strategy for
making good judgments and achieving goals.
Technical Feasibility
The most critical pillar of a project's future success is technical feasibility. A correct and
comprehensive analysis of the programme and project to be built determines whether the
project's future remains stable or whether it is a project with a limited scope, just as a firm
foundation invariably leads to tall skyscrapers.
A capable developer and team will become technically sound before planning and
working with software. Rather than wasting time tinkering with the issues that lead to a
project's failure, a quick review of the project's technical aspects, life cycle, and any other
relevant components is far more productive.
This not only helps with the technical aspects of a project, but it also acts as a link between
departments, encouraging and motivating them to contribute their time and effort to the project's
other tasks.
Python also has the benefit of being cross-platform and having a large number of extended
libraries that appear to do a better job than others. Furthermore, the library in question is simple
to set up and has a number of built-in features that have aided the project's constructive learning
and development.
Due to the limited workforce, development is both cost-effective and technologically advanced,
as it is based on Python's most recent technology, which allows us to create machine learning
models and 'Convolutional Neural Networks (CNN)'. The proposal can be delivered as a
technically sound and implementable project with the right hardware and software.
Economic Feasibility
If the predicted benefits equal or surpass the expected expenses, a system is called economically
feasible. The cost-benefit analysis method is used to weigh the predicted costs and benefits of a
project to determine its economic feasibility.
Economic analysis is used to analyze the cost and effectiveness of the proposed system. The
most significant part of economic feasibility is the cost-benefit analysis. It is, as the name
suggests, an examination of the system's costs and advantages. Developing an IT application is a
long-term endeavor.
The application has generated revenue for the company since its inception. Profits can come in
the form of money or a better working environment. It does, however, come with dangers, as
estimations can be off, and the project may not be lucrative. The cost-benefit analysis aids
management in identifying a project's worth, rewards, and dangers.
Operational Feasibility
The ability to use a system after it has been created and installed is referred to as "operational
feasibility." Will there be any user resistance that could affect the app's potential
benefits? Once integrated into the daily routine, the system is quite yielding and will be
readily accepted by individuals and organisations.
Gantt Chart
Tools Used
Python: Python was used in the project as a scripting language as well as the development
language for creating the web application hosted on the internet. Python was used
for the Machine Learning part of the project because it is easy to use, has many
scientific, advanced statistics, and mathematical libraries, and Machine Learning tools and
frameworks, along with a very active and supportive worldwide community of developers.
Python was used for training the models as well as for scraping information about the
various environments and constructing the database using the MediaWiki API. Python
was chosen for being easy to read and dynamically typed, and for its vast number of
libraries, which help developers focus on what matters most instead of boilerplate
code.
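As a sketch of how such a MediaWiki API query might be constructed (the endpoint and search term below are generic illustrations; the project's actual queries are not shown in this report):

```python
from urllib.parse import urlencode

def mediawiki_search_url(term, endpoint="https://en.wikipedia.org/w/api.php"):
    """Build a MediaWiki search-API URL for a term (illustrative helper)."""
    params = {"action": "query", "list": "search",
              "srsearch": term, "format": "json"}
    return endpoint + "?" + urlencode(params)

# The returned JSON could then be fetched and stored in the database.
print(mediawiki_search_url("savanna ecosystem"))
```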
Also, when we are working on something related to Machine Learning and/or Big Data, we
need to iterate on various experiments quickly and thus, need a language which can be used to
develop prototype solutions relatively quickly and easily, the exact facility which is provided
by Python.
Apart from this, Python was also used as the development language for the web application
because of Streamlit. Streamlit is a Python library that allows easy development and
maintenance of web applications using pure Python and nothing else, such as HTML, CSS,
or JS.
NumPy: If someone is a Python and/or ML developer, there is almost a guarantee that they
have worked with NumPy. It provides all sorts of features and functionality for creating and
manipulating arrays in Python and for doing mathematical calculations like mean, median,
mode, minimum, maximum, variance, standard deviation, etc.
NumPy differs from doing these calculations in native Python because of its speed: NumPy
executes array operations in optimized, compiled code rather than interpreted Python loops,
achieving far greater performance.
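A minimal example of the array statistics mentioned above (the data is illustrative):

```python
import numpy as np

# A small illustrative array and the statistics the text lists.
data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

print(data.mean())             # 5.0
print(np.median(data))         # 4.5
print(data.min(), data.max())  # 2.0 9.0
print(data.var())              # 4.0 (population variance)
print(data.std())              # 2.0
```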
Matplotlib: Matplotlib is a popular Python library used for creating and drawing
plots, visualizing images, and drawing conclusions from them. It provides functionality for
different types of plots such as bar graphs, scatter plots, histograms, and pie charts.
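A minimal sketch of two of those plot types (the data is illustrative, and the Agg backend is used so the figure is saved to a file rather than displayed):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: save instead of show
import matplotlib.pyplot as plt

# A bar graph and a scatter plot of small illustrative datasets.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(["A", "B", "C"], [3, 7, 5])
ax1.set_title("Bar graph")
ax2.scatter([1, 2, 3, 4], [2, 4, 1, 3])
ax2.set_title("Scatter plot")
fig.savefig("plots.png")
print(len(fig.axes))  # 2
```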
TensorFlow: It is a Machine Learning framework written in Python, which helps in creating,
manipulating and managing the process of model creation, training and evaluation as well as
creating the complete end-to-end data pipeline, right from feeding the data to the ML model
to evaluating it on the test dataset.
The TensorFlow framework was used for training and evaluating the models. After training,
the models were converted using TensorFlow Lite to their portable versions. The final
models had an accuracy of ~95%. TensorFlow is one of the big three frameworks that ML
developers use for developing their models, whether built from scratch or using the ones
provided by TensorFlow itself.
TFLite: TensorFlow Lite is an offering from TensorFlow itself, which basically functions
as an intermediate exchange format for the ML model. The best thing about TFLite
models is that they are much faster and smaller in size than traditionally trained
TensorFlow models, since TFLite uses techniques like graph optimization, weight
clustering, and pruning.
Git & GitHub: Git is a version tracking system which is used to keep track of all the
different changes being made to the project’s source code as well as all the different
experimental and production branches that the developers are working on.
Github, on the other hand, is an online software hosting tool, built on top of Git, which the
developers can use to collaborate with each other while working on a common project.
Kaggle Kernel: The entire modelling process was carried out on Kaggle Kernels, which
are similar in nature to Jupyter Notebooks. Kaggle provides free GPU and TPU
resources (30 hours per week each) which can train the model many times faster than on
CPU.
Libraries like NumPy, Matplotlib, PIL, etc. were used for data and image processing as
well as for visualizing the input and output images.
1) For the environment:
a) Unreal Engine: Unreal Engine is a video game engine that debuted in 1998 with the
first-person shooter Unreal. It has since been embraced by a variety of businesses,
including the film and television industries, and is now used in a variety of
three-dimensional (3D) game genres.
The C++-based Unreal Engine is incredibly portable, supporting a wide range of
platforms including PC, mobile, console, and virtual reality. One of the major
enhancements originally planned for UE4 was real-time global illumination using voxel
cone tracing, which eliminates pre-computed lighting.
The Sparse Voxel Octree Global Illumination (SVOGI) feature, which was
demonstrated in the Elemental demo, was replaced with a similar but less
computationally expensive technique due to performance concerns.
One of its key features is Nanite, an engine that allows high-detail photographic
source material to be integrated into games. Epic may be able to employ the Nanite
virtualized geometry technology to leverage its prior acquisition of Quixel, the
world's largest photogrammetry collection as of 2019.
By letting the engine software manage these things, Unreal Engine 5 aimed to make
it as easy as possible for developers to create detailed game worlds without having to
spend a lot of time creating new sophisticated assets.
The levels of detail (LODs) of these imported assets are handled by Nanite, which
adjusts them to the target platform and draw distance, a job that would normally be
done by an artist.
Epic also developed Unreal Engine 5 to take advantage of the upcoming high-speed
storage solutions in next-generation console hardware, which will use a mix of RAM
and custom solid-state drives, with potentially tens of billions of polygons on a single
screen at 4K resolution.
The Unreal Engine, developed by Epic Games, debuted in 1998 with the first-person
shooter Unreal. Originally intended for PC first-person shooters, it has since been
embraced by a variety of businesses, including the film and television industries, and
is currently used in a variety of three-dimensional (3D) game genres. The Unreal
Engine is very portable, supporting a wide range of desktop, mobile, console, and
virtual reality platforms.
Blender is a 3D modeling application that is both free and open-source. Video editing
and game development, as well as modeling, rigging, animation, simulation,
rendering, compositing, and motion tracking, are all supported.
Blender is a multi-platform application that works on Linux, Windows, and Macintosh
computers. Its user interface is based on OpenGL, which ensures a consistent
experience. The list of supported systems highlights those that are routinely evaluated
by the development team to ensure compatibility. Individuals and small studios
benefit from Blender's unified pipeline and responsive development method.
Blender's Python scripting API is used by advanced users to customise the application
and build specialised tools, which are regularly included in later Blender editions.
The feature set of Figma focuses on use in user interface and user experience design,
with an emphasis on real-time collaboration. Figma's collaboration platform can be
used for UI design, UX design, prototyping, graphic design, wireframing,
brainstorming, and templates.
Figma allows users to draw vector networks in any direction, create custom fonts,
and make instant arc designs for clocks, watches, and pie charts using the Arc tool.
Many Figma features automate tasks such as resizing buttons, text, layouts,
padding, direction, and spacing, and translating design changes into computer code
for handing designs off to software developers.
XD supports and can open files from Illustrator, Photoshop, Photoshop Sketch, and
After Effects. In addition to the Adobe Creative Cloud, XD can also connect to other
tools and services such as Slack and Microsoft Teams for collaboration. XD can also
auto-adjust designs when transitioning between macOS and Windows.
Self-learning AI
Self-learning Artificial intelligence (AI) is a type of machine learning that can learn from
unlabeled data. It works on a high level by evaluating a dataset and looking for patterns
from which it might derive inferences. It basically teaches itself to "fill in the blanks."
While a student who studies Spanish for five years in school may have a solid grasp of the
language and how to apply it, the student will have learned considerably more slowly than
someone who simply moves to Mexico for a few months. The principle of
learning-by-doing is being applied to AI in the form of self-learning AI.
When training a machine on a concept for which there isn't a lot of training data,
self-learning AI comes in handy. It can also be useful for training computers on
processes that academics are unfamiliar with, making labelled training samples
difficult to come by.
Another advantage of self-learning AI is that once a new skill is mastered, it can be applied
to other related skills with greater ease. When deep learning is done in a supervised
environment, the machine must start from scratch and gradually add new actions to its
repertoire. When the setting changes, though, the abilities often do not transfer as easily.
Because self-learning AI uses unsupervised learning over the entire environment rather than
a specific dataset, it can be on the lookout for additional anomalies that human researchers
may be unaware of. Because it is better than most people at spotting the changes and
patterns that indicate a breach, cybersecurity is one of the top areas where self-learning AI
is now being deployed.
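The idea of flagging deviations from the learned environment can be illustrated with a very simple statistical stand-in: a z-score rule on synthetic traffic volumes. Real self-learning systems use far richer models, and both the data and the 3-sigma threshold here are illustrative:

```python
import numpy as np

# Synthetic "normal" traffic (illustrative) with one injected spike that
# plays the role of breach-like behaviour.
rng = np.random.default_rng(1)
traffic = rng.normal(100.0, 5.0, size=1000)
traffic[42] = 200.0  # injected anomaly

# Flag points more than 3 standard deviations from the learned mean.
z = np.abs((traffic - traffic.mean()) / traffic.std())
anomalies = np.flatnonzero(z > 3)
print(42 in anomalies)  # True: the spike stands out from the baseline
```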
We also need to decide the starting, minimum, and maximum learning rates, along with the
number of ramp-up epochs and sustain epochs, as well as the value of the exponential decay
constant used for decreasing the learning rate.
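A warm-up/sustain/decay schedule of the kind described above can be sketched as follows; all the hyperparameter values are illustrative defaults, not the project's actual settings:

```python
def lr_schedule(epoch,
                lr_start=1e-5, lr_max=1e-3, lr_min=1e-6,
                ramp_up_epochs=5, sustain_epochs=3, decay=0.8):
    """Ramp the learning rate up linearly, hold it, then decay it
    exponentially toward a floor (illustrative values throughout)."""
    if epoch < ramp_up_epochs:
        # linear ramp from the starting rate up to the maximum
        return lr_start + (lr_max - lr_start) * epoch / ramp_up_epochs
    if epoch < ramp_up_epochs + sustain_epochs:
        return lr_max  # hold at the maximum for the sustain epochs
    # exponential decay governed by the decay constant
    steps = epoch - ramp_up_epochs - sustain_epochs
    return (lr_max - lr_min) * decay ** steps + lr_min

rates = [lr_schedule(e) for e in range(20)]
print(rates[0] < rates[4] < rates[5] == rates[7] > rates[19])  # True
```

A function like this is typically passed to the framework's learning-rate-scheduler callback so it is queried at the start of every epoch.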
The trained model can then be easily evaluated on the validation data generator using the
evaluate generator.
RESULT AND OUTPUT
The Deep Learning model, after training for 10 epochs, produced an accuracy of 94.2% on
the flower dataset, 94% on the animal dataset, and 97.4% on the bird dataset. For the
size of the data that the model has been trained on, this is pretty good accuracy.
The training curves for the loss and accuracy have been visualized below for the best model
amongst the three, the bird predictor:
To date, the majority of artificial intelligence (AI) applications that have emerged have relied on
supervised machine learning to learn rules and exceptions to norms. This is where the AI is
taught a certain topic using tagged data sets.
From AI's ability to recognise our friends' faces in our social media feeds to its ability to
remove background noise from video meetings – a useful feature given how many of us now
work from home – this approach of teaching has produced fantastic results. While labelling
and cleaning data sets so that computer algorithms can understand them takes time, the
accuracy can be rather good.
On the other hand, supervised learning only works when the outcome is fully understood
by humans. We can differentiate a friend's face from a stranger's even when they look
similar, and we can rapidly distinguish between two voices even when their accents are
alike.
The most visible application of AI's self-learning component is also its most exposed,
vulnerable to different attacks and cyber threats. Cybersecurity is fragile because the more
data we exchange, the more vulnerable we are to attack; at the same time, the more attackers
innovate, the more difficult it is to detect them.
It is impossible to turn off data flows. Global businesses and, especially in today's epidemic,
social relationships necessitate Internet-based communication. Given that all data is linked
and, to some extent, vulnerable to intrusion, the challenge is identifying malicious behaviour
among all the legitimate activity. To gain our trust, a system must detect not only the overtly
hostile threat actors, but also the more subtle intruders: the insider threat or the smart attacker
who impersonates someone he or she is not.
While artificial intelligence may never be able to perfectly replicate human intelligence,
unsupervised machine learning can come close. It does not learn from training data sets, but
rather from the data environment in which it is put.
It can also recognise different shades of grey in data flows, as well as new patterns that aren't
pre-defined.
Therefore, in order to curb the problems that we may face during the training and pruning
of the model, we opt for the following:
1. Select the appropriate learning model for the situation.
Every AI model is diverse for a reason: each challenge necessitates a different solution and
requires various data resources. There is no one-size-fits-all approach to eliminating bias, but
there are several guidelines that can aid your team's formation.
Supervised and unsupervised learning methods, for example, each have their own set of benefits
and drawbacks. Unsupervised clustering or dimension reduction models can learn bias from
their input data. If belonging to group A substantially correlates with behavior B, the model can
blend the two. While supervised models provide you more control over data selection bias, they
also introduce human bias.
While your data scientists will do the majority of the job, everyone involved in an AI project
must actively defend against data selection bias. You must cross a very fine line. The training
data must be diverse and cover a wide range of categories, although segmentation in the model
can be problematic unless the real data is classified similarly.
It is inadvisable to have different models for different groups, both computationally and in
terms of public relations. When there isn't enough data for one group, you can use weighting to
make it more significant in training, but be cautious. It can lead to unexpected new prejudices.
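Weighting an under-represented group can be sketched with inverse-frequency class weights. The labels below are illustrative, and the formula mirrors the common "balanced" weighting n_samples / (n_classes * n_c):

```python
import numpy as np

# Illustrative imbalanced label set: class 0 dominates.
labels = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2, 2])
classes, counts = np.unique(labels, return_counts=True)

# Inverse-frequency weights: rarer classes get proportionally more weight.
weights = {int(c): len(labels) / (len(classes) * n)
           for c, n in zip(classes, counts)}
print(weights)
```

Such a dictionary is typically passed to the training routine as per-class weights, so that errors on the minority classes contribute more to the loss.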
All discriminatory models most likely operated as planned in controlled conditions. Regulators
(and the general public) rarely recognise good intentions when assigning blame for ethical
transgressions. As a result, you should aim to emulate as many real-world events as possible
when building algorithms.
Using test groups on algorithms that are currently in use, for example, is not a good idea.
Instead, test your statistical procedures on real data wherever possible.
It's easier to verify result equivalence this way, but it also implies that you're willing to tolerate
skewed data. While demonstrating equality of opportunity is more challenging, it is morally
acceptable.
Although achieving both types of equality is difficult, diligent oversight and real-world testing
of your models should give you the best opportunity.
Traditional artificial intelligence (AI) makes use of computer algorithms to create computer
programmes that are designed to make judgments or solve issues based on pre-programmed
knowledge.
Self-learning AIs, on the other hand, build on top of powerful neural networks and deep
learning frameworks like Google's Tensorflow with machine learning algorithms. At first
glance, AI appears to be the pinnacle of invention, if it weren't for one basic fact: even
self-learning AIs can't learn everything.
Self-learning AI programmes that must identify objects or images are fed massive volumes of
data in order to build a neural network that matches the situation at hand. This means that if
the AI system is to recognise a cat in a photograph, more photographs of cats must be
submitted.
You'll need to submit 100 images of the object you want to identify and 100 photos of an object
that is opposite or different from the original object into the system to construct an image
recognition software employing self-learning AI technologies.
And that isn't all. Training on the system will take days. The system will need test
data to verify that the machine has been properly trained to recognise the correct object.
Before they start delivering correct results, self-learning AIs can spend weeks training on
images and running countless tests.
Self-learning AIs must sustain a continual and ongoing learning process even after generating
accurate results. Without this, the self-learning AI will become increasingly inaccurate over
time, becoming frequently confused by newly obtained photos.
By teaching a virtual agent to outperform human players, we can learn how to optimise a
variety of processes in a variety of interesting subfields. It was Google DeepMind's popular
AlphaGo that did it, defeating the world's best Go player and scoring an unattainable goal at the
time.
We'll build an AI agent that can teach itself to play the popular game Snake from scratch. To
do so, we built a Deep Reinforcement Learning approach using Keras on top of TensorFlow,
as well as a PyTorch version (both versions are available; you can choose the one you prefer).
This method involves the interaction of two elements: the environment (the game itself) and the
agent (Snake). The agent gathers information about its present condition (which we'll explain
later) and takes appropriate action.
The agent gradually learns which acts maximise the payoff (in our case, what actions lead to
eating the apple and avoiding the walls). There are no game rules provided. Snake is at first
unsure of what to do and conducts haphazard actions.
The goal is to come up with a way to maximise the score or reward. We'll see how a Deep
Q-Learning system picks up Snake in just 5 minutes, scoring up to 50 points and displaying a
reasonable strategy.
Implementation theory of the game
The environment and the agent are the two main components of Reinforcement Learning.
Every time the agent performs an action, the environment rewards the agent, which might
be positive or negative depending on how good the activity was from that specific state.
The agent's goal is to determine which behaviours maximise reward in all possible
scenarios.
At each iteration, the agent receives observations from the environment, which are
referred to as states. A state can be defined as its location, speed, or any combination of
environmental elements. The agent's decision-making process is referred to as policy to be
more explicit and to use a Reinforcement Learning notation.
On a theoretical level, a policy is a mapping from the state space to the action space.
A Q-table is a matrix that links an agent's present state to the various actions the agent
can perform. The values in the table are updated based on the rewards the agent earns
during training.
To put it another way, these numbers represent the expected reward the agent will receive if
it takes a given action in state s. The table encodes the agent's policy: it specifies which
action should be taken in each state to maximise the expected return.
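The tabular Q-learning update described above can be shown in a minimal sketch. The tiny "corridor" environment and all hyperparameters here are illustrative stand-ins, not the Snake game itself:

```python
import random

# Tiny 1-D corridor: states 0..4; reward only for reaching state 4.
N_STATES, ACTIONS = 5, [-1, +1]          # move left / move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

# Q-table: one row per state, one value per action.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for _ in range(500):                     # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection from the Q-table (the policy)
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy action in every non-terminal state should be "move right" (1).
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])
```

Deep Q-Learning replaces the table with a neural network that predicts the row of Q-values for a given state, which is what makes large state spaces like Snake tractable.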
Code
Food collection
//Observations collected each step (reconstructed enclosing method: the
//original listing excerpted these calls without their surrounding context)
public override void CollectObservations(VectorSensor sensor)
{
    if (useVectorObs)
    {
        var localVelocity = transform.InverseTransformDirection(m_AgentRb.velocity);
        sensor.AddObservation(localVelocity.x);
        sensor.AddObservation(localVelocity.z);
        sensor.AddObservation(m_Frozen);
        sensor.AddObservation(m_Shoot);
    }
    else if (useVectorFrozenFlag)
    {
        sensor.AddObservation(m_Frozen);
    }
}
//Excerpt from the laser logic: a hit agent is frozen, otherwise the laser is hidden
if (hit.collider != null)
{
    hit.collider.gameObject.GetComponent<FoodCollectorAgent>().Freeze();
}
else
{
    myLaser.transform.localScale = new Vector3(0f, 0f, 0f);
}
void Freeze()
{
    gameObject.tag = "frozenAgent";
    m_Frozen = true;
    m_FrozenTime = Time.time;
    gameObject.GetComponentInChildren<Renderer>().material = frozenMaterial;
}
void Unfreeze()
{
    m_Frozen = false;
    gameObject.tag = "agent";
    gameObject.GetComponentInChildren<Renderer>().material = normalMaterial;
}
void Poison()
{
    m_Poisoned = true;
    m_EffectTime = Time.time;
    gameObject.GetComponentInChildren<Renderer>().material = badMaterial;
}
void Unpoison()
{
    m_Poisoned = false;
    gameObject.GetComponentInChildren<Renderer>().material = normalMaterial;
}
void Satiate()
{
    m_Satiated = true;
    m_EffectTime = Time.time;
    gameObject.GetComponentInChildren<Renderer>().material = goodMaterial;
}
void Unsatiate()
{
    m_Satiated = false;
    gameObject.GetComponentInChildren<Renderer>().material = normalMaterial;
}
//Excerpt from the collision handler: collecting bad food is penalised
AddReward(-1f);
if (contribute)
{
    m_FoodCollecterSettings.totalScore -= 1;
}
Survival
//Static variable for the highest score so it is only one for all the instances
public static float longest_Score;
//Current iteration
public static int iter;
//Train yard ID
public int id;
//Spider Object
public Spider spider;
//Transforms for every location
public Transform mine, forest, farm, camp;
Transform current_Target;
//Some texts
public TextMeshProUGUI[] owned_Resources, equipped_Resources, stats;
public TextMeshProUGUI time_Text, iter_Text, record_Text;
//Animator object
public Animator animator;
//Ingame stats
public float health, camp_Health, hunger, maxHealth_Camp, max_Hunger, max_Health;
float speed, range, attack_Speed, attack_Damage;
[SerializeField]
int inv_Items, food, wood, stone_E, wood_E, food_E;
//Aren't we all?
bool is_Dead;
//This method is called at the beginning and initializes all the variables
public override void Initialize()
{
scale = 1.5f;
iter = 0;
Time.timeScale = scale;
current_Target = farm;
maxHealth_Camp = 100;
max_Health = 100;
max_Hunger = 100;
health = max_Health;
hunger = max_Hunger;
camp_Health = maxHealth_Camp;
speed = 10;
range = 1;
attack_Speed = 1;
attack_Damage = 1;
}
//This method is called at the end of each level and resets everything
public void Reset_Level()
{
iter++;
iter_Text.text = "Iteration Nr.: " + iter.ToString();
is_Dead = false;
was_Spawned = false;
is_Moving = false;
is_Busy = false;
transform.localPosition = Vector3.zero;
current_Target = farm;
maxHealth_Camp = 100;
max_Health = 100;
max_Hunger = 100;
health = max_Health;
hunger = max_Hunger;
camp_Health = maxHealth_Camp;
inv_Items = 1;
stone_E = 0;
food_E = 0;
wood_E = 1;
food = 0;
wood = 0;
day = 0;
hour = 0;
prev_Hour = 0;
}
// Update is called once per frame
void Update()
{
//Changing color of the sky
light.color = gradient.Evaluate(hour / 24);
//Incrementing time
hour += Time.deltaTime / 60;
if(hour >= 18)
{
if (!was_Spawned)//if the spider was not spawned - spawn it.
{
spider.is_Awake = true;
is_Spider = true;
was_Spawned = true;
}
}
if(hour >= 24)// day passed
{
day++;
hour = 0;
was_Spawned = false;
//Moving
if (is_Moving && !is_Busy && !is_Dead)
{
if(Vector3.Distance(transform.localPosition, current_Target.localPosition) > range)
{
transform.localPosition = Vector3.MoveTowards(transform.localPosition, current_Target.localPosition, speed * Time.deltaTime);
}
else
{
is_Moving = false;
}
}
}
sensor.AddObservation(food);
sensor.AddObservation(food_E);
sensor.AddObservation(wood_E);
sensor.AddObservation(wood);
sensor.AddObservation(inv_Items);
sensor.AddObservation(max_Health);
sensor.AddObservation(max_Hunger);
sensor.AddObservation(maxHealth_Camp);
sensor.AddObservation(is_Spider);
}
//Similar to Update, but runs at the frequency set in the Decision Requester component
public override void OnActionReceived(float[] vectorAction)
{
if(id == 0)
Update_Text();
}
if (vectorAction[0] == 0)//Farm (condition restored to match the parallel branches below)
{
if (Has_To_Move(farm))
{
}
else
{
StartCoroutine(Farm());
}
}
else if (vectorAction[0] == 1)//Enemy
{
if (Has_To_Move(mine))
{
}
else
{
StartCoroutine(Mine());
}
/*if (is_Spider && hunger > 35)
{
if (Has_To_Move(spider.transform))
{
}
else
{
if (vectorAction[1] == 0)
{
StartCoroutine(Deposit());
}
else
{
StartCoroutine(Eat());
}
}
}*/
}
else if (vectorAction[0] == 2)//Forest
{
if (Has_To_Move(forest))
{
}
else
{
StartCoroutine(Chop());
}
}
else if(vectorAction[0] == 3)//Camp
{
if (Has_To_Move(camp))
{
}
else
{
if (vectorAction[1] == 0)
{
StartCoroutine(Deposit());
}
else
{
StartCoroutine(Eat());
}
}
}
else//Mine
{
if (Has_To_Move(mine))
{
}
else
{
StartCoroutine(Mine());
}
}
}
else
{
StartCoroutine(Eat());
}
}
}
//Allows manual control of the agent for testing purposes
//Checks if the agents needs to move by calculating the distance between it and the target
public bool Has_To_Move(Transform target_Position)
{
if (Vector3.Distance(transform.localPosition, target_Position.localPosition) > range)
{
current_Target = target_Position;
transform.LookAt(target_Position);
if (animator)
animator.Play("Walk");
is_Moving = true;
return true;
}
else
{
return false;
}
}
public void Die(string reason)
{
if (!is_Dead)
{
audioSource.clip = audioClips[5];
audioSource.Play();
is_Dead = true;
if(animator)
animator.Play("Die");
}
AddReward(-10f);
if (id == 0)
{
print("Result: " + GetCumulativeReward());
}
}
//Reset_Level();
//EndEpisode();
StartCoroutine(Death());
}
if (inv_Items > 0)// if the agent has items in its inventory, storing them takes 0.5 seconds and earns a reward
{
audioSource.clip = audioClips[1];
audioSource.Play();
is_Busy = true;
if (animator)
animator.Play("Action");
inv_Items = 0;
is_Busy = false;
AddReward(0.05f);
}
else
{
if (animator)
animator.Play("Idle");
}
}
IEnumerator Eat()
{
if (hunger < max_Hunger - 10)//Eat only if there is need for that
{
if (Vector3.Distance(transform.localPosition, camp.localPosition) > range)//if not in range of the campfire, check whether there is any food in the inventory to eat
{
if (food_E > 0)
{
audioSource.clip = audioClips[0];
audioSource.Play();
is_Busy = true;
if (animator)
animator.Play("Action");
food_E--;
inv_Items--;
if (hunger < max_Hunger - 10)
{
hunger += 10;
}
else
{
hunger = max_Hunger;
}
is_Busy = false;
}
}
else
{
if (food > 0)//look if there is food in the campfire to eat
{
audioSource.clip = audioClips[0];
audioSource.Play();
is_Busy = true;
if (animator)
animator.Play("Action");
yield return new WaitForSeconds(.5f);
food--;
if (hunger < max_Hunger - 20)
{
hunger += 20;
}
else
{
hunger = max_Hunger;
}
is_Busy = false;
}
}
}
}
IEnumerator Mine()
{
if (inv_Items < 2 && hunger > 25) //if not hungry - mine
{
audioSource.clip = audioClips[2];
audioSource.Play();
is_Busy = true;
if (animator)
animator.Play("Action");
yield return new WaitForSeconds(.5f);//the action takes half a second (yield restored so the coroutine compiles)
stone_E++;
inv_Items++;
hunger = Mathf.Max(hunger - 25, 0);
is_Busy = false;
AddReward(0.01f);
}
else
{
if (animator)
animator.Play("Idle");
}
}
IEnumerator Farm()
{
if (inv_Items < 2)//if not hungry - farm
{
audioSource.clip = audioClips[3];
audioSource.Play();
is_Busy = true;
if (animator)
animator.Play("Action");
yield return new WaitForSeconds(.5f);//the action takes half a second (yield restored so the coroutine compiles)
food_E++;
inv_Items++;
is_Busy = false;
AddReward(0.01f);
}
else
{
if (animator)
animator.Play("Idle");
}
}
IEnumerator Chop()
{
if (inv_Items < 2 && hunger > 25)//if not hungry - chop
{
audioSource.clip = audioClips[4];
audioSource.Play();
is_Busy = true;
if (animator)
animator.Play("Action");
yield return new WaitForSeconds(.5f);//the action takes half a second (yield restored so the coroutine compiles)
wood_E++;
inv_Items++;
is_Busy = false;
AddReward(0.01f);
}
else
{
if (animator)
animator.Play("Idle");
}
}
spider.Die();
print("Killed");
hunger = Mathf.Max(hunger - 35, 0);
is_Spider = false;
is_Busy = false;
}
else
{
if (animator)
animator.Play("Idle");
}
}
}
Animal AI
Walking AI
public class WalkerAgent : Agent
{
[Header("Walk Speed")]
[Range(0.1f, 10)]
[SerializeField]
//The walking speed to try and achieve
private float m_TargetWalkingSpeed = 10;
[Header("Target To Walk Towards")] public Transform target; //Target the agent will walk towards during training.
SetResetParameters();
}
//Set our goal walking speed
m_TargetWalkingSpeed =
randomizeWalkSpeedEachEpisode ? Random.Range(0.1f, m_maxWalkingSpeed) : m_TargetWalkingSpeed;
SetResetParameters();
//GROUND CHECK
sensor.AddObservation(bp.groundContact.touchingGround); // Is this body part touching the ground
sensor.AddObservation(m_OrientationCube.transform.InverseTransformDirection(bp.rb.angularVelocity));
//Get position relative to hips in the context of our orientation cube's space
sensor.AddObservation(m_OrientationCube.transform.InverseTransformDirection(bp.rb.position - hips.position));
/// <summary>
/// Loop over body parts to add them to observation.
/// </summary>
public override void CollectObservations(VectorSensor sensor)
{
var cubeForward = m_OrientationCube.transform.forward;
sensor.AddObservation(m_OrientationCube.transform.InverseTransformDirection(velGoal));
//rotation deltas
sensor.AddObservation(Quaternion.FromToRotation(hips.forward, cubeForward));
sensor.AddObservation(Quaternion.FromToRotation(head.forward, cubeForward));
sensor.AddObservation(m_OrientationCube.transform.InverseTransformPoint(target.transform.position));
void FixedUpdate()
{
UpdateOrientationObjects();
// Set reward for this step according to mixture of the following elements.
// a. Match target speed
//This reward will approach 1 if it matches perfectly and approach zero as it deviates
var matchSpeedReward = GetMatchingVelocityReward(cubeForward * m_TargetWalkingSpeed, GetAvgVelocity());
AddReward(matchSpeedReward * lookAtTargetReward);
}
//ALL RBS
Vector3 velSum = Vector3.zero;//accumulator restored so the fragment compiles
int numOfRb = 0;
foreach (var item in m_JdController.bodyPartsList)
{
numOfRb++;
velSum += item.rb.velocity;
}
var avgVel = velSum / numOfRb;
return avgVel;
}
//return the value on a declining sigmoid shaped curve that decays from 1 to 0
//This reward will approach 1 if it matches perfectly and approach zero as it deviates
return Mathf.Pow(1 - Mathf.Pow(velDeltaMagnitude / m_TargetWalkingSpeed, 2), 2);
}
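The reward curve implemented above, reward = (1 - (velDelta / targetSpeed)^2)^2, can be checked numerically with a short Python sketch (matching_velocity_reward is a hypothetical stand-in for the C# method; clamping the deviation to [0, targetSpeed] mirrors the agent code):

```python
def matching_velocity_reward(vel_delta: float, target_speed: float) -> float:
    """1 at a perfect speed match, decaying smoothly to 0 at maximum deviation."""
    # Clamp the deviation so the reward never goes negative.
    clamped = min(max(vel_delta, 0.0), target_speed)
    return (1.0 - (clamped / target_speed) ** 2) ** 2
```

At zero deviation the reward is exactly 1; at a deviation equal to the target speed it is exactly 0, with a smooth decay in between.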
/// <summary>
/// Agent touched the target
/// </summary>
public void TouchedTarget()
{
AddReward(1f);
}
Animal Attack AI
target = camp;
spawn_Pos = transform.localPosition;
if (animator)
animator.Play("Idle");
distance_Player = 100;
InvokeRepeating("Calculate_Distance",0,.1f);
}
if (Not_In_Range())
{
transform.localPosition = Vector3.MoveTowards(transform.localPosition, target.localPosition, 3 * Time.deltaTime);
if (animator)
animator.Play("Walk");
}
else
Attack();
}
}
transform.localPosition = spawn_Pos;
attack_Dealt = 0;
cooldown = 0;
}
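The chase logic above steps toward the target by at most 3 * Time.deltaTime per frame via Vector3.MoveTowards, which never overshoots. A one-dimensional Python sketch of that clamping behaviour (move_towards is a hypothetical helper, not the Unity API):

```python
def move_towards(current: float, target: float, max_delta: float) -> float:
    """Step from current toward target by at most max_delta, without overshooting."""
    delta = target - current
    if abs(delta) <= max_delta:
        # Close enough to land exactly on the target this step.
        return target
    return current + max_delta * (1 if delta > 0 else -1)
```

Called once per frame with max_delta = speed * dt, this produces the same frame-rate-independent movement the agents rely on.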