There are various types of agents in Artificial Intelligence (AI).
Rational agents are sometimes referred to simply as artificial intelligence. A rational agent could be a person, firm, machine, or piece of software that makes decisions; it works toward the best outcome after considering past and current percepts.
An artificial system is composed of agents. An agent performs its task in an environment, which may itself contain other agents. So we can define an agent as anything that can be viewed as:
- perceiving its environment through sensors, and
- acting upon that environment through actuators.
To understand the structure of intelligent agents, we should be familiar with two things: the architecture and the agent program.
The architecture is the machinery that the agent executes on. It is a device or machine equipped with sensors and actuators, for example: a robotic car, a camera, a PC, or a piece of health equipment.
An intelligent agent is an agent that can make decisions, evaluate, and perform a service based on its environment, user input, and its experience. Intelligent agents can be used to collect information autonomously on a regular, programmed schedule, or when prompted by the user in real time.
The agent program is an implementation of the agent function: a mapping from the history of everything the agent has perceived to date to an action.
Agent = Architecture + Agent Program
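The equation above can be sketched in code. The following Python skeleton is purely illustrative (the names `Agent`, `agent_program`, and the percept strings are made up for this example, not from any library):

```python
def agent_program(percept_history):
    """The agent function: maps the full percept history to an action."""
    latest = percept_history[-1]
    return f"act-on-{latest}"

class Agent:
    """The 'architecture': collects percepts and runs the agent program."""

    def __init__(self, program):
        self.program = program           # the agent program
        self.percept_history = []        # everything perceived so far

    def step(self, percept):
        self.percept_history.append(percept)        # sensor input
        return self.program(self.percept_history)   # actuator output

agent = Agent(agent_program)
print(agent.step("dirty"))   # act-on-dirty
```

The architecture handles perception and bookkeeping, while the agent program decides what to do; together they make the agent.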
Examples of agents:
A software agent has keystrokes, file contents, and received network packets acting as sensors, and displays on the screen, written files, and sent network packets acting as actuators.
A human agent has eyes, ears, nose, skin, and other organs that act as sensors, and hands, legs, mouth, and other body parts that act as actuators to respond to the environment.
A robotic agent has infrared range finders and cameras that act as sensors, and numerous motors that act as actuators.
Types of Agents in artificial intelligence
Based on their degree of perceived intelligence and capability, agents can be categorized into four classes:
- Simple Reflex Agents
- Model-Based Reflex Agents
- Goal-Based Agents
- Utility-Based Agents
Simple reflex agents
A simple reflex agent ignores the percept history and acts only on the basis of the current percept. (The percept history is the collection of everything the agent has perceived to date.) A simple reflex agent works on condition-action rules: a rule that maps a state, i.e. a condition, to an action. This is much like a conditional clause in code: if the condition holds, the action is taken; otherwise it is not.
Agents of this type succeed only when the environment is fully observable. They may be able to escape infinite loops if they can randomize their actions.
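A condition-action rule table can be sketched directly in Python. This example assumes the classic two-square vacuum world (squares "A" and "B") purely for illustration:

```python
# Condition-action rules: (location, status) -> action.
# The two-square vacuum world used here is an assumption for illustration.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """Acts only on the current percept; percept history is ignored."""
    location, status = percept
    return RULES[(location, status)]

print(simple_reflex_agent(("A", "Dirty")))  # Suck
```

Note that the agent keeps no state at all: the same percept always produces the same action, which is exactly why it needs a fully observable environment.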
Limitations of simple reflex agents:
- They have very limited intelligence.
- They have no knowledge of the non-perceptual parts of the state.
- The rule table is usually too big to generate and store.
- If anything in the environment changes, the collection of rules must be updated.
Model-based reflex agents
A model-based reflex agent works by finding a rule whose condition matches the current situation. It can handle partially observable environments by using a model of the world. Such an agent has to keep track of an internal state, which is adjusted by each percept and depends on the percept history. The current state is stored inside the agent as some kind of structure describing the part of the world that cannot be seen. Updating the state requires information about:
- how the world changes independently of the agent, and
- how the agent's actions affect the world.
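A minimal sketch of this idea, again assuming a two-square vacuum world (the world model and all names are assumptions for this illustration):

```python
class ModelBasedReflexAgent:
    """Keeps an internal model of a two-square vacuum world and updates it
    from each percept plus a model of the agent's own actions."""

    def __init__(self):
        # internal state: what the agent believes about squares it can't see
        self.state = {"A": "Unknown", "B": "Unknown"}

    def step(self, percept):
        location, status = percept
        self.state[location] = status          # update the model from the percept
        if status == "Dirty":
            # model of the agent's own effect: sucking cleans the square
            self.state[location] = "Clean"
            return "Suck"
        other = "B" if location == "A" else "A"
        if self.state[other] == "Clean":
            return "NoOp"                      # model says everything is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedReflexAgent()
print(agent.step(("A", "Dirty")))   # Suck
print(agent.step(("A", "Clean")))   # Right: B's status is still unknown
```

Unlike the simple reflex agent, this one can act sensibly about the square it cannot currently see, because its internal state fills in the unobserved part of the world.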
Goal-based agents
These agents make decisions based on how far they currently are from their goal. Every action is intended to reduce the distance to the goal, which gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. The knowledge that supports its choices is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning, and the goal-based agent's behaviour can easily be changed.
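The search component can be sketched with a breadth-first search on a small grid. The 4x4 grid, the move set, and the function names are all assumptions made for this example:

```python
from collections import deque

def goal_based_plan(start, goal, moves):
    """Breadth-first search for a sequence of moves from start to goal
    on a 4x4 grid (an assumption for illustration)."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        pos, plan = frontier.popleft()
        if pos == goal:
            return plan                 # goal state reached: return the plan
        for name, (dx, dy) in moves.items():
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited and 0 <= nxt[0] < 4 and 0 <= nxt[1] < 4:
                visited.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None                         # no plan reaches the goal

MOVES = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}
print(goal_based_plan((0, 0), (2, 1), MOVES))
```

The goal is represented explicitly as data, so changing the agent's behaviour is as simple as passing a different goal state: no rules need rewriting.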
Utility-based agents
Agents developed with their end uses as building blocks are called utility-based agents. When there are numerous possible alternatives, utility-based agents can be employed to decide which one is best. They select actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough.
We may look for a faster, safer, or cheaper journey to reach a destination. Agent happiness should be taken into consideration; utility describes how "happy" the agent is. Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number that describes the associated degree of happiness.
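Expected utility is just a probability-weighted sum over outcomes. In this sketch, the routes, probabilities, and utility numbers are made up for illustration:

```python
def expected_utility(route):
    """Sum of utility(outcome) weighted by the probability of that outcome."""
    return sum(p * u for p, u in route["outcomes"])

# Each route lists (probability, utility) pairs; values are illustrative.
routes = [
    {"name": "highway",  "outcomes": [(0.9, 80), (0.1, 10)]},  # fast but risky
    {"name": "backroad", "outcomes": [(1.0, 60)]},             # slower, certain
]

best = max(routes, key=expected_utility)
print(best["name"])  # highway: 0.9*80 + 0.1*10 = 73 > 60
```

Both routes reach the goal, so a purely goal-based agent cannot distinguish them; the utility function is what lets the agent prefer the faster route despite its risk.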