IS Week 10
This week in class we studied decision trees and their application to decision making under uncertainty. Decision trees are useful because analyzing a complex decision under significant uncertainty can be confusing for several reasons. First, and most importantly, the consequence of selecting any given decision alternative cannot be predicted with certainty. Second, there are often a large number of factors that must be taken into account when making a decision. Finally, a decision maker's attitude toward risk can affect the relative desirability of the different alternatives.

Decision tree diagrams are read from left to right. The leftmost node in a decision tree is called the root node. The branches emanating to the right from a decision node represent the decisions that can be made at that node, and only one of these alternatives can be selected. The small circles in the tree are called chance nodes. The number shown in parentheses on each branch of a chance node is the probability that the outcome shown on that branch will occur. The right end of each path through the tree is called an endpoint, and each endpoint represents the final outcome of following a path from the root node to that endpoint.

A critical component of the analysis of decision trees is the use of expected value. The expected value of a chance node is the sum, over its branches, of each branch's probability times the value of the outcome it leads to; the value of a decision node is the best expected value among its alternatives. Rolling these values back from the endpoints toward the root supports sequential decision making: once you know what to expect from each branch of the tree, you can choose the alternative that optimizes the overall decision.
https://towardsdatascience.com/decision-trees-in-machine-learning-641b9c4e8052