Decision Trees. The ID3 Algorithm

Slide 2

entropy(p1, p2, ..., pn) = -p1 log p1 - p2 log p2 - ... - pn log pn
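A minimal sketch of this formula in Python (the function name and the example values are illustrative, not from the slides); using log base 2 gives the result in bits:

import math

def entropy(probs):
    # Shannon entropy of a discrete distribution, in bits (log base 2).
    # Terms with p = 0 contribute nothing, so they are skipped.
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin is maximally uncertain
print(entropy([0.9, 0.1]))   # ~0.469 bits: a biased coin is more predictable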

Slide 3


The second law of thermodynamics states that for any irreversible process, entropy always increases.
Entropy is a measure of disorder.
!!! Since virtually all natural processes are irreversible, the entropy law implies that the universe is "running down".

The entropy law is sometimes referred to as the second law of thermodynamics.

Ludwig Boltzmann
1844–1906, Vienna
Austrian physicist and philosopher

Slide 5

Entropy can be seen as a measure of the quality of energy:
- Low-entropy sources of energy are of high quality. Such energy sources have a high energy density.
- High-entropy sources of energy are closer to randomness and are therefore less available for use.

Slide 6

- Claude Shannon transferred some of these ideas to the world of information processing. He associated information with low entropy.
- Contrasted with information is "noise": randomness, high entropy.

Claude Shannon 
1916–2001
American mathematician, electrical engineer, and cryptographer known as "the father of information theory".

Entropy in Information Theory

Slide 7

At the extreme of no information are random numbers. Of course, data may only look random: there may be hidden patterns, information, in the data. The whole point of ML is to dig out these patterns. The discovered patterns are usually presented as rules or decision trees. Shannon's information theory can be used to construct decision trees.

Slide 8

A collection of random numbers has maximum entropy.

Slide 9

Shannon defined the entropy Η (Greek capital letter eta) of a discrete variable X with possible values {x1, ..., xn} and probabilities {p1, ..., pn} as:

Η(X) = -p1 log_b p1 - p2 log_b p2 - ... - pn log_b pn

where b is 2 when the unit of information is the bit.

Entropy in Information Theory
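The same definition written as a base-b sketch in Python (names are illustrative, not from the slides). With b = 2 and the class distribution of 9 "yes" and 5 "no" instances used in the later example, it reproduces the 0.940 bits quoted there:

import math

def entropy_b(probs, b=2):
    # H(X) = -sum_i p_i * log_b(p_i); with b = 2 the unit is the bit.
    return -sum(p * math.log(p, b) for p in probs if p > 0)

print(round(entropy_b([9/14, 5/14]), 3))   # 0.94 bits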

Slide 10

Consider the experiment of rolling a six-sided die. The values of the variable X are {1, 2, 3, 4, 5, 6}, and the probabilities of obtaining each value are equal.
In this case the entropy is:

Η(X) = -6 · (1/6) · log_2(1/6) = log_2 6 ≈ 2.585 bits.

Example
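A quick check of this example in Python (a sketch, not from the slides): six equally likely outcomes give log2(6) bits.

import math

probs = [1/6] * 6                      # fair six-sided die
H = -sum(p * math.log2(p) for p in probs)
print(H, math.log2(6))                 # both ~2.585 bits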

Slide 13

There are 14 instances: 9 yes, 5 no.
TotalEntropy([9,5]) = 0.940 bits
Entropy_outlook([2,3], [4,0], [3,2]) = (5/14) * 0.971 + (4/14) * 0.0 + (5/14) * 0.971 = 0.693 bits
InfoGain(outlook) = 0.940 - 0.693 = 0.247 bits of information
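A minimal sketch in Python of how these numbers are obtained (function names are illustrative, not from the slides). Entropies are computed from the [yes, no] class counts, and the gain is the total entropy minus the weighted entropy after splitting on outlook:

import math

def entropy(counts):
    # Entropy (bits) of a class-count vector such as [9, 5].
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

def info_gain(total_counts, subsets):
    # Gain = entropy of the whole set minus the weighted entropy of the subsets.
    n = sum(total_counts)
    remainder = sum(sum(s) / n * entropy(s) for s in subsets)
    return entropy(total_counts) - remainder

print(round(entropy([9, 5]), 3))                              # 0.94
print(round(info_gain([9, 5], [[2, 3], [4, 0], [3, 2]]), 3))  # 0.247 (outlook)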

Slide 14

gain(outlook) = 0.940 - 0.693 = 0.247 bits of information
gain(temperature) = 0.029 bits
gain(humidity) = 0.152 bits
gain(windy) = 0.048 bits
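Outlook has the highest information gain, so ID3 selects it as the root attribute and then repeats the same computation on each branch. A minimal sketch of that selection step in Python, using the gains listed above:

gains = {
    "outlook": 0.247,
    "temperature": 0.029,
    "humidity": 0.152,
    "windy": 0.048,
}
root = max(gains, key=gains.get)   # pick the attribute with the largest gain
print(root)                        # 'outlook'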