Tuesday, November 20, 2012

Intelligent Systems



Report 3

Instructions: Go to http://en.wikipedia.org/wiki/Perceptron and modify the code for the "and" case. Change the initial weights, the learning rate, and the threshold.

As already mentioned in the instructions above, provided by our instructor, in this report we started from the program published at the Wikipedia link (a perceptron that learns to compute a binary NAND) and modified it so that the system computes AND instead.
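One detail worth noting (our own reading, not stated in the original post): each training example has the form ((1, x1, x2), target), where the leading constant 1 acts as a bias input, so the perceptron learns three weights, the first being a bias weight. A quick sketch checking that the Wikipedia training set really encodes the NAND truth table:

```python
# Each example is ((bias, x1, x2), target); the bias component is always 1.
training_set = [((1, 0, 0), 1), ((1, 0, 1), 1), ((1, 1, 0), 1), ((1, 1, 1), 0)]

for (bias, x1, x2), target in training_set:
    assert bias == 1                              # constant bias input
    assert target == (0 if (x1 and x2) else 1)    # NAND(x1, x2)
print("training set encodes NAND with a bias input")
```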

Original code from the Wikipedia link:
threshold = 0.5
learning_rate = 0.1
weights = [0, 0, 0]
# Each example is ((bias, x1, x2), target); the targets encode NAND
training_set = [((1, 0, 0), 1), ((1, 0, 1), 1), ((1, 1, 0), 1), ((1, 1, 1), 0)]
 
def sum_function(values):
    # Weighted sum of the input vector
    return sum(value * weight for value, weight in zip(values, weights))
 
while True:
    print('-' * 60)
    error_count = 0
    for input_vector, desired_output in training_set:
        print(weights)
        result = 1 if sum_function(input_vector) > threshold else 0
        error = desired_output - result
        if error != 0:
            error_count += 1
            # Perceptron learning rule: nudge each weight toward the target
            for index, value in enumerate(input_vector):
                weights[index] += learning_rate * error * value
    if error_count == 0:  # stop after a full pass with no misclassifications
        break


Running it produces the following output:

------------------------------------------------------------
[0, 0, 0]
[0.1, 0.0, 0.0]
[0.2, 0.0, 0.1]
[0.30000000000000004, 0.1, 0.1]
------------------------------------------------------------
[0.30000000000000004, 0.1, 0.1]
[0.4, 0.1, 0.1]
[0.5, 0.1, 0.2]
[0.5, 0.1, 0.2]
------------------------------------------------------------
[0.4, 0.0, 0.1]
[0.5, 0.0, 0.1]
[0.5, 0.0, 0.1]
[0.6, 0.1, 0.1]
------------------------------------------------------------
[0.5, 0.0, 0.0]
[0.6, 0.0, 0.0]
[0.6, 0.0, 0.0]
[0.6, 0.0, 0.0]
------------------------------------------------------------
[0.5, -0.1, -0.1]
[0.6, -0.1, -0.1]
[0.7, -0.1, 0.0]
[0.7, -0.1, 0.0]
------------------------------------------------------------
[0.6, -0.2, -0.1]
[0.6, -0.2, -0.1]
[0.7, -0.2, 0.0]
[0.7999999999999999, -0.1, 0.0]
------------------------------------------------------------
[0.7, -0.2, -0.1]
[0.7, -0.2, -0.1]
[0.7, -0.2, -0.1]
[0.7999999999999999, -0.1, -0.1]
------------------------------------------------------------
[0.7, -0.2, -0.2]
[0.7, -0.2, -0.2]
[0.7999999999999999, -0.2, -0.1]
[0.7999999999999999, -0.2, -0.1]
------------------------------------------------------------
[0.7999999999999999, -0.2, -0.1]
[0.7999999999999999, -0.2, -0.1]
[0.7999999999999999, -0.2, -0.1]
[0.7999999999999999, -0.2, -0.1]
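As a sanity check (our own verification, not part of the original post), we can confirm that the final weights from the run above, together with the original threshold of 0.5, classify all four inputs as NAND:

```python
# Final weights from the last line of the training trace above
weights = [0.7999999999999999, -0.2, -0.1]
threshold = 0.5

for x1 in (0, 1):
    for x2 in (0, 1):
        # Weighted sum with the constant bias input of 1
        s = weights[0] * 1 + weights[1] * x1 + weights[2] * x2
        result = 1 if s > threshold else 0
        assert result == (0 if (x1 and x2) else 1)  # NAND truth table
print("learned weights reproduce NAND")
```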


Next, our modified program, adapted for AND:
threshold = 1.5
learning_rate = 0.8
weights = [2, 3, 3]
# Same ((bias, x1, x2), target) format; the targets now encode AND
training_set = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]
 
def sum_function(values):
    # Weighted sum of the input vector
    return sum(value * weight for value, weight in zip(values, weights))
 
while True:
    print('-' * 60)
    error_count = 0
    for input_vector, desired_output in training_set:
        print(weights)
        result = 1 if sum_function(input_vector) > threshold else 0
        error = desired_output - result
        if error != 0:
            error_count += 1
            # Perceptron learning rule: nudge each weight toward the target
            for index, value in enumerate(input_vector):
                weights[index] += learning_rate * error * value
    if error_count == 0:  # stop after a full pass with no misclassifications
        break


And here is the output it produced:

[2, 3, 3]
[1.2, 3.0, 3.0]
[0.3999999999999999, 3.0, 2.2]
[-0.40000000000000013, 2.2, 2.2]
------------------------------------------------------------
[-0.40000000000000013, 2.2, 2.2]
[-0.40000000000000013, 2.2, 2.2]
[-1.2000000000000002, 2.2, 1.4000000000000001]
[-1.2000000000000002, 2.2, 1.4000000000000001]
------------------------------------------------------------
[-1.2000000000000002, 2.2, 1.4000000000000001]
[-1.2000000000000002, 2.2, 1.4000000000000001]
[-1.2000000000000002, 2.2, 1.4000000000000001]
[-1.2000000000000002, 2.2, 1.4000000000000001]
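The same kind of check (again, our own verification sketch) confirms that the weights the modified program converged to implement AND with our threshold of 1.5:

```python
# Final weights from the last line of the AND training trace above
weights = [-1.2000000000000002, 2.2, 1.4000000000000001]
threshold = 1.5

for x1 in (0, 1):
    for x2 in (0, 1):
        # Weighted sum with the constant bias input of 1
        s = weights[0] * 1 + weights[1] * x1 + weights[2] * x2
        result = 1 if s > threshold else 0
        assert result == (x1 and x2)  # AND truth table
print("learned weights reproduce AND")
```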
