

@mcapodici
Created April 10, 2023 02:06
IRIS Linear Regression using NumPy
import numpy as np
from numpy import genfromtxt

# Load the Iris data set, skipping the header row. The non-numeric species
# column parses as NaN and is not used below.
iris = genfromtxt("IRIS.csv", delimiter=",", skip_header=1)
training_examples = iris[:, :3]   # first three feature columns
targets = iris[:, -2]             # second-to-last column is the target

# Prepend a column of ones so the first weight acts as the bias term.
bias_column = np.ones((training_examples.shape[0], 1))
inputs = np.hstack((bias_column, training_examples))

# One weight per input column (bias + three features), with one weight
# nudged away from zero to start.
weights = np.zeros((4,))
weights[2] += 0.1

# Give predictions for all examples
def predict(inputs, weights):
    return np.dot(inputs, weights)

# Calculate the loss of the predictions (sum of squared errors)
def loss(predictions, targets):
    return np.sum((predictions - targets) ** 2)

# Gradient of the mean squared error, scaled by the learning rate
def gradient(rate, inputs, predictions, targets):
    return rate * np.dot(inputs.transpose(), predictions - targets) / targets.size

# Batch gradient descent
for i in range(1000):
    predictions = predict(inputs, weights)
    current_loss = loss(predictions, targets)
    print(current_loss)
    weights -= gradient(0.01, inputs, predictions, targets)

# Side-by-side view of the inputs, final predictions and targets
np.set_printoptions(precision=6)
checker = np.hstack((inputs, predictions[:, None], targets[:, None]))
print(checker)
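
As a sanity check (not part of the original gist), the weights found by gradient descent can be compared against the closed-form least-squares solution; np.linalg.lstsq minimises the same squared error directly on the same inputs, so the two should roughly agree after training:

# Hypothetical verification: closed-form least squares on the same design matrix
closed_form, *_ = np.linalg.lstsq(inputs, targets, rcond=None)
print(closed_form)  # should be close to `weights` after the loop above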
@mcapodici (Author) commented:

This is a quick hack of a linear regression, based on what I read in https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/readings/L02%20Linear%20Regression.pdf
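
For context, the update performed inside the loop corresponds to the standard batch gradient-descent step for squared-error loss described in that reading: with design matrix $X$ (including the bias column), weights $\mathbf{w}$, targets $\mathbf{t}$ and learning rate $\alpha$,

\[ \mathbf{w} \leftarrow \mathbf{w} - \frac{\alpha}{N} X^{\top} (X\mathbf{w} - \mathbf{t}) \]

where $N$ is the number of training examples, matching the division by targets.size in the gradient function.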
