
By Python pls!

### PROGRAMMING EXERCISE 3

So, now hopefully you have a vector that points in the same direction as the data. In every direction, the data has variance. In some directions the variance is very small, in others very large. But what does this mean? Well, currently we're describing the data in 3 dimensions, described by the following 3 columns:

```
[1 0 0]
[0 1 0]
[0 0 1]
```

Each column is a vector. The 1 in each column represents a step of one along the dimension represented by that row. For data we collect, each "feature" is a dimension. But we can describe data more efficiently, compressing it to make it easier for algorithms to learn. This is what we took the first step towards above, when we found the direction of greatest variance for the dataset!

**TASK 3.1:** Project the dataset `X` onto your updated version of the direction of greatest variance, to get `X_c`, the compressed version of `X`.

HINT:

- Remember, `X` is a 100 by 3 matrix, and `v` is a 3-dimensional vector. How do you project 100 examples onto a 3-dimensional vector?
- Remember, `v` is a unit vector. It should already have a length of 1.

```python
v = new_v
X_c = #YOUR CODE HERE
```

Great. You now have a 100-dimensional vector of coefficients. I bet you know exactly how useful this is, but in case you don't, here's a rundown. Each number in `X_c` represents a datapoint. When you multiply that number by `v`, you get the 3-dimensional representation of that datapoint back (but only in terms of `v`). In other words, each number is a compressed version of a 3-dimensional point. To uncompress it, multiply each number by `v` again!

**TASK 3.2:** Create a 100 by 3 matrix, each row of which is `v`ᵀ (`v` transposed). Then piecewise multiply this matrix with `X_c`, to get the uncompressed version of `X_c`!

HINT:

- You can stack 100 copies of `v`ᵀ using the `tile` function.
- Piecewise multiplication of vectors is done using `*`: `v1 * v2` gives you element 1 of `v1` times element 1 of `v2`, element 2 of `v1` times element 2 of `v2`, etc.
- If you're stuck, you can use a loop instead of stacking `v` with `tile`.

NOTE: `X_uncompressed` isn't quite the same as `X`. This is because what we've done is an example of lossy compression! This lossy compression saved two-thirds of the space (each 3-number point became 1 number), but more importantly, any algorithm now only has to learn from data a third as complex (though complexity is much more complex than this in reality). That's enough to make many problems solvable and manageable! As you'll learn later, "dimensionality reduction", as this is called, is a critical part of machine learning.

```python
v_stack = #YOUR CODE HERE
X_uncompressed = #YOUR CODE HERE
```

```python
#PLOTTING CODE - IGNORE!
origin = [0], [0], [0]
placeholder = np.array([[[0], [0], [0]]])
fig = plt.figure(figsize=(16, 10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X_uncompressed[:, 0], X_uncompressed[:, 1], X_uncompressed[:, 2], alpha=0.45)
ax.scatter(X[:, 0], X[:, 1], X[:, 2], alpha=0.15, color='red')
ax.set_xlim(-1.5, 1.5)
ax.set_ylim(-1.5, 1.5)
ax.set_zlim(-1.5, 1.5)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
print('RED = original X')
print('BLUE = your compressed and uncompressed X')
plt.show()
```
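To see the projection pattern from TASK 3.1 in isolation, here is a minimal sketch on made-up data. The `X` and `v` below are hypothetical stand-ins (random points and an arbitrary unit vector), not the exercise's actual dataset or answer:

```python
import numpy as np

# Stand-ins for the exercise's data: 100 random 3-D points
# and an arbitrary direction, normalised to unit length.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
v = np.array([1.0, 2.0, 2.0])
v = v / np.linalg.norm(v)   # v is now a unit vector

# Projecting every row of X onto v is a single matrix-vector
# product: each entry of X_c is the dot product of one
# datapoint with v.
X_c = X @ v

print(X_c.shape)   # (100,) - one coefficient per datapoint
```

Because `v` has length 1, the dot product `X[i] @ v` is exactly the (signed) length of datapoint `i` along `v`, with no extra normalisation needed.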
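And the `tile`-plus-piecewise-multiplication pattern from TASK 3.2 can be sketched the same way. Again, `X`, `v`, and `X_c` here are hypothetical stand-ins, and `X_c[:, None]` is one possible way to line the shapes up for the elementwise product:

```python
import numpy as np

# Hypothetical stand-ins: random data, a unit vector, and the
# compressed coefficients from projecting onto it.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
v = np.array([0.6, 0.0, 0.8])        # already unit length
X_c = X @ v                          # shape (100,)

# Stack 100 copies of v into a (100, 3) matrix with np.tile...
v_stack = np.tile(v, (100, 1))

# ...then scale each row by its coefficient. X_c[:, None] turns
# the (100,) coefficients into a (100, 1) column, so the
# piecewise product broadcasts across the 3 columns.
X_uncompressed = v_stack * X_c[:, None]

print(X_uncompressed.shape)          # (100, 3)
```

Note that `X_uncompressed` will not equal `X`: each reconstructed point lies exactly on the line through `v`, which is the lossy part of the compression.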