Lecture 27. Positive Definite Matrices and Their Graphical Analysis

  1. How to check whether a matrix is positive definite.

All leading sub-determinants > 0 (for a 2×2 matrix: a > 0 and the full determinant > 0).

All eigenvalues > 0.

All pivots > 0: the first pivot is a, and the second pivot is the determinant divided by a (so the pivots multiply together to give the determinant).

x^T A x > 0 for every nonzero x.

The matrix is only positive semidefinite when x^T A x can equal 0 for some nonzero x.

2. 2D example first

Find the pivots and the determinant first.

Then validate that x^T A x > 0.
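All the checks above can be sketched in NumPy; the matrix here is a hypothetical symmetric 2×2 example chosen so that every test passes:

```python
import numpy as np

# A hypothetical symmetric 2x2 matrix, positive definite by every test.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Test 1: leading sub-determinants > 0.
minors = [np.linalg.det(A[:1, :1]), np.linalg.det(A)]
print(minors)                      # [2.0, 3.0...] -> both positive

# Test 2: all eigenvalues > 0 (eigvalsh is for symmetric matrices).
print(np.linalg.eigvalsh(A))       # [1. 3.] -> both positive

# Test 3: pivots. First pivot is a; second pivot is det(A) / a.
pivots = [A[0, 0], np.linalg.det(A) / A[0, 0]]
print(pivots)                      # [2.0, 1.5] -> both positive

# Test 4: x^T A x > 0 for a few random nonzero x.
rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 2))
print(all(x @ A @ x > 0 for x in xs))  # True
```

Note how the product of the pivots (2 × 1.5 = 3) equals the determinant, as the pivot test promised.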

Why Model-Free Reinforcement Learning Failed.

Model-Free Reinforcement Learning

Model-Free Reinforcement Learning (MFRL) became popular again after the combination of convolutional neural networks and the Q-learning algorithm. The idea behind MFRL is to use the Bellman equation to model the temporal relation between the current state and the future.

However, the data efficiency of MFRL is a huge concern. Two reasons cause this inefficiency. The first is that the Bellman equation tends to overestimate the Q values of the future; the agent becomes an optimistic, happy chap. In other words, the estimate of future Q values is inaccurate.
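A minimal sketch of that overestimation bias, using made-up numbers: if the true Q value of every next-state action is 0 but our estimates carry zero-mean noise, the max inside the Bellman target is still biased upward.

```python
import numpy as np

# Toy illustration: true Q value of every action in the next state is 0,
# but the estimates carry zero-mean Gaussian noise. The Q-learning target
# takes a max over actions, and E[max(noisy estimates)] > 0 = max(truth).
rng = np.random.default_rng(42)
n_actions, n_trials = 4, 10_000

noisy_q = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_actions))
bootstrap_target = noisy_q.max(axis=1)   # the max used in the Bellman target

print(bootstrap_target.mean())  # clearly positive, although the truth is 0
```

So even unbiased per-action estimates produce a biased bootstrap target once the max is applied.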

The second reason for inefficiency is that the signal provided by the…

The learning process is a statistical process, and inference is probabilistic. The training samples are drawn from unknown probability distributions Pr(X) and Pr(Y). The neural network gives out an estimate of the conditional distribution Pr(Y|X). Integrating out X gives Y: Pr(Y) = ∫ Pr(Y|X) Pr(X) dX.
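A discrete toy version of that marginalization, with hypothetical distributions (the sum replaces the integral):

```python
import numpy as np

# Hypothetical discrete distributions: Pr(X) over 3 states, and a
# conditional table Pr(Y|X) with 2 outcomes per state.
p_x = np.array([0.5, 0.3, 0.2])            # Pr(X)
p_y_given_x = np.array([[0.9, 0.1],        # Pr(Y|X=0)
                        [0.4, 0.6],        # Pr(Y|X=1)
                        [0.2, 0.8]])       # Pr(Y|X=2)

# Marginalize out X: Pr(Y) = sum_x Pr(Y|X=x) Pr(X=x)
p_y = p_x @ p_y_given_x
print(p_y)           # [0.61 0.39] -- still sums to 1
```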

When I started to crack machine learning, I read the book “Pattern Recognition and Machine Learning”. Compared with lots of blogs online, that book was particularly hard for me. One of the reasons is that it involves lots of probability. People like me, from a computer science background, can be scared off by all the mathematics.

So here I want…

Lecture 24: Markov and Fourier.

  1. Markov matrices:

All entries ≥ 0

All columns add to 1.

λ = 1 is always one of the eigenvalues. All eigenvalues satisfy |λ| ≤ 1, which guarantees a steady state.

2. Find the steady state of a Markov matrix by taking powers of the matrix. Similar to what happened before, we need to find the eigenvalues and eigenvectors.

In the last lecture, for a differential equation, the steady-state criterion was eigenvalue = 0 (since e^{0t} = 1 stays constant). This time we take powers of the matrix, so the steady state corresponds to eigenvalue 1, and the components for all other eigenvalues with |λ| < 1 die out.
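A sketch with a hypothetical 2×2 Markov matrix, finding the steady state both from the eigenvector for λ = 1 and from powers of the matrix:

```python
import numpy as np

# A hypothetical 2x2 Markov matrix: nonnegative entries, columns sum to 1.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Route 1: the eigenvector for eigenvalue 1 is the steady state.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmin(abs(eigvals - 1.0))               # pick the eigenvalue closest to 1
steady = eigvecs[:, k] / eigvecs[:, k].sum()    # normalize to a distribution
print(steady)                                   # [2/3, 1/3]

# Route 2: powers of the matrix reach the same state from any start.
u = np.array([1.0, 0.0])                        # an arbitrary starting distribution
print(np.linalg.matrix_power(A, 100) @ u)       # converges to [2/3, 1/3]
```

The other eigenvalue here is 0.7, so its component shrinks like 0.7^k and only the λ = 1 part survives.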

Lecture 21. Eigenvalues and Eigenvectors.

  1. What does a matrix do? It multiplies a vector and gives out another vector. It is like a function.
  2. The eigenvectors are the vectors that go through the matrix but still keep the same direction: A*x is parallel to x.
  3. Ax = λx; λ can be zero or negative. Ax = 0 means x is in the nullspace. x is the eigenvector, λ is the eigenvalue.
  4. If A is singular, then λ = 0 is one of its eigenvalues.
  5. For Ax = λx, elimination doesn’t work here: there are two unknowns, x and λ.
  6. From the projection perspective, if a vector x is projected onto some plane, and the direction of that vector…
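Points 2–4 can be checked numerically; the matrix below is a hypothetical example:

```python
import numpy as np

# A hypothetical matrix; its eigenvectors keep their direction under A.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                           # 3 and 2 (order may vary)

for lam, x in zip(eigvals, eigvecs.T):
    # A*x lands on the same line as x, just scaled by lambda.
    print(np.allclose(A @ x, lam * x))   # True for every pair
```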

Lecture 18. Determinant.

  1. Det(A), also written |A|; the determinant carries a sign, + or -.
  2. Invertibility: Det(A) ≠ 0 means A is invertible.
  3. Sometimes the determinant feels like an area (its absolute value scales areas and volumes).
  4. First property: Det(I) = 1.
  5. Second property: exchanging rows reverses the sign of the determinant.
  6. A permutation matrix is a row exchange of I, so the determinant of a permutation matrix is ±1.
  7. How to compute in 2D: det([[a, b], [c, d]]) = ad - bc.

8. Third property: the determinant is linear in each row separately (with the other rows held fixed). Note that Det(A + B) = Det(A) + Det(B) does not hold in general.
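The 2×2 formula, the row-exchange rule, and the det(A + B) warning can all be verified on hypothetical matrices:

```python
import numpy as np

# 2x2 formula: det([[a, b], [c, d]]) = a*d - b*c (hypothetical numbers).
A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
print(A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0])    # 2.0, matches np.linalg.det(A)

# Second property: exchanging the two rows flips the sign.
print(np.linalg.det(A[::-1]))                   # -2.0

# Counterexample to det(A + B) = det(A) + det(B):
B = np.eye(2)
print(np.linalg.det(A + B))                     # 8.0 ...
print(np.linalg.det(A) + np.linalg.det(B))      # ... but this is 3.0
```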

Lecture 16. Projection Matrices and Least Squares

  1. b in the column space: projecting something already in the column space leaves it unchanged.

b perpendicular to the column space: b is projected down to a single point, the zero vector.

N(A^T) is perpendicular to the column space.

Pb = A(A^T A)^-1 A^T b. For any y in N(A^T), A^T y = 0.

b in column space: Ax = b

Pb = A(A^T A)^-1 A^T * Ax = Ax = b

2. e = b - Pb is the projection of b onto N(A^T).
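A sketch of the projection formula on a hypothetical 3×2 matrix A (its column space is a plane in R^3) and a b outside that column space:

```python
import numpy as np

# Hypothetical tall matrix A and a vector b outside its column space.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Projection matrix P = A (A^T A)^{-1} A^T
P = A @ np.linalg.inv(A.T @ A) @ A.T
p = P @ b                     # the part of b inside C(A)
e = b - p                     # the error, which lives in N(A^T)

print(p)                      # [ 5.  2. -1.]
print(A.T @ e)                # ~[0, 0]: e is perpendicular to the columns
print(np.allclose(P @ p, p))  # True: projecting twice changes nothing
```

The last line is the b-in-column-space case above: p is already in C(A), so Pp = p.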

Lecture 12. An Application in Physics: Representing Graphs with Matrices.

0. An Application for Chemistry. Equation -> Matrix.

  1. 4 Nodes, 5 Edges.

Overall Graph:
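The lecture's actual graph didn't survive here, so this sketch builds the incidence matrix of a hypothetical 4-node, 5-edge graph instead:

```python
import numpy as np

# Incidence matrix of a hypothetical directed graph with 4 nodes, 5 edges.
# Each row is one edge: -1 where the edge leaves a node, +1 where it enters.
# Edges: 1->2, 2->3, 1->3, 1->4, 3->4
A = np.array([[-1,  1,  0,  0],
              [ 0, -1,  1,  0],
              [-1,  0,  1,  0],
              [-1,  0,  0,  1],
              [ 0,  0, -1,  1]])

print(A.shape)                       # (5, 4): 5 edges, 4 nodes
print(np.linalg.matrix_rank(A))      # 3 = nodes - 1 for a connected graph

# The all-ones vector is in the nullspace: raising every node's potential
# by the same amount changes no potential difference along any edge.
print(A @ np.ones(4))                # [0. 0. 0. 0. 0.]
```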

Lecture 9. Independence, Basis, and Dimension with Nullspace.

  1. A with m rows and n columns, m << n: more unknown variables than equations, so there are non-zero solutions to Ax = 0, because there are free columns and free variables.
  2. Independent vectors: the combination c1*x1 + c2*x2 + … equals 0 for no choice of the c’s, except when all c = 0.

In other words, with independent vectors the only way a combination lands back at the origin is for every vector to contribute nothing; as soon as any vector contributes, the endpoint misses the origin. …
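A quick numerical way to test independence, using hypothetical vectors: stack them as columns and compare the rank to the number of columns.

```python
import numpy as np

# Independence test: full column rank <=> only c = 0 solves
# c1*x1 + c2*x2 + ... = 0.
x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = x1 + x2                      # deliberately dependent on x1 and x2

A = np.column_stack([x1, x2])
print(np.linalg.matrix_rank(A) == A.shape[1])   # True: independent

B = np.column_stack([x1, x2, x3])
print(np.linalg.matrix_rank(B) == B.shape[1])   # False: dependent
```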

Lecture 7. Ax = 0 and the Nullspace

  1. Run elimination on A. Rank = number of pivots in the matrix.

2. Pivot columns and free columns. Free columns don’t create a new dimension; they can be represented by the pivot columns. So the free variables (here x2 and x4) can be assigned any number.

3. For each free column’s corresponding x, set that free variable to 1 and the others to 0 to generate a special solution. The solution space is the set of all linear combinations of these special solutions.

Number of columns - number of pivots = number of free columns.

Solution = a * special solution 1 + b * special solution 2 + …
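A sketch with a hypothetical rank-2 matrix whose free variables happen to be x2 and x4, matching the note above:

```python
import numpy as np

# Hypothetical matrix: after elimination it has pivots in columns 1 and 3,
# so x2 and x4 are the free variables.
A = np.array([[1, 2, 2, 2],
              [2, 4, 6, 8],
              [3, 6, 8, 10]])

# Special solutions, read off from the reduced row echelon form of A by
# setting one free variable to 1 and the other to 0:
s1 = np.array([-2, 1, 0, 0])      # x2 = 1, x4 = 0
s2 = np.array([2, 0, -2, 1])      # x2 = 0, x4 = 1

print(A @ s1)                     # [0 0 0]
print(A @ s2)                     # [0 0 0]

# Any combination a*s1 + b*s2 also solves Ax = 0:
print(A @ (3 * s1 - 5 * s2))      # [0 0 0]
```

Here n = 4 columns minus 2 pivots leaves 2 free columns, hence exactly 2 special solutions spanning the nullspace.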


Ting Qiao
