Believe it or not, I had to do this project for school. Unfortunately, I lost the backpropagation code, but I later developed an application (in Java) to handle Gram-Schmidt orthogonalization. It was one of my earlier works, so it's a little messy, but here's the logic. (I'm sure you'll have no trouble converting it to C++, but give a shout if something in my code is unclear.)
//orthogonalizes the row vectors of a matrix (classical Gram-Schmidt)
public double[][] gramSchmidt(double[][] ardat, int row, int column)
{
    double[][] V = new double[row][column];
    double[] proj = new double[column]; //running sum of projections

    for (int n = 0; n < row; n++) //loops over all row vectors
    {
        for (int i = 0; i < column; i++)
            proj[i] = 0; //resets the projection accumulator to 0

        //subtract the projection of ardat[n] onto each previous V[p]
        for (int p = 0; p < n; p++)
        {
            double num = 0; //ardat[n] . V(p)
            double den = 0; //V(p) . V(p)
            for (int i = 0; i < column; i++)
            {
                num += ardat[n][i] * V[p][i];
                den += V[p][i] * V[p][i];
            }
            //         X . V(p)
            //proj += ----------- * V(p)
            //        V(p) . V(p)
            if (den != 0) //skip zero vectors to avoid dividing by zero
                for (int i = 0; i < column; i++)
                    proj[i] += (num / den) * V[p][i];
        } //cuts current vector calculations

        for (int i = 0; i < column; i++)
            V[n][i] = ardat[n][i] - proj[i];
    } //cuts all vectors loop
    return V; //returns orthogonalized matrix
} //cuts method
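If you want to sanity-check it before porting, here's a minimal, self-contained harness (the class name and test data are my own, not from the original project) that orthogonalizes two 2-D rows and verifies they come out perpendicular:

```java
//Minimal sketch: orthogonalize two rows and check the dot product is ~0.
public class GramSchmidtDemo {

    //Same logic as the method above: subtract from each row its
    //projections onto every previously orthogonalized row.
    public static double[][] gramSchmidt(double[][] ardat, int row, int column) {
        double[][] V = new double[row][column];
        for (int n = 0; n < row; n++) {
            double[] proj = new double[column];
            for (int p = 0; p < n; p++) {
                double num = 0, den = 0;
                for (int i = 0; i < column; i++) {
                    num += ardat[n][i] * V[p][i]; //ardat[n] . V(p)
                    den += V[p][i] * V[p][i];     //V(p) . V(p)
                }
                if (den != 0) //skip zero vectors
                    for (int i = 0; i < column; i++)
                        proj[i] += (num / den) * V[p][i];
            }
            for (int i = 0; i < column; i++)
                V[n][i] = ardat[n][i] - proj[i];
        }
        return V;
    }

    public static void main(String[] args) {
        double[][] Q = gramSchmidt(new double[][] {{3, 1}, {2, 2}}, 2, 2);
        double dot = Q[0][0] * Q[1][0] + Q[0][1] * Q[1][1];
        //(3,1) and (2,2) should become (3,1) and roughly (-0.4, 1.2),
        //which are perpendicular up to floating-point rounding
        if (Math.abs(dot) > 1e-9)
            throw new AssertionError("rows are not orthogonal: " + dot);
        System.out.println("OK, dot product = " + dot);
    }
}
```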
Hopefully it didn't get messed up by my editing and indentation cleanup.
Anyway, I have a question for you now: I forget what you must do for backpropagation after orthogonalizing a matrix. I remember something about multiplying the matrix by a weight vector, but I was hoping you could list all of the steps to fill me in.
Thanks for helping!
I'm a beginner with neural networks myself. Here are some very useful links on backpropagation neural nets; you'll probably find what you're looking for in them.