Writing your own linear algebra matrix library in C



Disclaimer

This article is in an early stage. If you see any error in the code or in the mathematical formulas, don’t hesitate to contact me or to create a PR directly. The article source is on github, link here.

Math, math

Linear algebra is the branch of mathematics focused on:

  • linear equations: $a_1 x_1 + \dots + a_n x_n = b$
  • linear maps: $(x_1, \dots, x_n) \mapsto a_1 x_1 + \dots + a_n x_n$
    • and their representation in vector spaces through matrices;

A finite set of linear equations with a finite set of variables is called a system of linear equations or a linear system.

Linear systems are a fundamental part of linear algebra. Historically speaking, the math behind matrix theory has been developed for solving such systems. In the modern context, many real-world problems may be interpreted in terms of matrices and linear systems.

To see the connection between matrices and linear systems of equations, let’s look at the following example:

$$\begin{cases} 2x_1 + x_2 + 3x_3 = 1 \\ 2x_1 + 6x_2 + 8x_3 = 3 \\ 6x_1 + 8x_2 + 18x_3 = 5 \end{cases}$$

For this linear system composed of 3 linear equations we can associate the matrix A[3x3]:

$$A = \begin{bmatrix} 2 & 1 & 3 \\ 2 & 6 & 8 \\ 6 & 8 & 18 \end{bmatrix}$$

And the column vector of constants B[3x1]:

$$B = \begin{bmatrix} 1 \\ 3 \\ 5 \end{bmatrix}$$

We can now define our system in terms of matrices as A∗x=B, or:

$$\begin{bmatrix} 2 & 1 & 3 \\ 2 & 6 & 8 \\ 6 & 8 & 18 \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ 5 \end{bmatrix}$$

Or, we can use the augmented matrix notation, S:

$$S = \left(\begin{array}{ccc|c} 2 & 1 & 3 & 1 \\ 2 & 6 & 8 & 3 \\ 6 & 8 & 18 & 5 \end{array}\right)$$

The interesting aspect is that by performing simple row operations (swapping rows, adding rows, multiplying rows with scalars) on S, we can transform the system into an equivalent form.

For example, after applying the following row operations on S:

  • $-1 \cdot Row_1 + Row_2$
  • $-3 \cdot Row_1 + Row_3$
  • $-1 \cdot Row_2 + Row_3$
  • $\frac{1}{4} \cdot Row_3$
  • $\frac{1}{2} \cdot Row_1$
  • $\frac{1}{5} \cdot Row_2$

S becomes:

$$S = \left(\begin{array}{ccc|c} 1 & \frac{1}{2} & \frac{3}{2} & \frac{1}{2} \\ 0 & 1 & 1 & \frac{2}{5} \\ 0 & 0 & 1 & 0 \end{array}\right)$$

Notice how our S matrix became upper triangular (all elements under the first diagonal are 0). The more advanced algorithms we will implement in our C library are all about creating equivalent matrices that are upper or lower triangular.

After all the basic row transformations on the S matrix are performed, our initial linear system “morphs” into an equivalent system that is trivial to solve by means of substitution:

$$\begin{cases} x_1 + \frac{1}{2}x_2 + \frac{3}{2}x_3 = \frac{1}{2} \\ x_2 + x_3 = \frac{2}{5} \\ x_3 = 0 \end{cases}$$

Because the matrix is upper triangular, we can instantly determine the value of $x_3 = 0$; then we substitute $x_3$ in the second equation to find $x_2 = \frac{2}{5}$, and so on.

From a computational perspective, being able to reduce a system to an equivalent upper or lower triangular form is essential for solving it efficiently.

The library

Now, let’s say you are one of the few Engineering/Computer Science students that are passionate about linear algebra, numerical analysis and writing code in a low-level language. Or you are just curious about what’s behind the lu(A) method in Matlab. Or you are passionate about A.I., and you know you cannot learn A.I. algorithms without a good foundation in linear algebra.

I believe the best exercise you can do is to try to write your (own) Matrix library in a low-level programming language (like C, C++ or even D).

This tutorial is precisely that: a step-by-step explanation of writing a C Matrix library that implements the “basic” and “not-so-basic” numerical analysis algorithms that will, in the end, allow us to solve linear systems of equations.

All code in this tutorial is available on GitHub in the repository called neat-matrix-library.

To clone it (using GitHub CLI):

gh repo clone nomemory/neat-matrix-library

The code is not meant to be “efficient”, but relatively easy to follow and understand.

The tutorial assumes that the reader can write C code, understand pointers and dynamic memory allocation, and is familiar with the C standard library.

The data: nml_matrixPermalink

The first step is to model the data our library will work with: “matrices”. So we are going to define our first struct, called nml_mat which models a matrix:

typedef struct nml_mat_s {
  unsigned int num_rows;
  unsigned int num_cols;
  double **data;
  int is_square;
} nml_mat;

The properties of this struct have self-explanatory names:

  • unsigned int num_rows - represents the number of rows of the matrix. 0 is not an acceptable value;
  • unsigned int num_cols - represents the number of columns of the matrix. 0 is not an acceptable value;
  • double **data - is a “multi-dimensional array” that stores the data in rows and columns;
  • int is_square - is a "boolean" value that determines if the matrix is square (has the same number of rows and columns) or not.

From a performance perspective, it’s better to keep the matrix elements in a single double* block, using the index conversion:

data[i][j] <=> array[i * num_cols + j]

To better understand how to store multi-dimensional arrays in linear storage please refer to this StackOverflow question, or read the wikipedia article on the topic.

Even if it might sound like a “controversial” decision, for the sake of simplicity, we will use the double ** multi-dimensional storage.
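
For reference, a flat-storage layout might look like the sketch below. The struct name and accessor macro are hypothetical (they are not part of nml); they only illustrate the index conversion:

// Hypothetical flat-storage matrix - NOT the layout used by nml_mat
typedef struct flat_mat_s {
  unsigned int num_rows;
  unsigned int num_cols;
  double *data; // one contiguous block of num_rows * num_cols doubles
} flat_mat;

// data[i][j] <=> data[i * num_cols + j]
#define FLAT_AT(m, i, j) ((m)->data[(i) * (m)->num_cols + (j)])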

Allocating / De-allocating memory for the nml_mat matrix

Unlike “higher-level” programming languages (Java, Python, etc.), that manage memory allocation for you, in C, you need to explicitly ask for memory and explicitly free the memory once you no longer need it.

In this regard, the next step is to create “constructor-like” and “destructor-like” functions for the nml_mat struct defined above. There’s an unwritten rule that says: “Every malloc() has its personal free()”.

// Constructor-like 
// Allocates memory for a new matrix
// All elements in the matrix are 0.0
nml_mat *nml_mat_new(unsigned int num_rows, unsigned int num_cols);

// Destructor-like
// De-allocates the memory for the matrix
void nml_mat_free(nml_mat *matrix);

Implementing the nml_mat_new() is quite straightforward:

nml_mat *nml_mat_new(unsigned int num_rows, unsigned int num_cols) {
  if (num_rows == 0) {
    NML_ERROR(INVALID_ROWS);
    return NULL;
  }
  if (num_cols == 0) {
    NML_ERROR(INVALID_COLS);
    return NULL;
  }
  nml_mat *m = calloc(1, sizeof(*m));
  NP_CHECK(m);
  m->num_rows = num_rows;
  m->num_cols = num_cols;
  m->is_square = (num_rows == num_cols) ? 1 : 0;
  m->data = calloc(m->num_rows, sizeof(*m->data));
  NP_CHECK(m->data);
  int i;
  for(i = 0; i < m->num_rows; ++i) {
    m->data[i] = calloc(m->num_cols, sizeof(**m->data));
    NP_CHECK(m->data[i]);
  }
  return m;
}

Notes:

  • NML_ERROR, NP_CHECK are macros defined in nml_util.h (a minimal sketch of what they might look like follows right after this list);
  • NML_ERROR() or NML_FERROR() are logging utils that help us print error messages on stderr;
  • NP_CHECK checks if the newly allocated memory chunk is not NULL. In case it’s NULL it aborts the program.
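
For illustration only, minimal versions of these macros could look like the sketch below. These are assumptions, not the actual nml_util.h definitions (there, error names such as INVALID_ROWS are printf-style format strings defined separately):

#include <stdio.h>
#include <stdlib.h>

// Prints an error message (a format string) on stderr
#define NML_ERROR(msg) fprintf(stderr, "[ERROR] %s\n", (msg))
#define NML_FERROR(fmt, ...) fprintf(stderr, "[ERROR] " fmt "\n", __VA_ARGS__)

// Aborts the program if the newly allocated memory chunk is NULL
#define NP_CHECK(ptr) \
  do { \
    if (NULL == (ptr)) { \
      fprintf(stderr, "[FATAL] %s is NULL (%s:%d)\n", #ptr, __FILE__, __LINE__); \
      abort(); \
    } \
  } while(0)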

Explanation:

  1. First step is to check if num_rows == 0 or num_cols == 0. If they are, we consider the input as invalid, and we print on stderr an error. Afterwards NULL is returned;
  2. This line: nml_mat *m = calloc(1, sizeof(*m)) allocates memory for 1 (one) nml_mat structure;
  3. For a multi-dimensional array (double**), we allocate memory in two steps:
    • m->data = calloc(m->num_rows, sizeof(*m->data)) - this allocates memory for the array of row pointers;
    • Then, we allocate memory for each row. By using calloc() the data is initialized with 0.0.

Freeing the matrix is even more straightforward. The implementation for nml_mat_free():

void nml_mat_free(nml_mat *matrix) {
  int i;
  for(i = 0; i < matrix->num_rows; ++i) {
    free(matrix->data[i]);
  }
  free(matrix->data);
  free(matrix);
}

It’s important to note, that given the multidimensional nature of double** data, we need to:

  • de-allocate each row individually: free(matrix->data[i]);
  • then the array of row pointers: free(matrix->data);
  • and lastly the actual struct: free(matrix).

At this point, it’s a good idea to add more methods to help the potential user of the library to create various nml_mat structs, with various properties.

  • nml_mat_rnd() - A method to create a random matrix;
  • nml_mat_sqr() - A method to create a square matrix with elements 0.0;
  • nml_mat_eye() - A method to create an identity matrix;
  • nml_mat_cp() - A method to copy the content of a matrix into another matrix;
  • nml_mat_fromfile() - A method to read the matrix from a FILE.

Creating a random matrix

Writing a method like nml_mat_rnd() is easy, once we have nml_mat_new() in place:

nml_mat *nml_mat_rnd(unsigned int num_rows, unsigned int num_cols, double min, double max) {
  nml_mat *r = nml_mat_new(num_rows, num_cols);
  int i, j;
  for(i = 0; i < num_rows; i++) {
    for(j = 0; j < num_cols; j++) {
      r->data[i][j] = nml_rand_interval(min, max);
    }
  }
  return r;
}

The input params min and max represent the interval boundaries in which the random numbers are being generated.

nml_rand_interval(min, max), the method responsible for generating a random value in the given interval, looks like this:

// RAND_MAX is already defined in <stdlib.h>
double nml_rand_interval(double min, double max) {
  double d;
  d = (double) rand() / ((double) RAND_MAX + 1);
  return (min + d * (max - min));
}
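
Keep in mind that rand() should be seeded once (e.g. with srand()) at program start-up, otherwise every run will generate the same “random” matrix. A small usage sketch (the seeding call is our addition, the library doesn’t do it for you):

srand(time(NULL)); // seed the PRNG once, at program start-up (needs <time.h>)
nml_mat *m = nml_mat_rnd(3, 4, -10.0, 10.0); // 3x4 matrix with elements in [-10.0, 10.0)
// ... use the matrix ...
nml_mat_free(m);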

Creating a square matrix

A square matrix has the same number of columns and rows.

For example, A is square 3x3 matrix:

$$A = \begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 3.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}$$

B is not a square matrix:

$$B = \begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 3.0 \end{bmatrix}$$

Implementing this is as simple as calling the existing nml_mat_new() function with rows=cols:

nml_mat *nml_mat_sqr(unsigned int size) {
  return nml_mat_new(size, size);
}

Similarly, you can write a method that generates a random square matrix:

nml_mat *nml_mat_sqr_rnd(unsigned int size, double min, double max) {
  return nml_mat_rnd(size, size, min, max);
} 

Creating an identity matrix

An identity matrix is a square (NxN) matrix that has 1.0 on the first diagonal, and 0.0 elsewhere:

$$I_n = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix} \quad (n \text{ rows, } n \text{ columns})$$

A matrix multiplied with its inverse is equal to the identity matrix: $A^{-1} \cdot A = A \cdot A^{-1} = I$

From a programming perspective, the first diagonal represents the series of matrix elements for which the indexes i and j are equal (i==j).

i represents the row index, while j represents the column index.

Having said this, our method looks like this:

nml_mat *nml_mat_eye(unsigned int size) {
  nml_mat *r = nml_mat_new(size, size);
  int i;
  for(i = 0; i < r->num_rows; i++) {
    r->data[i][i] = 1.0;
  }
  return r;
}

To find out the reasons why the identity method is named eye() please read this math exchange post.

Creating a matrix from a FILE

Instead of having to write something like the code below to set the elements of the matrix:

nml_mat *m = ...
m->data[0][0] = 1.0;
m->data[1][0] = 2.0;
m->data[2][0] = 4.0;
// etc. 

It’s more convenient to allow the user of our library to read the matrix elements from an input text file.

The input file should be formatted in a certain way, e.g.:

matrix01.data

4 5
0.0     1.0     2.0     5.0     3.0
3.0     8.0     9.0     1.0     4.0
2.0     3.0     7.0     1.0     1.0
0.0     0.0     4.0     3.0     8.0
  • The first row of the file 4 5 represents the numbers of rows (=4) and columns (=5);
  • The rest of the rows are the elements (20) of the matrix.

The C code that is able to read this file is:

nml_mat *nml_mat_fromfilef(FILE *f) {
  int i, j;
  unsigned int num_rows = 0, num_cols = 0;
  fscanf(f, "%d", &num_rows);
  fscanf(f, "%d", &num_cols);
  nml_mat *r = nml_mat_new(num_rows, num_cols);
  for(i = 0; i < r->num_rows; i++) {
    for(j = 0; j < num_cols; j++) {
      fscanf(f, "%lf\t", &r->data[i][j]);
    }
  }
  return r;
} 

Where:

  • fscanf(f, "%u", &num_rows) and fscanf(f, "%u", &num_cols) read the first line;
  • The fscanf(f, "%lf\t", &r->data[i][j]) line inside the for loops reads the remaining elements of the matrix.

This method can also be used to read user input from the keyboard, by calling the method like this:

nml_mat_fromfilef(stdin);
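
For example, assuming the matrix above is saved in a file called matrix01.data, reading it could look like the sketch below (error handling kept to a minimum):

FILE *f = fopen("matrix01.data", "r");
if (NULL == f) {
  // Could not open the file
  return 1;
}
nml_mat *m = nml_mat_fromfilef(f);
fclose(f);
nml_mat_print(m); // defined a few sections below
nml_mat_free(m);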

Matrix methods

Checking for equality

It would be nice for the users of our library to be able to compare two matrices and see if they are equal, meaning they have the same number of rows and columns and identical elements.

In practice, it’s good to be able to check if two matrices are “almost equal”, meaning that their elements are “almost equal” within an acceptable tolerance.

Writing a method like this is trivial. We basically have to iterate over all the elements and check for their equality:

// Checks if two matrices have the same dimensions
int nml_mat_eqdim(nml_mat *m1, nml_mat *m2) {
  return (m1->num_cols == m2->num_cols) &&
          (m1->num_rows == m2->num_rows);
}

// Checks if two matrices have the same dimensions, and the elements
// are all equal to each other with a given tolerance;
// For exact equality use tolerance = 0.0
int nml_mat_eq(nml_mat *m1, nml_mat *m2, double tolerance) {
  if (!nml_mat_eqdim(m1, m2)) {
    return 0;
  }
  int i, j;
  for(i = 0; i < m1->num_rows; i++) {
    for(j = 0; j < m1->num_cols; j++) {
      if (fabs(m1->data[i][j] - m2->data[i][j]) > tolerance) {
        return 0;
      }
    }
  }
  return 1;
}

fabs(x) returns the absolute value of x: |x|.

Printing the matrix

Printing the matrix is trivial.

We just need to iterate through all the elements and use fprintf() to show the matrix on stdout:

void nml_mat_print(nml_mat *matrix) {
  nml_mat_printf(matrix, "%lf\t\t");
}

// Prints the matrix on the stdout (with a custom formatting for elements)
void nml_mat_printf(nml_mat *matrix, const char *d_fmt) {
  int i, j;
  fprintf(stdout, "\n");
  for(i = 0; i < matrix->num_rows; ++i) {
    for(j = 0; j < matrix->num_cols; ++j) {
      fprintf(stdout, d_fmt, matrix->data[i][j]);
    }
    fprintf(stdout, "\n");
  }
  fprintf(stdout, "\n");
} 

Accessing and modifying matrix elements

Retrieving / Selecting a column

Some advanced numerical analysis algorithms (e.g.: QR decomposition) work extensively on columns, so it’s a good idea to be proactive and create a method that selects/retrieves a column from a given matrix.

We will define this method as:

nml_mat *nml_mat_col_get(nml_mat *m, unsigned int col)

From a mathematical perspective, calling our method on a given matrix retrieves a column matrix:

$$\text{nml\_mat\_col\_get}\left(\begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 3.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}, 1\right) = \begin{bmatrix} 2.0 \\ 2.0 \\ 1.0 \end{bmatrix}$$

The C code for nml_mat_col_get looks like this:

nml_mat *nml_mat_col_get(nml_mat *m, unsigned int col) {
  if (col >= m->num_cols) {
    NML_FERROR(CANNOT_GET_COLUMN, col, m->num_cols);
    return NULL;
  }
  nml_mat *r = nml_mat_new(m->num_rows, 1);
  int j;
  for(j = 0; j < r->num_rows; j++) {
    r->data[j][0] = m->data[j][col];
  }
  return r;
} 

Observations:

  • The resulting matrix has only one column: nml_mat *r = nml_mat_new(m->num_rows, 1);
  • We copy all elements from column [col] to the only column of the resulting matrix [0]: r->data[j][0] = m->data[j][col].

Retrieving / Selecting a row

To keep the API “consistent” we can write a similar method for retrieving a row:

nml_mat *nml_mat_row_get(nml_mat *m, unsigned int row)

This will work similarly to nml_mat_col_get(...), but instead of retrieving a column we will retrieve a row:

$$\text{nml\_mat\_row\_get}\left(\begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 3.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}, 1\right) = \begin{bmatrix} 0.0 & 2.0 & 3.0 \end{bmatrix}$$

The C implementation for this method looks like this:

nml_mat *nml_mat_row_get(nml_mat *m, unsigned int row) {
  if (row >= m->num_rows) {
    NML_FERROR(CANNOT_GET_ROW, row, m->num_rows);
    return NULL;
  }
  nml_mat *r = nml_mat_new(1, m->num_cols);
  memcpy(r->data[0], m->data[row], m->num_cols * sizeof(*r->data[0]));
  return r;
} 

This time we write the code differently. Given the fact that the memory of a row is contiguous, we can make use of memcpy().

No loops are needed this time. This line of code is enough to achieve what we want:

memcpy(r->data[0], m->data[row], m->num_cols * sizeof(*r->data[0]));

At this point we’ve created a new row matrix from the initial one (m).

Setting values

To set an element of the matrix to a given value, we can simply access the data field of the nml_mat* struct:

nml_mat *m = ...
m->data[i][j] = 2.0; 

In addition, we can write helper methods to:

  • Set all the elements to a given value: void nml_mat_all_set(nml_mat *matrix, double value)
  • Set all the elements of the first diagonal to a given value: int nml_mat_diag_set(nml_mat *m, double value)

The C code for those two is somewhat trivial:

// Sets all elements of a matrix to a given value
void nml_mat_all_set(nml_mat *matrix, double value) {
  int i, j;
  for(i = 0; i < matrix->num_rows; i++) {
    for(j = 0; j < matrix->num_cols; j++) {
      matrix->data[i][j] = value;
    }
  }
}

// Sets all elements of the first diagonal to a given value
int nml_mat_diag_set(nml_mat *m, double value) {
  if (!m->is_square) {
    NML_FERROR(CANNOT_SET_DIAG, value);
    return 0;
  }
  int i;
  for(i = 0; i < m->num_rows; i++) {
    m->data[i][i] = value;
  }
  return 1;
} 

Multiplying a row with a scalar

Multiplying a row of the matrix (nml_mat) with a scalar can be useful when implementing more advanced numerical analysis algorithms.

The idea is simple, we will define a method with the following signature:

int nml_mat_row_mult_r(nml_mat *m, unsigned int row, double num); 

This method will work directly through reference on the matrix m. That’s what the _r suffix stands for.

From a mathematical perspective this method will do the following:

$$\text{nml\_mat\_row\_mult\_r}\left(\begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 3.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}, 1, 2.0\right) = \begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 4.0 & 6.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}$$

The code implementation looks like this:

int nml_mat_row_mult_r(nml_mat *m, unsigned int row, double num) {
  if (row>= m->num_rows) {
    NML_FERROR(CANNOT_ROW_MULTIPLY, row, m->num_rows);
    return 0;
  }
  int i;
  for(i=0; i < m->num_cols; i++) {
    m->data[row][i] *= num;
  }
  return 1;
}

Notice how we select the row: m->data[row][i] *= num, where i = 0 .. m->num_cols.

An alternative method that, instead of modifying m through reference, returns a new nml_mat *r, can be written like this:

nml_mat *nml_mat_row_mult(nml_mat *m, unsigned int row, double num) {
  nml_mat *r = nml_mat_cp(m);
  if (!nml_mat_row_mult_r(r, row, num)) {
    nml_mat_free(r);
    return NULL;
  }
  return r;
} 

Notice how the _r ending has been dropped. This is a common pattern in C.

Multiplying a column with a scalar

Multiplying a column is quite similar to what we described above.

We are going to end up with two methods:

  • int nml_mat_col_mult_r(nml_mat *m, unsigned int col, double num)
    • This will modify the matrix m through reference;
  • nml_mat *nml_mat_col_mult(nml_mat *m, unsigned int col, double num)
    • This will return a new nml_mat *r matrix

From a math perspective:

$$\text{nml\_mat\_col\_mult\_r}\left(\begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 3.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}, 0, 2.0\right) = \begin{bmatrix} 2.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 3.0 \\ 4.0 & 1.0 & 9.0 \end{bmatrix}$$

The C code for both of the methods looks like this:

nml_mat *nml_mat_col_mult(nml_mat *m, unsigned int col, double num) {
  nml_mat *r = nml_mat_cp(m);
  if (!nml_mat_col_mult_r(r, col, num)) {
    nml_mat_free(r);
    return NULL;
  }
  return r;
}

int nml_mat_col_mult_r(nml_mat *m, unsigned int col, double num) {
  if (col>=m->num_cols) {
    NML_FERROR(CANNOT_COL_MULTIPLY, col, m->num_cols);
    return 0;
  }
  int i;
  for(i = 0; i < m->num_rows; i++) {
    m->data[i][col] *= num;
  }
  return 1;
} 

Notice how we select the column: m->data[i][col] *= num , where i = 0 .. m->num_rows.

Adding two rows

The ability to add one row to another is an important operation, used later in the implementation of more advanced algorithms: LUP Decomposition, Row Echelon Form, Reduced Row Echelon Form, etc.

In addition, before adding one row to another we should also offer the possibility to multiply the row with a given scalar.

We define the following method(s):

// We add all elements from row 'row' to row 'where'. 
// The elements from row 'row' are multiplied using the 'multiplier'
//
// This one works through reference, modifying the `m` matrix; 
int nml_mat_row_addrow_r(nml_mat *m, unsigned int where, 
unsigned int row, double multiplier);

// We add all elements from row 'row' to row 'where'. 
// The elements from row 'row' are multiplied using the 'multiplier'
//
// This one returns a new matrix, `nml_mat *r`, after the row addition is performed
nml_mat *nml_mat_row_addrow(nml_mat *m, unsigned int where, unsigned int row, 
double multiplier);

To better visualise:

$$\text{nml\_mat\_row\_addrow\_r}\left(\begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 4.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}, 0, 1, 0.5\right) = \begin{bmatrix} 1.0 + 0.0 \cdot 0.5 & 2.0 + 2.0 \cdot 0.5 & 3.0 + 4.0 \cdot 0.5 \\ 0.0 & 2.0 & 4.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix} = \begin{bmatrix} 1.0 & 3.0 & 5.0 \\ 0.0 & 2.0 & 4.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}$$

The corresponding C code for the two methods is:

nml_mat *nml_mat_row_addrow(nml_mat *m, unsigned int where, 
unsigned int row, double multiplier) {
  nml_mat *r = nml_mat_cp(m);
  if (!nml_mat_row_addrow_r(r, where, row, multiplier)) {
    nml_mat_free(r);
    return NULL;
  }
  return r;
}

int nml_mat_row_addrow_r(nml_mat *m, unsigned int where, 
unsigned int row, double multiplier) {

  if (where >= m->num_rows || row >= m->num_rows) {
    NML_FERROR(CANNOT_ADD_TO_ROW, multiplier, row, where, m->num_rows);
    return 0;
  }
  int i = 0;
  for(i = 0; i < m->num_cols; i++) {
    m->data[where][i] += multiplier * m->data[row][i];
  }
  return 1;
} 

Notice how we are selecting the rows: m->data[where][i] += multiplier * m->data[row][i], where i = 0 .. m->num_cols.

In case it’s not obvious, if the user simply wants to add two rows, without any multiplication, the multiplier should be kept as 1.0.

Multiplying the matrix with a scalar

The mathematical formula for multiplying the matrix with a scalar is simple:

$$s \cdot \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \dots & \dots & \dots & \dots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix} = \begin{bmatrix} s \cdot a_{11} & s \cdot a_{12} & \dots & s \cdot a_{1n} \\ s \cdot a_{21} & s \cdot a_{22} & \dots & s \cdot a_{2n} \\ \dots & \dots & \dots & \dots \\ s \cdot a_{m1} & s \cdot a_{m2} & \dots & s \cdot a_{mn} \end{bmatrix}$$

So, just like the formula, the code equivalent is simple:

nml_mat *nml_mat_smult(nml_mat *m, double num) {
  nml_mat *r = nml_mat_cp(m);
  nml_mat_smult_r(r, num);
  return r;
}

int nml_mat_smult_r(nml_mat *m, double num) {
  int i, j;
  for(i = 0; i < m->num_rows; i++) {
    for(j = 0; j < m->num_cols; j++) {
      m->data[i][j] *= num;
    }
  }
  return 1;
} 

For each element m->data[i][j] we perform the multiplication with the scalar num: m->data[i][j] *= num where i = 0 .. num_rows and j = 0 .. num_cols.

Modifying the nml_mat internal structure

The next set of functionalities we can write to help our ~potential~ library users to modify the nml_mat matrix structure are:

  • Remove a column or a row and return a new matrix;
  • Swap rows inside a given matrix;
  • Swap columns inside a given matrix;
  • Concatenate two matrices vertically or horizontally.

Removing a column

Removing a column from an M[n x m] matrix involves the creation of a new [n x (m-1)] matrix.

The method signature looks like this:

nml_mat *nml_mat_col_rem(nml_mat *m, unsigned int column);

For this particular use-case it would be overkill to try to create a “by-reference” (_r) version of the method.

Calling the nml_mat_col_rem on a matrix yields the following results:

$$\text{nml\_mat\_col\_rem}\left(\begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 4.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}, 1\right) = \begin{bmatrix} 1.0 & 3.0 \\ 0.0 & 4.0 \\ 2.0 & 9.0 \end{bmatrix}$$

The code implementation:

nml_mat *nml_mat_col_rem(nml_mat *m, unsigned int column) {
  if(column >= m->num_cols) {
    NML_FERROR(CANNOT_REMOVE_COLUMN, column, m->num_cols);
    return NULL;
  }
  nml_mat *r = nml_mat_new(m->num_rows, m->num_cols-1);
  int i, j, k;
  for(i = 0; i < m->num_rows; i++) {
    for(j = 0, k=0; j < m->num_cols; j++) {
      if (column!=j) {
        r->data[i][k++] = m->data[i][j];
      }
    }
  }
  return r;
}

Observations:

  • The resulting r matrix has the number of columns m->num_cols-1;
  • We keep a separate column index for the r matrix that we name k;
  • When copying the elements from m to r we skip column column by adding the condition (column!=j):
    • Then we increment k, using k++ inside the r->data[i][k++] statement;
    • From this moment onwards j - k == 1, meaning k and j are no longer in sync, because we’ve skipped the column.

Removing a row

Removing a row from an M[n x m] matrix involves the creation of a new [(n-1) x m] matrix.

The method signature looks like this:

nml_mat *nml_mat_row_rem(nml_mat *m, unsigned int row);

Calling the nml_mat_row_rem on a matrix yields the following results:

$$\text{nml\_mat\_row\_rem}\left(\begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 4.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}, 1\right) = \begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}$$

The code implementation:

nml_mat *nml_mat_row_rem(nml_mat *m, unsigned int row) {
  if (row >= m->num_rows) {
    NML_FERROR(CANNOT_REMOVE_ROW, row, m->num_rows);
    return NULL;
  }
  nml_mat *r = nml_mat_new(m->num_rows-1, m->num_cols);
  int i, j, k;
  for(i = 0, k = 0; i < m->num_rows; i++) {
    if (row!=i) {
      for(j = 0; j < m->num_cols; j++) {
        r->data[k][j] = m->data[i][j];
      }
      k++;
    }
  }
  return r;
}

Observations:

  • The resulting matrix r has the same number of columns as m (m->num_cols), but a smaller number of rows (r->num_rows);
  • We keep a separate row index k for the resulting matrix ‘r’;
  • Initially k is in sync with i, as long as (row!=i);
  • When row==i, k is no longer incremented, so the sync is lost and i - k == 1. This is how the row gets skipped.

Swapping Rows

This functionality will prove useful later when we are going to implement the Row Echelon Form and LU Decomposition algorithms.

In this case we can define two methods:

// Returns a new matrix with row1 and row2 swapped
nml_mat *nml_mat_row_swap(nml_mat *m, unsigned int row1, unsigned int row2);

// Modifies the existing matrix m, by swapping the two rows row1 and row2
int nml_mat_row_swap_r(nml_mat *m, unsigned int row1, unsigned int row2);

Visually the method works like this:

$$\text{nml\_mat\_row\_swap\_r}\left(\begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 4.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}, 0, 1\right) = \begin{bmatrix} 0.0 & 2.0 & 4.0 \\ 1.0 & 2.0 & 3.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}$$

The C implementation makes use of the fact that rows are contiguous memory blocks, so they can be swapped (by swapping the row pointers) without having to access each element of the rows in particular:

int nml_mat_row_swap_r(nml_mat *m, unsigned int row1, unsigned int row2) {
  if (row1 >= m->num_rows || row2 >= m->num_rows) {
    NML_FERROR(CANNOT_SWAP_ROWS, row1, row2, m->num_rows);
    return 0;
  }
  double *tmp = m->data[row2];
  m->data[row2] = m->data[row1];
  m->data[row1] = tmp;
  return 1;
} 

As for the nml_mat_row_swap(..) this can be written by re-using nml_mat_row_swap_r(...):

nml_mat *nml_mat_row_swap(nml_mat *m, unsigned int row1, unsigned int row2) {
  nml_mat *r = nml_mat_cp(m);
  if (!nml_mat_row_swap_r(r, row1, row2)) {
    nml_mat_free(r);
    return NULL;
  }
  return r;
} 

Swapping columns

This functionality might not be as useful as the previous one (nml_mat_row_swap(...)), but for the sake of having a robust API for our ~potential~ users, we will implement it.

We define again two methods, one that returns a new nml_mat matrix, and one that operates on the given one:

nml_mat *nml_mat_col_swap(nml_mat *m, unsigned int col1, unsigned int col2);
int nml_mat_col_swap_r(nml_mat *m, unsigned int col1, unsigned int col2); 

Visually the two methods are working like this:

$$\text{nml\_mat\_col\_swap\_r}\left(\begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 4.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}, 0, 1\right) = \begin{bmatrix} 2.0 & 1.0 & 3.0 \\ 2.0 & 0.0 & 4.0 \\ 1.0 & 2.0 & 9.0 \end{bmatrix}$$

Compared to the previous two functions (for swapping rows) the code is slightly different. Columns are not contiguous blocks of memory, so we will need to swap each element one by one:

int nml_mat_col_swap_r(nml_mat *m, unsigned int col1, unsigned int col2) {
  if (col1 >= m->num_cols || col2 >= m->num_cols) {
    NML_FERROR(CANNOT_SWAP_ROWS, col1, col2, m->num_cols);
    return 0;
  }
  double tmp;
  int j;
  for(j = 0; j < m->num_rows; j++) {
    tmp = m->data[j][col1];
    m->data[j][col1] = m->data[j][col2];
    m->data[j][col2] = tmp;
  }
  return 1;
} 

Writing the nml_mat_col_swap(...) version of the method will simply re-use the previous “_r” one:

nml_mat *nml_mat_col_swap(nml_mat *m, unsigned int col1, unsigned int col2) {
  nml_mat *r = nml_mat_cp(m);
  if (!nml_mat_col_swap_r(r, col1, col2)) {
    nml_mat_free(r);
    return NULL;
  }
  return r;
} 

Horizontal Concatenation of two matrices

This functionality is probably not very useful from a “scientific” point of view, but it’s a nice exercise we can solve, and a neat “utility” we can add to the library.

We would like to write a function, that takes a variable number of matrices (nml_mat**) and returns a new matrix that represents the horizontal concatenation of those matrices.

It’s important that all the input matrices have the same number of rows, otherwise the horizontal concatenation won’t work.

Our C function will have the following signature:

nml_mat *nml_mat_cath(unsigned int mnum, nml_mat **marr); 

Where:

  • unsigned int mnum - represents the total number of matrices we want to (horizontally) concatenate;
  • nml_mat **marr - are the matrices we want to (horizontally) concatenate;

Visually, the function works like this:

$$A = \begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 4.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix},\quad B = \begin{bmatrix} 4.0 \\ 0.0 \\ 9.0 \end{bmatrix},\quad C = \begin{bmatrix} 3.0 & -1.0 \\ 1.0 & 2.0 \\ 0.0 & -5.0 \end{bmatrix}$$

Calling the method nml_mat_cath(3, **[A, B, C]) will yield the following result:

$$\begin{bmatrix} 1.0 & 2.0 & 3.0 & 4.0 & 3.0 & -1.0 \\ 0.0 & 2.0 & 4.0 & 0.0 & 1.0 & 2.0 \\ 2.0 & 1.0 & 9.0 & 9.0 & 0.0 & -5.0 \end{bmatrix}$$

The corresponding C code for this implementation looks like this:

nml_mat *nml_mat_cath(unsigned int mnum, nml_mat **marr) {
  if (0==mnum) {
    // No matrices, nothing to return
    return NULL;
  }
  if (1==mnum) {
    // We just return the one matrix supplied as the first param
    // no need for additional logic
    return nml_mat_cp(marr[0]);
  }
  // We calculate the total number of columns to know how to allocate memory
  // for the resulting matrix
  int i,j,k,offset;
  unsigned int lrow, ncols;
  lrow = marr[0]->num_rows;
  ncols = marr[0]->num_cols;
  for(k = 1; k < mnum; k++) {
    if (NULL == marr[k]) {
      NML_FERROR(INCONSITENT_ARRAY, k, mnum);
      return NULL;
    }
    if (lrow != marr[k]->num_rows) {
      NML_FERROR(CANNOT_CONCATENATE_H, lrow, marr[k]->num_rows);
      return NULL;
    }
    ncols+=marr[k]->num_cols;
  }
  // At this point we know how the resulting matrix looks like,
  // we allocate memory for it accordingly
  nml_mat *r = nml_mat_new(lrow, ncols);
  for(i = 0; i < r->num_rows; i++) {
    k = 0;
    offset = 0;
    for(j = 0; j < r->num_cols; j++) {
      // If the column index of marr[k] overflows
      if (j-offset == marr[k]->num_cols) {
        offset += marr[k]->num_cols;
        // We jump to the next matrix in the array
        k++;
      }
      r->data[i][j] = marr[k]->data[i][j - offset];
    }
  }
  return r;
}

Observations:

  • i, j are used to iterate over the resulting matrix (r);
  • k is the index of the current matrix we are concatenating;
  • offset is used to determine when we need to jump to the next matrix that needs concatenation.

Vertical concatenation

Just like the horizontal concatenation, this functionality is not very useful for the more complex algorithms we are going to implement later in this tutorial. But, for the sake of our ~potential~ library users, and because it’s a nice exercise, we will implement it.

The main idea is to write a function, that takes a variable number of matrices (nml_mat**) and returns a new matrix that represents the vertical concatenation of those matrices.

It’s important that all the input matrices have the same number of columns, otherwise the vertical concatenation won’t work.

The method signature looks like:

nml_mat *nml_mat_catv(unsigned int mnum, nml_mat **marr);

Where:

  • unsigned int mnum - represents the total number of matrices we want to (vertically) concatenate;
  • nml_mat **marr - are the matrices we want to (vertically) concatenate;

Visually the method works like this:

$$A = \begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 4.0 \end{bmatrix},\quad B = \begin{bmatrix} 4.0 & 0.0 & 9.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}$$

Calling nml_mat_catv(2, **[A, B]) will return the following result:

$$\begin{bmatrix} 1.0 & 2.0 & 3.0 \\ 0.0 & 2.0 & 4.0 \\ 4.0 & 0.0 & 9.0 \\ 2.0 & 1.0 & 9.0 \end{bmatrix}$$

The code implementation looks like this:

// Concatenates a variable number of matrices into one.
// The concatenation is done vertically, this means the matrices need to have
// the same number of columns, while the number of rows is allowed to
// be variable
nml_mat *nml_mat_catv(unsigned int mnum, nml_mat **marr) {
  if (0 == mnum) {
    return NULL;
  }
  if (1 == mnum) {
    return nml_mat_cp(marr[0]);
  }
  // We check to see if the matrices have the same number of columns
  int lcol, i, j, k, offset;
  unsigned int numrows;
  nml_mat *r;
  lcol = marr[0]->num_cols;
  numrows = 0;
  for(i = 0; i < mnum; i++) {
    if (NULL==marr[i]) {
      NML_FERROR(INCONSITENT_ARRAY, i, mnum);
      return NULL;
    }
    if (lcol != marr[i]->num_cols) {
      NML_FERROR(CANNOT_CONCATENATE_V,lcol,marr[i]->num_cols);
      return NULL;
    }
    // In the same time we calculate the resulting matrix number of rows
    numrows+=marr[i]->num_rows;
  }
  // At this point we know the dimensions of the resulting Matrix
  r = nml_mat_new(numrows, lcol);
  // We start copying the values one by one
  for(j = 0; j < r->num_cols; j++) {
    offset = 0;
    k = 0;
    for(i = 0; i < r->num_rows; i++) {
      if (i - offset == marr[k]->num_rows) {
        offset += marr[k]->num_rows;
        k++;
      }
      r->data[i][j] = marr[k]->data[i-offset][j];
    }
  }
  return r;
}

Observations:

  • i, j are used to iterate over the resulting matrix (r);
  • Compared to our previous method (nml_mat_cath(..)) this time we start by iterating through the columns;
  • k is the index of the current matrix we are concatenating;
  • offset is used to determine when we need to jump to the next matrix that needs concatenation.

Basic Matrix Operations

Add two matrices

From a mathematical perspective the formula for adding two matrices A and B is quite simple:

$$\begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \dots & \dots & \dots & \dots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} & \dots & b_{1n} \\ b_{21} & b_{22} & \dots & b_{2n} \\ \dots & \dots & \dots & \dots \\ b_{m1} & b_{m2} & \dots & b_{mn} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \dots & a_{1n}+b_{1n} \\ a_{21}+b_{21} & a_{22}+b_{22} & \dots & a_{2n}+b_{2n} \\ \dots & \dots & \dots & \dots \\ a_{m1}+b_{m1} & a_{m2}+b_{m2} & \dots & a_{mn}+b_{mn} \end{bmatrix}$$

Basically each element from the first matrix gets added to the corresponding element from the second matrix.

The corresponding C code is straightforward:

nml_mat *nml_mat_add(nml_mat *m1, nml_mat *m2) {
  nml_mat *r = nml_mat_cp(m1);
  if (!nml_mat_add_r(r, m2)) {
    nml_mat_free(r);
    return NULL;
  }
  return r;
}

int nml_mat_add_r(nml_mat *m1, nml_mat *m2) {
  if (!nml_mat_eqdim(m1, m2)) {
    NML_ERROR(CANNOT_ADD);
    return 0;
  }
  int i, j;
  for(i = 0; i < m1->num_rows; i++) {
    for(j = 0; j < m2->num_cols; j++) {
      m1->data[i][j] += m2->data[i][j];
    }
  }
  return 1;
}

Subtracting two matrices

This is very similar to the addition, except this time each element from m2 is subtracted from the corresponding element from m1:

nml_mat *nml_mat_sub(nml_mat *m1, nml_mat *m2) {
  nml_mat *r = nml_mat_cp(m1);
  if (!nml_mat_sub_r(r, m2)) {
    nml_mat_free(r);
    return NULL;
  }
  return r;
}

int nml_mat_sub_r(nml_mat *m1, nml_mat *m2) {
  if (!nml_mat_eqdim(m1, m2)) {
    NML_ERROR(CANNOT_SUBTRACT);
    return 0;
  }
  int i, j;
  for(i = 0; i < m1->num_rows; i++) {
    for(j = 0; j < m2->num_cols; j++) {
      m1->data[i][j] -= m2->data[i][j];
    }
  }
  return 1;
} 

Multiplying two matrices

Having a matrix A[m x n] and a matrix B[n x p]:

$$A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \dots & \dots & \dots & \dots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix},\quad B = \begin{bmatrix} b_{11} & b_{12} & \dots & b_{1p} \\ b_{21} & b_{22} & \dots & b_{2p} \\ \dots & \dots & \dots & \dots \\ b_{n1} & b_{n2} & \dots & b_{np} \end{bmatrix}$$

We define the product A∗B=C, where C is an [m x p] matrix:

$$C = \begin{bmatrix} c_{11} & c_{12} & \dots & c_{1p} \\ c_{21} & c_{22} & \dots & c_{2p} \\ \dots & \dots & \dots & \dots \\ c_{m1} & c_{m2} & \dots & c_{mp} \end{bmatrix}$$

Where:

$$c_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + \dots + a_{in} b_{nj} = \sum_{k=1}^{n} a_{ik} b_{kj}, \text{ for } i = 1..m \text{ and } j = 1..p.$$

The product A∗B is defined if and only if the number of columns in A equals the number of rows in B, which is n.

The resulting product matrix will then “inherit” the number of rows from A and the number of columns from B.

The formula is easier to digest if we follow a simple example:

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 4 \end{bmatrix},\quad B = \begin{bmatrix} 2 & 3 \\ 2 & 1 \\ 1 & 5 \end{bmatrix}$$

A∗B exists because A[2x3] and B[3x2]. The resulting matrix C will be 2x2.

$$C = \begin{bmatrix} 1 \cdot 2 + 2 \cdot 2 + 3 \cdot 1 & 1 \cdot 3 + 2 \cdot 1 + 3 \cdot 5 \\ 0 \cdot 2 + 0 \cdot 2 + 4 \cdot 1 & 0 \cdot 3 + 0 \cdot 1 + 4 \cdot 5 \end{bmatrix} = \begin{bmatrix} 9 & 20 \\ 4 & 20 \end{bmatrix}$$

The naive C implementation for this algorithm looks like:

nml_mat *nml_mat_dot(nml_mat *m1, nml_mat *m2) {
  if (!(m1->num_cols == m2->num_rows)) {
    NML_ERROR(CANNOT_MULITPLY);
    return NULL;
  }
  int i, j, k;
  nml_mat *r = nml_mat_new(m1->num_rows, m2->num_cols);
  for(i = 0; i < r->num_rows; i++) {
    for(j = 0; j < r->num_cols; j++) {
      for(k = 0; k < m1->num_cols; k++) {
        r->data[i][j] += m1->data[i][k] * m2->data[k][j];
      }
    }
  }
  return r;
}

Better algorithms exist for matrix multiplication; if you want to find out more, please check the Wikipedia article on the topic.

Row Echelon Form

A matrix A is in Row Echelon Form if it has the shape resulting from a Gaussian Elimination.

Additionally, the matrix A is in Row Echelon Form if:

  • The first non-zero element of each row is exactly 1.0;
  • Rows with all 0.0 elements are below rows that have at least one non-zero element;
  • Each leading entry (pivot) is in a column to the right of the leading entry in the previous row.

All the matrices below are examples of matrices that have been “morphed” into Row Echelon Form:

$$A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{bmatrix},\quad B = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix},\quad C = \begin{bmatrix} 1 & 2 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}$$

Every matrix can be transformed into Row Echelon Form by using elementary row operations:

  • Interchanging (swapping) two rows. See nml_mat_row_swap_r(...) implemented before;
  • Multiply each element in a row by a non-zero number (scalar multiplication of rows). See nml_mat_row_mult_r(...) implemented before;
  • Multiply a row by a non-zero number and add the result to another row (row addition). See nml_mat_row_addrow_r(...) implemented before;

The algorithm to transform the matrix into Row Echelon Form is as follows:

  1. Find the “pivot”, the first non-zero entry in the first column of the matrix;
    • If the column has only zero elements, jump to the next column;
  2. Interchange rows, moving the pivot row so that it becomes the first row;
  3. Multiply each element in the pivot row by the inverse of the pivot (1/pivot), so that the pivot equals 1.0;
  4. Add multiples of the pivot row to each of the rows below, so that every element under the pivot (in the pivot column) equals 0.0;
  5. Continue the process until there are no more pivots to process.

Note: A matrix can have multiple Row Echelon Forms but, as you will see in the next chapter, there’s only one Reduced Row Echelon Form.

Example

Let’s take for example the following matrix, A[3x3]. REF(A) is also a 3x3 matrix. The transitions are:

$$A = \begin{bmatrix} 0 & 1 & 2 \\ 1 & 2 & 1 \\ 2 & 7 & 8 \end{bmatrix} \rightarrow A_1 = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 2 & 7 & 8 \end{bmatrix} \rightarrow A_2 = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 0 & 3 & 6 \end{bmatrix} \rightarrow A_{ref} = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}$$

A→A1: We found out that the first non-zero element of the first column[0] is 1 on row[1] so we’ve swapped row[0] with row[1]. Using our code this means:

$$\text{nml\_mat\_row\_swap\_r}\left(\begin{bmatrix} 0 & 1 & 2 \\ 1 & 2 & 1 \\ 2 & 7 & 8 \end{bmatrix}, 0, 1\right) = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 2 & 7 & 8 \end{bmatrix}$$

A1→A2: For A1 we’ve multiplied each element of row[0] with -2 and added the result to row[2]. Using our code this means:

$$\text{nml\_mat\_row\_addrow\_r}\left(\begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 2 & 7 & 8 \end{bmatrix}, 2, 0, -2.0\right) = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 0 & 3 & 6 \end{bmatrix}$$

A2→Aref: For A2 we’ve multiplied row[1] with -3 and added the result to row[2]. Using the code this means:

$$\text{nml\_mat\_row\_addrow\_r}\left(\begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 0 & 3 & 6 \end{bmatrix}, 2, 1, -3.0\right) = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}$$

Code implementation

Most of the bits and pieces for assembling the algorithm were already implemented before: nml_mat_row_swap_r(...), nml_mat_row_mult_r(...), nml_mat_row_addrow_r(...).

What we are missing at this moment is the “pivot function”. We are going to call it _nml_mat_pivotidx(...):

// Finds the first non-zero element on the col column, under the row row.
// This is used to determine the pivot in Gauss Elimination
// If no pivot is found, returns -1
int _nml_mat_pivotidx(nml_mat *m, unsigned int col, unsigned int row) {
  // No validations are made, this is an API Method
  int i;
  for(i = row; i < m->num_rows; i++) {
    if (fabs(m->data[i][col]) > NML_MIN_COEF) {
      return i;
    }
  }
  return -1;
}

At this point the algorithm is straightforward to implement:

// Retrieves the matrix in Row Echelon form using Gauss Elimination
nml_mat *nml_mat_ref(nml_mat *m) {
  nml_mat *r = nml_mat_cp(m);
  int i, j, k, pivot;
  j = 0, i = 0;
  // We iterate until we exhaust the columns and the rows
  while(j < r->num_cols && i < r->num_cols) {
    // Find the pivot - the first non-zero entry in the first column of the matrix
    pivot = _nml_mat_pivotidx(r, j, i);
    if (pivot<0) {
      // All elements on the column are zeros
      // We move to the next column without doing anything
      j++;
      continue;
    }
    // We interchange rows moving the pivot to the first row that doesn't have
    // already a pivot in place
    if (pivot!=i) {
      nml_mat_row_swap_r(r, i, pivot);
    }
    // Multiply each element in the pivot row by the inverse of the pivot
    nml_mat_row_mult_r(r, i, 1/r->data[i][j]);
    // We add multiples of the pivot row so every element below it on the column equals 0
    for(k = i+1; k < r->num_rows; k++) {
      if (fabs(r->data[k][j]) > NML_MIN_COEF) {
        nml_mat_row_addrow_r(r, k, i, -(r->data[k][j]));
      } 
    }
    i++;
    j++;
  }
  return r;
} 

1/r->data[i][j] might pose a risk. If r->data[i][j] becomes very small (like 0.0000…01), we might overflow when multiplying with 1/r->data[i][j]. In this regard I’ve introduced a “guard” value called NML_MIN_COEF.

We consider every number smaller than NML_MIN_COEF to be 0.0. That’s why we perform this additional check: if (fabs(r->data[k][j]) > NML_MIN_COEF) in our algorithm.
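
NML_MIN_COEF is defined in the library headers; a plausible definition (the exact value may differ in the actual code) is simply a very small constant:

#define NML_MIN_COEF 0.000000000000001 // 1e-15; every |x| smaller than this is treated as 0.0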

Reduced Row Echelon Form

A matrix A is in Reduced Row Echelon Form (Arref) if all the conditions for Row Echelon Form are satisfied, and the leading entry in each row is the only non-zero entry in its column.

For example the following matrices are in Reduced Row Echelon Form (Rref):

$$A = \begin{bmatrix} 1 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\quad B = \begin{bmatrix} 1 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix},\quad C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}$$

Compared to the previous algorithm, an additional step is performed:

  1. We identify the last row having a pivot equal to 1;
  2. We mark this as the pivot row;
  3. We add multiples of the pivot row to each of the rows above it, until every element above the pivot becomes 0.0;
  4. We repeat the process from the bottom up.

Note: A matrix has only one RREF form, but can have many REF forms.

Code Implementation

To make the algorithm more stable from a “computational” perspective we will change the “pivoting method” used above. We introduce a new one:

// Find the max element from the column "col" under the row "row"
// This is needed to pivot in Gauss-Jordan elimination
// If pivot is not found, return -1
int _nml_mat_pivotmaxidx(nml_mat *m, unsigned int col, unsigned int row) {
  int i, maxi;
  double micol;
  double max = fabs(m->data[row][col]);
  maxi = row;
  for(i = row; i < m->num_rows; i++) {
    micol = fabs(m->data[i][col]);
    if (micol>max) {
      max = micol;
      maxi = i;
    }
  }
  return (max < NML_MIN_COEF) ? -1 : maxi;
} 

Compared to the previous one, this one returns the index of the row containing the biggest element (in absolute value) on the column, at or below row row. This element will be picked as the pivot.

The C code:

// Retrieves the matrix in Reduced Row Echelon form using Gauss-Jordan Elimination
nml_mat *nml_mat_rref(nml_mat *m) {
  nml_mat *r = nml_mat_cp(m);
  int i, j, k, pivot;
  j = 0, i = 0;
  while(j < r->num_cols && i < r->num_rows) {
    // Find the pivot - the element with the biggest absolute value on the
    // column, at or below the current row
    pivot = _nml_mat_pivotmaxidx(r, j, i);
    if (pivot<0) {
      // All elements on the column are zeros
      // We move to the next column without doing anything
      j++;
      continue;
    }
    // We interchange rows, moving the pivot row into place
    if (pivot!=i) {
      nml_mat_row_swap_r(r, i, pivot);
    }
    // Multiply each element in the pivot row by the inverse of the pivot
    nml_mat_row_mult_r(r, i, 1/r->data[i][j]);
    // We add multiples of the pivot row to every other row (above and below),
    // so the pivot remains the only non-zero element on its column
    for(k = 0; k < r->num_rows; k++) {
      if (k!=i && fabs(r->data[k][j]) > NML_MIN_COEF) {
        nml_mat_row_addrow_r(r, k, i, -(r->data[k][j]));
      }
    }
    i++;
    j++;
  }
  return r;
} 

LU(P) Decomposition

LU Decomposition, also named LU Factorisation, refers to the factorisation of a matrix A into two factors, L and U.

Normally the factorisation looks like this:

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix} \cdot \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}$$

In practice, however, this type of factorisation might fail to materialise without swapping various rows of A during the computation. In this case, we need to introduce into the equation a new matrix P where we keep track of all the row changes that are happening during the LU process.

Thus, the decomposition is called LU Factorisation with partial pivoting, and the new equation becomes:

P∗A=L∗U

Where:

  • P represents any valid (row) permutation of the Identity I matrix, and it’s computed during the process;
  • L is a lower triangular matrix, with all the elements of the first diagonal equal to 1;
  • U is an upper triangular matrix.

There’s another factorisation where not only the rows are pivoted, but also columns, this is called LU Factorisation with full pivoting but we are not going to implement this.

If the A matrix is square (nxn), it can always be decomposed as P∗A=L∗U.

To compute the LU(P) decomposition we will need to basically implement a modified version of the Gauss Elimination algorithm (see Row Echelon Form). This is probably the most popular implementation, and it requires around $\frac{2}{3} \cdot n^3$ floating point operations.

Other algorithms involve direct recursion or randomization. We are not going to implement those versions.

Computing the P∗A=L∗U decomposition of matrix A is instrumental for computing the determinant of matrix A, the inverse of matrix A and solving linear systems of equations.

The LU(P) algorithm as an examplePermalink

LU(P) factorisation (or decomposition) can be obtained by adjusting the idea of Gaussian Elimination (see Row Echelon Form and Reduced Row Echelon Form).

The algorithm starts like this:

  • We allocate memory for the L, U, P matrices
    • L starts as zero matrix;
    • P is the identity matrix;
    • U is an exact copy of A;
  • We start iterating the matrix U by columns
    • For each column we look for the pivot value (the biggest value on the column, in absolute value)
      • If needed we swap the corresponding rows in U, L and P, so that the pivot is positioned on the first diagonal;
      • Then we start creating zeroes on the column by means of row addition: $Row_x + multiplier \cdot Row_y$.
        • We record the multiplier in matrix L
    • We repeat for every column until U has only zero elements under the first diagonal.

Let’s look at the decomposition for a matrix A[3x3]:

$$P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},\quad A = \begin{bmatrix} 2 & 1 & 5 \\ 4 & 4 & -4 \\ 1 & 3 & 1 \end{bmatrix},\quad L = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},\quad U = \begin{bmatrix} 2 & 1 & 5 \\ 4 & 4 & -4 \\ 1 & 3 & 1 \end{bmatrix}$$

  • Step1: Because 4>2, we swap Row0 with Row1. After this row operation we have:

$$P = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix},\quad L = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},\quad U = \begin{bmatrix} 4 & 4 & -4 \\ 2 & 1 & 5 \\ 1 & 3 & 1 \end{bmatrix}$$

  • Step 2: We want to start creating zeroes on the first column. So we apply the following operation, $Row_1 - \frac{1}{2} \cdot Row_0$. We record the multiplier $\frac{1}{2}$ in L[1][0], and we compute the basic row operation on U:

$$P = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix},\quad L = \begin{bmatrix} 0 & 0 & 0 \\ \frac{1}{2} & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},\quad U = \begin{bmatrix} 4 & 4 & -4 \\ 0 & -1 & 7 \\ 1 & 3 & 1 \end{bmatrix}$$

  • Step 3: We continue to create zeroes on the first column, by applying $Row_2 - \frac{1}{4} \cdot Row_0$. We record the multiplier $\frac{1}{4}$ in L[2][0], and we compute the row operation on U:

$$P = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix},\quad L = \begin{bmatrix} 0 & 0 & 0 \\ \frac{1}{2} & 0 & 0 \\ \frac{1}{4} & 0 & 0 \end{bmatrix},\quad U = \begin{bmatrix} 4 & 4 & -4 \\ 0 & -1 & 7 \\ 0 & 2 & 2 \end{bmatrix}$$

  • Step 4: We’ve finished with the first column, so we skip to the next one. Because $|-1| < 2$, we swap Row1 with Row2. The idea is to always have the biggest pivot (in absolute value). P, L and U are affected by this swap:

$$P = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix},\quad L = \begin{bmatrix} 0 & 0 & 0 \\ \frac{1}{4} & 0 & 0 \\ \frac{1}{2} & 0 & 0 \end{bmatrix},\quad U = \begin{bmatrix} 4 & 4 & -4 \\ 0 & 2 & 2 \\ 0 & -1 & 7 \end{bmatrix}$$

  • Step 5: We want to create the last 0.0 on the second column. In this regard we apply $Row_2 - (-\frac{1}{2}) \cdot Row_1$:

$$P = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix},\quad L = \begin{bmatrix} 0 & 0 & 0 \\ \frac{1}{4} & 0 & 0 \\ \frac{1}{2} & -\frac{1}{2} & 0 \end{bmatrix},\quad U = \begin{bmatrix} 4 & 4 & -4 \\ 0 & 2 & 2 \\ 0 & 0 & 8 \end{bmatrix}$$

  • Step 6: We modify L by adding 1s on the first diagonal.

In conclusion, the P∗A=L∗U factorization of A looks like:

$$\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \cdot \begin{bmatrix} 2 & 1 & 5 \\ 4 & 4 & -4 \\ 1 & 3 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{4} & 1 & 0 \\ \frac{1}{2} & -\frac{1}{2} & 1 \end{bmatrix} \cdot \begin{bmatrix} 4 & 4 & -4 \\ 0 & 2 & 2 \\ 0 & 0 & 8 \end{bmatrix}$$

There’s also a video with this example, link here.

Code implementationPermalink

The best way to model the results of the LU(P) computation is to create a struct called nml_mat_lup containing references to all three resulting matrices: L, U, P.

typedef struct nml_mat_lup_s {
  nml_mat *L;
  nml_mat *U;
  nml_mat *P;
  unsigned int num_permutations;
} nml_mat_lup;

The property num_permutations records the number of row permutations we’ve done during the factorization process. This value is useful when computing the determinant of the matrix, so it’s better to track it now.

To reduce memory consumption, the two matrices L and U can be kept in a single matrix LU that looks like this:

$$\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ l_{21} & u_{22} & u_{23} \\ l_{31} & l_{32} & u_{33} \end{bmatrix}$$

In our implementation, for simplicity and readability, we will keep them separated.
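
If you did choose the packed representation, element access could look like the hypothetical helpers below (they are not part of nml, just an illustration of the layout; L’s unit diagonal is implicit):

// Hypothetical accessors for a packed LU matrix (not part of the library)
double lu_get_l(nml_mat *LU, unsigned int i, unsigned int j) {
  if (i == j) return 1.0;             // L has an implicit unit diagonal
  if (i > j) return LU->data[i][j];   // strictly below the diagonal -> L
  return 0.0;
}
double lu_get_u(nml_mat *LU, unsigned int i, unsigned int j) {
  return (i <= j) ? LU->data[i][j] : 0.0; // on or above the diagonal -> U
}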

Following the same recipe as for nml_mat we are going to write “constructor-like”/”destructor-like” methods for managing the memory allocation for a nml_mat_lup structure.

nml_mat_lup *nml_mat_lup_new(nml_mat *L, nml_mat *U, nml_mat *P, unsigned int num_permutations) {
  nml_mat_lup *r = malloc(sizeof(*r));
  NP_CHECK(r);
  r->L = L;
  r->U = U;
  r->P = P;
  r->num_permutations = num_permutations;
  return r;
}

void nml_mat_lup_free(nml_mat_lup* lu) {
  nml_mat_free(lu->P);
  nml_mat_free(lu->L);
  nml_mat_free(lu->U);
  free(lu);
} 

The code that is performing the factorization:

nml_mat_lup *nml_mat_lup_solve(nml_mat *m) {
  if (!m->is_square) {
    NML_FERROR(CANNOT_LU_MATRIX_SQUARE, m->num_rows, m-> num_cols);
    return NULL;
  }
  nml_mat *L = nml_mat_new(m->num_rows, m->num_rows);
  nml_mat *U = nml_mat_cp(m);
  nml_mat *P = nml_mat_eye(m->num_rows);

  int j,i, pivot;
  unsigned int num_permutations = 0;
  double mult;

  for(j = 0; j < U->num_cols; j++) {
    // Retrieves the row with the biggest element for column (j)
    pivot = _nml_mat_absmaxr(U, j);
    if (fabs(U->data[pivot][j]) < NML_MIN_COEF) {
      NML_ERROR(CANNOT_LU_MATRIX_DEGENERATE);
      return NULL;
    }
    if (pivot!=j) {
      // Pivots LU and P accordingly to the rule
      nml_mat_row_swap_r(U, j, pivot);
      nml_mat_row_swap_r(L, j, pivot);
      nml_mat_row_swap_r(P, j, pivot);
      // Keep the number of permutations to easily calculate the
      // determinant sign afterwards
      num_permutations++;
    }
    for(i = j+1; i < U->num_rows; i++) {
      mult = U->data[i][j] / U->data[j][j];
      // Building the U upper rows
      nml_mat_row_addrow_r(U, i, j, -mult);
      // Store the multiplier in L
      L->data[i][j] = mult;
    }
  }
  nml_mat_diag_set(L, 1.0);

  return nml_mat_lup_new(L, U, P, num_permutations);
} 
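
As a quick usage sketch, we can factorise the A matrix from the worked example above and check that P∗A=L∗U holds (the tolerance value and the initialisation loop are just for illustration):

double a_vals[3][3] = { {2, 1, 5}, {4, 4, -4}, {1, 3, 1} };
nml_mat *A = nml_mat_sqr(3);
int i, j;
for(i = 0; i < 3; i++)
  for(j = 0; j < 3; j++)
    A->data[i][j] = a_vals[i][j];

nml_mat_lup *lup = nml_mat_lup_solve(A);
nml_mat *PA = nml_mat_dot(lup->P, A);      // nml_mat_dot() was implemented in a previous section
nml_mat *LU = nml_mat_dot(lup->L, lup->U);
printf("P*A == L*U ? %d\n", nml_mat_eq(PA, LU, 1e-9)); // should print 1

nml_mat_free(PA); nml_mat_free(LU);
nml_mat_lup_free(lup); nml_mat_free(A);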

Solving linear systems of equations

Forward substitution

Forward substitution is the process of solving a linear system of equations L∗x=b when L is a lower triangular coefficient matrix.

$$\begin{bmatrix} l_{11} & 0 & \dots & 0 \\ l_{21} & l_{22} & \dots & 0 \\ \dots & \dots & \dots & \dots \\ l_{m1} & l_{m2} & \dots & l_{mm} \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \\ \dots \\ x_m \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \dots \\ b_m \end{bmatrix}$$

In this case the resulting formulas for x1,x2,...,xm are:

$$x_1 = \frac{b_1}{l_{11}},\quad x_2 = \frac{b_2 - l_{21} x_1}{l_{22}},\quad \dots,\quad x_m = \frac{b_m - \sum_{i=1}^{m-1} l_{mi} x_i}{l_{mm}}$$

Using the mathematical formulas, the code is easy to write:

// Forward substitution algorithm
// Solves the linear system L * x = b
//
// L is lower triangular matrix of size NxN
// B is column matrix of size Nx1
// x is the solution column matrix of size Nx1
//
// Note: In case L is not a lower triangular matrix, the algorithm will try to
// select only the lower triangular part of the matrix L and solve the system
// with it.
//
// Note: In case any of the diagonal elements (L[i][i]) are 0 the system cannot
// be solved
//
// Note: This function is usually used with an L matrix from a LU decomposition
nml_mat *nml_ls_solvefwd(nml_mat *L, nml_mat *b) {
  nml_mat* x = nml_mat_new(L->num_cols, 1);
  int i,j;
  double tmp;
  for(i = 0; i < L->num_cols; i++) {
    tmp = b->data[i][0];
    for(j = 0; j < i ; j++) {
      tmp -= L->data[i][j] * x->data[j][0];
    }
    x->data[i][0] = tmp / L->data[i][i];
  }
  return x;
}

Backward substitution

Backward substitution is the process of solving a linear system of equations U∗x=y when U is an upper triangular coefficient matrix.

$$\begin{bmatrix} u_{11} & u_{12} & \dots & u_{1m} \\ 0 & u_{22} & \dots & u_{2m} \\ \dots & \dots & \dots & \dots \\ 0 & 0 & \dots & u_{mm} \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \\ \dots \\ x_m \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \dots \\ y_m \end{bmatrix}$$

Similar to the example above, the code implementation is straightforward after we compute the mathematical formula:

// Back substitution algorithm
// Solves the linear system U * x = b
//
// U is an upper triangular matrix of size NxN
// B is a column matrix of size Nx1
// x is the solution column matrix of size Nx1
//
// Note in case U is not an upper triangular matrix, the algorithm will try to
// select only the upper triangular part of the matrix U and solve the system
// with it
//
// Note: In case any of the diagonal elements (U[i][i]) are 0 the system cannot
// be solved
nml_mat *nml_ls_solvebck(nml_mat *U, nml_mat *b) {
  nml_mat *x = nml_mat_new(U->num_cols, 1);
  int i = U->num_cols, j;
  double tmp;
  while(i-->0) {
    tmp = b->data[i][0];
    for(j = i; j < U->num_cols; j++) {
      tmp -= U->data[i][j] * x->data[j][0];
    }
    x->data[i][0] = tmp / U->data[i][i];
  }
  return x;
} 

Solving linear systems using LU(P) decomposition

Knowing the P∗A=L∗U factorisation of a matrix allows us to solve a linear system of equations of the form A∗x=b by using a combination of backward and forward substitution:

$$A \cdot x = b \iff P \cdot A \cdot x = P \cdot b \iff L \cdot U \cdot x = P \cdot b$$

To make use of the previous two algorithms (nml_ls_solvebck(...), nml_ls_solvefwd(...)) we can introduce an auxiliary system of equations: y=U∗x.

If y=U∗x, then L∗y=P∗b. This means that we need to solve two systems of equations:

  • L∗y=P∗b - forward substitution, because L is a lower triangular matrix;
  • U∗x=y - backward substitution, because U is an upper triangular matrix.

Translating this into code is simple:

nml_mat *nml_ls_solve(nml_mat_lup *lu, nml_mat* b) {
  if (lu->U->num_rows != b->num_rows || b->num_cols != 1) {
    NML_FERROR(CANNOT_SOLVE_LIN_SYS_INVALID_B,
      b->num_rows,
      b->num_cols,
      lu->U->num_rows,
      1);
      return NULL;
  }
  nml_mat *Pb = nml_mat_dot(lu->P, b);

  // We solve L*y = P*b using forward substitution
  nml_mat *y = nml_ls_solvefwd(lu->L, Pb);

  // We solve U*x=y
  nml_mat *x = nml_ls_solvebck(lu->U, y);

  nml_mat_free(y);
  nml_mat_free(Pb);
  return x;
} 
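
Putting everything together, here is a sketch that solves the linear system from the beginning of the article (A∗x=b, with the expected solution x = [0.3, 0.4, 0.0]):

double a_vals[3][3] = { {2, 1, 3}, {2, 6, 8}, {6, 8, 18} };
double b_vals[3] = { 1, 3, 5 };
nml_mat *A = nml_mat_sqr(3);
nml_mat *b = nml_mat_new(3, 1);
int i, j;
for(i = 0; i < 3; i++) {
  b->data[i][0] = b_vals[i];
  for(j = 0; j < 3; j++) {
    A->data[i][j] = a_vals[i][j];
  }
}

nml_mat_lup *lup = nml_mat_lup_solve(A);
nml_mat *x = nml_ls_solve(lup, b); // x should be (approximately) [0.3, 0.4, 0.0]
nml_mat_print(x);

nml_mat_free(x);
nml_mat_lup_free(lup);
nml_mat_free(b);
nml_mat_free(A);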

Calculating the inverse of the matrix using LU(P) decomposition

A matrix A is called invertible if there exists a matrix $A^{-1}$ so that $A \cdot A^{-1} = I$.

We call $A^{-1}$ the inverse of the matrix A. The equality $A \cdot A^{-1} = I$ in matrix form looks like:

$$\begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \cdot \begin{bmatrix} A^{-1}_{11} & A^{-1}_{12} & A^{-1}_{13} \\ A^{-1}_{21} & A^{-1}_{22} & A^{-1}_{23} \\ A^{-1}_{31} & A^{-1}_{32} & A^{-1}_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

To find $A^{-1}_{ij}$ we solve 3 systems of linear equations of the form:

$$\begin{cases} \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \cdot \begin{bmatrix} A^{-1}_{11} \\ A^{-1}_{21} \\ A^{-1}_{31} \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \\ \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \cdot \begin{bmatrix} A^{-1}_{12} \\ A^{-1}_{22} \\ A^{-1}_{32} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \\ \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \cdot \begin{bmatrix} A^{-1}_{13} \\ A^{-1}_{23} \\ A^{-1}_{33} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \end{cases} \rightarrow \begin{cases} A \cdot A^{-1}_{col1} = I_{col1} \\ A \cdot A^{-1}_{col2} = I_{col2} \\ A \cdot A^{-1}_{col3} = I_{col3} \end{cases}$$

This actually means we need to solve three systems of the type $A \cdot x_i = B_i$, where $x_i$ represents column i of $A^{-1}$, and $B_i$ represents column i of I.

Having said this, the C code implementing the algorithm is as follows:

// Calculates the inverse of a matrix
nml_mat *nml_mat_inv(nml_mat_lup *lup) {
  unsigned n = lup->L->num_cols;
  nml_mat *r = nml_mat_sqr(n);
  nml_mat *I = nml_mat_eye(lup->U->num_rows);
  nml_mat *invx;
  nml_mat *Ix;
  int i,j;
  for(j =0; j < n; j++) {
    Ix = nml_mat_col_get(I, j);
    invx = nml_ls_solve(lup, Ix);
    for(i = 0; i < invx->num_rows; i++) {
      r->data[i][j] = invx->data[i][0];
    }
    nml_mat_free(invx);
    nml_mat_free(Ix);
  }
  nml_mat_free(I);
  return r;
} 
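
A short usage sketch, checking that A∗A⁻¹ is (almost) the identity (the tolerance value is arbitrary):

nml_mat_lup *lup = nml_mat_lup_solve(A); // A is any non-degenerate square matrix
nml_mat *A_inv = nml_mat_inv(lup);
nml_mat *prod = nml_mat_dot(A, A_inv);
nml_mat *I = nml_mat_eye(A->num_rows);
printf("A * A^-1 == I ? %d\n", nml_mat_eq(prod, I, 1e-9)); // should print 1

nml_mat_free(I); nml_mat_free(prod);
nml_mat_free(A_inv); nml_mat_lup_free(lup);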

Calculating the determinant of the matrix using LU(P) decomposition

The determinant of the product of two matrices is the product of their determinants. After LU(P) decomposition we have P∗A=L∗U, so det(P)∗det(A)=det(L)∗det(U).

Because P is a permutation of I:

$$\det(P) = (-1)^n = f(n) = \begin{cases} 1, & n \text{ is even} \\ -1, & n \text{ is odd} \end{cases}$$

n represents the total number of row permutations performed on I during the decomposition. This value is already computed and stored in nml_mat_lup->num_permutations.

L is a lower triangular matrix with only 1s on its diagonal. The determinant of a lower triangular matrix is the product of its diagonal elements, so $\det(L) = 1$.

U is an upper triangular matrix. The determinant of an upper triangular matrix is the product of its diagonal elements, so $\det(U) = \prod_{i=1}^{N} U_{ii}$.

So, our initial relationship $\det(P) \cdot \det(A) = \det(L) \cdot \det(U)$ becomes $\det(A) = f(\text{num\_permutations}) \cdot \prod_{i=1}^{N} U_{ii}$.

This means that the determinant of matrix A is actually the product of the diagonal elements of matrix U multiplied with (-1) in case the number of row permutations is odd.

Putting this into code is as simple as:

// After the LU(P) factorisation the determinant can be easily calculated
// by multiplying the main diagonal of matrix U with the sign.
// the sign is -1 if the number of permutations is odd
// the sign is +1 if the number of permutations is even
double nml_mat_det(nml_mat_lup* lup) {
  int k;
  int sign = (lup->num_permutations%2==0) ? 1 : -1;
  nml_mat *U = lup->U;
  double product = 1.0;
  for(k = 0; k < U->num_rows; k++) {
    product *= U->data[k][k];
  }
  return product * sign;
}

QR Decomposition

Any square matrix A can be decomposed as A=Q∗R, where Q is an orthogonal matrix and R is an upper triangular matrix.

A matrix is orthogonal if its columns are orthogonal unit vectors, meaning $Q \cdot Q^T = Q^T \cdot Q = I$, where $Q^T$ is the transpose of Q. For an orthogonal matrix, $Q^T = Q^{-1}$.

One process for computing the A=Q∗R decomposition is the Gram–Schmidt algorithm.

If LU(P) factorization works mainly by applying basic matrix operations on rows, the QR decomposition is more focused on columns.

So we consider:

$$A = \begin{bmatrix} | & | & | \\ a_1 & a_2 & a_3 \\ | & | & | \end{bmatrix}, \text{ where } a_i \text{ are the column vectors of } A$$
$$Q = \begin{bmatrix} | & | & | \\ q_1 & q_2 & q_3 \\ | & | & | \end{bmatrix}, \text{ where } q_i \text{ are the column vectors of } Q$$

Because all the vectors in matrix Q should have length 1, we have to normalize the columns in A.

This gives the following formulas:

$$\begin{cases} q_1 = \frac{a_1}{\lVert a_1 \rVert} \\ q_2 = \frac{a_2^{\perp}}{\lVert a_2^{\perp} \rVert}, \text{ where } a_2^{\perp} = a_2 - \langle a_2, q_1 \rangle \cdot q_1 \\ q_3 = \frac{a_3^{\perp}}{\lVert a_3^{\perp} \rVert}, \text{ where } a_3^{\perp} = a_3 - \langle a_3, q_1 \rangle \cdot q_1 - \langle a_3, q_2 \rangle \cdot q_2 \end{cases}$$

Basically our QR decomposition looks like this:

$$\begin{bmatrix} | & | & | \\ a_1 & a_2 & a_3 \\ | & | & | \end{bmatrix} = \begin{bmatrix} | & | & | \\ \frac{a_1}{\lVert a_1 \rVert} & \frac{a_2^{\perp}}{\lVert a_2^{\perp} \rVert} & \frac{a_3^{\perp}}{\lVert a_3^{\perp} \rVert} \\ | & | & | \end{bmatrix} \cdot \begin{bmatrix} \lVert a_1 \rVert & \langle a_2, q_1 \rangle & \langle a_3, q_1 \rangle \\ 0 & \lVert a_2^{\perp} \rVert & \langle a_3, q_2 \rangle \\ 0 & 0 & \lVert a_3^{\perp} \rVert \end{bmatrix}$$

The formulas can be generalized for every nxn matrix.

In case you are wondering what the $\langle a_i, q_j \rangle$ notation represents: this is called the dot product of two vectors, and it’s calculated like this:

$$a = [a_1, a_2, a_3, \dots, a_n],\quad b = [b_1, b_2, b_3, \dots, b_n]$$
$$\langle a, b \rangle = a_1 b_1 + a_2 b_2 + \dots + a_n b_n = \sum_{i=1}^{n} a_i b_i$$

From a code perspective this can be implemented as:

// Useful for QR decomposition
// Represents the (dot) product of two vectors:
// vector1 = m1col column from m1
// vector2 = m2col column from m2
double nml_vect_dot(nml_mat *m1, unsigned int m1col, nml_mat *m2, unsigned m2col) {
  if (m1->num_rows!=m2->num_rows) {
    NML_FERROR(CANNOT_VECT_DOT_DIMENSIONS, m1->num_rows, m2->num_rows);
  }
  if (m1col >= m1->num_cols) {
    NML_FERROR(CANNOT_GET_COLUMN, m1col, m1->num_cols);
  }
  if (m2col >= m2->num_cols) {
    NML_FERROR(CANNOT_GET_COLUMN, m2col, m2->num_cols);
  }
  int i;
  double dot = 0.0;
  for(i = 0; i < m1->num_rows; i++) {
    dot += m1->data[i][m1col] * m2->data[i][m2col];
  }
  return dot;
} 

In case you are wondering what $\lVert a_i \rVert$ represents: this is called the L2 (Euclidean) norm, and it’s computed by the formula:

$$\lVert a \rVert = \sqrt{a_1^2 + a_2^2 + \dots + a_n^2}$$

From a code perspective this can be implemented as:

// Calculates the l2 norm for a column in the matrix
double nml_mat_col_l2norm(nml_mat *m, unsigned int col) {
  if (col >= m->num_cols) {
    NML_FERROR(CANNOT_COLUMN_L2NORM, col, m->num_cols);
  }
  double doublesum = 0.0;
  int i;
  for(i = 0; i < m->num_rows; i++) {
    doublesum += (m->data[i][col]*m->data[i][col]);
  }
  return sqrt(doublesum);
}

// Calculates the l2 norm for each column
// Keeps the results in a 1 row matrix
nml_mat *nml_mat_l2norm(nml_mat *m) {
  int i, j;
  nml_mat *r = nml_mat_new(1, m->num_cols);
  double square_sum;
  for(j = 0; j < m->num_cols; j++) {
    square_sum = 0.0;
    for(i = 0; i < m->num_rows; i++) {
      square_sum+=m->data[i][j]*m->data[i][j];
    }
    r->data[0][j] = sqrt(square_sum);
  }
  return r;
} 

The code for the process of normalization is:

int nml_mat_normalize_r(nml_mat *m) {
  nml_mat *l2norms = nml_mat_l2norm(m);
  int j;
  for(j = 0; j < m->num_cols; j++) {
    if (l2norms->data[0][j] < NML_MIN_COEF) {
      NML_FERROR(VECTOR_J_DEGENERATE, j);
      nml_mat_free(l2norms);
      return 0;
    }
    nml_mat_col_mult_r(m, j, 1/l2norms->data[0][j]);
  }
  nml_mat_free(l2norms);
  return 1;
}

nml_mat_qr *nml_mat_qr_new() {
  nml_mat_qr *qr = malloc(sizeof(*qr));
  NP_CHECK(qr);
  return qr;
}
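
The nml_mat_qr struct itself is not listed in this article; judging by how it is used below, a minimal definition would be:

typedef struct nml_mat_qr_s {
  nml_mat *Q; // orthogonal matrix
  nml_mat *R; // upper triangular matrix
} nml_mat_qr;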

And the code for the QR algorithm described by this relationship:

$$\begin{bmatrix} | & | & | \\ a_1 & a_2 & a_3 \\ | & | & | \end{bmatrix} = \begin{bmatrix} | & | & | \\ \frac{a_1}{\lVert a_1 \rVert} & \frac{a_2^{\perp}}{\lVert a_2^{\perp} \rVert} & \frac{a_3^{\perp}}{\lVert a_3^{\perp} \rVert} \\ | & | & | \end{bmatrix} \cdot \begin{bmatrix} \lVert a_1 \rVert & \langle a_2, q_1 \rangle & \langle a_3, q_1 \rangle \\ 0 & \lVert a_2^{\perp} \rVert & \langle a_3, q_2 \rangle \\ 0 & 0 & \lVert a_3^{\perp} \rVert \end{bmatrix}$$

nml_mat_qr *nml_mat_qr_solve(nml_mat *m) {

  nml_mat_qr *qr = nml_mat_qr_new();
  nml_mat *Q = nml_mat_cp(m);
  nml_mat *R = nml_mat_new(m->num_rows, m->num_cols);

  int j, k;
  double l2norm;
  double rkj;
  nml_mat *aj;
  nml_mat *qk;
  for(j=0; j < m->num_cols; j++) {    
    rkj = 0.0;
    aj = nml_mat_col_get(m, j);
    for(k = 0; k < j; k++) {
       rkj = nml_vect_dot(m, j, Q, k);
       R->data[k][j] = rkj;
       qk = nml_mat_col_get(Q, k);
       nml_mat_col_mult_r(qk, 0, rkj);
       nml_mat_sub_r(aj, qk);
       nml_mat_free(qk);
    }
    for(k = 0; k < Q->num_rows; k++) {
      Q->data[k][j] = aj->data[k][0];
    }
    l2norm = nml_mat_col_l2norm(Q, j);
    nml_mat_col_mult_r(Q, j, 1/l2norm);
    R->data[j][j] = l2norm;
    nml_mat_free(aj);
  }
  qr->Q = Q;
  qr->R = R;
  return qr;
} 
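
And a short usage sketch, checking that Q∗R reconstructs the original matrix (the tolerance value is arbitrary; since a dedicated "free" function for nml_mat_qr is not shown in this article, the members are freed manually):

nml_mat_qr *qr = nml_mat_qr_solve(A); // A is a square matrix with linearly independent columns
nml_mat *QR = nml_mat_dot(qr->Q, qr->R);
printf("Q*R == A ? %d\n", nml_mat_eq(QR, A, 1e-9)); // should print 1

nml_mat_free(QR);
nml_mat_free(qr->Q);
nml_mat_free(qr->R);
free(qr);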

References

  • https://stattrek.com/matrix-algebra/echelon-transform.aspx?tutorial=matrix
  • http://lampx.tugraz.at/~hadley/num/ch2/2.3a.php
  • https://www.youtube.com/watch?v=FAnNBw7d0vg
  • https://en.wikipedia.org/wiki/Norm_(mathematics)
  • https://en.wikipedia.org/wiki/Dot_product
