
Struct nalgebra::base::Matrix

source
#[repr(C)]
pub struct Matrix<T, R, C, S> { pub data: S, /* private fields */ }

The most generic column-major matrix (and vector) type.

§Methods summary

Because Matrix is the most generic type used as a common representation of all matrices and vectors of nalgebra, this documentation page contains every single matrix/vector-related method. In order to make browsing this page simpler, the next subsections contain direct links to groups of methods related to a specific topic.

§Vector and matrix construction
§Computer graphics utilities for transformations
§Common math operations
§Statistics
§Iteration, map, and fold
§Vector and matrix views
§In-place modification of a single matrix or vector
§Vector and matrix size modification
§Matrix decomposition
§Vector basis computation

§Type parameters

The generic Matrix type has four type parameters:

  • T: the scalar type of the matrix components.
  • R: the number of rows of the matrix.
  • C: the number of columns of the matrix.
  • S: the matrix data storage, i.e., the buffer that actually contains the matrix components.

The matrix dimension parameters R and C can either be: 

  • type-level unsigned integer constants (e.g. U1, U124) from the nalgebra:: root module. All numbers from 0 to 127 are defined that way. 
  • type-level unsigned integer constants (e.g. U1024, U10000) from the typenum:: crate. Using those, you will not get error messages as nice as for the numbers smaller than 128 defined in the nalgebra:: root module. 
  • the special value Dyn from the nalgebra:: root module. This indicates that the specified dimension is not known at compile-time. Note that this will generally imply that the matrix data storage S performs a dynamic allocation and contains extra metadata for the matrix shape. 

Note that mixing Dyn with type-level unsigned integers is allowed. Actually, a dynamically-sized column vector should be represented as a Matrix<T, Dyn, U1, S> (given some concrete types for T and a compatible data storage type S). 
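
For instance, a minimal sketch using the usual aliases Matrix3, DMatrix and DVector, which pick these type parameters for you:

use nalgebra::{DMatrix, DVector, Matrix3};

// Statically-sized: R = Const<3>, C = Const<3>; the data lives on the stack.
let fixed: Matrix3<f64> = Matrix3::identity();

// Dynamically-sized: R = Dyn, C = Dyn; the data is heap-allocated and the
// shape is stored alongside it.
let dynamic: DMatrix<f64> = DMatrix::identity(3, 3);

// A dynamically-sized column vector is a Matrix<T, Dyn, U1, _> under the hood.
let v: DVector<f64> = DVector::from_element(3, 1.0);

assert_eq!(fixed.nrows(), dynamic.nrows());
assert_eq!(v.nrows(), 3);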

Fields§

§data: S

The data storage that contains all the matrix components. Disappointed?

Well, if you came here to see how you can access the matrix components, you may be in luck: you can access the individual components of all vectors with compile-time dimensions <= 6 using field notation like this: vec.x, vec.y, vec.z, vec.w, vec.a, vec.b. Referencing and assignment work too:

let mut vec = Vector3::new(1.0, 2.0, 3.0);
vec.x = 10.0;
vec.y += 30.0;
assert_eq!(vec.x, 10.0);
assert_eq!(vec.y + 100.0, 132.0);

Similarly, for matrices with compile-time dimensions <= 6, you can use field notation like this: mat.m11, mat.m42, etc. The first digit identifies the row to address and the second digit identifies the column to address. So mat.m13 identifies the component at the first row and third column (note that the row and column count starts at 1 instead of 0 here, to match the mathematical notation).

For all matrices and vectors, independently from their size, individual components can be accessed and modified using indexing: vec[20], mat[(20, 19)]. Here the indexing starts at 0 as you would expect.
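
A short illustrative sketch of index-based access, mirroring the field-notation example above:

use nalgebra::{Matrix2x3, Vector3};

let mut vec = Vector3::new(1.0, 2.0, 3.0);
vec[0] = 10.0;                  // same component as vec.x
assert_eq!(vec[0], 10.0);

let mut mat = Matrix2x3::new(1.0, 2.0, 3.0,
                             4.0, 5.0, 6.0);
mat[(1, 2)] = 60.0;             // row 1, column 2 (0-based)
assert_eq!(mat[(1, 2)], 60.0);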

Implementations§

source§

impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S>

§Dot/scalar product

source

pub fn dot<R2: Dim, C2: Dim, SB>(&self, rhs: &Matrix<T, R2, C2, SB>) -> T
where SB: RawStorage<T, R2, C2>, ShapeConstraint: DimEq<R, R2> + DimEq<C, C2>,

The dot product between two vectors or matrices (seen as vectors).

This is equal to self.transpose() * rhs. For the sesquilinear complex dot product, use self.dotc(rhs).

Note that this is not the matrix multiplication as in, e.g., numpy. For matrix multiplication, use one of: .gemm, .mul_to, .mul, the * operator.

§Example
let vec1 = Vector3::new(1.0, 2.0, 3.0);
let vec2 = Vector3::new(0.1, 0.2, 0.3);
assert_eq!(vec1.dot(&vec2), 1.4);

let mat1 = Matrix2x3::new(1.0, 2.0, 3.0,
                          4.0, 5.0, 6.0);
let mat2 = Matrix2x3::new(0.1, 0.2, 0.3,
                          0.4, 0.5, 0.6);
assert_eq!(mat1.dot(&mat2), 9.1);
source

pub fn dotc<R2: Dim, C2: Dim, SB>(&self, rhs: &Matrix<T, R2, C2, SB>) -> T
where T: SimdComplexField, SB: RawStorage<T, R2, C2>, ShapeConstraint: DimEq<R, R2> + DimEq<C, C2>,

The conjugate-linear dot product between two vectors or matrices (seen as vectors).

This is equal to self.adjoint() * rhs. For real vectors, this is identical to self.dot(&rhs). Note that this is not the matrix multiplication as in, e.g., numpy. For matrix multiplication, use one of: .gemm, .mul_to, .mul, the * operator.

§Example
let vec1 = Vector2::new(Complex::new(1.0, 2.0), Complex::new(3.0, 4.0));
let vec2 = Vector2::new(Complex::new(0.4, 0.3), Complex::new(0.2, 0.1));
assert_eq!(vec1.dotc(&vec2), Complex::new(2.0, -1.0));

// Note that for complex vectors, we generally have:
// vec1.dotc(&vec2) != vec1.dot(&vec2)
assert_ne!(vec1.dotc(&vec2), vec1.dot(&vec2));
source

pub fn tr_dot<R2: Dim, C2: Dim, SB>(&self, rhs: &Matrix<T, R2, C2, SB>) -> T
where SB: RawStorage<T, R2, C2>, ShapeConstraint: DimEq<C, R2> + DimEq<R, C2>,

The dot product between the transpose of self and rhs.

§Example
let vec1 = Vector3::new(1.0, 2.0, 3.0);
let vec2 = RowVector3::new(0.1, 0.2, 0.3);
assert_eq!(vec1.tr_dot(&vec2), 1.4);

let mat1 = Matrix2x3::new(1.0, 2.0, 3.0,
                          4.0, 5.0, 6.0);
let mat2 = Matrix3x2::new(0.1, 0.4,
                          0.2, 0.5,
                          0.3, 0.6);
assert_eq!(mat1.tr_dot(&mat2), 9.1);
source§

impl<T, D: Dim, S> Matrix<T, D, Const<1>, S>

§BLAS functions

source

pub fn axcpy<D2: Dim, SB>(&mut self, a: T, x: &Vector<T, D2, SB>, c: T, b: T)
where SB: Storage<T, D2>, ShapeConstraint: DimEq<D, D2>,

Computes self = a * x * c + b * self.

If b is zero, self is never read from.

§Example
let mut vec1 = Vector3::new(1.0, 2.0, 3.0);
let vec2 = Vector3::new(0.1, 0.2, 0.3);
vec1.axcpy(5.0, &vec2, 2.0, 5.0);
assert_eq!(vec1, Vector3::new(6.0, 12.0, 18.0));
source

pub fn axpy<D2: Dim, SB>(&mut self, a: T, x: &Vector<T, D2, SB>, b: T)
where T: One, SB: Storage<T, D2>, ShapeConstraint: DimEq<D, D2>,

Computes self = a * x + b * self.

If b is zero, self is never read from.

§Example
let mut vec1 = Vector3::new(1.0, 2.0, 3.0);
let vec2 = Vector3::new(0.1, 0.2, 0.3);
vec1.axpy(10.0, &vec2, 5.0);
assert_eq!(vec1, Vector3::new(6.0, 12.0, 18.0));
source

pub fn gemv<R2: Dim, C2: Dim, D3: Dim, SB, SC>( &mut self, alpha: T, a: &Matrix<T, R2, C2, SB>, x: &Vector<T, D3, SC>, beta: T, )
where T: One, SB: Storage<T, R2, C2>, SC: Storage<T, D3>, ShapeConstraint: DimEq<D, R2> + AreMultipliable<R2, C2, D3, U1>,

Computes self = alpha * a * x + beta * self, where a is a matrix, x a vector, and alpha, beta two scalars.

If beta is zero, self is never read.

§Example
let mut vec1 = Vector2::new(1.0, 2.0);
let vec2 = Vector2::new(0.1, 0.2);
let mat = Matrix2::new(1.0, 2.0,
                       3.0, 4.0);
vec1.gemv(10.0, &mat, &vec2, 5.0);
assert_eq!(vec1, Vector2::new(10.0, 21.0));
source

pub fn sygemv<D2: Dim, D3: Dim, SB, SC>( &mut self, alpha: T, a: &SquareMatrix<T, D2, SB>, x: &Vector<T, D3, SC>, beta: T, )
where T: One, SB: Storage<T, D2, D2>, SC: Storage<T, D3>, ShapeConstraint: DimEq<D, D2> + AreMultipliable<D2, D2, D3, U1>,

Computes self = alpha * a * x + beta * self, where a is a symmetric matrix, x a vector, and alpha, beta two scalars.

For hermitian matrices, use .hegemv instead. If beta is zero, self is never read. Only the lower-triangular part of a (including the diagonal) is actually read.

§Examples
let mat = Matrix2::new(1.0, 2.0,
                       2.0, 4.0);
let mut vec1 = Vector2::new(1.0, 2.0);
let vec2 = Vector2::new(0.1, 0.2);
vec1.sygemv(10.0, &mat, &vec2, 5.0);
assert_eq!(vec1, Vector2::new(10.0, 20.0));


// The matrix upper-triangular elements can be garbage because they are never
// read by this method. Therefore, it is not necessary for the caller to
// fill the matrix's upper triangle.
let mat = Matrix2::new(1.0, 9999999.9999999,
                       2.0, 4.0);
let mut vec1 = Vector2::new(1.0, 2.0);
vec1.sygemv(10.0, &mat, &vec2, 5.0);
assert_eq!(vec1, Vector2::new(10.0, 20.0));
source

pub fn hegemv<D2: Dim, D3: Dim, SB, SC>( &mut self, alpha: T, a: &SquareMatrix<T, D2, SB>, x: &Vector<T, D3, SC>, beta: T, )
where T: SimdComplexField, SB: Storage<T, D2, D2>, SC: Storage<T, D3>, ShapeConstraint: DimEq<D, D2> + AreMultipliable<D2, D2, D3, U1>,

Computes self = alpha * a * x + beta * self, where a is an hermitian matrix, x a vector, and alpha, beta two scalars.

If beta is zero, self is never read. Only the lower-triangular part of a (including the diagonal) is actually read.

§Examples
let mat = Matrix2::new(Complex::new(1.0, 0.0), Complex::new(2.0, -0.1),
                       Complex::new(2.0, 1.0), Complex::new(4.0, 0.0));
let mut vec1 = Vector2::new(Complex::new(1.0, 2.0), Complex::new(3.0, 4.0));
let vec2 = Vector2::new(Complex::new(0.1, 0.2), Complex::new(0.3, 0.4));
vec1.hegemv(Complex::new(10.0, 20.0), &mat, &vec2, Complex::new(5.0, 15.0));
assert_eq!(vec1, Vector2::new(Complex::new(-28.0, 54.0), Complex::new(-75.0, 110.0)));


// The matrix upper-triangular elements can be garbage because they are never
// read by this method. Therefore, it is not necessary for the caller to
// fill the matrix's upper triangle.

let mat = Matrix2::new(Complex::new(1.0, 0.0), Complex::new(99999999.9, 999999999.9),
                       Complex::new(2.0, 1.0), Complex::new(4.0, 0.0));
let mut vec1 = Vector2::new(Complex::new(1.0, 2.0), Complex::new(3.0, 4.0));
let vec2 = Vector2::new(Complex::new(0.1, 0.2), Complex::new(0.3, 0.4));
vec1.hegemv(Complex::new(10.0, 20.0), &mat, &vec2, Complex::new(5.0, 15.0));
assert_eq!(vec1, Vector2::new(Complex::new(-28.0, 54.0), Complex::new(-75.0, 110.0)));
source

pub fn gemv_tr<R2: Dim, C2: Dim, D3: Dim, SB, SC>( &mut self, alpha: T, a: &Matrix<T, R2, C2, SB>, x: &Vector<T, D3, SC>, beta: T, )
where T: One, SB: Storage<T, R2, C2>, SC: Storage<T, D3>, ShapeConstraint: DimEq<D, C2> + AreMultipliable<C2, R2, D3, U1>,

Computes self = alpha * a.transpose() * x + beta * self, where a is a matrix, x a vector, and alpha, beta two scalars.

If beta is zero, self is never read.

§Example
let mat = Matrix2::new(1.0, 3.0,
                       2.0, 4.0);
let mut vec1 = Vector2::new(1.0, 2.0);
let vec2 = Vector2::new(0.1, 0.2);
let expected = mat.transpose() * vec2 * 10.0 + vec1 * 5.0;

vec1.gemv_tr(10.0, &mat, &vec2, 5.0);
assert_eq!(vec1, expected);
source

pub fn gemv_ad<R2: Dim, C2: Dim, D3: Dim, SB, SC>( &mut self, alpha: T, a: &Matrix<T, R2, C2, SB>, x: &Vector<T, D3, SC>, beta: T, )
where T: SimdComplexField, SB: Storage<T, R2, C2>, SC: Storage<T, D3>, ShapeConstraint: DimEq<D, C2> + AreMultipliable<C2, R2, D3, U1>,

Computes self = alpha * a.adjoint() * x + beta * self, where a is a matrix, x a vector, and alpha, beta two scalars.

For real matrices, this is the same as .gemv_tr. If beta is zero, self is never read.

§Example
let mat = Matrix2::new(Complex::new(1.0, 2.0), Complex::new(3.0, 4.0),
                       Complex::new(5.0, 6.0), Complex::new(7.0, 8.0));
let mut vec1 = Vector2::new(Complex::new(1.0, 2.0), Complex::new(3.0, 4.0));
let vec2 = Vector2::new(Complex::new(0.1, 0.2), Complex::new(0.3, 0.4));
let expected = mat.adjoint() * vec2 * Complex::new(10.0, 20.0) + vec1 * Complex::new(5.0, 15.0);

vec1.gemv_ad(Complex::new(10.0, 20.0), &mat, &vec2, Complex::new(5.0, 15.0));
assert_eq!(vec1, expected);
source§

impl<T, R1: Dim, C1: Dim, S: StorageMut<T, R1, C1>> Matrix<T, R1, C1, S>

source

pub fn ger<D2: Dim, D3: Dim, SB, SC>( &mut self, alpha: T, x: &Vector<T, D2, SB>, y: &Vector<T, D3, SC>, beta: T, )
where T: One, SB: Storage<T, D2>, SC: Storage<T, D3>, ShapeConstraint: DimEq<R1, D2> + DimEq<C1, D3>,

Computes self = alpha * x * y.transpose() + beta * self.

If beta is zero, self is never read.

§Example
let mut mat = Matrix2x3::repeat(4.0);
let vec1 = Vector2::new(1.0, 2.0);
let vec2 = Vector3::new(0.1, 0.2, 0.3);
let expected = vec1 * vec2.transpose() * 10.0 + mat * 5.0;

mat.ger(10.0, &vec1, &vec2, 5.0);
assert_eq!(mat, expected);
source

pub fn gerc<D2: Dim, D3: Dim, SB, SC>( &mut self, alpha: T, x: &Vector<T, D2, SB>, y: &Vector<T, D3, SC>, beta: T, )
where T: SimdComplexField, SB: Storage<T, D2>, SC: Storage<T, D3>, ShapeConstraint: DimEq<R1, D2> + DimEq<C1, D3>,

Computes self = alpha * x * y.adjoint() + beta * self.

If beta is zero, self is never read.

§Example
let mut mat = Matrix2x3::repeat(Complex::new(4.0, 5.0));
let vec1 = Vector2::new(Complex::new(1.0, 2.0), Complex::new(3.0, 4.0));
let vec2 = Vector3::new(Complex::new(0.6, 0.5), Complex::new(0.4, 0.5), Complex::new(0.2, 0.1));
let expected = vec1 * vec2.adjoint() * Complex::new(10.0, 20.0) + mat * Complex::new(5.0, 15.0);

mat.gerc(Complex::new(10.0, 20.0), &vec1, &vec2, Complex::new(5.0, 15.0));
assert_eq!(mat, expected);
source

pub fn gemm<R2: Dim, C2: Dim, R3: Dim, C3: Dim, SB, SC>( &mut self, alpha: T, a: &Matrix<T, R2, C2, SB>, b: &Matrix<T, R3, C3, SC>, beta: T, )
where T: One, SB: Storage<T, R2, C2>, SC: Storage<T, R3, C3>, ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C3> + AreMultipliable<R2, C2, R3, C3>,

Computes self = alpha * a * b + beta * self, where a, b, and self are matrices, and alpha and beta are scalars.

If beta is zero, self is never read.

§Example
let mut mat1 = Matrix2x4::identity();
let mat2 = Matrix2x3::new(1.0, 2.0, 3.0,
                          4.0, 5.0, 6.0);
let mat3 = Matrix3x4::new(0.1, 0.2, 0.3, 0.4,
                          0.5, 0.6, 0.7, 0.8,
                          0.9, 1.0, 1.1, 1.2);
let expected = mat2 * mat3 * 10.0 + mat1 * 5.0;

mat1.gemm(10.0, &mat2, &mat3, 5.0);
assert_relative_eq!(mat1, expected);
source

pub fn gemm_tr<R2: Dim, C2: Dim, R3: Dim, C3: Dim, SB, SC>( &mut self, alpha: T, a: &Matrix<T, R2, C2, SB>, b: &Matrix<T, R3, C3, SC>, beta: T, )
where T: One, SB: Storage<T, R2, C2>, SC: Storage<T, R3, C3>, ShapeConstraint: SameNumberOfRows<R1, C2> + SameNumberOfColumns<C1, C3> + AreMultipliable<C2, R2, R3, C3>,

Computes self = alpha * a.transpose() * b + beta * self, where a, b, and self are matrices, and alpha and beta are scalars.

If beta is zero, self is never read.

§Example
let mut mat1 = Matrix2x4::identity();
let mat2 = Matrix3x2::new(1.0, 4.0,
                          2.0, 5.0,
                          3.0, 6.0);
let mat3 = Matrix3x4::new(0.1, 0.2, 0.3, 0.4,
                          0.5, 0.6, 0.7, 0.8,
                          0.9, 1.0, 1.1, 1.2);
let expected = mat2.transpose() * mat3 * 10.0 + mat1 * 5.0;

mat1.gemm_tr(10.0, &mat2, &mat3, 5.0);
assert_eq!(mat1, expected);
source

pub fn gemm_ad<R2: Dim, C2: Dim, R3: Dim, C3: Dim, SB, SC>( &mut self, alpha: T, a: &Matrix<T, R2, C2, SB>, b: &Matrix<T, R3, C3, SC>, beta: T, )
where T: SimdComplexField, SB: Storage<T, R2, C2>, SC: Storage<T, R3, C3>, ShapeConstraint: SameNumberOfRows<R1, C2> + SameNumberOfColumns<C1, C3> + AreMultipliable<C2, R2, R3, C3>,

Computes self = alpha * a.adjoint() * b + beta * self, where a, b, and self are matrices, and alpha and beta are scalars.

If beta is zero, self is never read.

§Example
let mut mat1 = Matrix2x4::identity();
let mat2 = Matrix3x2::new(Complex::new(1.0, 4.0), Complex::new(7.0, 8.0),
                          Complex::new(2.0, 5.0), Complex::new(9.0, 10.0),
                          Complex::new(3.0, 6.0), Complex::new(11.0, 12.0));
let mat3 = Matrix3x4::new(Complex::new(0.1, 1.3), Complex::new(0.2, 1.4), Complex::new(0.3, 1.5), Complex::new(0.4, 1.6),
                          Complex::new(0.5, 1.7), Complex::new(0.6, 1.8), Complex::new(0.7, 1.9), Complex::new(0.8, 2.0),
                          Complex::new(0.9, 2.1), Complex::new(1.0, 2.2), Complex::new(1.1, 2.3), Complex::new(1.2, 2.4));
let expected = mat2.adjoint() * mat3 * Complex::new(10.0, 20.0) + mat1 * Complex::new(5.0, 15.0);

mat1.gemm_ad(Complex::new(10.0, 20.0), &mat2, &mat3, Complex::new(5.0, 15.0));
assert_eq!(mat1, expected);
source§

impl<T, R1: Dim, C1: Dim, S: StorageMut<T, R1, C1>> Matrix<T, R1, C1, S>

source

pub fn ger_symm<D2: Dim, D3: Dim, SB, SC>( &mut self, alpha: T, x: &Vector<T, D2, SB>, y: &Vector<T, D3, SC>, beta: T, )
where T: One, SB: Storage<T, D2>, SC: Storage<T, D3>, ShapeConstraint: DimEq<R1, D2> + DimEq<C1, D3>,

👎Deprecated: This is renamed syger to match the original BLAS terminology.

Computes self = alpha * x * y.transpose() + beta * self, where self is a symmetric matrix.

If beta is zero, self is never read. The result is symmetric. Only the lower-triangular (including the diagonal) part of self is read/written.

§Example
let mut mat = Matrix2::identity();
let vec1 = Vector2::new(1.0, 2.0);
let vec2 = Vector2::new(0.1, 0.2);
let expected = vec1 * vec2.transpose() * 10.0 + mat * 5.0;
mat.m12 = 99999.99999; // This component is on the upper-triangular part and will not be read/written.

mat.ger_symm(10.0, &vec1, &vec2, 5.0);
assert_eq!(mat.lower_triangle(), expected.lower_triangle());
assert_eq!(mat.m12, 99999.99999); // This was untouched.
source

pub fn syger<D2: Dim, D3: Dim, SB, SC>( &mut self, alpha: T, x: &Vector<T, D2, SB>, y: &Vector<T, D3, SC>, beta: T, )
where T: One, SB: Storage<T, D2>, SC: Storage<T, D3>, ShapeConstraint: DimEq<R1, D2> + DimEq<C1, D3>,

Computes self = alpha * x * y.transpose() + beta * self, where self is a symmetric matrix.

For hermitian complex matrices, use .hegerc instead. If beta is zero, self is never read. The result is symmetric. Only the lower-triangular (including the diagonal) part of self is read/written.

§Example
let mut mat = Matrix2::identity();
let vec1 = Vector2::new(1.0, 2.0);
let vec2 = Vector2::new(0.1, 0.2);
let expected = vec1 * vec2.transpose() * 10.0 + mat * 5.0;
mat.m12 = 99999.99999; // This component is on the upper-triangular part and will not be read/written.

mat.syger(10.0, &vec1, &vec2, 5.0);
assert_eq!(mat.lower_triangle(), expected.lower_triangle());
assert_eq!(mat.m12, 99999.99999); // This was untouched.
source

pub fn hegerc<D2: Dim, D3: Dim, SB, SC>( &mut self, alpha: T, x: &Vector<T, D2, SB>, y: &Vector<T, D3, SC>, beta: T, )
where T: SimdComplexField, SB: Storage<T, D2>, SC: Storage<T, D3>, ShapeConstraint: DimEq<R1, D2> + DimEq<C1, D3>,

Computes self = alpha * x * y.adjoint() + beta * self, where self is an hermitian matrix.

If beta is zero, self is never read. The result is hermitian. Only the lower-triangular (including the diagonal) part of self is read/written.

§Example
let mut mat = Matrix2::identity();
let vec1 = Vector2::new(Complex::new(1.0, 3.0), Complex::new(2.0, 4.0));
let vec2 = Vector2::new(Complex::new(0.2, 0.4), Complex::new(0.1, 0.3));
let expected = vec1 * vec2.adjoint() * Complex::new(10.0, 20.0) + mat * Complex::new(5.0, 15.0);
mat.m12 = Complex::new(99999.99999, 88888.88888); // This component is on the upper-triangular part and will not be read/written.

mat.hegerc(Complex::new(10.0, 20.0), &vec1, &vec2, Complex::new(5.0, 15.0));
assert_eq!(mat.lower_triangle(), expected.lower_triangle());
assert_eq!(mat.m12, Complex::new(99999.99999, 88888.88888)); // This was untouched.
source§

impl<T, D1: Dim, S: StorageMut<T, D1, D1>> Matrix<T, D1, D1, S>

source

pub fn quadform_tr_with_workspace<D2, S2, R3, C3, S3, D4, S4>( &mut self, work: &mut Vector<T, D2, S2>, alpha: T, lhs: &Matrix<T, R3, C3, S3>, mid: &SquareMatrix<T, D4, S4>, beta: T, )
where D2: Dim, R3: Dim, C3: Dim, D4: Dim, S2: StorageMut<T, D2>, S3: Storage<T, R3, C3>, S4: Storage<T, D4, D4>, ShapeConstraint: DimEq<D1, D2> + DimEq<D1, R3> + DimEq<D2, R3> + DimEq<C3, D4>,

Computes the quadratic form self = alpha * lhs * mid * lhs.transpose() + beta * self.

This uses the provided workspace work to avoid allocations for intermediate results.

§Example
// Note that all those would also work with statically-sized matrices.
// We use DMatrix/DVector since that's the only case where pre-allocating the
// workspace is actually useful (assuming the same workspace is re-used for
// several computations) because it avoids repeated dynamic allocations.
let mut mat = DMatrix::identity(2, 2);
let lhs = DMatrix::from_row_slice(2, 3, &[1.0, 2.0, 3.0,
                                          4.0, 5.0, 6.0]);
let mid = DMatrix::from_row_slice(3, 3, &[0.1, 0.2, 0.3,
                                          0.5, 0.6, 0.7,
                                          0.9, 1.0, 1.1]);
// The random initialization shows that the values in the workspace do not
// matter, as they will be overwritten.
let mut workspace = DVector::new_random(2);
let expected = &lhs * &mid * lhs.transpose() * 10.0 + &mat * 5.0;

mat.quadform_tr_with_workspace(&mut workspace, 10.0, &lhs, &mid, 5.0);
assert_relative_eq!(mat, expected);
source

pub fn quadform_tr<R3, C3, S3, D4, S4>( &mut self, alpha: T, lhs: &Matrix<T, R3, C3, S3>, mid: &SquareMatrix<T, D4, S4>, beta: T, )
where R3: Dim, C3: Dim, D4: Dim, S3: Storage<T, R3, C3>, S4: Storage<T, D4, D4>, ShapeConstraint: DimEq<D1, D1> + DimEq<D1, R3> + DimEq<C3, D4>, DefaultAllocator: Allocator<D1>,

Computes the quadratic form self = alpha * lhs * mid * lhs.transpose() + beta * self.

This allocates a workspace vector of dimension D1 for intermediate results. If D1 is a type-level integer, then the allocation is performed on the stack. Use .quadform_tr_with_workspace(...) instead to avoid allocations.

§Example
let mut mat = Matrix2::identity();
let lhs = Matrix2x3::new(1.0, 2.0, 3.0,
                         4.0, 5.0, 6.0);
let mid = Matrix3::new(0.1, 0.2, 0.3,
                       0.5, 0.6, 0.7,
                       0.9, 1.0, 1.1);
let expected = lhs * mid * lhs.transpose() * 10.0 + mat * 5.0;

mat.quadform_tr(10.0, &lhs, &mid, 5.0);
assert_relative_eq!(mat, expected);
source

pub fn quadform_with_workspace<D2, S2, D3, S3, R4, C4, S4>( &mut self, work: &mut Vector<T, D2, S2>, alpha: T, mid: &SquareMatrix<T, D3, S3>, rhs: &Matrix<T, R4, C4, S4>, beta: T, )
where D2: Dim, D3: Dim, R4: Dim, C4: Dim, S2: StorageMut<T, D2>, S3: Storage<T, D3, D3>, S4: Storage<T, R4, C4>, ShapeConstraint: DimEq<D3, R4> + DimEq<D1, C4> + DimEq<D2, D3> + AreMultipliable<C4, R4, D2, U1>,

Computes the quadratic form self = alpha * rhs.transpose() * mid * rhs + beta * self.

This uses the provided workspace work to avoid allocations for intermediate results.

§Example
// Note that all those would also work with statically-sized matrices.
// We use DMatrix/DVector since that's the only case where pre-allocating the
// workspace is actually useful (assuming the same workspace is re-used for
// several computations) because it avoids repeated dynamic allocations.
let mut mat = DMatrix::identity(2, 2);
let rhs = DMatrix::from_row_slice(3, 2, &[1.0, 2.0,
                                          3.0, 4.0,
                                          5.0, 6.0]);
let mid = DMatrix::from_row_slice(3, 3, &[0.1, 0.2, 0.3,
                                          0.5, 0.6, 0.7,
                                          0.9, 1.0, 1.1]);
// The random initialization shows that the values in the workspace do not
// matter, as they will be overwritten.
let mut workspace = DVector::new_random(3);
let expected = rhs.transpose() * &mid * &rhs * 10.0 + &mat * 5.0;

mat.quadform_with_workspace(&mut workspace, 10.0, &mid, &rhs, 5.0);
assert_relative_eq!(mat, expected);
source

pub fn quadform<D2, S2, R3, C3, S3>( &mut self, alpha: T, mid: &SquareMatrix<T, D2, S2>, rhs: &Matrix<T, R3, C3, S3>, beta: T, )
where D2: Dim, R3: Dim, C3: Dim, S2: Storage<T, D2, D2>, S3: Storage<T, R3, C3>, ShapeConstraint: DimEq<D2, R3> + DimEq<D1, C3> + AreMultipliable<C3, R3, D2, U1>, DefaultAllocator: Allocator<D2>,

Computes the quadratic form self = alpha * rhs.transpose() * mid * rhs + beta * self.

This allocates a workspace vector of dimension D2 for intermediate results. If D2 is a type-level integer, then the allocation is performed on the stack. Use .quadform_with_workspace(...) instead to avoid allocations.

§Example
let mut mat = Matrix2::identity();
let rhs = Matrix3x2::new(1.0, 2.0,
                         3.0, 4.0,
                         5.0, 6.0);
let mid = Matrix3::new(0.1, 0.2, 0.3,
                       0.5, 0.6, 0.7,
                       0.9, 1.0, 1.1);
let expected = rhs.transpose() * mid * rhs * 10.0 + mat * 5.0;

mat.quadform(10.0, &mid, &rhs, 5.0);
assert_relative_eq!(mat, expected);
source§

impl<T, R: Dim, C: Dim, S> Matrix<T, R, C, S>
where T: Scalar + ClosedNeg, S: StorageMut<T, R, C>,

source

pub fn neg_mut(&mut self)

Negates self in-place.
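
A minimal illustrative example:

use nalgebra::Vector3;

let mut v = Vector3::new(1.0, -2.0, 3.0);
v.neg_mut();
assert_eq!(v, Vector3::new(-1.0, 2.0, -3.0));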

source§

impl<T, R1: Dim, C1: Dim, SA: Storage<T, R1, C1>> Matrix<T, R1, C1, SA>

source

pub fn add_to<R2: Dim, C2: Dim, SB, R3: Dim, C3: Dim, SC>( &self, rhs: &Matrix<T, R2, C2, SB>, out: &mut Matrix<T, R3, C3, SC>, )
where SB: Storage<T, R2, C2>, SC: StorageMut<T, R3, C3>, ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> + SameNumberOfRows<R1, R3> + SameNumberOfColumns<C1, C3>,

Equivalent to self + rhs but stores the result into out to avoid allocations.
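
For example:

use nalgebra::Matrix2;

let a = Matrix2::new(1.0, 2.0,
                     3.0, 4.0);
let b = Matrix2::new(10.0, 20.0,
                     30.0, 40.0);
let mut out = Matrix2::zeros();

a.add_to(&b, &mut out);
assert_eq!(out, a + b);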

source§

impl<T, R1: Dim, C1: Dim, SA: Storage<T, R1, C1>> Matrix<T, R1, C1, SA>

source

pub fn sub_to<R2: Dim, C2: Dim, SB, R3: Dim, C3: Dim, SC>( &self, rhs: &Matrix<T, R2, C2, SB>, out: &mut Matrix<T, R3, C3, SC>, )
where SB: Storage<T, R2, C2>, SC: StorageMut<T, R3, C3>, ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> + SameNumberOfRows<R1, R3> + SameNumberOfColumns<C1, C3>,

Equivalent to self - rhs but stores the result into out to avoid allocations.
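
Similarly, for example:

use nalgebra::Matrix2;

let a = Matrix2::new(1.0, 2.0,
                     3.0, 4.0);
let b = Matrix2::new(10.0, 20.0,
                     30.0, 40.0);
let mut out = Matrix2::zeros();

a.sub_to(&b, &mut out);
assert_eq!(out, a - b);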

source§

impl<T, R1: Dim, C1: Dim, SA> Matrix<T, R1, C1, SA>
where T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign, SA: Storage<T, R1, C1>,

§Special multiplications.

source

pub fn tr_mul<R2: Dim, C2: Dim, SB>( &self, rhs: &Matrix<T, R2, C2, SB>, ) -> OMatrix<T, C1, C2>
where SB: Storage<T, R2, C2>, DefaultAllocator: Allocator<C1, C2>, ShapeConstraint: SameNumberOfRows<R1, R2>,

Equivalent to self.transpose() * rhs.
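
An illustrative sketch, checked against the explicit transpose:

use nalgebra::Matrix2x3;

let a = Matrix2x3::new(1.0, 2.0, 3.0,
                       4.0, 5.0, 6.0);
let b = Matrix2x3::new(7.0,  8.0,  9.0,
                       10.0, 11.0, 12.0);
// tr_mul computes the product without explicitly forming the transpose.
assert_eq!(a.tr_mul(&b), a.transpose() * b);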

source

pub fn ad_mul<R2: Dim, C2: Dim, SB>( &self, rhs: &Matrix<T, R2, C2, SB>, ) -> OMatrix<T, C1, C2>

Equivalent to self.adjoint() * rhs.

source

pub fn tr_mul_to<R2: Dim, C2: Dim, SB, R3: Dim, C3: Dim, SC>( &self, rhs: &Matrix<T, R2, C2, SB>, out: &mut Matrix<T, R3, C3, SC>, )
where SB: Storage<T, R2, C2>, SC: StorageMut<T, R3, C3>, ShapeConstraint: SameNumberOfRows<R1, R2> + DimEq<C1, R3> + DimEq<C2, C3>,

Equivalent to self.transpose() * rhs but stores the result into out to avoid allocations.

source

pub fn ad_mul_to<R2: Dim, C2: Dim, SB, R3: Dim, C3: Dim, SC>( &self, rhs: &Matrix<T, R2, C2, SB>, out: &mut Matrix<T, R3, C3, SC>, )
where T: SimdComplexField, SB: Storage<T, R2, C2>, SC: StorageMut<T, R3, C3>, ShapeConstraint: SameNumberOfRows<R1, R2> + DimEq<C1, R3> + DimEq<C2, C3>,

Equivalent to self.adjoint() * rhs but stores the result into out to avoid allocations.

source

pub fn mul_to<R2: Dim, C2: Dim, SB, R3: Dim, C3: Dim, SC>( &self, rhs: &Matrix<T, R2, C2, SB>, out: &mut Matrix<T, R3, C3, SC>, )
where SB: Storage<T, R2, C2>, SC: StorageMut<T, R3, C3>, ShapeConstraint: SameNumberOfRows<R3, R1> + SameNumberOfColumns<C3, C2> + AreMultipliable<R1, C1, R2, C2>,

Equivalent to self * rhs but stores the result into out to avoid allocations.
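
For example:

use nalgebra::{Matrix2x3, Matrix2x4, Matrix3x4};

let a = Matrix2x3::new(1.0, 2.0, 3.0,
                       4.0, 5.0, 6.0);
let b = Matrix3x4::new(1.0,  2.0,  3.0,  4.0,
                       5.0,  6.0,  7.0,  8.0,
                       9.0, 10.0, 11.0, 12.0);
let mut out = Matrix2x4::zeros();

a.mul_to(&b, &mut out);
assert_eq!(out, a * b);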

source

pub fn kronecker<R2: Dim, C2: Dim, SB>( &self, rhs: &Matrix<T, R2, C2, SB>, ) -> OMatrix<T, DimProd<R1, R2>, DimProd<C1, C2>>
where T: ClosedMulAssign, R1: DimMul<R2>, C1: DimMul<C2>, SB: Storage<T, R2, C2>, DefaultAllocator: Allocator<DimProd<R1, R2>, DimProd<C1, C2>>,

The Kronecker product of two matrices (also known as the tensor product of the corresponding linear maps).
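
An illustrative sketch: the result is made of the blocks self[(i, j)] * rhs:

use nalgebra::{Matrix2, Matrix4};

let a = Matrix2::new(1.0, 2.0,
                     3.0, 4.0);
let b = Matrix2::new(0.0, 5.0,
                     6.0, 7.0);
let expected = Matrix4::new( 0.0,  5.0,  0.0, 10.0,
                             6.0,  7.0, 12.0, 14.0,
                             0.0, 15.0,  0.0, 20.0,
                            18.0, 21.0, 24.0, 28.0);
assert_eq!(a.kronecker(&b), expected);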

source§

impl<T, D: DimName> Matrix<T, D, D, <DefaultAllocator as Allocator<D, D>>::Buffer<T>>
where T: Scalar + Zero + One, DefaultAllocator: Allocator<D, D>,

§Translation and scaling in any dimension

source

pub fn new_scaling(scaling: T) -> Self

Creates a new homogeneous matrix that applies the same scaling factor on each dimension.
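
An illustrative 2D sketch, applying the homogeneous matrix with transform_point (documented further below):

use nalgebra::{Matrix3, Point2};

// A 2D homogeneous scaling matrix; the homogeneous coordinate is left untouched.
let m = Matrix3::new_scaling(2.0);
assert_eq!(m.transform_point(&Point2::new(1.0, -3.0)), Point2::new(2.0, -6.0));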

source

pub fn new_nonuniform_scaling<SB>( scaling: &Vector<T, DimNameDiff<D, U1>, SB>, ) -> Self
where D: DimNameSub<U1>, SB: Storage<T, DimNameDiff<D, U1>>,

Creates a new homogeneous matrix that applies a distinct scaling factor for each dimension.
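
For example, in 2D:

use nalgebra::{Matrix3, Point2, Vector2};

let m = Matrix3::new_nonuniform_scaling(&Vector2::new(2.0, 3.0));
assert_eq!(m.transform_point(&Point2::new(1.0, 1.0)), Point2::new(2.0, 3.0));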

source

pub fn new_translation<SB>( translation: &Vector<T, DimNameDiff<D, U1>, SB>, ) -> Self
where D: DimNameSub<U1>, SB: Storage<T, DimNameDiff<D, U1>>,

Creates a new homogeneous matrix that applies a pure translation.
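
For example, in 2D:

use nalgebra::{Matrix3, Point2, Vector2};

let m = Matrix3::new_translation(&Vector2::new(1.0, 2.0));
assert_eq!(m.transform_point(&Point2::new(10.0, 20.0)), Point2::new(11.0, 22.0));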

source§

impl<T: RealField> Matrix<T, Const<3>, Const<3>, ArrayStorage<T, 3, 3>>

§2D transformations as a Matrix3

source

pub fn new_rotation(angle: T) -> Self

Builds a 2-dimensional homogeneous rotation matrix from an angle in radians.
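
An illustrative sketch, assuming the usual counterclockwise convention for positive angles:

use nalgebra::{Matrix3, Point2};

// A quarter turn maps the x axis onto the y axis.
let m = Matrix3::new_rotation(std::f64::consts::FRAC_PI_2);
let p = m.transform_point(&Point2::new(1.0, 0.0));
assert!(p.x.abs() < 1.0e-12);
assert!((p.y - 1.0).abs() < 1.0e-12);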

source

pub fn new_nonuniform_scaling_wrt_point( scaling: &Vector2<T>, pt: &Point2<T>, ) -> Self

Creates a new homogeneous matrix that applies a scaling factor for each dimension with respect to the point pt.

Can be used to implement zoom_to functionality.
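
An illustrative sketch, assuming pt is the fixed point of the scaling (the zoom_to use case):

use nalgebra::{Matrix3, Point2, Vector2};

let pt = Point2::new(1.0, 2.0);
let m = Matrix3::new_nonuniform_scaling_wrt_point(&Vector2::new(2.0, 3.0), &pt);
// The center of the scaling stays in place.
assert_eq!(m.transform_point(&pt), pt);
// Other points are scaled away from that center.
assert_eq!(m.transform_point(&Point2::new(2.0, 3.0)), Point2::new(3.0, 5.0));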

source§

impl<T: RealField> Matrix<T, Const<4>, Const<4>, ArrayStorage<T, 4, 4>>

§3D transformations as a Matrix4

source

pub fn new_rotation(axisangle: Vector3<T>) -> Self

Builds a 3D homogeneous rotation matrix from an axis and an angle (multiplied together).

Returns the identity matrix if the given argument is zero.
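
An illustrative sketch of the zero-argument case and of a quarter turn around the z axis:

use nalgebra::{Matrix4, Point3, Vector3};

// A zero axis-angle yields the identity, as stated above.
assert_eq!(Matrix4::new_rotation(Vector3::zeros()), Matrix4::identity());

let m = Matrix4::new_rotation(Vector3::new(0.0, 0.0, std::f64::consts::FRAC_PI_2));
let p = m.transform_point(&Point3::new(1.0, 0.0, 0.0));
assert!(p.x.abs() < 1.0e-12);
assert!((p.y - 1.0).abs() < 1.0e-12);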

source

pub fn new_rotation_wrt_point(axisangle: Vector3<T>, pt: Point3<T>) -> Self

Builds a 3D homogeneous rotation matrix from an axis and an angle (multiplied together), with the rotation centered at the point pt.

Returns the identity matrix if the given axis-angle is zero.

source

pub fn new_nonuniform_scaling_wrt_point( scaling: &Vector3<T>, pt: &Point3<T>, ) -> Self

Creates a new homogeneous matrix that applies a scaling factor for each dimension with respect to the point pt.

Can be used to implement zoom_to functionality.

source

pub fn from_scaled_axis(axisangle: Vector3<T>) -> Self

Builds a 3D homogeneous rotation matrix from an axis and an angle (multiplied together).

Returns the identity matrix if the given argument is zero. This is identical to Self::new_rotation.

source

pub fn from_euler_angles(roll: T, pitch: T, yaw: T) -> Self

Creates a new rotation from Euler angles.

The primitive rotations are applied in order: 1 roll − 2 pitch − 3 yaw.

source

pub fn from_axis_angle(axis: &Unit<Vector3<T>>, angle: T) -> Self

Builds a 3D homogeneous rotation matrix from an axis and a rotation angle.

source

pub fn new_orthographic( left: T, right: T, bottom: T, top: T, znear: T, zfar: T, ) -> Self

Creates a new homogeneous matrix for an orthographic projection.

source

pub fn new_perspective(aspect: T, fovy: T, znear: T, zfar: T) -> Self

Creates a new homogeneous matrix for a perspective projection.

source

pub fn face_towards( eye: &Point3<T>, target: &Point3<T>, up: &Vector3<T>, ) -> Self

Creates an isometry that corresponds to the local frame of an observer standing at the point eye and looking toward target.

It maps the view direction target - eye to the positive z axis and the origin to the eye.

source

pub fn new_observer_frame( eye: &Point3<T>, target: &Point3<T>, up: &Vector3<T>, ) -> Self

👎Deprecated: renamed to face_towards

Deprecated: Use Matrix4::face_towards instead.

source

pub fn look_at_rh(eye: &Point3<T>, target: &Point3<T>, up: &Vector3<T>) -> Self

Builds a right-handed look-at view matrix.

source

pub fn look_at_lh(eye: &Point3<T>, target: &Point3<T>, up: &Vector3<T>) -> Self

Builds a left-handed look-at view matrix.

source§

impl<T: Scalar + Zero + One + ClosedMulAssign + ClosedAddAssign, D: DimName, S: Storage<T, D, D>> Matrix<T, D, D, S>

§Append/prepend translation and scaling

source

pub fn append_scaling(&self, scaling: T) -> OMatrix<T, D, D>

Computes the transformation equal to self followed by a uniform scaling factor.

source

pub fn prepend_scaling(&self, scaling: T) -> OMatrix<T, D, D>

Computes the transformation equal to a uniform scaling factor followed by self.
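
An illustrative sketch of the append/prepend convention, assuming that appending composes the scaling after self (left multiplication) and prepending composes it before (right multiplication):

use nalgebra::{Matrix3, Vector2};

let m = Matrix3::new_translation(&Vector2::new(1.0, 2.0));
// "append" applies the scaling after `self`, "prepend" applies it before.
assert_eq!(m.append_scaling(2.0),  Matrix3::new_scaling(2.0) * m);
assert_eq!(m.prepend_scaling(2.0), m * Matrix3::new_scaling(2.0));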

source

pub fn append_nonuniform_scaling<SB>( &self, scaling: &Vector<T, DimNameDiff<D, U1>, SB>, ) -> OMatrix<T, D, D>
where D: DimNameSub<U1>, SB: Storage<T, DimNameDiff<D, U1>>, DefaultAllocator: Allocator<D, D>,

Computes the transformation equal to self followed by a non-uniform scaling factor.

source

pub fn prepend_nonuniform_scaling<SB>( &self, scaling: &Vector<T, DimNameDiff<D, U1>, SB>, ) -> OMatrix<T, D, D>
where D: DimNameSub<U1>, SB: Storage<T, DimNameDiff<D, U1>>, DefaultAllocator: Allocator<D, D>,

Computes the transformation equal to a non-uniform scaling factor followed by self.

source

pub fn append_translation<SB>( &self, shift: &Vector<T, DimNameDiff<D, U1>, SB>, ) -> OMatrix<T, D, D>
where D: DimNameSub<U1>, SB: Storage<T, DimNameDiff<D, U1>>, DefaultAllocator: Allocator<D, D>,

Computes the transformation equal to self followed by a translation.

source

pub fn prepend_translation<SB>( &self, shift: &Vector<T, DimNameDiff<D, U1>, SB>, ) -> OMatrix<T, D, D>

Computes the transformation equal to a translation followed by self.
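
An illustrative sketch, under the same append/prepend convention as for scaling:

use nalgebra::{Matrix3, Vector2};

let m = Matrix3::new_scaling(2.0);
let t = Vector2::new(1.0, 2.0);
assert_eq!(m.append_translation(&t),  Matrix3::new_translation(&t) * m);
assert_eq!(m.prepend_translation(&t), m * Matrix3::new_translation(&t));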

source

pub fn append_scaling_mut(&mut self, scaling: T)
where S: StorageMut<T, D, D>, D: DimNameSub<U1>,

Computes in-place the transformation equal to self followed by a uniform scaling factor.

source

pub fn prepend_scaling_mut(&mut self, scaling: T)
where S: StorageMut<T, D, D>, D: DimNameSub<U1>,

Computes in-place the transformation equal to a uniform scaling factor followed by self.

source

pub fn append_nonuniform_scaling_mut<SB>( &mut self, scaling: &Vector<T, DimNameDiff<D, U1>, SB>, )
where S: StorageMut<T, D, D>, D: DimNameSub<U1>, SB: Storage<T, DimNameDiff<D, U1>>,

Computes in-place the transformation equal to self followed by a non-uniform scaling factor.

source

pub fn prepend_nonuniform_scaling_mut<SB>( &mut self, scaling: &Vector<T, DimNameDiff<D, U1>, SB>, )
where S: StorageMut<T, D, D>, D: DimNameSub<U1>, SB: Storage<T, DimNameDiff<D, U1>>,

Computes in-place the transformation equal to a non-uniform scaling factor followed by self.

source

pub fn append_translation_mut<SB>( &mut self, shift: &Vector<T, DimNameDiff<D, U1>, SB>, )
where S: StorageMut<T, D, D>, D: DimNameSub<U1>, SB: Storage<T, DimNameDiff<D, U1>>,

Computes in-place the transformation equal to self followed by a translation.

source

pub fn prepend_translation_mut<SB>( &mut self, shift: &Vector<T, DimNameDiff<D, U1>, SB>, )

Computes in-place the transformation equal to a translation followed by self.

source§

impl<T: RealField, D: DimNameSub<U1>, S: Storage<T, D, D>> Matrix<T, D, D, S>

§Transformation of vectors and points

source

pub fn transform_vector( &self, v: &OVector<T, DimNameDiff<D, U1>>, ) -> OVector<T, DimNameDiff<D, U1>>

Transforms the given vector, assuming the matrix self uses homogeneous coordinates.
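
For example, a pure translation moves points but leaves vectors (directions) unchanged:

use nalgebra::{Matrix3, Vector2};

let m = Matrix3::new_translation(&Vector2::new(1.0, 2.0));
assert_eq!(m.transform_vector(&Vector2::new(3.0, 4.0)), Vector2::new(3.0, 4.0));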

source§

impl<T: RealField, S: Storage<T, Const<3>, Const<3>>> Matrix<T, Const<3>, Const<3>, S>

source

pub fn transform_point(&self, pt: &Point<T, 2>) -> Point<T, 2>

Transforms the given point, assuming the matrix self uses homogeneous coordinates.

source§

impl<T: RealField, S: Storage<T, Const<4>, Const<4>>> Matrix<T, Const<4>, Const<4>, S>

source

pub fn transform_point(&self, pt: &Point<T, 3>) -> Point<T, 3>

Transforms the given point, assuming the matrix self uses homogeneous coordinates.

source§

impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S>

source

pub fn abs(&self) -> OMatrix<T, R, C>

Computes the component-wise absolute value.

§Example
let a = Matrix2::new(0.0, 1.0,
                     -2.0, -3.0);
assert_eq!(a.abs(), Matrix2::new(0.0, 1.0, 2.0, 3.0))
source§

impl<T: Scalar, R1: Dim, C1: Dim, SA: Storage<T, R1, C1>> Matrix<T, R1, C1, SA>

§Componentwise operations

source

pub fn component_mul<R2, C2, SB>( &self, rhs: &Matrix<T, R2, C2, SB>, ) -> MatrixSum<T, R1, C1, R2, C2>
where T: ClosedMulAssign, R2: Dim, C2: Dim, SB: Storage<T, R2, C2>, DefaultAllocator: SameShapeAllocator<R1, C1, R2, C2>, ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2>,

Componentwise matrix or vector multiplication.

§Example
let a = Matrix2::new(0.0, 1.0, 2.0, 3.0);
let b = Matrix2::new(4.0, 5.0, 6.0, 7.0);
let expected = Matrix2::new(0.0, 5.0, 12.0, 21.0);

assert_eq!(a.component_mul(&b), expected);
source

pub fn cmpy<R2, C2, SB, R3, C3, SC>( &mut self, alpha: T, a: &Matrix<T, R2, C2, SB>, b: &Matrix<T, R3, C3, SC>, beta: T, )
where T: ClosedMulAssign + Zero + Mul<T, Output = T> + Add<T, Output = T>, R2: Dim, C2: Dim, R3: Dim, C3: Dim, SA: StorageMut<T, R1, C1>, SB: Storage<T, R2, C2>, SC: Storage<T, R3, C3>, ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> + SameNumberOfRows<R1, R3> + SameNumberOfColumns<C1, C3>,

Computes componentwise self[i] = alpha * a[i] * b[i] + beta * self[i].

§Example
let mut m = Matrix2::new(0.0, 1.0, 2.0, 3.0);
let a = Matrix2::new(0.0, 1.0, 2.0, 3.0);
let b = Matrix2::new(4.0, 5.0, 6.0, 7.0);
let expected = (a.component_mul(&b) * 5.0) + m * 10.0;

m.cmpy(5.0, &a, &b, 10.0);
assert_eq!(m, expected);
source

pub fn component_mul_assign<R2, C2, SB>(&mut self, rhs: &Matrix<T, R2, C2, SB>)
where T: ClosedMulAssign, R2: Dim, C2: Dim, SA: StorageMut<T, R1, C1>, SB: Storage<T, R2, C2>, ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2>,

In-place componentwise matrix or vector multiplication.

§Example
let mut a = Matrix2::new(0.0, 1.0, 2.0, 3.0);
let b = Matrix2::new(4.0, 5.0, 6.0, 7.0);
let expected = Matrix2::new(0.0, 5.0, 12.0, 21.0);

a.component_mul_assign(&b);

assert_eq!(a, expected);
source

pub fn component_mul_mut<R2, C2, SB>(&mut self, rhs: &Matrix<T, R2, C2, SB>)
where T: ClosedMulAssign, R2: Dim, C2: Dim, SA: StorageMut<T, R1, C1>, SB: Storage<T, R2, C2>, ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2>,

👎De