Description
I encountered an unexpected behavior when passing a NumPy array to a C++ function bound with eigenpy, where the C++ function expects an Eigen::Tensor<double, 3, Eigen::ColMajor>. It seems that the tensor is interpreted as if it were RowMajor, without any warning or conversion.
Reproducible example
C++ code:
```cpp
#include <iomanip>
#include <iostream>

#include <eigenpy/eigenpy.hpp>
#include <unsupported/Eigen/CXX11/Tensor>

namespace bp = boost::python;

void print_tensor3d_col(const Eigen::Tensor<double, 3, Eigen::ColMajor>& tensor) {
  for (int i = 0; i < tensor.dimension(0); ++i) {
    std::cout << "Slice [" << i << "]:\n";
    for (int j = 0; j < tensor.dimension(1); ++j) {
      std::cout << "  Row " << j << ": ";
      for (int k = 0; k < tensor.dimension(2); ++k) {
        std::cout << std::setw(8) << std::fixed << std::setprecision(7)
                  << tensor(i, j, k) << " ";
      }
      std::cout << "\n";
    }
    std::cout << "\n";
  }
}

void print_tensor3d(const Eigen::Tensor<double, 3, Eigen::RowMajor>& tensor) {
  for (int i = 0; i < tensor.dimension(0); ++i) {
    std::cout << "Slice [" << i << "]:\n";
    for (int j = 0; j < tensor.dimension(1); ++j) {
      std::cout << "  Row " << j << ": ";
      for (int k = 0; k < tensor.dimension(2); ++k) {
        std::cout << std::setw(8) << std::fixed << std::setprecision(3)
                  << tensor(i, j, k) << " ";
      }
      std::cout << "\n";
    }
    std::cout << "\n";
  }
}

Eigen::Tensor<double, 3, Eigen::RowMajor> RowMajor(
    Eigen::Tensor<double, 3, Eigen::RowMajor> rowmajor) {
  std::cout << "coord 1, 1, 0 : " << rowmajor(1, 1, 0) << std::endl;
  print_tensor3d(rowmajor);
  return rowmajor;
}

Eigen::Tensor<double, 3, Eigen::ColMajor> ColMajor(
    Eigen::Tensor<double, 3, Eigen::ColMajor> colmajor) {
  std::cout << "coord 1, 1, 0 : " << colmajor(1, 1, 0) << std::endl;
  print_tensor3d_col(colmajor);
  return colmajor;
}

BOOST_PYTHON_MODULE(tartempion) {
  eigenpy::enableEigenPy();
  eigenpy::enableEigenPySpecific<Eigen::Tensor<double, 3, Eigen::RowMajor>>();
  eigenpy::enableEigenPySpecific<Eigen::Tensor<double, 3, Eigen::ColMajor>>();
  eigenpy::enableEigenPySpecific<
      Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>>();
  bp::def("ColMajor", &ColMajor);
  bp::def("RowMajor", &RowMajor);
}
```
Python test code
```python
import numpy as np
import tartempion

shape = (2, 3, 4)
tensor = np.arange(1, np.prod(shape) + 1).reshape(shape)

print("Function ColMajor")
print(tartempion.ColMajor(tensor))
print("Function RowMajor")
print(tartempion.RowMajor(tensor))
```
Output
```
Function ColMajor
coord 1, 1, 0 : 4     # ❌ should be 17
...
Function RowMajor
coord 1, 1, 0 : 17    # ✅ correct
...
```
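The two values can be reproduced in pure NumPy by computing the flat offsets by hand: the buffer holds 1..24 in C (row-major) order, and reading coordinate (1, 1, 0) with column-major strides lands on a different element. This is only a demonstration of the stride arithmetic, independent of the binding:

```python
import numpy as np

shape = (2, 3, 4)
flat = np.arange(1, np.prod(shape) + 1)  # 1..24, the C-ordered buffer

i, j, k = 1, 1, 0
# Row-major (C) strides in elements for shape (2, 3, 4): (12, 4, 1)
row_major_offset = i * 12 + j * 4 + k * 1   # = 16
# Column-major (F) strides in elements for shape (2, 3, 4): (1, 2, 6)
col_major_offset = i * 1 + j * 2 + k * 6    # = 3

print(flat[row_major_offset])  # 17, what the RowMajor binding prints
print(flat[col_major_offset])  # 4, what the ColMajor binding prints
```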
From this, I conclude that:

- the NumPy array's buffer is blindly reinterpreted with the ColMajor layout by the C++ binding;
- no warning or error is raised;
- the behavior is silently incorrect and leads to wrong memory reads.
Interestingly, if I pass a tensor to the ColMajor C++ function and return it as-is (still as ColMajor), the returned NumPy array appears correct. This suggests that the same incorrect layout assumption is applied once at the input boundary and again at the output boundary, so the two misreads cancel out when the tensor is simply returned untouched. However, as soon as any operation or modification is performed on the tensor inside the C++ function, the behavior becomes erratic and unpredictable, since the code is working with a layout that does not match the actual memory representation.
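The cancellation for an untouched round trip can also be checked in pure NumPy: misreading a C-ordered buffer with Fortran-order strides, then writing it back out with the same Fortran-order strides, reproduces the original buffer, even though the intermediate view (what the C++ side sees) is wrong:

```python
import numpy as np

shape = (2, 3, 4)
tensor = np.arange(1, np.prod(shape) + 1).reshape(shape)  # C order

# Misread the raw buffer as if it were column-major...
misread = tensor.ravel(order='C').reshape(shape, order='F')
# ...then write it back to a buffer using the same column-major layout.
round_trip = misread.ravel(order='F').reshape(shape, order='C')

print(np.array_equal(round_trip, tensor))  # True: the two misreads cancel
print(np.array_equal(misread, tensor))     # False: the tensor seen in C++ differs
print(misread[1, 1, 0])                    # 4, matching the ColMajor output above
```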
Ideally, one of the following should happen:

- (a) a conversion is performed at the boundary, from the NumPy array's layout to the ColMajor Eigen::Tensor;
- (b) an error is raised and the call is rejected if the layouts don't match.

At the very least, the issue should be documented clearly.
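Until a fix lands in the binding, option (b) can at least be emulated on the Python side. A minimal sketch of such a guard (the function name `require_fortran_order` is hypothetical, not part of eigenpy; whether eigenpy then maps a Fortran-ordered buffer correctly would still need to be verified against the binding):

```python
import numpy as np

def require_fortran_order(arr: np.ndarray) -> np.ndarray:
    """Hypothetical user-side guard: reject arrays whose memory layout
    does not match a ColMajor Eigen::Tensor."""
    if arr.flags['F_CONTIGUOUS']:
        return arr
    raise TypeError(
        f"expected a Fortran-ordered array for a ColMajor tensor, "
        f"got strides {arr.strides} for shape {arr.shape}"
    )

tensor = np.arange(1, 25).reshape(2, 3, 4)        # C order
require_fortran_order(np.asfortranarray(tensor))  # passes: layout now matches
# require_fortran_order(tensor)                   # would raise TypeError
```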