Open
Description
Hi,
I am a beginner with uTensor and embedded C/C++. I have some experience with Python and wanted to study intelligence at the edge by building models in Python and deploying them on Cortex boards. @neil-tan helped me understand the basics, and I used his tutorial to get started.
Passing the input data, wrapped in a WrappedRamTensor, works great the first time. When I provide another instance of input data and run a second pass, I get an error. What could I be doing wrong? Does the input data tensor have to be thread-safe?
Output with the error
[1] First instance of prediction: For input 10.000
Input: 10.000 | Expected: 72.999 | Predicted: 71.871
[2] Second instance of prediction: For input 40.000
[Error] lib\uTensor\core\context.cpp:96 @push Tensor "Placeholder:0" not found
Source code
// A single value is being used so Tensor shape is {1, 1}
float input_data[1] = {10.0};
Tensor* input_x = new WrappedRamTensor<float>({1, 1}, input_data);
// Value predicted by LR model
S_TENSOR pred_tensor;
float pred_value;
// Compute model value for comparison
float W = 6.968;
float B = 3.319;
float y;
// First pass: Constant value 10.0 and evaluate first time:
printf("\n [1] First instance of prediction: For input %4.3f", input_data[0]);
get_LR_model_ctx(ctx, input_x); // Pass the 'input' data tensor to the context
pred_tensor = ctx.get("y_pred:0"); // Get a reference to the 'output' tensor
ctx.eval(); // Trigger the inference engine
pred_value = *(pred_tensor->read<float>(0, 0)); // Get the result back
y = W * input_data[0] + B; // Expected output
printf("\n Input: %04.3f | Expected: %04.3f | Predicted: %04.3f", input_data[0], y, pred_value);
// Second pass: Change input data and re-evaluate:
input_data[0] = 40.0;
printf("\n\n [2] Second instance of prediction: For input %4.3f\n", input_data[0]);
get_LR_model_ctx(ctx, input_x); // Pass the 'input' data tensor to the context
pred_tensor = ctx.get("y_pred:0"); // Get a reference to the 'output' tensor
ctx.eval(); // Trigger the inference engine
pred_value = *(pred_tensor->read<float>(0, 0)); // Get the result back
y = W * input_data[0] + B; // Expected output
printf("\n Input: %04.3f | Expected: %04.3f | Predicted: %04.3f", input_data[0], y, pred_value);
printf("\n -------------------------------------------------------------------\n");
return 0;
}
Metadata
Labels
No labels