As far as I understand, std::unique_ptr<T> is not supposed to have such a huge overhead. What am I doing wrong?
size_t t = sizeof(DataHelper::SEQ_DATA); // t = 12
std::vector<std::vector<std::unique_ptr<DataHelper::SEQ_DATA>>> d(SEQ_00_SIZE + 1); // SEQ_00_SIZE = 4540
for (unsigned int i = 0; i < d.size(); ++i) {
    for (unsigned int k = 0; k < 124668; ++k) {
        std::unique_ptr<DataHelper::SEQ_DATA> sd = std::make_unique<DataHelper::SEQ_DATA>();
        d[i].push_back(std::move(sd));
    }
}
takes ~21 GB of RAM.
std::vector<std::vector<DataHelper::SEQ_DATA>> d(SEQ_00_SIZE + 1);
for (unsigned int i = 0; i < d.size(); ++i) {
    for (unsigned int k = 0; k < 124668; ++k) {
        DataHelper::SEQ_DATA sd;
        d[i].push_back(sd);
    }
}
takes ~6.5 GB of RAM.
Additional information:
struct SEQ_DATA {
    uint16_t id = 0;
    uint16_t label = 0;
    float intensity = 0.0f;
    float z = 0.0f;
};
I just want to have a single vector<vector<T>> which holds my 4540 * 124668 objects as efficiently as possible. I read the values from binary files. Since the number of elements within the binary files varies, I cannot initialize the inner vectors with the correct count (i.e. 124668 is only true for the first file).
GCC 9.3.0, C++17
> Solution:
"std::unique_ptr doesn’t have huge overhead" means that it doesn’t have huge overhead compared to a bare pointer to dynamic allocation:
{
    auto ptr = std::make_unique<T>();
}
// has comparable cost to, and has exception safety unlike:
{
    T* ptr = new T();
    delete ptr;
}
std::unique_ptr doesn’t make the cost of dynamic allocation cheaper.
I just want to have a single vector<vector<T>> which holds my 4540 * 124668 objects as efficient as possible.
The most efficient way to store 4540 * 124668 objects is a flat array:
std::vector<DataHelper::SEQ_DATA> d(4540 * 124668);
(i.e. 124668 is only true for the first file).
If you don’t need all 124668 elements, then it may be a waste of memory to have the unused elements in the vector.