Most efficient way to split an array to write to different files

I’m reading double data in from a DAQ that has 2 channels. The data is stored in read_buf: the first 1000 samples are channel 0 and the second 1000 are channel 1. I have no control over this concatenation of channel data.

I have set up the two output files like so:

FILE *fptr_0;
if ((fptr_0 = fopen("channel_0.bin", "wb")) == NULL)
{
    fprintf(stderr, "Error opening channel_0.bin.\n");
    exit(1);
}

FILE *fptr_1;
if ((fptr_1 = fopen("channel_1.bin", "wb")) == NULL)
{
    fprintf(stderr, "Error opening channel_1.bin.\n");
    exit(1);
}

I would then like to split read_buf and send the first half to fptr_0 and the second half to fptr_1. I can write out the first half but am flummoxed about how to write out the second half. How do I point fptr_1 at just the second half of read_buf?


Do I have to copy each half into a new array?

What I have so far, which works for the first half, followed by the part I’m not getting:

double read_buf[2000];
result = DAQ_func(device, &status, read_buf);
fwrite(&read_buf, sizeof(double), (sizeof(read_buf) / sizeof(read_buf[0])) / 2, fptr_0);
fwrite( ??? , sizeof(double), (sizeof(read_buf) / sizeof(read_buf[0])) / 2, fptr_1);

Solution:

??? is read_buf + 1000 (note: no &). The array name decays to a pointer to its first element, and adding 1000 advances that pointer by 1000 doubles, i.e. to the start of channel 1's data. No copying into a new array is needed.

fwrite(read_buf + 1000, sizeof (double), (sizeof read_buf / sizeof read_buf[0]) / 2, fptr_1);

Or, if you prefer, &read_buf[1000], which is exactly the same address.
