EmbDev.net

Forum: ARM programming with GCC/GNU tools
Newlib printf + float = allocating lots of mem


By Ingmar B. (ingmarblonk)


Hello,

I'm trying to get printf working in combination with some
floats/doubles. I use the CodeSourcery toolchain (Lite 2010q1-188 for
ARM EABI) and I'm developing for an LPC1768.

Everything works fine: I implemented the needed syscalls and I'm able to
use printf with integers. But when I try to print a float or double,
printf tries to allocate lots of memory.
It first allocates a few bytes (for internal housekeeping?), immediately
followed by 200+ bytes, then a 4K block, then another 4K, then even 8K!
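
For reference, my syscall layer is nothing fancy; a minimal _sbrk along
the lines of the sketch below (simplified, not my exact code, and
__heap_start__ is just a placeholder for whatever symbol the linker
script provides) is enough to watch the requested increments:

/* Simplified sketch: bump-pointer _sbrk that records each increment
 * newlib's malloc asks for, so the sizes can be read out in the
 * debugger.  __heap_start__ is a placeholder linker-script symbol. */
#include <stddef.h>

extern char __heap_start__;            /* placeholder linker symbol */

static char      *heap_end;
static ptrdiff_t  requests[16];        /* last few requested increments */
static unsigned   n_requests;

void *_sbrk(ptrdiff_t incr)
{
    char *prev;

    if (heap_end == NULL)
        heap_end = &__heap_start__;

    if (n_requests < sizeof requests / sizeof requests[0])
        requests[n_requests++] = incr; /* a few bytes, 200+, 4K, 4K, 8K ... */

    prev = heap_end;
    heap_end += incr;                  /* no end-of-heap check in this sketch */
    return prev;
}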

I thought it might be checking how much memory it could allocate, so I
set errno to ENOMEM, but this resulted in a HardFault.

The strange thing is that the first printf stops allocating after the
first or second 4K block and then works perfectly. But the second
printf call needs way too much memory.

I also tried YAGARTO. With it printf allocates only a little memory, but
it prints total nonsense, and doubles don't work at all.

Does anyone have an idea why it allocates so much memory, and/or how to
prevent it from doing so?

(I don't know all the exact numbers right now; I'll post some example
code with comments tomorrow.)

Thanks in advance.

Regards,

Ingmar

By Martin T. (mthomas) (Moderator)


Ingmar Blonk wrote:
> Hello,
>
> I'm trying to get printf working in combination with some
> floats/doubles. I use the CodeSourcery toolchain (Lite 2010q1-188 for
> ARM EABI) and I'm developing for an LPC1768.
>
> Everything works fine: I implemented the needed syscalls and I'm able
> to use printf with integers. But when I try to print a float or
> double, printf tries to allocate lots of memory.
> It first allocates a few bytes (for internal housekeeping?),
> immediately followed by 200+ bytes, then a 4K block, then another 4K,
> then even 8K!

I guess you are checking the requested memory size in your _sbrk. As far
as I understand, the malloc implementation in newlib requests memory in
4096-byte pages, so it should be "normal" that you see steps of 4K.

> I thought it might be checking how much memory it could allocate, so
> I set errno to ENOMEM, but this resulted in a HardFault.
>
> The strange thing is that the first printf stops allocating after the
> first or second 4K block and then works perfectly. But the second
> printf call needs way too much memory.

I guess the second printf call uses FP formatting. If so, this is also
kind of "normal": newlib's FP formatting needs a lot of memory (both
code and RAM).

> I also tried YAGARTO. With it printf allocates only a little memory,
> but it prints total nonsense, and doubles don't work at all.

You may ask the YAGARTO packager (Michael Fischer) about this.

> Does anyone have an idea why it allocates so much memory, and/or how
> to prevent it from doing so?

If you look into newlib's source code (file mallocr.c) you can see that
with the SMALL_MEMORY macro defined the page size is 128 bytes instead
of 4096 bytes. Maybe printf with FP needs just a little more than 4K, so
there might be enough RAM for the FP formatting when using a newlib
configured and built with SMALL_MEMORY defined.
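
The relevant part of mallocr.c looks roughly like this (paraphrased from
memory, not a verbatim copy of the newlib sources); malloc rounds its
sbrk requests up to this page size, which is why you see the 4K steps:

/* Paraphrased from newlib's mallocr.c: the page size used when malloc
 * grows the heap via sbrk.  The exact wording in the sources may differ. */
#if defined(SMALL_MEMORY)
#define malloc_getpagesize (128)
#else
#define malloc_getpagesize (4096)
#endif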

You may try to avoid stdio altogether and use alternative code instead
(search for rprintf, xprintf, dtostrf, ...).
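
As a quick illustration of the "avoid the FP path" idea (an untested
sketch with fixed two decimal places, no rounding and no NaN/Inf
handling), a double can be split manually so that only integer
formatting is needed:

/* Untested sketch: print a double with two decimal places using only
 * integer conversions, so newlib's floating-point formatting (and its
 * large allocations) is never used.  No rounding, NaN/Inf or overflow
 * handling. */
#include <stdio.h>

static void print_fixed2(double v)
{
    long whole, frac;
    int  neg = (v < 0.0);

    if (neg)
        v = -v;

    whole = (long)v;
    frac  = (long)((v - (double)whole) * 100.0);

    printf("%s%ld.%02ld", neg ? "-" : "", whole, frac);
}

The alternatives mentioned above (xprintf and friends) go further and
typically avoid newlib's stdio completely.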
