# Forum: ARM programming with GCC/GNU tools: Generating reentrant code, safe assumption or not?

Hello all,

I am porting an RTOS to the WinARM toolset and so far, everything is
working very well.  I have implemented the context save/restore and IRQ
handling without a hitch (using Insight to verify the code) and am
preparing to start working on the target application.

I have some questions regarding overall re-entrancy when using the
WinARM toolset.

Given that I don't use any standard C/C++ library functions in my
application (i.e. malloc/free/open/close, etc.) and generate all of my
own C/C++ code (coding for re-entrancy), are the following assumptions
correct?

1) C/C++ code generates re-entrant binary.

2) C++ exception handling is re-entrant (safe to use in multiple tasks).

3) Integer and floating point arithmetic is re-entrant.

If anyone can either confirm or deny, I would appreciate the feedback.

Cheers!

Dan

> Given that I don't use any standard C/C++ library functions...
You may be being over-cautious. The Newlib documentation is clear about
what is reentrant and what is not:
libc: http://sourceware.org/newlib/libc.html#SEC185
libm: http://sourceware.org/newlib/libm.html#SEC42

> 1) C/C++ code generates re-entrant binary.
Yes. Reentrancy is a property of the C/C++ code you write, not of any
particular compiler. Of course this presupposes you have written the
code to be reentrant. This means taking care with static and global
data; but in a multi-threaded environment it also involves declaring
shared data as volatile, and ensuring that non-atomic data accesses and
shared resources are protected by a mutual-exclusion mechanism.
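To illustrate the point about static data, here is a minimal sketch (the function names are my own, chosen for illustration): the first version hides its result in static storage, so two tasks calling it concurrently would corrupt each other's output; the second takes a caller-supplied buffer and is reentrant.

```c
#include <stdio.h>

/* Non-reentrant: the result lives in static storage, so two tasks
 * calling this concurrently overwrite each other's buffer. */
char *format_hex_bad(unsigned v)
{
    static char buf[16];            /* hidden shared state */
    sprintf(buf, "0x%08X", v);
    return buf;
}

/* Reentrant: the caller supplies the storage, so each task can use
 * its own stack-allocated buffer safely. */
char *format_hex(unsigned v, char *buf, size_t len)
{
    snprintf(buf, len, "0x%08X", v);
    return buf;
}
```

The same principle applies to any hidden state, including state inside library routines, which is exactly why the Newlib reentrancy lists are worth checking.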

> 2) C++ exception handling is re-entrant (safe to use in multiple tasks).
I believe so, but C++ exception handling is by its nature
non-deterministic, so it should be used with care (or not at all) in
real-time systems.

> 3) Integer and floating point arithmetic is re-entrant.
Floating point is tricky. If you have floating point hardware (e.g. the
VFP on an LPC3800), you need to ensure that the floating point registers
are preserved in your context switch code - extending the context switch
time. One way of improving this situation is to flag tasks that use
floating point so that the FP registers are only preserved or restored
when switching from or to an FP task.
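The lazy-save idea above can be sketched as follows. This is only an illustration, not real switch code: the TCB layout, the `uses_fp` flag, and the `vfp_save`/`vfp_load` stand-ins (which would be assembler stubs storing the real VFP bank) are all hypothetical names of mine.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical task control block: 'uses_fp' is set at task creation
 * (or on first FP use) so the switch code knows whether the VFP
 * register bank is part of this task's context. */
struct tcb {
    bool uses_fp;
    unsigned long fp_regs[32];      /* saved VFP bank (s0-s31) */
};

/* Stand-ins for the real "store/load VFP bank" assembler stubs. */
static unsigned long vfp_bank[32];  /* pretend hardware registers */
static void vfp_save(unsigned long *dst)       { memcpy(dst, vfp_bank, sizeof vfp_bank); }
static void vfp_load(const unsigned long *src) { memcpy(vfp_bank, src, sizeof vfp_bank); }

/* Touch the FP bank only when one of the two tasks actually uses it,
 * keeping the common (integer-only) switch path fast. */
void context_switch_fp(struct tcb *from, struct tcb *to)
{
    if (from->uses_fp)
        vfp_save(from->fp_regs);
    if (to->uses_fp)
        vfp_load(to->fp_regs);
}
```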

Software floating point may instead emulate a hardware FPU: real FP
instructions are compiled in, but on a chip without the hardware they
raise an undefined-instruction exception, which a handler traps in
order to emulate the instruction. The same context switch code may work
in this circumstance - I've never needed to test the theory, however.

If floating point is not implemented by emulation, it is likely to be
reentrant, but you would need to confirm that by inspecting the
relevant GNU source.

IMO, because the use of hardware, emulation, and software floating
point is compiler and/or compiler-option dependent, your safest bet is
to ensure that floating point operations are only ever performed by a
single thread. You could possibly have a server thread perform
operations for other threads - although it would be very slow for
primitive operations. This approach is best used where the server does
significant number crunching before returning a result - where the
operation takes significantly longer than the associated context switch
and IPC.
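The server-thread pattern might look something like this. As a stand-in for RTOS message-passing primitives I've used pthreads here (a real port would use the RTOS's own queue and semaphore calls); the one-slot mailbox serves a single client at a time, and a real design would use a proper request queue.

```c
#include <pthread.h>
#include <stdbool.h>

/* One-slot "mailbox": a client posts operands, the server computes the
 * result, and the client blocks until it is ready. */
struct fp_request {
    double a, b, result;
    bool pending, done;
    pthread_mutex_t lock;
    pthread_cond_t cv;
};

static struct fp_request req = {
    .lock = PTHREAD_MUTEX_INITIALIZER,
    .cv   = PTHREAD_COND_INITIALIZER,
};

/* The server is the only thread that ever executes FP instructions,
 * so FP state never needs to be part of any other task's context. */
void *fp_server(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&req.lock);
    for (;;) {
        while (!req.pending)
            pthread_cond_wait(&req.cv, &req.lock);
        req.result = req.a * req.b;     /* the actual FP work */
        req.pending = false;
        req.done = true;
        pthread_cond_broadcast(&req.cv);
    }
}

/* Client-side wrapper: looks like an ordinary multiply, but the FP
 * work is done in the server's context. */
double fp_multiply(double a, double b)
{
    double r;
    pthread_mutex_lock(&req.lock);
    req.a = a; req.b = b;
    req.pending = true; req.done = false;
    pthread_cond_broadcast(&req.cv);
    while (!req.done)
        pthread_cond_wait(&req.cv, &req.lock);
    r = req.result;
    pthread_mutex_unlock(&req.lock);
    return r;
}
```

As noted, the round trip only pays off when the server does substantial work per request, not a single primitive operation.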

Note that software floating point is horribly slow - from tests I
estimated it at on the order of 1 MFLOPS on a 200 MHz ARM9 device,
using single precision only. Division is especially slow, as you might
expect. The actual performance is very dependent upon the nature of the
operations your code performs. It is usually best avoided and re-coded
where possible using fixed point (scaled integers); however, what you
gain in performance you lose in precision and overflow management.
Fixing these issues generically (by creating or using a general
fixed-point library rather than hand-optimising each operation) can be
as slow as software floating point. The choice depends on your
application.
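For concreteness, here is a minimal Q16.16 fixed-point sketch (names and format are my own choice; any Qm.n split works the same way). The widening to 64 bits in multiply and divide is exactly the "overflow management" cost mentioned above:

```c
#include <stdint.h>

/* Q16.16 fixed point: 16 integer bits, 16 fractional bits. */
typedef int32_t q16_16;

#define Q16_ONE (1 << 16)

static inline q16_16 q16_from_int(int x)  { return (q16_16)(x << 16); }
static inline int    q16_to_int(q16_16 x) { return x >> 16; }

/* A 64-bit intermediate keeps the full product before shifting the
 * extra 16 fraction bits back out. */
static inline q16_16 q16_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16);
}

/* Pre-shift the dividend so the quotient keeps its fraction bits. */
static inline q16_16 q16_div(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a << 16) / b);
}
```

On an ARM without hardware divide, `q16_div` still ends up in a library routine, which is part of why division dominates either way.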

Integer operations are not a problem; these are performed using the
processor's core registers, which your context switch code already
preserves.

Clifford

Thanks Clifford.  That was a very informative and helpful reply, and I
appreciate the time and effort taken.

re: Floating point. I am not yet sure what WinARM is generating for the
LPC2119 target (emulation or straight software), so I will have to dig
deeper to see what it is.  If it is emulation, my RTOS has user-hooks
for task start/save/load/stop that can be used to augment the context to
include the "virtual" FPU (in the case of emulation) or save globals if
the FP software requires them.  I have done this in the past with the
HC12 GCC toolset and it worked fine.  In any case, to do it with WinARM,
I will need to determine first which method is being used and then what
context (if needed) to save.

Your suggestion of creating a server to handle FP computation has merit
in cases where the bulk of computation can be localised in one server
(e.g. a DSP filtering routine), but I have multiple filtering cases
operating at different periods, so I will need to ensure each task can
safely perform calculations.

The fixed-point approach is also a good idea and I already use it in
some cases where the dynamic range is small and well known.

Cheers!

Dan


Dan Quinz wrote:
> Thanks Clifford.  That was a very informative and helpful reply and I
> appreciate  the time and effort taken.
>
You are welcome. You seem to know what you are doing here. The compiler
has a number of switches for controlling how floating point is handled:
http://gcc.gnu.org/onlinedocs/gcc-4.2.1/gcc/ARM-Options.html#ARM-Options
However, just to be sure, I would step through a simple operation at
the assembler level. If you see co-processor instructions that cause an
exception when executed, then you are looking at emulation; if library
calls are inserted (that don't in turn use the co-processor), then it
is a software implementation. I see no reason why a software
implementation should require static data.
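You don't even need the debugger for a first look: compiling a trivial FP function to assembler and grepping the output tells you a lot. A rough sketch, assuming a WinARM-era `arm-elf-gcc` cross compiler (adjust the tool name to your install; the exact helper names such as `__mulsf3` or `__aeabi_fmul` depend on the ABI in use):

```shell
# A trivial FP function to compile both ways.
cat > fptest.c <<'EOF'
float mul(float a, float b) { return a * b; }
EOF

arm-elf-gcc -S -O2 -msoft-float fptest.c -o fptest-soft.s
arm-elf-gcc -S -O2 -mhard-float fptest.c -o fptest-hard.s

# Software implementation: a branch to a library helper such as
# __mulsf3 (old ABI) or __aeabi_fmul (EABI), no co-processor opcodes.
grep -E 'bl[[:space:]]' fptest-soft.s

# Hardware/emulated FP: co-processor instructions in the output; on a
# chip without the hardware these trap for emulation.
grep -iE 'fmul' fptest-hard.s
```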

Clifford
