EmbDev.net

Forum: ARM programming with GCC/GNU tools
Thread: Division Generates Undefined Exception Interrupt


From Bhavin T. (Company: Trinity ESPL) (bhavin)


Hello, I am using the YAGARTO toolchain to compile my code for the 
AT91SAM7X512 CPU. This toolchain provides Eclipse as the editor and GCC 
version 4.2.1 as the compiler. I developed a project that uses around 
265 KBytes of code memory and 33 KBytes of RAM. When I run this code it 
works fine. Now I have added some further .c and .h files, but I am not 
calling any function or using any variable from these newly added 
files; I only added them to the project and compiled them. Now when I 
run the successfully built code on the CPU, it executes some part of 
the code and then I cannot tell what it is doing. When I debug the code 
I find that as soon as it reaches a division operation, the CPU jumps 
somewhere else and does not execute the remaining code. After adding 
the .c and .h files my code size is around 353 KBytes and RAM usage 
around 66 KBytes. So I want to know why my code no longer runs properly 
when I add these other .c and .h files, even though I am not using them 
in my code. I will need those files later, but right now I only compile 
them. I think there may be a problem in my MAKEFILE, because merely 
compiling the added code, without using it, makes this difference.
     In my MAKEFILE I pass the options "-fno-common -O2 -g" for 
compilation.
     In the division operation I am dividing a WORD (2-byte unsigned 
data) by 1000.0 and saving the result in a float variable. I also tried 
adding a cast from WORD to float. At the time of executing the division 
instruction it generates an Undefined Exception interrupt.
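
Roughly, the failing operation has this form (a simplified sketch only; 
WORD and the function name are placeholders, not the actual project 
code):

#include <stdint.h>

typedef uint16_t WORD;                  /* 2-byte unsigned data */

/* Sketch of the operation that faults: dividing a WORD by 1000.0.
 * The constant 1000.0 is a double, so on an ARM7 with no FPU the
 * division is not a single instruction but a call into the
 * compiler's software floating-point support routines. */
float scale_value(WORD raw)
{
    return (float)raw / 1000.0;         /* with the WORD-to-float cast */
}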

From Clifford S. (clifford)


Bhavin Tailor wrote:
> I think there may be a problem in my MAKEFILE, because merely
> compiling the added code, without using it, makes this difference.

Blaming the makefile for a run-time error is not really applying Occam's 
razor, is it? The chances are that there was always a bug in your 
original code, but by adding the extra files you have changed the memory 
context in which the code runs. For example, the result of dereferencing 
a bad pointer depends on the content of the location it points to; by 
adding code you may have changed that content from something benign that 
appears to work to something that now shows symptoms.
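
A concrete (entirely hypothetical) illustration of that kind of latent 
bug:

#include <stdint.h>

uint16_t samples[8];
float    scale;    /* some other variable that may happen to sit after samples[] */

void store_sample(unsigned i, uint16_t value)
{
    /* Off-by-one bug: i == 8 writes past the end of samples[].  In one
     * build the overwritten bytes belong to padding or an unused
     * variable and nothing visible happens; after the memory layout
     * changes, the same write corrupts 'scale' (or some other live
     * data) and the failure shows up somewhere unrelated. */
    samples[i] = value;
}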

> In my MAKEFILE I pass the options "-fno-common -O2 -g" for
> compilation.

The first thing to do if you are having code errors is to switch off 
optimisation. You cannot sensibly debug code with optimisation switched 
on, since the compiler will often reorder or eliminate operations. It is 
even possible that you are blaming the divide operation when it has 
nothing to do with the problem. Debugging at source level with 
optimisation switched on makes little sense; there is no longer a 
one-to-one, in-order relationship between the machine instructions and 
the source lines.

If you find that your code runs without optimisation but fails with 
optimisation, it is almost certainly your code at fault and not the 
optimiser. Unsafe but syntactically valid code will often happen to work 
with the straightforward code generation of an unoptimised build but not 
when optimised, because the optimiser assumes that the code is correct. 
A missing volatile is often the cause: 
http://www.embedded.com/columns/beginerscorner/9900209?_requestid=591212
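
The classic shape of that missing-volatile problem (a made-up fragment, 
not your code) is a flag set in an interrupt handler and polled in 
main-line code:

#include <stdint.h>

static volatile uint8_t data_ready;   /* try removing 'volatile' here */

void timer_isr(void)                  /* hypothetical interrupt handler */
{
    data_ready = 1;
}

void wait_for_data(void)
{
    /* With 'volatile' the compiler re-reads data_ready on every pass.
     * Without it, an optimised build may load the flag once and spin
     * forever, because nothing inside the loop appears to change it. */
    while (!data_ready)
        ;
    data_ready = 0;
}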

You should also use the -Wall -Werror compiler options and fix all 
warnings. It will improve your code quality.
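
Typical of the kind of mistake -Wall catches (again a made-up fragment):

void check_mode(int mode)
{
    /* '=' typed where '==' was meant: the branch is never taken and
     * 'mode' is silently overwritten.  This is legal C and compiles
     * cleanly by default, but -Wall warns ("suggest parentheses around
     * assignment used as truth value") and -Werror turns that warning
     * into a build failure until it is fixed. */
    if (mode = 0)
    {
        /* ... handle idle mode ... */
    }
}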

> In the division operation I am dividing a WORD (2-byte unsigned data)
> by 1000.0 and saving the result in a float variable. I also tried
> adding a cast from WORD to float. At the time of executing the
> division instruction it generates an Undefined Exception interrupt.

By dividing by a double precision floating point value you are in fact 
pulling in a considerable chunk of library code. It is probably best to 
just avoid floating point on hardware that does not have an FPU. It 
isn't usually hard to avoid.
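
For example, if the value only ever ends up being displayed, a 
scaled-integer sketch along these lines (illustrative only, with made-up 
names) avoids the floating-point library entirely:

#include <stdint.h>
#include <stdio.h>

/* Keep the raw value scaled by 1000 (e.g. millivolts) and only insert
 * the decimal point when formatting for display, instead of dividing
 * by 1000.0 and carrying a float around. */
void format_thousandths(uint16_t raw, char *buf, size_t len)
{
    unsigned whole = raw / 1000u;     /* integer division, no FP library */
    unsigned frac  = raw % 1000u;
    snprintf(buf, len, "%u.%03u", whole, frac);
}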

From ahmad (Guest)


Hello,
I need to learn about ARM but I don't have any material about it.

From (prx) A. K. (prx)


Clifford Slocombe wrote:
> By dividing by a double precision floating point value you are in fact
> pulling in a considerable chunk of library code.

Likely not a big deal with a 512KB device. In those dimensions, you may 
want to avoid thinking in terms of a Mega8. ;-)

However, it could get interesting to cram 353KB of optimized code into a 
512KB device once it is compiled unoptimized.

From Clifford S. (clifford)


ahmad wrote:
> Hello,
> I need to learn about ARM but I don't have any material about it.

So you posted to an unrelated five-year-old thread!?  You truly have 
much to learn.

From (prx) A. K. (prx)


oops...

From Clifford S. (clifford)


A. K. wrote:
> Likely not a big deal with a 512KB device. In those dimensions, you may
> want to avoid thinking in terms of a Mega8. ;-)
You are making some bizarre (and frankly offensive) assumptions about my 
experience.  You are also assuming that an application will always have 
headroom in a 512KB part - that is hardly true of all applications. 
Because of cost considerations in volume production, I have frequently 
been required to fit the smallest possible part - avoiding code bloat 
can have significant cost advantages.


There are many other reasons for avoiding FP:

Non-deterministic and time-consuming in software implementations.
Hardware implementations are not thread-safe without RTOS support 
(which is uncommon).

From (prx) A. K. (prx)


Didn't want to be offensive, sorry. Also, I did not assume that your 
experience is limited to the Mega8.

There is, however, a common assumption that FP code has to be avoided at 
all costs on microcontrollers, especially when the processor does not 
support it in hardware. Yet I have seen cases where the fixed-point 
replacement code was considerably more expensive and quite a bit harder 
to understand and maintain. The size of the added library code also does 
not affect big devices the same way as small ones, especially when 
printf/scanf are not used or need not support FP.

Sure, there are cases where FP code is not well suited, and realtime 
code with short timing constraints is certainly among them. Not all µC 
programs have those constraints though, so IMHO using FP code can be 
appropriate in some cases and unwise in others.

From Clifford S. (clifford)


A. K. wrote:
> using FP code can be appropriate in some cases and unwise in others.

I think we agree; however, it takes experience to know when it is 
appropriate and when it is unnecessary.
