Hi, I have some doubts about converting std_logic_vector to signed/unsigned. I always use the casts signed(...) and unsigned(...), but when I try the conversion functions defined by the library (to_signed, to_unsigned), they don't work. Can someone explain why that happens, and why the casts unsigned() and signed() do work? Thank you.
What doesn't "work"? Which libraries do you use: std_logic_arith or numeric_std?
Sorry, I think I wasn't clear enough. I'm using numeric_std to convert a std_logic_vector to unsigned. For example: when I use "to_unsigned()" it doesn't work, but "unsigned()" works. Thank you
> For example: When I use "to_unsigned()" it doesn't work.
> But "unsigned()" works.

Of course: these are different things with different operands. The to_unsigned() function expects an integer as its first operand and a vector length as its second; it converts the integer to an unsigned vector of the desired length. The unsigned() cast converts nothing: it simply takes the std_logic_vector and reinterprets it as an unsigned vector. Try this: http://www.lothar-miller.de/s9y/categories/16-Numeric_Std It's in German, but a picture says more than 1000 words... ;-)
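A minimal sketch contrasting the two (the entity and signal names here are made up for illustration):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity conv_demo is end entity;

architecture rtl of conv_demo is
  signal slv : std_logic_vector(7 downto 0) := x"2A";
  signal u1  : unsigned(7 downto 0);
  signal u2  : unsigned(7 downto 0);
begin
  -- Cast: reinterpret the existing bits, no computation involved.
  u1 <= unsigned(slv);

  -- Conversion function: build an unsigned vector from an integer.
  u2 <= to_unsigned(42, 8);  -- first the integer value, then the vector length

  -- to_unsigned(slv, 8) would be an error: the first operand
  -- must be an integer, not a std_logic_vector.
end architecture;
```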
Hi Lothar, thank you for your answer. Can you explain how unsigned() and signed() work? They are not defined as functions in numeric_std. Thank you.
See the definition of those types:
type UNSIGNED is array (NATURAL range <>) of STD_LOGIC;
type SIGNED is array (NATURAL range <>) of STD_LOGIC;
(from numeric_std) And the one for this here:
TYPE std_logic_vector IS ARRAY ( NATURAL RANGE <>) OF std_logic;
(from std_logic_1164) Then you will see that the cast only tells the synthesizer to look at the very same bits in another way. Such a cast is legal because the types are closely related: all three are arrays with the same index type and the same element type. This also works the other way around: if you write "110001110001", you don't know whether it's signed, unsigned, or a std_logic_vector (it could also be a bit_vector or something else). Then you must tell the synthesizer what you mean. This is done with a qualifier: http://www.lothar-miller.de/s9y/archives/82-Qualifier.html
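A short sketch of a qualified expression resolving that ambiguity (entity and signal names assumed for illustration):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity qual_demo is end entity;

architecture rtl of qual_demo is
  signal u : unsigned(11 downto 0);
  signal s : signed(11 downto 0);
begin
  -- On its own, "110001110001" is ambiguous: it could be unsigned,
  -- signed, std_logic_vector, bit_vector, or a plain string.
  -- The qualifier type'(...) states which type is meant:
  u <= unsigned'("110001110001");
  s <= signed'("110001110001");
end architecture;
```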