Hi everyone, I'm writing code to simulate a 12-bit DAC. The behavior is quite simple: a serial digital input receives data (a 32-bit word containing the 12 bits to convert + don't-care bits + command), and the 12-bit word is converted every 32 clocks. My first try was to create a signal data_input : std_logic_vector(31 downto 0) and convert only the 12 bits I want. But that's not exactly the right approach. I think a clock counter is necessary, among other things, but I can't figure it out... Can you give me some advice to get past this, please?
Basically you will need two clock domains, since the analog side is completely independent from the sampling domain. In practice you will work with ps resolution in order to feed your DAC with data. Furthermore, you will need some kind of anti-aliasing filter for the incoming data stream to reproduce the aliasing that will occur in reality. The next steps after that are non-linearity and noise.
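To illustrate the last point (non-linearity and noise), the analog side can be modelled with real arithmetic and the uniform procedure from ieee.math_real. This is only a sketch: the entity name, the gain-error constant and the noise magnitude are made-up assumptions for the example, not values from any datasheet.

library ieee;
use ieee.math_real.all;

-- Hypothetical sketch: map a 12-bit code to an output voltage,
-- adding a simple gain error and uniform noise.
entity dac_analog_model is
  port (
    code : in  integer range 0 to 4095;
    vref : in  real;
    vout : out real
  );
end entity;

architecture model of dac_analog_model is
begin
  process (code, vref)
    variable seed1, seed2 : positive := 1;   -- state for uniform()
    variable rnd          : real;
    constant gain_err     : real := 0.001;   -- +0.1 % gain error (assumed)
    constant noise_lsb    : real := 0.5;     -- +/-0.5 LSB noise amplitude (assumed)
  begin
    uniform(seed1, seed2, rnd);              -- rnd in [0.0, 1.0)
    vout <= (real(code) / 4096.0) * vref * (1.0 + gain_err)
            + (rnd - 0.5) * 2.0 * noise_lsb * vref / 4096.0;
  end process;
end architecture;

A static INL curve could be added the same way, as a lookup or polynomial on code before the division.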
angelo wrote:
> I'm writing a code to simulate a 12-bit dac.
> ...
> and the 12 bits word is converted every 32 clk.
Into what?

> signal data_input: std_logic_vector (31 downto 0)
A 32-bit input doesn't match

> A serial digital input
What input do you have? What output do you need? What is the actual problem?
This is the code I wrote at first:
library IEEE;
use IEEE.std_logic_1164.all;
use ieee.numeric_std.all;

entity dac is
  generic (
    data_width : integer;
    SDI_width  : integer
    --REF : real
  );
  port (
    REF   : in  real;
    SDI   : in  std_logic_vector(SDI_width - 1 downto 0);
    SDO   : out std_logic_vector(SDI_width - 1 downto 0);
    SCK   : in  std_logic;
    CLR   : in  std_logic;  -- low level active
    CS_LD : in  std_logic;  -- '0': CS / '1': LD
    LDAC  : in  std_logic;  -- low level active (always '1' here)
    Vout  : out real
  );
end dac;

architecture archi of dac is
  signal data      : std_logic_vector(11 downto 0);
  signal data_real : real;
begin
  data <= SDI(15 downto 4);

  P1 : process (SCK, data)
    constant Tclk : time := 10 ns;
  begin
    data_real <= real(to_integer(signed(data)));
    SDO       <= transport SDI after Tclk * 32;
    if (CLR = '0' or CS_LD = '0') then
      Vout <= 0.0;
    elsif (SCK'event and SCK = '1') then
      Vout <= data_real / (2.0 ** data_width) * REF;
    end if;
  end process;
end archi;
The problem is that I have to connect the SDI signal to a 1-bit signal, which is not possible if SDI is a std_logic_vector.
: Edited by Moderator
angelo wrote:
> Which is not possible if SDI is a std_logic_vector.
SDI is not a vector at all! It is a simple 1-bit std_logic input. And together with LD it is very clear what has to happen here:
1. LD is set to '0'
2. with SCK, bits from SDI are shifted into a 32-bit register in the DAC
3. with LD set to '1', the register is evaluated
So it may look like this:
SDI   : in std_logic;
...
SCK   : in std_logic;
...
CS_LD : in std_logic;  -- '0': CS / '1': LD
:
:
signal inreg : std_logic_vector(31 downto 0);
:
:
inreg <= inreg(30 downto 0) & SDI when rising_edge(SCK) and CS_LD = '0';  -- shift data in, assuming MSB first
data  <= inreg(15 downto 4) when rising_edge(CS_LD);                      -- transfer interesting part of vector to analog processing
Got the idea?
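Putting the pieces of this thread together, a minimal behavioural model could look like the sketch below. This is only one possible reading of the posts: it assumes MSB-first shifting, unsigned coding of bits 15..4, and a reference voltage passed as a generic (the generic default of 2.5 V is an arbitrary assumption, not from any datasheet).

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity dac12 is
  generic (REF : real := 2.5);  -- reference voltage (assumed value)
  port (
    SCK   : in  std_logic;
    SDI   : in  std_logic;      -- serial input: one bit per SCK edge
    CS_LD : in  std_logic;      -- '0': chip select / '1': load
    CLR   : in  std_logic;      -- low level active clear
    Vout  : out real := 0.0
  );
end entity;

architecture behav of dac12 is
  signal inreg : std_logic_vector(31 downto 0) := (others => '0');
begin
  -- shift one bit in per rising SCK edge while the chip is selected, MSB first
  process (SCK)
  begin
    if rising_edge(SCK) and CS_LD = '0' then
      inreg <= inreg(30 downto 0) & SDI;
    end if;
  end process;

  -- on the rising edge of CS_LD evaluate bits 15..4 of the 32-bit frame
  process (CS_LD, CLR)
  begin
    if CLR = '0' then
      Vout <= 0.0;
    elsif rising_edge(CS_LD) then
      Vout <= real(to_integer(unsigned(inreg(15 downto 4)))) / 4096.0 * REF;
    end if;
  end process;
end architecture;

With this structure the "32 clk" counting from the first post comes for free: the host simply toggles CS_LD after clocking 32 bits, so no explicit clock counter is needed inside the DAC model.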