EmbDev.net

Forum: FPGA, VHDL & Verilog
Topic: Source synchronous interface IO constraints


By St. D. (st_d)


Hello dear FPGA experts,

we have connected a PSRAM device
(http://www.issi.com/WW/pdf/66WVD4M16ALL.pdf) directly to an FPGA, and I
am developing a FIFO controller that implements burst accesses in the
source-synchronous CRAM 2.0 format.
My VHDL code works fine with the PSRAM model in functional simulation,
but I am running into timing problems later, and I am not sure whether I
am defining the IO constraints correctly...

I would be glad if you could show me the correct way to determine the
following values:
- max. output delay
- min. output delay
- max. input delay
- min. input delay

My current logic:

- max. & min. output delay
Output signals (CS, ADV, OE, DATA/ADDR, etc.) are driven half a clock
cycle earlier than the interface clock (CLK_if). So, if the internal
combinational delays to the ports (CLK_if and the others) are all more or
less the same, the setup and hold requirements of the PSRAM will easily
be met. How can I constrain that (more or less equal delays)?
What I do currently:
I define a generated clock on CLK_if and constrain the output
signals (CS, DATA/ADDR, etc.) to have at most 3.5 ns delay (3 ns setup
time (tSP) + 0.5 ns trace mismatch/safety margin) relative to CLK_if.
The min. delay should not matter, right? Or should I use the tHD value?

- max. & min. input delay
Same generated clock on CLK_if, and I constrain the input signals
(DATA_in & WAIT) to have at most 7.5 ns delay (7 ns max. (tACLK, tKW) +
0.5 ns trace mismatch/safety margin). The min. delay should not matter,
right? Or should I use the tKOH value?
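For the inputs this could look as follows; again a hedged sketch, where the clock name, the 10 ns period, and the 1 ns tKOH value are assumptions. Analyzing the returned data against a virtual copy of CLK_if (as it appears at the PSRAM) is the usual approach:

```tcl
# Virtual clock modeling CLK_if as seen at the PSRAM (assumed 10 ns period)
create_clock -name clk_if_virt -period 10.0

# Latest arrival: tACLK/tKW max (7 ns) + 0.5 ns trace mismatch/margin
set_input_delay -clock clk_if_virt -max 7.5 [get_ports {DATA_in[*] WAIT}]
# Earliest arrival: tKOH (assumed 1 ns) - 0.5 ns margin.
# tKOH would indeed be the natural source for the -min value.
set_input_delay -clock clk_if_virt -min 0.5 [get_ports {DATA_in[*] WAIT}]
```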

Thanks...

By Klakx (Guest)


To my knowledge you need both a min and a max delay for a sufficient
analysis. If I remember correctly, a max delay alone is not enough.

If you constrain correctly, then you should either get a timing violation
or the routing can successfully fix the path.

Maybe your architecture is not feasible for driving the interface at all,
but first things first.

To help you with the constraints I need more information about your input
and output logic; schematics are welcome. The current clocking scheme is
still too diffuse for me.

By Sym (Guest)


For a source-synchronous interface, you need:

Output: definition of a clock, plus
- min. output delay wrt that clock
- max. output delay wrt that clock
- (possibly multicycle paths on transitions between phase-shifted clocks)

Input: definition of a clock and of a virtual clock, plus
- max. input delay wrt the virtual clock
- min. input delay wrt the virtual clock
- (possibly multicycle paths, if using multiple clocks)

When using DDR registers, you even need to specify the edges (rise,
fall) and possibly declare some transitions as false paths (e.g.
rise-to-fall, fall-to-rise between certain registers).
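As a hedged sketch of what such an edge-specific false path could look like in SDC (the clock names here are hypothetical, and which transitions to cut depends on the actual DDR capture scheme):

```tcl
# For DDR capture, cut the cross-edge transfers that are never used:
# rising-edge launches are only captured by rising-edge registers, etc.
set_false_path -rise_from [get_clocks clk_if_virt] -fall_to [get_clocks clk_int]
set_false_path -fall_from [get_clocks clk_if_virt] -rise_to [get_clocks clk_int]
```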

All delays need to be calculated from the numbers in the datasheet, the
clock jitter, the clock period, and margins for the PCB. The assumption
that all delays will be more or less equal should never be made. There
are good guidelines from FPGA vendors on this topic, because it is a very
common one, for instance the Altera AN 433 application note.
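That methodology boils down to formulas like the following; every term is taken from the device datasheet and the board layout rather than guessed, and "board skew" means the per-bit data trace delay minus the clock trace delay:

```tcl
# Generic source-synchronous delay formulas (AN 433 style):
#
#   output max delay =  tSU(device)      + board_skew_max
#   output min delay = -tH(device)       + board_skew_min
#   input  max delay =  tCO_max(device)  + trace_delay_max
#   input  min delay =  tCO_min(device)  + trace_delay_min
#
# For this PSRAM, tSU/tH map to tSP/tHD and tCO_max/tCO_min to
# tACLK (or tKW for WAIT) and tKOH respectively.
```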

By St. D. (st_d)


Sym wrote:
> For a source-synchronous interface, you need:
> Output: Definition of clock
> min output delay wrt clock
> max output delay wrt clock
> (possible multicycle paths on transition between phase-shifted clocks)

By "clock" you mean the internal clock? In my understanding the output
delay must be defined wrt the outputted clock (which, for the FPGA, is a
kind of virtual clock). Am I wrong?
