Forum: FPGA, VHDL & Verilog Improvement of DDR3 performance using Xilinx MIG

von Chris007 (Guest)


I've generated a memory interface for an ML605 board using MIG 3.6 from 
ISE 12.3.
The system clock is 200 MHz and the burst length is configured to 8 (BL8).
I've implemented custom logic for controlling the generated UI.
My task is to perform one 2x256 bit write and as many read commands as 
possible within a period of 100 ns.
The attached image depicts such a use case: one write to an address and 
three reads back to back. (The read addresses vary; this was not 
implemented in the simulation where the picture was taken, but it has no 
influence on the issue.)
As required by the MIG documentation, a command is only accepted when 
app_rdy is asserted in the same cycle as app_en. 
But my command pattern is obviously not one the UI likes best, so 
app_rdy is only asserted sporadically and the whole write/triple-read 
sequence takes more than 100 ns.
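For reference, the handshake I described can be sketched roughly as below. The UI port names (app_en, app_rdy, app_cmd, app_addr) are from the MIG user guide; the small FSM around them, and the signals start_cmd/next_cmd/next_addr, are my own untested assumptions, just to illustrate holding the command until it is accepted:

[vhdl]
-- Sketch only: a command is accepted in a cycle where app_en and
-- app_rdy are both high, so app_en, app_cmd and app_addr must be
-- held stable until that happens.
process (clk)
begin
  if rising_edge(clk) then
    if rst = '1' then
      app_en <= '0';
    elsif start_cmd = '1' and app_en = '0' then
      app_cmd  <= next_cmd;            -- "000" = write, "001" = read
      app_addr <= next_addr;
      app_en   <= '1';                 -- request the command
    elsif app_en = '1' and app_rdy = '1' then
      app_en <= '0';                   -- command accepted this cycle
    end if;                            -- otherwise: keep holding
  end if;
end process;
[/vhdl]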
If I only perform one write and one read alternately, the UI trains 
itself to that pattern and asserts app_rdy nearly seamlessly after some 
µs. But as already mentioned, I need several reads after the write, and 
performance is already too poor with just two reads after the write.

Now my question: which command pattern performs best for such a use 
case? Should I insert wait states between the read commands, or is 
there a better approach?

Many thanks in advance!

