I have a question about blocking assignments in an always block in Verilog. For the simple code snippet below:
always @(posedge clk) begin
    address <= address + 1;
end

always @(address) begin
    data = data | 8'h04;
end
My question is: how can I guarantee that the "data = data | 8'h04;" completes before "address" changes on the next positive edge of the clock? My solution (but with a question about it):
always @(posedge clk) begin
    // Only increase the address if the computation is complete
    if (!computingFlag) begin
        address <= address + 1;
    end
end

always @(address) begin
    computingFlag = 1'b1;
    data = data | 8'h04;
    computingFlag = 1'b0;
end
In the above I set a flag telling the address not to increment until the computation is done. Questions:

1) Is it guaranteed that the blocking statements will execute one after another? The next will never execute until the first is over?

2) What if the "data = data | 8'h04;" statement was a computation that completed so fast that the computing flag did not have enough time to actually rise and fall? I'm talking about the physical limitation of the hardware (its speed). Or does the synthesizer guarantee that "computingFlag" will reach steady state before the next line takes effect?

3) How can you tell if an always block will be executed one statement after another instead of synthesizing into combinational logic?

Thank you for the insight
Matt
guitardenver wrote:
> 3) How can you tell if an always block will be executed one after
> another instead of synthesizing into combinational logic?

Easy answer: It will always be synthesized into combinational logic! On hardware there is no such thing as an execution engine which executes your code line by line.
Lattice User wrote:
> guitardenver wrote:
>> 3) How can you tell if an always block will be executed one after
>> another instead of synthesizing into combinational logic?
>
> Easy answer: It will always be synthesized into combinational logic!
>
> On hardware there is no such thing as an execution engine which
> executes your code line by line.

That makes sense. The resources I've been reading were a little confusing on that, then. I'm guessing now that if I tried to synthesize that, it would give an error, because I'm assigning "computingFlag" two different values at the same time. Any suggestions on how to solve the problem? How do I make sure that the computation is finished before it increments to the next address? To be more specific on the computation:
always @(posedge clk) begin
    address <= address + 1;
end

always @(address) begin
    ram[address] = ram[address] | 8'h04;
end
The computation is simple but it could be replaced by anything that would take multiple clock cycles to complete.
guitardenver wrote:
> How do I make sure that the computation is finished before it
> increments to the next address?

Your clock must be slow enough... In fact, have a look at what you have in the real world:
1) logic gates (in an FPGA represented by LUTs) and
2) flip-flops

And so your question must be: how can I make sure that the flip-flop has valid input? And the answer is: the input must be stable (well) before the clock edge!

> The computation is simple but it could be replaced by anything that
> would take multiple clock cycles to complete.

Then you have a "multi-cycle path" and you must tell the synthesizer about it with the according timing constraints.

> How do I make sure that the computation is finished before it
> increments to the next address?

You look at how long the computation lasts and then add a counter to your design that waits long enough by counting some clock cycles... Or you break up the calculation and add some flip-flops to your design (= register balancing) to generate a pipeline structure, trading the slow logic for a calculation that needs several clock cycles. With such a strategy you have a completely synchronous design which can deliver a new computation result with each clock. The only thing is that each result is then delayed by several clocks.
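As a minimal sketch of the counter idea above (the parameter and signal names are mine, not from the thread, and WAIT_CYCLES would have to be sized from the real computation's latency):

```verilog
// Sketch: hold off the address increment for a fixed number of clock
// cycles after each increment, giving the slow logic time to settle.
parameter WAIT_CYCLES = 4;           // assumed latency, set from timing analysis

reg [7:0] address;
reg [2:0] wait_cnt;

always @(posedge clk) begin
    if (wait_cnt != 0) begin
        wait_cnt <= wait_cnt - 1;      // still waiting for the computation
    end else begin
        address  <= address + 1;       // computation done, move on
        wait_cnt <= WAIT_CYCLES - 1;   // restart the wait for the next address
    end
end
```

This is the "counter that waits long enough" variant; the pipeline variant instead inserts flip-flops into the computation itself, so a new input can be accepted every clock while each result appears several clocks later.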
Another attempt: I learned that when using blocking assignments, the right side of every statement is computed, and only when they are all realized is the data clocked into the registers. Is this true? Is it guaranteed, or does it depend on what blocking statements are in the always block? If the above is true, here is my next attempt at conceptualizing this. The address can only be incremented if "computingFlag" is 0, and I know that Block 2 (labeled below) will complete before the next clock cycle because it's such a small operation. The "computingFlag" will only be cleared once all the right sides of the blocking statements in Block 3 are realized. This means the computation is completely finished before the results of both the computation and the computingFlag are written (clocked in) to the registers. Which means Block 1 will not increment the address until all of Block 3 is completely finished. Is this a legitimate way to do this?
// Block 1
always @(posedge clk) begin
    // Only increase the address if the computation is complete
    if (!computingFlag) begin
        address <= address + 1;
    end
end

// Block 2
always @(address) begin
    computingFlag = 1'b1;
end

// Block 3
always @(computingFlag) begin
    if (computingFlag) begin
        computingFlag <= 1'b0;
        ram[address] <= ram[address] | 3'b100;
    end
end
Sorry, I meant "non-blocking" statements in the first paragraph, not "blocking".
Well, I might have answered my own question. The computation in Block 3 will be combinational logic connected to the register inputs. But that logic will still have to propagate to steady state before the next clock edge, when the results of Block 3 are clocked in (the non-blocking part). The non-blocking statement does not and cannot wait for all the logic to reach steady state before it clocks in the results. Which I guess is why never using non-blocking assignments with combinational logic on the right side is considered good practice.
guitardenver wrote:
> The non-blocking statement does not and can not wait for all the logic
> to get to steady state before it clocks in the results.

In real life nothing is "waiting" for anything to be "stable" or "steady". In real life there's some kind of logic in front of a flip-flop. And behind that flip-flop is some kind of logic followed by another flip-flop. And so on. That's all. And that "computingFlag" in your example above is simply optimized to 0 and vanishes into nowhere. Just have a look at the RTL schematic after synthesis...
Forget about using blocking assignments for controlling the flow. IT DOES NOT WORK. The same goes for sensitivity lists on combinatorial always blocks: they are only there to aid the simulator and will be ignored by the synthesis tool.

Some rules of thumb for creating synthesizable code which simulates correctly:

1. The sensitivity list of a sequential always block must contain only one clock and optionally one reset. You have to specify posedge or negedge on both signals.
2. The sensitivity list of a combinatorial always block must be complete and must not contain a posedge or negedge operator.
3. Use only non-blocking assignments in sequential always blocks.
4. Use only blocking assignments in combinatorial always blocks.
5. Don't use any signal in a combinatorial always block on both sides of the assignment operator.

Not following these rules will result in either failing to synthesize or, worse, creating code which behaves differently in simulation and on hardware (a so-called simulation mismatch).

This should solve your problem:
always @(posedge clk) begin
    address <= address + 1;
    ram[address] <= ram[address] | 3'b100;
end
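To illustrate the rules above in isolation (the next_data name is mine, purely for illustration), the usual combinatorial/sequential split looks like this sketch:

```verilog
// Sketch of the two-block style implied by rules 1-4 above.
reg [7:0] data, next_data;

// Rules 2 and 4: complete sensitivity list, blocking assignments only.
always @(*) begin
    next_data = data | 8'h04;       // pure combinational logic
end

// Rules 1 and 3: clock-only sensitivity list, non-blocking assignments only.
always @(posedge clk) begin
    data <= next_data;              // flip-flop captures the settled result
end
```

The flip-flop simply samples whatever next_data has settled to by the clock edge; making sure it has settled is the job of the timing constraints, not of the code.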
Try to understand why; read about RTL (Register Transfer Level) coding. Also consult the coding style guidelines of your toolchain.

PS: to describe a real RAM on an FPGA you need to read the coding style guidelines of your toolchain. The above will most likely fail and just create a large number of flip-flops.
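For illustration only (module and signal names are mine, and the exact template your tool maps to block RAM is vendor-specific, so check its coding guidelines), a commonly recommended synchronous single-port RAM pattern looks roughly like this:

```verilog
// Sketch of a single-port synchronous RAM template that synthesis
// tools can typically map to block RAM. Widths are illustrative.
module ram_sp #(
    parameter ADDR_W = 8,
    parameter DATA_W = 8
) (
    input  wire              clk,
    input  wire              we,
    input  wire [ADDR_W-1:0] addr,
    input  wire [DATA_W-1:0] din,
    output reg  [DATA_W-1:0] dout
);
    reg [DATA_W-1:0] mem [0:(1 << ADDR_W) - 1];

    always @(posedge clk) begin
        if (we)
            mem[addr] <= din;
        dout <= mem[addr];   // synchronous read, as block-RAM inference expects
    end
endmodule
```

Note that with a synchronous read, the read-modify-write "ram[address] <= ram[address] | 3'b100" becomes a read cycle followed by a write cycle, which is exactly the kind of multi-cycle flow discussed earlier in the thread.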