Hi dear experts, when simulating a fairly simple design, the memory usage of ModelSim Pro ME (Microsemi Edition, 32-bit) steadily increases until it hits the 4 GB limit, at which point it crashes:
# ** Fatal: (vsim-4) ****** Memory allocation failure. *****
# Attempting to allocate 131072 bytes
# Please check your system for available memory and swap space.
# ** Fatal: (vsim-4) ****** Memory allocation failure. *****
# Attempting to allocate 131072 bytes
# Please check your system for available memory and swap space.
# End time: 11:16:49 on Jan 24,2020, Elapsed time: 2:01:08
# Errors: 6, Warnings: 926
I am not sure whether this is a tool issue or whether there is a memory leak in the testbench. The same testbench did not cause any crashes in the Vivado simulator. What would a memory leak in a testbench look like? Is it normal for ModelSim's memory usage to increase steadily with simulation time, or are all logged signals constantly saved/buffered in the wlf file?
FPGA guy wrote:
> Is it normal for ModelSim's memory usage to steadily increase as
> simulation time increases or are all the logged signals saved/buffered
> in the wlf file constantly?

Yes. More simulation time (or rather, more events) and more logged signals both increase the memory usage. I usually stop my simulations with the following construct:
process
begin
  wait for 1000 ms;
  report "Simulation end.";
  report "stop hard." severity failure;
  wait;
end process;
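As an aside, if the simulator supports VHDL-2008, `std.env.stop` or `std.env.finish` is a cleaner way to end a simulation than a failure-severity assertion; a minimal sketch, assuming VHDL-2008 mode is enabled:

```vhdl
-- Sketch: end the simulation cleanly via the VHDL-2008 std.env package
-- instead of aborting with a failure-severity report.
use std.env.all;

entity stop_tb is
end entity;

architecture sim of stop_tb is
begin
  process
  begin
    wait for 1000 ms;
    report "Simulation end.";
    stop;  -- or finish; halts the simulator without flagging an error
  end process;
end architecture;
```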
> I am not sure whether this is a tool issue or if there is any memory
> leak in the testbench. The same testbench was not causing any crashes on
> Vivado simulator.

The two simulators are different. That code runs in one of them unfortunately says nothing about the other, IMO.

Duke
First: 4 GB and 32 bit should ring a bell. You're simply running out of address space. To track down memory leaks, I'd recommend valgrind under Linux and a cross-check against the GHDL simulator, if it's VHDL.
Duke Scarring wrote:
> Yes. More simulation time (or rather, more events) and more logged
> signals both increase the memory usage.

But the simulator should do some paging and not exceed the hard limits on RAM usage, right? Slowing down due to paging I would understand, but RAM usage that increases constantly until a crash definitely looks like a memory leak...

Strubi wrote:
> First: 4 GB and 32 bit should ring a bell. You're simply running out of
> address space.
> To track down memory leaks, I'd recommend valgrind under Linux and a
> cross-check against the GHDL simulator, if it's VHDL.

Unfortunately, it is not purely VHDL, and migrating everything to Linux would be quite difficult, since vendor-dependent BFM infrastructure is used. 4 GB and 32 bit does ring a bell! My question was whether it is a tool bug, or whether one can create a memory leak in the testbench. It also occurs with some other example designs, so it is likely that the ModelSim Pro ME shipped with Libero SoC 12.x has a bug.
FPGA guy wrote:
> My question was whether it is a tool bug, or whether one can create a
> memory leak in the testbench.

If the same simulation works with the Vivado simulator, then I would assume a tool bug. (But I do think it is possible to write testbenches that burst the simulator's limits...)

Duke
FPGA guy wrote:
> Is it normal for ModelSim's memory usage to steadily increase as
> simulation time increases or are all the logged signals saved/buffered
> in the wlf file constantly?

Yes, it can increase, but not necessarily because of logged waveforms. I have simulations with the ModelSim Microsemi edition that ran for 24 h and generated 9 GB of waveform data without using 9 GB of RAM.

FPGA guy wrote:
> My question was whether it is a tool bug, or whether one can create a
> memory leak in the testbench.

Yes, it is possible to write buggy testbenches that create all sorts of runtime problems. I have never had memory leaks, but some years ago it was easy to write testbenches that produced segfaults in ModelSim.
Christophz wrote:
> Yes, it is possible to write buggy testbenches that create all sorts of
> runtime problems.

Buggy testbenches I understand, but a memory leak? Can you please give a concrete example? Is there a dynamic memory allocation mechanism for VHDL testbenches? Even if there is, we actually define all our variables/signals statically.
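For what it's worth, VHDL does have dynamic allocation: access types are allocated with `new` and freed with `deallocate`, and a process that keeps allocating without freeing will leak. A minimal sketch of what such a leak could look like (entity and signal names are made up for illustration):

```vhdl
-- Sketch: a testbench process that leaks simulator memory via an
-- access type. Each clock edge allocates a fresh integer and drops
-- the old reference without calling deallocate, so the simulator's
-- heap grows for as long as the simulation runs.
library ieee;
use ieee.std_logic_1164.all;

entity leaky_tb is
end entity;

architecture sim of leaky_tb is
  type int_ptr is access integer;
  signal clk : std_logic := '0';
begin
  clk <= not clk after 5 ns;

  leak : process (clk)
    variable p : int_ptr;
  begin
    if rising_edge(clk) then
      p := new integer'(42);  -- allocated on every cycle...
      -- deallocate(p);       -- ...but never freed: a leak
    end if;
  end process;
end architecture;
```

The classic real-world version of this involves `std.textio`: every `readline` allocates a new `line`, so a long-running file-reading loop that never calls `deallocate` grows in the same way.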
Not sure if it helps, but you may give it a try.

1. You can tell ModelSim to write the simulation output into a file and to limit the file size. Maybe then it does not hold all the data in RAM. On the other hand, I don't know what happens for the 32-bit ModelSim when the file size reaches 4 GB. I once had a simulation write a 100 GB file without a file size limit, but I don't remember the ModelSim version.

vsim -wlfopt -wlf some_file_name.wlf -wlfslim 200000 -wlfdeleteonquit testbench_name

-wlf <...> defines the file
-wlfslim limits the file size
-wlfdeleteonquit removes the file when closing ModelSim

(Note that vsim takes the name of the top-level design unit, not a .vhd source file.)

2. If the above doesn't help, try limiting the log to only the signals you really need. Instead of "log -r /*" (activated by default?), add the interesting signals using "add wave some_vhdl_module/*".
FPGA guy wrote:
> Buggy testbenches I understand, but a memory leak? Can you please give
> a concrete example?

No, not any more. It was a long time ago. As a VHDL beginner I crashed ModelSim quite often and didn't understand how people could use such a tool. Nowadays my code is not that bad anymore, and ModelSim does not crash anymore :-) I think I never had a memory leak, but I definitely had several occasions where I got segmentation faults (illegal memory access, often a null pointer or an index pointing outside an array...). Like you said, there is no dynamic memory handling in my testbenches, and the strict typing in VHDL should prevent illegal indexing, but I still got segmentation faults at runtime...
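A null-access dereference of the kind described above is easy to reproduce; a minimal sketch (the language requires the simulator to flag this as a runtime error, but a tool with a bug in that check could crash instead):

```vhdl
-- Sketch: dereferencing a null access value. A conforming simulator
-- reports this as a fatal runtime error; a buggy one may segfault.
entity null_deref_tb is
end entity;

architecture sim of null_deref_tb is
  type int_ptr is access integer;
begin
  process
    variable p : int_ptr;  -- access variables are initialised to null
  begin
    report integer'image(p.all);  -- null dereference: runtime error
    wait;
  end process;
end architecture;
```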